## Basic Methodology
Each cloud has its own peculiarities but in general there are a few **common things a pentester should check** when testing a cloud environment:
- **Benchmark checks**
  - This will help you **understand the size** of the environment and the **services used**
  - It will also allow you to find some **quick misconfigurations**, as you can perform most of these tests with **automated tools**
- **Services Enumeration**
  - You probably won't find many more misconfigurations here if you performed the benchmark tests correctly, but you might find some that weren't being looked for in the benchmark tests.
  - This will allow you to know **what exactly is being used** in the cloud env
  - This will help a lot in the next steps
- **Check exposed assets**
  - This can be done during the previous section; you need to **find out everything that is potentially exposed** to the Internet somehow and how it can be accessed.
  - Here I'm talking about **manually exposed infrastructure**, like instances with web pages or other ports being exposed, and also about other **cloud managed services that can be configured** to be exposed (such as DBs or buckets)
  - Then you should check **whether that resource should be exposed at all** (confidential information? vulnerabilities? misconfigurations in the exposed service?)
- **Check permissions**
  - Here you should **find out all the permissions of each role/user** inside the cloud and how they are used (a minimal enumeration sketch follows this list)
  - Too **many highly privileged** (control everything) accounts? Generated keys that aren't being used?... Most of these checks should have been done in the benchmark tests already
  - If the client is using OpenID, SAML or another **federation**, you might need to ask them for further **information** about **how each role is being assigned** (it's not the same if the admin role is assigned to 1 user or to 100)
  - It's **not enough to find** which users have **admin** permissions ("\*:\*"). There are a lot of **other permissions** that, depending on the services used, can be very **sensitive**.
  - Moreover, there are **potential privesc** paths that can be followed by abusing permissions. All these things should be taken into account, and **as many privesc paths as possible** should be reported.
- **Check Integrations**
  - It's highly probable that **integrations with other clouds or SaaS** are being used inside the cloud env.
  - For **integrations of the cloud you are auditing** with other platforms, you should note **who has access to (ab)use that integration** and ask **how sensitive** the action being performed is.\
    For example, who can write to an AWS bucket that GCP is getting data from (ask how sensitive the action in GCP treating that data is).
  - For **integrations inside the cloud you are auditing** from external platforms, you should ask **who has external access to (ab)use that integration** and check how that data is being used.\
    For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify that, and which sensitive info and access that image will get when executed inside an AWS cloud.
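A minimal sketch of that permissions-review step, assuming already authenticated `aws` and `gcloud` CLIs (`audited-profile` is a placeholder profile name):

```bash
# AWS: dump users, groups, roles and all their attached/inline policies in one call
aws iam get-account-authorization-details --profile audited-profile > iam-details.json

# AWS: list attached customer-managed policies ("*:*" is not the only dangerous permission)
aws iam list-policies --scope Local --only-attached --profile audited-profile

# GCP: dump the IAM policy (role bindings) of every project you can list
for pid in $(gcloud projects list --format="value(projectId)"); do
  echo "=== $pid ==="
  gcloud projects get-iam-policy "$pid" --format=json
done
```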
## Multi-Cloud tools
There are several tools that can be used to test different cloud environments. The installation steps and links are going to be indicated in this section.
### [PurplePanda](https://github.com/carlospolop/purplepanda)
A tool to **identify bad configurations and privesc paths in clouds and across clouds/SaaS**.
{{#tabs }}
{{#tab name="Install" }}
```bash
# You need to install and run neo4j also
git clone https://github.com/carlospolop/PurplePanda
cd PurplePanda
python3 -m pip install -r requirements.txt
export PURPLEPANDA_NEO4J_URL="bolt://neo4j@localhost:7687"
export PURPLEPANDA_PWD="neo4j_pwd_4_purplepanda"
python3 main.py -h # Get help
```
{{#endtab }}
{{#tab name="GCP" }}
```bash
export GOOGLE_DISCOVERY=$(echo 'google:
- file_path: ""
  service_account_id: "some-sa-email@sidentifier.iam.gserviceaccount.com"' | base64)
python3 main.py -a -p google #Get basic info of the account to check it's correctly configured
python3 main.py -e -p google #Enumerate the env
```
{{#endtab }}
{{#endtabs }}
### [Prowler](https://github.com/prowler-cloud/prowler)
It supports **AWS, GCP & Azure**. Check how to configure each provider in [https://docs.prowler.cloud/en/latest/#aws](https://docs.prowler.cloud/en/latest/#aws)
```bash
# Install
pip install prowler

# Run
prowler aws --profile custom-profile [-M csv json json-asff html]

# Get info about the available checks & services
prowler <provider> --list-checks
prowler <provider> --list-services
```
### [CloudSploit](https://github.com/aquasecurity/cloudsploit)
AWS, Azure, GitHub, Google, Oracle, Alibaba
{{#tabs }}
{{#tab name="Install" }}
{{#tab name="Installa" }}
```bash
# Install
git clone https://github.com/aquasecurity/cloudsploit.git
cd cloudsploit
npm install
./index.js -h
## Docker instructions in github
```
{{#endtab }}
{{#tab name="GCP" }}
```bash
## You need to have creds for a service account and set them in config.js file
./index.js --cloud google --config </abs/path/to/config.js>
```
{{#endtab }}
{{#endtabs }}
### [ScoutSuite](https://github.com/nccgroup/ScoutSuite)
AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure
{{#tabs }}
{{#tab name="Install" }}
{{#tab name="Installa" }}
```bash
mkdir scout; cd scout
virtualenv -p python3 venv
source venv/bin/activate
pip install scoutsuite
scout --help
## Using Docker: https://github.com/nccgroup/ScoutSuite/wiki/Docker-Image
```
{{#endtab }}
{{#tab name="GCP" }}
```bash
scout gcp --report-dir /tmp/gcp --user-account --all-projects
## use "--service-account KEY_FILE" instead of "--user-account" to use a service account
SCOUT_FOLDER_REPORT="/tmp"
for pid in $(gcloud projects list --format="value(projectId)"); do
echo "================================================"
echo "Checking $pid"
mkdir "$SCOUT_FOLDER_REPORT/$pid"
scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
echo "================================================"
echo "Checking $pid"
mkdir "$SCOUT_FOLDER_REPORT/$pid"
scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
done
```
{{#endtab }}
{{#endtabs }}
### [Steampipe](https://github.com/turbot/steampipe)
{{#tabs }}
{{#tab name="Install" }}
Download and install Steampipe ([https://steampipe.io/downloads](https://steampipe.io/downloads)). Or use Brew:
```bash
brew tap turbot/tap
brew install steampipe
```
{{#endtab }}
{{#tab name="GCP" }}
```bash
# Install gcp plugin
steampipe plugin install gcp
# Get the gcp-compliance mod with the checks
git clone https://github.com/turbot/steampipe-mod-gcp-compliance.git
cd steampipe-mod-gcp-compliance

# To see the results in the browser
steampipe dashboard

# To run all the checks from the cli
steampipe check all
```
<details>
<summary>Check all Projects</summary>
In order to check all the projects, you need to generate the `gcp.spc` file indicating all the projects to test. You can just follow the indications of the following script.
<summary>Controlla tutti i progetti</summary>
Per controllare tutti i progetti, è necessario generare il file `gcp.spc` indicando tutti i progetti da testare. Puoi semplicemente seguire le indicazioni dello script seguente.
```bash
FILEPATH="/tmp/gcp.spc"
rm -rf "$FILEPATH" 2>/dev/null
# Generate a json like object for each project
for pid in $(gcloud projects list --format="value(projectId)"); do
echo "connection \"gcp_$(echo -n $pid | tr "-" "_" )\" {
plugin = \"gcp\"
project = \"$pid\"
plugin = \"gcp\"
project = \"$pid\"
}" >> "$FILEPATH"
done
# Generate the aggregator to call
echo 'connection "gcp_all" {
plugin = "gcp"
type = "aggregator"
connections = ["gcp_*"]
plugin = "gcp"
type = "aggregator"
connections = ["gcp_*"]
}' >> "$FILEPATH"
echo "Copy $FILEPATH in ~/.steampipe/config/gcp.spc if it was correctly generated"
```
</details>
To check **other GCP insights** (useful for enumerating services) use: [https://github.com/turbot/steampipe-mod-gcp-insights](https://github.com/turbot/steampipe-mod-gcp-insights)
To check Terraform GCP code: [https://github.com/turbot/steampipe-mod-terraform-gcp-compliance](https://github.com/turbot/steampipe-mod-terraform-gcp-compliance)
More GCP plugins of Steampipe: [https://github.com/turbot?q=gcp](https://github.com/turbot?q=gcp)
{{#endtab }}
{{#tab name="AWS" }}
```bash
# Install aws plugin
steampipe plugin install aws

# Get the aws-compliance mod with the checks
git clone https://github.com/turbot/steampipe-mod-aws-compliance.git
cd steampipe-mod-aws-compliance
steampipe dashboard # To see results in browser
steampipe check all --export=/tmp/output4.json
```
To check Terraform AWS code: [https://github.com/turbot/steampipe-mod-terraform-aws-compliance](https://github.com/turbot/steampipe-mod-terraform-aws-compliance)
More AWS plugins of Steampipe: [https://github.com/orgs/turbot/repositories?q=aws](https://github.com/orgs/turbot/repositories?q=aws)
{{#endtab }}
{{#endtabs }}
### [~~cs-suite~~](https://github.com/SecurityFTW/cs-suite)
AWS, GCP, Azure, DigitalOcean.\
It requires python2.7 and looks unmaintained.
### Nessus
Nessus has an _**Audit Cloud Infrastructure**_ scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configurations in **Azure** are needed to obtain a **Client Id**.
### [**cloudlist**](https://github.com/projectdiscovery/cloudlist)
Cloudlist is a **multi-cloud tool for getting Assets** (Hostnames, IP Addresses) from Cloud Providers.
{{#tabs }}
{{#tab name="Cloudlist" }}
```bash
cd /tmp
wget https://github.com/projectdiscovery/cloudlist/releases/latest/download/cloudlist_1.0.1_macOS_arm64.zip
unzip cloudlist_1.0.1_macOS_arm64.zip
chmod +x cloudlist
sudo mv cloudlist /usr/local/bin
```
{{#endtab }}
{{#tab name="Second Tab" }}
```bash
## For GCP it requires service account JSON credentials
cloudlist -config </path/to/config>
```
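
A hypothetical minimal provider config for GCP, written as a bash heredoc. The file location and key names (`provider`, `id`, `gcp_service_account_key`) follow the cloudlist README at the time of writing, so verify them against the current docs before relying on this sketch:

```bash
# Hypothetical cloudlist provider config for GCP (verify key names in the cloudlist README)
mkdir -p "$HOME/.config/cloudlist"
cat > "$HOME/.config/cloudlist/provider-config.yaml" <<'EOF'
- provider: gcp # cloud provider to enumerate
  id: audited-org # arbitrary name for this set of creds
  # minified JSON of the GCP service account key file
  gcp_service_account_key: '{"type":"service_account","project_id":"..."}'
EOF
cloudlist -provider gcp
```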
{{#endtab }}
{{#endtabs }}
### [**cartography**](https://github.com/lyft/cartography)
Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database.
{{#tabs }}
{{#tab name="Install" }}
```bash
# Installation
docker image pull ghcr.io/lyft/cartography
docker run --platform linux/amd64 ghcr.io/lyft/cartography cartography --help
## Install a Neo4j DB version 3.5.*
```
{{#endtab }}
{{#tab name="GCP" }}
```bash
docker run --platform linux/amd64 \
--volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
-e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
-e NEO4j_PASSWORD="s3cr3t" \
ghcr.io/lyft/cartography \
--neo4j-uri bolt://host.docker.internal:7687 \
--neo4j-password-env-var NEO4j_PASSWORD \
--neo4j-user neo4j
--volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
-e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
-e NEO4j_PASSWORD="s3cr3t" \
ghcr.io/lyft/cartography \
--neo4j-uri bolt://host.docker.internal:7687 \
--neo4j-password-env-var NEO4j_PASSWORD \
--neo4j-user neo4j
# It only checks for a few services inside GCP (https://lyft.github.io/cartography/modules/gcp/index.html)
## Google Kubernetes Engine
### If you can run starbase or purplepanda you will get more info
```
{{#endtab }}
{{#endtabs }}
### [**starbase**](https://github.com/JupiterOne/starbase)
Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database.
{{#tabs }}
{{#tab name="Install" }}
```bash
# You are going to need Node version 14, so install nvm following https://tecadmin.net/install-nvm-macos-with-homebrew/
npm install --global yarn
docker build --no-cache -t starbase:latest .
docker-compose run starbase setup
docker-compose run starbase run
```
{{#endtab }}
{{#tab name="GCP" }}
```yaml
## Config for GCP
### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md
### It requires service account credentials
integrations:
  - name: graph-google-cloud
    instanceId: testInstanceId
    directory: ./.integrations/graph-google-cloud
    gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git
    config:
      SERVICE_ACCOUNT_KEY_FILE: "{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}"
      PROJECT_ID: ""
      FOLDER_ID: ""
      ORGANIZATION_ID: ""
      CONFIGURE_ORGANIZATION_PROJECTS: false

storage:
  engine: neo4j
  config:
    username: neo4j
    password: s3cr3t
    uri: bolt://localhost:7687
    # Consider using host.docker.internal if running from docker
```
{{#endtab }}
{{#endtabs }}
### [**SkyArk**](https://github.com/cyberark/SkyArk)
Discover the most privileged users in the scanned AWS or Azure environment, including AWS Shadow Admins. It uses PowerShell.
```powershell
Import-Module .\SkyArk.ps1 -force
Start-AzureStealth

# Or download and run it directly (e.g. from the Azure Cloud Shell)
IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1')
Scan-AzureAdmins
```
### [Cloud Brute](https://github.com/0xsha/CloudBrute)
A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode).
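A hypothetical invocation sketch; the flags mirror the example in the CloudBrute README (`target.com`, the keyword and the wordlist path are placeholders, so check `cloudbrute -h` for the current options):

```bash
# Look for storage (buckets) named after the target across the supported providers
cloudbrute -d target.com -k target -m storage -t 80 -T 10 -w ./data/storage_small.txt
```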
### [CloudFox](https://github.com/BishopFox/cloudfox)
- CloudFox is a tool to find exploitable attack paths in cloud infrastructure (currently only AWS & Azure are supported, with GCP upcoming).
- It is an enumeration tool intended to complement manual pentesting.
- It doesn't create or modify any data within the cloud environment.
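As a quick usage sketch (assuming an AWS CLI profile named `dev` as a placeholder; per the CloudFox README, `all-checks` runs every enumeration module):

```bash
# Run every CloudFox enumeration module against one AWS profile
cloudfox aws --profile dev all-checks
```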
### More lists of cloud security tools
- [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec)
## Azure
### Attack Graph
[**Stormspotter** ](https://github.com/Azure/Stormspotter)creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.
### Office365
You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed by accessing the features **via the web application**.
{{#include ../banners/hacktricks-training.md}}