<figure><img src="../images/CLOUD-logo-letters.svg" alt=""><figcaption></figcaption></figure>

## Basic Methodology

Each cloud has its own peculiarities, but in general there are a few **common things a pentester should check** when testing a cloud environment:

- **Benchmark checks**
  - This will help you **understand the size** of the environment and the **services used**
  - It will also allow you to find some **quick misconfigurations**, as most of these tests can be performed with **automated tools**
- **Services Enumeration**
  - You probably won't find many more misconfigurations here if you performed the benchmark tests correctly, but you might find some that weren't being looked for in the benchmark tests.
  - This will allow you to know **what exactly is being used** in the cloud environment
  - This will help a lot in the next steps
- **Check exposed assets**
  - This can be done during the previous section; you need to **find out everything that is potentially exposed** to the Internet somehow and how it can be accessed.
  - Here I'm talking about **manually exposed infrastructure** like instances with web pages or other exposed ports, and also about other **cloud managed services that can be configured** to be exposed (such as DBs or buckets)
  - Then you should check **whether that resource should be exposed or not** (confidential information? vulnerabilities? misconfigurations in the exposed service?)
- **Check permissions**
  - Here you should **find out all the permissions of each role/user** inside the cloud and how they are used (see the CLI sketch after this list for a quick first pass)
  - Too **many highly privileged** (control everything) accounts? Generated keys that aren't used?... Most of these checks should have been done in the benchmark tests already
  - If the client is using OpenID, SAML or another **federation** mechanism, you might need to ask them for further **information** about **how each role is assigned** (it's not the same if the admin role is assigned to 1 user or to 100)
  - It's **not enough to find** which users have **admin** permissions "\*:\*". There are a lot of **other permissions** that, depending on the services used, can be very **sensitive**.
  - Moreover, there are **potential privesc** paths that can be followed by abusing permissions. All these things should be taken into account and **as many privesc paths as possible** should be reported.
- **Check Integrations**
  - It's highly probable that **integrations with other clouds or SaaS** are being used inside the cloud environment.
  - For **integrations of the cloud you are auditing** with other platforms, you should report **who has access to (ab)use that integration** and ask **how sensitive** the action being performed is.\
    For example, who can write to an AWS bucket that GCP is getting data from (and ask how sensitive the action in GCP treating that data is).
  - For **integrations inside the cloud you are auditing** from external platforms, you should ask **who has external access to (ab)use that integration** and check how that data is being used.\
    For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify it, and which sensitive info and access that image will get when executed inside an AWS cloud.
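
As a very rough starting point for the permission and exposure checks above, a few read-only CLI calls already tell you a lot. This is only a minimal sketch (the profile, user and project names are placeholders), not a replacement for the automated tools below:

```bash
# AWS: who am I, which identities exist, and what is attached to them?
aws sts get-caller-identity
aws iam list-users
aws iam list-roles --query 'Roles[].RoleName'
aws iam list-attached-user-policies --user-name <some-user>   # look for admin-like policies

# AWS: quick look for things exposed to the Internet
aws s3api list-buckets --query 'Buckets[].Name'
aws ec2 describe-security-groups --output json | grep -B5 '"0.0.0.0/0"'

# GCP: projects in scope and who holds which role on each of them
gcloud projects list --format="value(projectId)"
gcloud projects get-iam-policy <project-id> --format=json
```
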
## Multi-Cloud tools

There are several tools that can be used to test different cloud environments. The installation steps and links are indicated in this section.

### [PurplePanda](https://github.com/carlospolop/purplepanda)

A tool to **identify bad configurations and privesc paths in clouds and across clouds/SaaS.**

{{#tabs }}
{{#tab name="Install" }}

```bash
# You need to install and run neo4j also
git clone https://github.com/carlospolop/PurplePanda
cd PurplePanda

export PURPLEPANDA_NEO4J_URL="bolt://neo4j@localhost:7687"
export PURPLEPANDA_PWD="neo4j_pwd_4_purplepanda"
python3 main.py -h # Get help
```

{{#endtab }}

{{#tab name="GCP" }}

```bash
export GOOGLE_DISCOVERY=$(echo 'google:
- file_path: ""
  service_account_id: "some-sa-email@sidentifier.iam.gserviceaccount.com"' | base64)

python3 main.py -a -p google #Get basic info of the account to check it's correctly configured
python3 main.py -e -p google #Enumerate the env
```

{{#endtab }}
{{#endtabs }}

### [Prowler](https://github.com/prowler-cloud/prowler)

It supports **AWS, GCP & Azure**. Check how to configure each provider in [https://docs.prowler.cloud/en/latest/#aws](https://docs.prowler.cloud/en/latest/#aws).

```bash
# Install
pip install prowler

prowler aws --profile custom-profile [-M csv json json-asff html]
prowler <provider> --list-checks
prowler <provider> --list-services
```
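
To keep runs scoped, you can also limit a scan to specific services or checks; a small sketch (the check name is only an example, list the real identifiers with `--list-checks`):

```bash
# Scope the scan to a few services or a single check (same -M output flag as above)
prowler aws --services s3 iam -M csv html
prowler aws --checks s3_bucket_public_access
```
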
### [CloudSploit](https://github.com/aquasecurity/cloudsploit)

AWS, Azure, Github, Google, Oracle, Alibaba

{{#tabs }}
{{#tab name="Install" }}

```bash
# Install
git clone https://github.com/aquasecurity/cloudsploit.git
cd cloudsploit

npm install
./index.js -h
## Docker instructions in github
```

{{#endtab }}

{{#tab name="GCP" }}

```bash
## You need to have creds for a service account and set them in config.js file
./index.js --cloud google --config </abs/path/to/config.js>
```

{{#endtab }}
{{#endtabs }}
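
If you also want compliance-mapped output or a file you can grep later, something along these lines should work (the `--compliance` and `--csv` flag names are taken from the CloudSploit README; treat them as assumptions and confirm with `./index.js -h`):

```bash
# Run against creds defined in config.js, map findings to CIS and save them as CSV
# (flag names assumed from the CloudSploit README; confirm with ./index.js -h)
./index.js --cloud aws --config </abs/path/to/config.js> --compliance=cis --csv=/tmp/cloudsploit.csv
```
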

### [ScoutSuite](https://github.com/nccgroup/ScoutSuite)

AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud Infrastructure

{{#tabs }}
{{#tab name="Install" }}

```bash
mkdir scout; cd scout
virtualenv -p python3 venv
source venv/bin/activate

pip install scoutsuite
scout --help
## Using Docker: https://github.com/nccgroup/ScoutSuite/wiki/Docker-Image
```

{{#endtab }}

{{#tab name="GCP" }}

```bash
scout gcp --report-dir /tmp/gcp --user-account --all-projects
## use "--service-account KEY_FILE" instead of "--user-account" to use a service account

SCOUT_FOLDER_REPORT="/tmp"
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "================================================"
    echo "Checking $pid"
    mkdir "$SCOUT_FOLDER_REPORT/$pid"
    scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
done
```

{{#endtab }}
{{#endtabs }}
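
For AWS, an equivalent single run could look like this (the profile name is a placeholder; the flags are the same ones used in the GCP example above):

```bash
# AWS: run against a named profile and write the HTML report without opening a browser
scout aws --profile <profile-name> --report-dir /tmp/aws --no-browser
```
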

### [Steampipe](https://github.com/turbot/steampipe)

{{#tabs }}
{{#tab name="Install" }}

Download and install Steampipe ([https://steampipe.io/downloads](https://steampipe.io/downloads)). Or use Brew:

```bash
brew tap turbot/tap
brew install steampipe
```

{{#endtab }}

{{#tab name="GCP" }}

```bash
# Install gcp plugin
steampipe plugin install gcp

steampipe dashboard # To see results in browser

# To run all the checks from the cli
steampipe check all
```

<details>

<summary>Check all Projects</summary>

In order to check all the projects you need to generate the `gcp.spc` file indicating all the projects to test. You can just follow the indications from the following script.

```bash
FILEPATH="/tmp/gcp.spc"
rm -rf "$FILEPATH" 2>/dev/null

# Generate a connection block for each project
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "connection \"gcp_$(echo -n $pid | tr "-" "_" )\" {
  plugin = \"gcp\"
  project = \"$pid\"
}" >> "$FILEPATH"
done

# Generate the aggregator to call
echo 'connection "gcp_all" {
  plugin = "gcp"
  type = "aggregator"
  connections = ["gcp_*"]
}' >> "$FILEPATH"

echo "Copy $FILEPATH in ~/.steampipe/config/gcp.spc if it was correctly generated"
```

</details>
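
Besides the compliance checks, Steampipe also lets you run ad-hoc SQL against the cloud APIs, which is handy for enumeration. A small sketch (table and column names come from the gcp plugin docs; verify them against your plugin version):

```bash
# Ad-hoc enumeration with SQL (table names from the gcp plugin; adjust as needed)
steampipe query "select name, location from gcp_storage_bucket"
steampipe query "select name, status from gcp_compute_instance"
```
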

To check **other GCP insights** (useful for enumerating services) use: [https://github.com/turbot/steampipe-mod-gcp-insights](https://github.com/turbot/steampipe-mod-gcp-insights)

To check Terraform GCP code: [https://github.com/turbot/steampipe-mod-terraform-gcp-compliance](https://github.com/turbot/steampipe-mod-terraform-gcp-compliance)

More GCP plugins of Steampipe: [https://github.com/turbot?q=gcp](https://github.com/turbot?q=gcp)

{{#endtab }}

{{#tab name="AWS" }}

```bash
# Install aws plugin
steampipe plugin install aws

cd steampipe-mod-aws-compliance
steampipe dashboard # To see results in browser
steampipe check all --export=/tmp/output4.json
```

To check Terraform AWS code: [https://github.com/turbot/steampipe-mod-terraform-aws-compliance](https://github.com/turbot/steampipe-mod-terraform-aws-compliance)

More AWS plugins of Steampipe: [https://github.com/orgs/turbot/repositories?q=aws](https://github.com/orgs/turbot/repositories?q=aws)

{{#endtab }}
{{#endtabs }}

### [~~cs-suite~~](https://github.com/SecurityFTW/cs-suite)

AWS, GCP, Azure, DigitalOcean.\
It requires python2.7 and looks unmaintained.

### Nessus

Nessus has an _**Audit Cloud Infrastructure**_ scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configurations in **Azure** are needed to obtain a **Client Id**.

### [**cloudlist**](https://github.com/projectdiscovery/cloudlist)

Cloudlist is a **multi-cloud tool for getting Assets** (Hostnames, IP Addresses) from Cloud Providers.

{{#tabs }}
{{#tab name="Cloudlist" }}

```bash
cd /tmp
wget https://github.com/projectdiscovery/cloudlist/releases/latest/download/cloudlist_1.0.1_macOS_arm64.zip
unzip cloudlist_1.0.1_macOS_arm64.zip
chmod +x cloudlist
sudo mv cloudlist /usr/local/bin
```

{{#endtab }}

{{#tab name="Second Tab" }}

```bash
## For GCP it requires service account JSON credentials
cloudlist -config </path/to/config>
```

{{#endtab }}
{{#endtabs }}
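
The config file referenced above is a YAML list of provider blocks; a hypothetical GCP entry could look like this (field names are taken from the cloudlist documentation and should be verified; the service account JSON is a placeholder):

```bash
# Hypothetical cloudlist provider config (verify field names against the cloudlist docs)
cat > /tmp/cloudlist-config.yaml <<'EOF'
- provider: gcp
  id: gcp-audit
  gcp_service_account_key: '{ ...service account JSON... }'
EOF

cloudlist -config /tmp/cloudlist-config.yaml
```
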
### [**cartography**](https://github.com/lyft/cartography)

Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database.

{{#tabs }}
{{#tab name="Install" }}

```bash
# Installation
docker image pull ghcr.io/lyft/cartography
docker run --platform linux/amd64 ghcr.io/lyft/cartography cartography --help
## Install a Neo4j DB version 3.5.*
```

{{#endtab }}

{{#tab name="GCP" }}

```bash
docker run --platform linux/amd64 \
  --volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
  -e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
  -e NEO4j_PASSWORD="s3cr3t" \
  ghcr.io/lyft/cartography \
  --neo4j-uri bolt://host.docker.internal:7687 \
  --neo4j-password-env-var NEO4j_PASSWORD \
  --neo4j-user neo4j

# It only checks for a few services inside GCP (https://lyft.github.io/cartography/modules/gcp/index.html)
## Google Kubernetes Engine
### If you can run starbase or purplepanda you will get more info
```

{{#endtab }}
{{#endtabs }}
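
Once cartography has finished ingesting, the assets live in Neo4j, so you can pivot with ad-hoc Cypher queries. A small sketch (the node labels follow the cartography GCP schema; adjust to the modules you actually ingested):

```bash
# Explore the ingested graph with Cypher (labels per the cartography GCP schema)
cypher-shell -a bolt://localhost:7687 -u neo4j -p s3cr3t \
  "MATCH (i:GCPInstance) RETURN i.id LIMIT 20"
cypher-shell -a bolt://localhost:7687 -u neo4j -p s3cr3t \
  "MATCH (b:GCPBucket) RETURN b.id LIMIT 20"
```
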
### [**starbase**](https://github.com/JupiterOne/starbase)

Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database.

{{#tabs }}
{{#tab name="Install" }}

```bash
# You are going to need Node version 14, so install nvm following https://tecadmin.net/install-nvm-macos-with-homebrew/
npm install --global yarn

docker build --no-cache -t starbase:latest .
docker-compose run starbase setup
docker-compose run starbase run
```

{{#endtab }}

{{#tab name="GCP" }}

```yaml
## Config for GCP
### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md
### It requires service account credentials

integrations:
  - name: graph-google-cloud
    instanceId: testInstanceId
    directory: ./.integrations/graph-google-cloud
    gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git
    config:
      SERVICE_ACCOUNT_KEY_FILE: "{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}"
      PROJECT_ID: ""
      FOLDER_ID: ""
      ORGANIZATION_ID: ""
      CONFIGURE_ORGANIZATION_PROJECTS: false

storage:
  engine: neo4j
  config:
    username: neo4j
    password: s3cr3t
    uri: bolt://localhost:7687
    # Consider using host.docker.internal if from docker
```

{{#endtab }}
{{#endtabs }}

### [**SkyArk**](https://github.com/cyberark/SkyArk)

Discover the most privileged users in the scanned AWS or Azure environment, including the AWS Shadow Admins. It uses PowerShell.

```powershell
Import-Module .\SkyArk.ps1 -force
Start-AzureStealth

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1')
Scan-AzureAdmins
```

### [Cloud Brute](https://github.com/0xsha/CloudBrute)

A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode).
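
A hedged invocation sketch (flag names are taken from the CloudBrute README and should be double-checked with `cloudbrute -h`; the domain, keyword and wordlist are placeholders):

```bash
# Look for storage/apps named after the target keyword across providers
# (flags assumed from the CloudBrute README; verify with -h)
cloudbrute -d target.com -k target -m storage -t 80 -T 10 -w /path/to/wordlist.txt
```
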
### [CloudFox](https://github.com/BishopFox/cloudfox)

- CloudFox is a tool to find exploitable attack paths in cloud infrastructure (currently only AWS & Azure supported, with GCP upcoming).
- It is an enumeration tool intended to complement manual pentesting (see the example run below).
- It doesn't create or modify any data within the cloud environment.
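
A hedged example of a full AWS enumeration run (the subcommand and flag come from the CloudFox README; the profile name is a placeholder):

```bash
# Run every AWS check with a given profile and write the output/loot files locally
# (verify the exact flags with: cloudfox aws -h)
cloudfox aws --profile <my-profile> all-checks
```
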
### More lists of cloud security tools

- [https://github.com/RyanJarv/awesome-cloud-sec](https://github.com/RyanJarv/awesome-cloud-sec)

{{#ref}}
aws-security/
{{#endref}}

{{#ref}}
azure-security/
{{#endref}}

### Attack Graph

[**Stormspotter**](https://github.com/Azure/Stormspotter) creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.

### Office365

You need **Global Admin** or at least **Global Admin Reader** (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed by accessing the features **via the web application**.

{{#include ../banners/hacktricks-training.md}}