# AWS - EKS Post Exploitation
{{#include ../../../banners/hacktricks-training.md}}
## EKS
For more information check:
{{#ref}}
../aws-services/aws-eks-enum.md
{{#endref}}
### Enumerate the cluster from the AWS Console
If you have the permission **`eks:AccessKubernetesApi`** you can **view Kubernetes objects** via AWS EKS console ([Learn more](https://docs.aws.amazon.com/eks/latest/userguide/view-workloads.html)).
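Even without console access, basic enumeration works from the CLI. A minimal sketch (assuming you have `eks:ListClusters` and `eks:DescribeCluster`):

```bash
# List the clusters in the region
aws eks list-clusters

# Grab the API endpoint and CA data of a specific cluster
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.{endpoint: endpoint, ca: certificateAuthority.data}'
```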
### Connect to AWS Kubernetes Cluster
- Easy way:
```bash
# Generate kubeconfig
aws eks update-kubeconfig --name aws-eks-dev
```
- Not that easy way:
If you can **get a token** with **`aws eks get-token --name <cluster_name>`** but you don't have permissions to get cluster info (`DescribeCluster`), you could **prepare your own `~/.kube/config`**. However, besides the token, you still need the **URL endpoint to connect to** (if you managed to get a JWT token from a pod, read [here](aws-eks-post-exploitation.md#get-api-server-endpoint-from-a-jwt-token)) and the **name of the cluster**.
In my case, I didn't find the info in CloudWatch logs, but I **found it in the Launch Templates userData** and also in the **EC2 machines' userData**. You can see this info in the **userData** easily, for example in the next example (the cluster name was cluster-name); you can also pull the userData via the AWS CLI, as shown in the sketch after this example:
```bash
API_SERVER_URL=https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-east-1.eks.amazonaws.com
/etc/eks/bootstrap.sh cluster-name --kubelet-extra-args '--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/cluster-name=cluster-name,alpha.eksctl.io/nodegroup-name=prd-ondemand-us-west-2b,role=worker,eks.amazonaws.com/nodegroup-image=ami-002539dd2c532d0a5,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=prd-ondemand-us-west-2b,type=ondemand,eks.amazonaws.com/sourceLaunchTemplateId=lt-0f0f0ba62bef782e5 --max-pods=58' --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL --dns-cluster-ip $K8S_CLUSTER_DNS_IP --use-max-pods false
```
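If you have EC2 read permissions, a sketch of pulling that userData yourself (the launch template ID comes from the example above; the instance ID is a placeholder):

```bash
# Dump the userData of a launch template version (it comes base64-encoded)
aws ec2 describe-launch-template-versions \
  --launch-template-id lt-0f0f0ba62bef782e5 --versions 1 \
  --query 'LaunchTemplateVersions[].LaunchTemplateData.UserData' \
  --output text | base64 -d

# Dump the userData of a running instance
aws ec2 describe-instance-attribute --instance-id <instance-id> \
  --attribute userData --query 'UserData.Value' --output text | base64 -d
```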
<details>
<summary>kube config</summary>
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXlPREUyTWpjek1Wb1hEVE15TVRJeU5URTJNamN6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlXCk9OS0ZqeXZoRUxDZGhMNnFwWkMwa1d0UURSRVF1UzVpRDcwK2pjbjFKWXZ4a3FsV1ZpbmtwOUt5N2x2ME5mUW8KYkNqREFLQWZmMEtlNlFUWVVvOC9jQXJ4K0RzWVlKV3dzcEZGbWlsY1lFWFZHMG5RV1VoMVQ3VWhOanc0MllMRQpkcVpzTGg4OTlzTXRLT1JtVE5sN1V6a05pTlUzSytueTZSRysvVzZmbFNYYnRiT2kwcXJSeFVpcDhMdWl4WGRVCnk4QTg3VjRjbllsMXo2MUt3NllIV3hhSm11eWI5enRtbCtBRHQ5RVhOUXhDMExrdWcxSDBqdTl1MDlkU09YYlkKMHJxY2lINjYvSTh0MjlPZ3JwNkY0dit5eUNJUjZFQURRaktHTFVEWUlVSkZ4WXA0Y1pGcVA1aVJteGJ5Nkh3UwpDSE52TWNJZFZRRUNQMlg5R2c4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVXFsekhWZmlDd0xqalhPRmJJUUc3L0VxZ1hNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS1o4c0l4aXpsemx0aXRPcGcySgpYV0VUSThoeWxYNWx6cW1mV0dpZkdFVVduUDU3UEVtWW55eWJHbnZ5RlVDbnczTldMRTNrbEVMQVE4d0tLSG8rCnBZdXAzQlNYamdiWFovdWVJc2RhWlNucmVqNU1USlJ3SVFod250ZUtpU0J4MWFRVU01ZGdZc2c4SlpJY3I2WC8KRG5POGlHOGxmMXVxend1dUdHSHM2R1lNR0Mvd1V0czVvcm1GS291SmtSUWhBZElMVkNuaStYNCtmcHUzT21UNwprS3VmR0tyRVlKT09VL1c2YTB3OTRycU9iSS9Mem1GSWxJQnVNcXZWVDBwOGtlcTc1eklpdGNzaUJmYVVidng3Ci9sMGhvS1RqM0IrOGlwbktIWW4wNGZ1R2F2YVJRbEhWcldDVlZ4c3ZyYWpxOUdJNWJUUlJ6TnpTbzFlcTVZNisKRzVBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-west-2.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
    user: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
  name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
current-context: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-2
      - --profile
      - <profile>
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
```
</details>
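With that file in place you can point kubectl at it and check what the token lets you do (sketch, assuming the file was saved as `./kubeconfig`):

```bash
# Use the handcrafted kubeconfig
KUBECONFIG=./kubeconfig kubectl get pods --all-namespaces

# Enumerate the permissions of the token
KUBECONFIG=./kubeconfig kubectl auth can-i --list
```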
### From AWS to Kubernetes
The **creator** of the **EKS cluster** is **ALWAYS** going to be able to get into the kubernetes cluster as part of the group **`system:masters`** (k8s admin). At the time of this writing there is **no direct way** to find **who created** the cluster (you can check CloudTrail). And there is **no way** to **remove** that **privilege**.
The way to grant **access over K8s to more AWS IAM users or roles** is using the **configmap `aws-auth`**.
> [!WARNING]
> Therefore, anyone with **write access** over the config map **`aws-auth`** will be able to **compromise the whole cluster**.
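For example, a sketch of granting an extra IAM role k8s admin via `aws-auth` using eksctl (the role ARN and username are placeholders):

```bash
# Adds a mapRoles entry to the aws-auth configmap mapping the role to system:masters
eksctl create iamidentitymapping --cluster <cluster-name> --region <region> \
  --arn arn:aws:iam::<acc-id>:role/<role-name> \
  --username attacker --group system:masters
```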
For more information about how to **grant extra privileges to IAM roles & users** in the **same or different account** and how to **abuse** this for [**privesc, check this page**](../../kubernetes-security/abusing-roles-clusterroles-in-kubernetes/#aws-eks-aws-auth-configmaps).
Check also [**this awesome post**](https://blog.lightspin.io/exploiting-eks-authentication-vulnerability-in-aws-iam-authenticator) to learn how the **IAM -> Kubernetes authentication works**.
### From Kubernetes to AWS
It's possible to use **OpenID authentication for Kubernetes service accounts** to allow them to assume roles in AWS. Learn how [**this works in this page**](../../kubernetes-security/kubernetes-pivoting-to-clouds.md#workflow-of-iam-role-for-service-accounts-1).
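From inside a pod, a quick sketch to check whether IRSA is configured for its service account (these env vars are injected by the EKS pod identity webhook):

```bash
# If these are set, the service account can assume an AWS role
echo "$AWS_ROLE_ARN"
echo "$AWS_WEB_IDENTITY_TOKEN_FILE"

# Exchange the projected service account token for AWS credentials
aws sts assume-role-with-web-identity \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name irsa-test \
  --web-identity-token "$(cat "$AWS_WEB_IDENTITY_TOKEN_FILE")"
```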
### Get API Server Endpoint from a JWT Token
Decoding the JWT token we get the cluster ID and also the region. ![image](https://github.com/HackTricks-wiki/hacktricks-cloud/assets/87022719/0e47204a-eea5-4fcb-b702-36dc184a39e9) Knowing that the standard format for an EKS URL is:
```bash
https://<cluster-id>.<two-random-chars><number>.<region>.eks.amazonaws.com
```
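A quick sketch to pull those fields out of the token (assuming it's stored in `$JWT`; `base64` may complain about missing padding, which can be ignored):

```bash
# The payload is the 2nd dot-separated field of the JWT
cut -d '.' -f2 <<< "$JWT" | base64 -d 2>/dev/null | jq .
```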
I didn't find any documentation that explains the criteria for the 'two chars' and the 'number', but doing some testing on my own I see these recurring:
- gr7
- yl4
Anyway, it's just 3 chars, so we can bruteforce them. Use the script below to generate the list:
```python
from itertools import product
from string import ascii_lowercase, digits

# Generate every <letter><letter><digit> combination (e.g. gr7, yl4)
letter_combinations = product(ascii_lowercase, repeat=2)
number_combinations = product(digits, repeat=1)
result = [
    f"{''.join(letters)}{number[0]}"
    for letters, number in product(letter_combinations, number_combinations)
]

with open('out.txt', 'w') as f:
    f.write('\n'.join(result))
```
Then, with wfuzz:
```bash
wfuzz -Z -z file,out.txt --hw 0 https://<cluster-id>.FUZZ.<region>.eks.amazonaws.com
```
> [!WARNING]
> Remember to replace `<cluster-id>` & `<region>`.
### Bypass CloudTrail
If an attacker obtains AWS credentials with **permission over an EKS** and configures their own **`kubeconfig`** (without calling **`update-kubeconfig`**) as explained previously, **`get-token`** doesn't generate logs in CloudTrail because it doesn't interact with the AWS API (it just creates the token locally).
So when the attacker talks with the EKS cluster, **CloudTrail won't log anything related to the stolen user accessing it**.
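A sketch of talking to the cluster with just the locally generated token (endpoint and cluster name are placeholders):

```bash
# The token is generated locally; no AWS API call is made here
TOKEN=$(aws eks get-token --cluster-name <cluster-name> --output json | jq -r '.status.token')
kubectl --server "https://<cluster-id>.gr7.<region>.eks.amazonaws.com" \
  --token "$TOKEN" --insecure-skip-tls-verify get pods -A
```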
Note that the **EKS cluster might have logs enabled** that will log this access (although, by default, they are disabled).
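To check whether the cluster has control-plane logging enabled (sketch, needs `eks:DescribeCluster`):

```bash
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.logging.clusterLogging'
```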
### EKS Ransom?
By default the **user or role that created** a cluster is **ALWAYS going to have admin privileges** over the cluster. And that's the only "secure" access AWS will have over the Kubernetes cluster.
So, if an **attacker compromises a cluster using Fargate**, **removes all the other admins** and **deletes the AWS user/role that created** the cluster, ~~the attacker could have **ransomed the cluster**~~.
> [!TIP]
> Note that if the cluster was using **EC2 VMs**, it could be possible to get Admin privileges from the **Node** and recover the cluster.
>
> Actually, if the cluster is using Fargate, you could **add EC2 nodes or move everything to EC2** in the cluster and recover it by accessing the tokens in the node.
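A sketch of that recovery path, adding an EC2 managed node group to a Fargate-only cluster (subnets and the node role ARN are placeholders; needs `eks:CreateNodegroup`):

```bash
aws eks create-nodegroup --cluster-name <cluster-name> \
  --nodegroup-name recovery-nodes \
  --subnets <subnet-1> <subnet-2> \
  --node-role arn:aws:iam::<acc-id>:role/<node-instance-role> \
  --scaling-config minSize=1,maxSize=1,desiredSize=1
```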
{{#include ../../../banners/hacktricks-training.md}}