mirror of https://github.com/HackTricks-wiki/hacktricks-cloud.git
synced 2026-01-09 11:44:59 -08:00
# AWS - Redshift Privesc
{{#include ../../../banners/hacktricks-training.md}}
## Redshift
For more information about Redshift check:
{{#ref}}
../aws-services/aws-redshift-enum.md
{{#endref}}
### `redshift:DescribeClusters`, `redshift:GetClusterCredentials`
With these permissions you can get **info about all the clusters** (including the name and the cluster username) and **get credentials** to access them:
```bash
# Get creds
aws redshift get-cluster-credentials --db-user postgres --cluster-identifier redshift-cluster-1
# Connect; even if the returned password looks like a base64 string, use it as-is
psql -h redshift-cluster-1.asdjuezc439a.us-east-1.redshift.amazonaws.com -U "IAM:<username>" -d template1 -p 5439
```
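Note that the temporary user returned by `get-cluster-credentials` must be prefixed with `IAM:` when logging in. A minimal sketch composing the psql command (the endpoint and db user are the example values from above):

```bash
# Compose the psql login for a GetClusterCredentials temporary user.
# Temporary users must be prefixed with "IAM:"; the endpoint is an example value.
DB_USER="postgres"
ENDPOINT="redshift-cluster-1.asdjuezc439a.us-east-1.redshift.amazonaws.com"
PSQL_CMD="psql -h $ENDPOINT -U \"IAM:$DB_USER\" -d template1 -p 5439"
echo "$PSQL_CMD"
```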
**Potential Impact:** Find sensitive info inside the databases.
### `redshift:DescribeClusters`, `redshift:GetClusterCredentialsWithIAM`
With these permissions you can get **info about all the clusters** and **get credentials** to access them.\
Note that the postgres user will have the **permissions that the IAM identity** used to get the credentials has.
```bash
# Get creds
aws redshift get-cluster-credentials-with-iam --cluster-identifier redshift-cluster-1
# Connect; even if the returned password looks like a base64 string, use it as-is
psql -h redshift-cluster-1.asdjuezc439a.us-east-1.redshift.amazonaws.com -U "IAMR:AWSReservedSSO_AdministratorAccess_4601154638985c45" -d template1 -p 5439
```
**Potential Impact:** Find sensitive info inside the databases.
### `redshift:DescribeClusters`, `redshift:ModifyCluster?`
It's possible to **modify the master password** of the internal postgres (Redshift) user from the AWS CLI (I think those are the permissions you need but I haven't tested them yet):
```bash
aws redshift modify-cluster --cluster-identifier <identifier-for-the-cluster> --master-user-password '<master-password>'
```
**Potential Impact:** Find sensitive info inside the databases.
## Accessing External Services
> [!WARNING]
> To access all the following resources, you will need to **specify the role to use**. A Redshift cluster **can have a list of AWS roles assigned** that you can use **if you know the ARN**, or you can just specify "**default**" to use the default one assigned.
> Moreover, as [**explained here**](https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html), Redshift also allows **chaining roles** (as long as the first one can assume the second one) to gain further access, just by **separating** them with a **comma**: `iam_role 'arn:aws:iam::123456789012:role/RoleA,arn:aws:iam::210987654321:role/RoleB';`
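The chained `iam_role` clause from the note above can be assembled like this (the role ARNs are the example values from the docs link; RoleA must be able to assume RoleB):

```bash
# Build a chained iam_role clause for use in COPY/UNLOAD statements
ROLE_A="arn:aws:iam::123456789012:role/RoleA"
ROLE_B="arn:aws:iam::210987654321:role/RoleB"
CLAUSE="iam_role '$ROLE_A,$ROLE_B'"
echo "$CLAUSE"
```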
### Lambdas
As explained in [https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_FUNCTION.html), it's possible to **call a lambda function from redshift** with something like:
```sql
CREATE EXTERNAL FUNCTION exfunc_sum2(INT,INT)
RETURNS INT
STABLE
LAMBDA 'lambda_function'
IAM_ROLE default;
```
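Once created, the external function is invoked like any other SQL function. A sketch of the call, to be run through a psql session like the ones above (endpoint and user are placeholders):

```bash
# Invoke the external function defined above from SQL
SQL="SELECT exfunc_sum2(1, 2);"
echo "$SQL"
# e.g.: psql -h <endpoint> -U "IAM:<username>" -d template1 -p 5439 -c "$SQL"
```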
### S3
As explained in [https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html](https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html), it's possible to **read and write into S3 buckets**:
```sql
-- Read
copy table from 's3://<your-bucket-name>/load/key_prefix'
credentials 'aws_iam_role=arn:aws:iam::<aws-account-id>:role/<role-name>'
region '<region>'
options;

-- Write
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
iam_role default;
```
### Dynamo
As explained in [https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html](https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-dynamodb.html), it's possible to **get data from dynamodb**:
```sql
copy favoritemovies
from 'dynamodb://ProductCatalog'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
> [!WARNING]
> The Amazon DynamoDB table that provides the data must be created in the same AWS Region as your cluster unless you use the [REGION](https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-source-s3.html#copy-region) option to specify the AWS Region in which the Amazon DynamoDB table is located.
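A hedged sketch of the cross-region variant (the region value is an assumption; table and role are the example values from above, and `readratio` is the read-capacity throttle the AWS docs require for DynamoDB sources):

```bash
# Build a COPY statement reading a DynamoDB table located in another region
TABLE="ProductCatalog"
ROLE_ARN="arn:aws:iam::0123456789012:role/MyRedshiftRole"
REGION="us-west-2"
COPY_SQL="copy favoritemovies from 'dynamodb://$TABLE' iam_role '$ROLE_ARN' readratio 50 region '$REGION';"
echo "$COPY_SQL"
```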
### EMR
Check [https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html](https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html)
## References
- [https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a](https://gist.github.com/kmcquade/33860a617e651104d243c324ddf7992a)
{{#include ../../../banners/hacktricks-training.md}}