docs: restructure docs and add tutorials (#2883)

Co-authored-by: knqyf263 <knqyf263@gmail.com>

Author: Anais Urlichs
Date: 2022-09-15 19:27:58 +01:00
Committed by: GitHub
Parent: 192fd78ca2
Commit: 20f1e5991a

32 changed files with 696 additions and 274 deletions


@@ -0,0 +1,120 @@
# Kubernetes Scanning Tutorial
## Prerequisites
To test the following commands yourself, make sure that you're connected to a Kubernetes cluster. A simple kind, Docker Desktop, or microk8s cluster will do. In our case, we'll use a one-node kind cluster.
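If you don't have a cluster at hand, a one-node kind cluster can be created with a single command, assuming the kind CLI is installed:
```
kind create cluster
```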
Pro tip: The output of the commands will be even more interesting if you have some workloads running in your cluster.
## Cluster Scanning
Trivy K8s is great for getting an overview of all the vulnerabilities and misconfiguration issues in your cluster, or for scanning specific workloads that are running in it. You can use the Trivy K8s command either on your own local cluster or in your CI/CD pipeline post-deployment.
The Trivy K8s command is part of the Trivy CLI:
With the following command, we can scan our entire Kubernetes cluster for vulnerabilities and get a summary of the scan:
```
trivy k8s --report=summary
```
To get detailed information for all your resources, just replace `summary` with `all`:
```
trivy k8s --report=all
```
However, we recommend displaying all information only when you scan a specific namespace or resource, since the full output can otherwise be overwhelming.
Furthermore, we can specify the namespace that Trivy should scan to focus the scan result on specific resources:
```
trivy k8s -n kube-system --report=summary
```
Again, if you'd like to receive additional details, use the `--report=all` flag:
```
trivy k8s -n kube-system --report=all
```
As with vulnerability scanning, we can also filter in-cluster security issues by severity:
```
trivy k8s --severity=CRITICAL --report=summary
```
Note that you can use any of the Trivy flags on the Trivy K8s command.
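For instance, the following sketch combines a summary report with Trivy's standard `--severity` and `--ignore-unfixed` flags to surface only fixable high and critical issues (assuming these flags behave as they do when scanning images):
```
trivy k8s --severity=HIGH,CRITICAL --ignore-unfixed --report=summary
```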
With the Trivy K8s command, you can also scan specific workloads that are running within your cluster, such as our deployment:
```
trivy k8s -n app --report=summary deployments/react-application
```
## Trivy Operator
The Trivy K8s command follows an imperative model: we wouldn't want to manually scan each resource across different environments. The larger the cluster and the more workloads it runs, the more error-prone this process becomes. With the Trivy Operator, we can automate the scanning process after deployment.
The Trivy Operator follows the Kubernetes Operator Model. Operators automate human actions, and the results of their tasks are saved as custom resources within your cluster.
This has several benefits:
- The Trivy Operator installs CRDs (custom resource definitions) in our cluster. As a result, all our resources, including our security scanner and its scan results, are Kubernetes resources. This makes it much easier to integrate the Trivy Operator directly into our existing processes, such as connecting Trivy with a monitoring system like Prometheus.
- The Trivy Operator will automatically scan your resources every six hours. You can set up automatic alerting in case new critical security issues are discovered; see the sketch after this list.
- The CRDs can be both machine- and human-readable, depending on which applications consume them. This allows for more versatile applications of the Trivy Operator.
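As an illustration of the alerting point above, here is a minimal Prometheus alerting rule sketch. It assumes you scrape the operator's metrics endpoint and that a gauge named trivy_image_vulnerabilities with a severity label is exposed; treat both names as assumptions to verify against your operator version:
```
groups:
  - name: trivy-operator-alerts
    rules:
      - alert: NewCriticalVulnerabilities
        # Metric name is an assumption; check the operator's /metrics endpoint
        expr: sum(trivy_image_vulnerabilities{severity="Critical"}) > 0
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: Trivy Operator reports critical vulnerabilities in the cluster
```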
There are several ways to install the Trivy Operator in your cluster. In this guide, we're going to use the Helm installation based on the [following documentation](../../docs/kubernetes/operator/index.md).
Make sure that you have the [Helm CLI installed](https://helm.sh/docs/intro/install/).
Next, run the following commands.
First, we are going to add the Aqua Security Helm repository to our Helm repository list:
```
helm repo add aqua https://aquasecurity.github.io/helm-charts/
```
Then, we will update all of our Helm repositories. Even if you have only just added the repository, this is generally good practice and ensures you have access to the latest chart versions:
```
helm repo update
```
Lastly, we can install the Trivy Operator Helm chart into our cluster:
```
helm install trivy-operator aqua/trivy-operator \
--namespace trivy-system \
--create-namespace \
--set="trivy.ignoreUnfixed=true" \
--version v0.0.3
```
You can make sure that the operator is installed correctly via the following command:
```
kubectl get deployment -n trivy-system
```
Trivy will automatically start scanning your Kubernetes resources.
For instance, you can view vulnerability reports with the following command:
```
kubectl get vulnerabilityreports --all-namespaces -o wide
```
And then you can access the details of a security scan:
```
kubectl describe vulnerabilityreports <name of one of the above reports>
```
The same process can be applied to access ConfigAuditReports:
```
kubectl get configauditreports --all-namespaces -o wide
```


@@ -0,0 +1,125 @@
# Installing the Trivy Operator through GitOps
This tutorial shows you how to install the Trivy Operator through GitOps platforms, namely ArgoCD and FluxCD.
## ArgoCD
Make sure to have [ArgoCD installed](https://argo-cd.readthedocs.io/en/stable/getting_started/) and running in your Kubernetes cluster.
You can either deploy the Trivy Operator through the argocd CLI or by applying a Kubernetes manifest.
ArgoCD command:
```
> kubectl create ns trivy-system
> argocd app create trivy-operator --repo https://github.com/aquasecurity/trivy-operator --path deploy/helm --dest-server https://kubernetes.default.svc --dest-namespace trivy-system
```
Note that this installation is based directly on our official Helm chart. If you want to change any of the values, we suggest creating a separate values.yaml file.
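For example, a minimal values.yaml sketch, reusing the `trivy.ignoreUnfixed` setting that also appears in the Application manifest below (any other chart value can be overridden the same way):
```
trivy:
  ignoreUnfixed: true
```
With the argocd CLI, a values file stored alongside the chart in the repository can then be referenced via the `--values` flag of `argocd app create`.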
Kubernetes manifest `trivy-operator.yaml`:
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: trivy-operator
namespace: argocd
spec:
project: default
source:
chart: trivy-operator
repoURL: https://aquasecurity.github.io/helm-charts/
targetRevision: 0.0.3
helm:
values: |
trivy:
ignoreUnfixed: true
destination:
server: https://kubernetes.default.svc
namespace: trivy-system
syncPolicy:
automated:
prune: true
selfHeal: true
```
Then apply the Kubernetes manifest. If you have the manifest locally, you can use the following kubectl command:
```
> kubectl apply -f trivy-operator.yaml
application.argoproj.io/trivy-operator created
```
If you have the manifest in a Git repository, you can apply it to your cluster through the following command:
```
> kubectl apply -n argocd -f https://raw.githubusercontent.com/AnaisUrlichs/argocd-starboard/main/starboard/argocd-starboard.yaml
```
The latter approach allows you to make changes to the YAML manifest in Git, which ArgoCD will register automatically.
Once deployed, tell ArgoCD to sync the application so that the actual state matches the desired state:
```
argocd app sync trivy-operator
```
Now you can see the deployment in the ArgoCD UI. Have a look at the ArgoCD documentation to learn how to access the UI.
![ArgoCD UI after deploying the Trivy Operator](../../imgs/argocd-ui.png)
Note that ArgoCD is unable to show the Trivy CRDs as synced.
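If the permanently out-of-sync resources are noisy in your setup, one possible workaround (a sketch, not required for the installation) is to have ArgoCD ignore drift on them via `ignoreDifferences` in the Application spec, assuming the drift sits in the status fields:
```
spec:
  ignoreDifferences:
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      jsonPointers:
        - /status
```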
## FluxCD
Make sure to have [FluxCD installed](https://fluxcd.io/docs/installation/#install-the-flux-cli) and running in your Kubernetes cluster.
You can either deploy the Trivy Operator through the Flux CLI or by applying a Kubernetes manifest.
Flux command:
```
> kubectl create ns trivy-system
> flux create source helm trivy-operator --url https://aquasecurity.github.io/helm-charts --namespace trivy-system
> flux create helmrelease trivy-operator --chart trivy-operator \
    --source HelmRepository/trivy-operator \
    --chart-version 0.0.3 \
    --namespace trivy-system
```
Kubernetes manifest `trivy-operator.yaml`:
```
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: trivy-operator
namespace: flux-system
spec:
interval: 60m
url: https://aquasecurity.github.io/helm-charts/
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: trivy-operator
namespace: trivy-system
spec:
chart:
spec:
chart: trivy-operator
sourceRef:
kind: HelmRepository
name: trivy-operator
namespace: flux-system
version: 0.0.5
interval: 60m
```
You can then apply the file to your Kubernetes cluster:
```
kubectl apply -f trivy-operator.yaml
```
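To confirm that Flux has reconciled the release, you can also query Flux directly:
```
flux get helmreleases -n trivy-system
```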
## After the installation
After the installation, check that the Trivy Operator is running in the trivy-system namespace:
```
kubectl get deployment -n trivy-system
```


@@ -0,0 +1,114 @@
# Attesting Image Scans With Kyverno
This tutorial is based on the following blog post by Chip Zoller: [Attesting Image Scans With Kyverno](https://neonmirrors.net/post/2022-07/attesting-image-scans-kyverno/)
This tutorial details how to:
- Verify that a container image has a vulnerability scan attestation with Kyverno
### Prerequisites
1. [Attestation of the vulnerability scan uploaded][vuln-attestation]
2. A running Kubernetes cluster that kubectl is connected to
### Kyverno Policy to check attestation
The following policy ensures that the attestation is no older than 168 hours (one week):
`vuln-attestation.yaml`
{% raw %}
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: check-vulnerabilities
spec:
validationFailureAction: enforce
webhookTimeoutSeconds: 10
failurePolicy: Fail
rules:
- name: not-older-than-one-week
match:
any:
- resources:
kinds:
- Pod
verifyImages:
- imageReferences:
- "CONTAINER-REGISTRY/*:*"
attestations:
- predicateType: cosign.sigstore.dev/attestation/vuln/v1
conditions:
- all:
- key: "{{ time_since('','{{metadata.scanFinishedOn}}','') }}"
operator: LessThanOrEquals
value: "168h"
```
{% endraw %}
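For context, the `time_since` condition above reads the `scanFinishedOn` timestamp from the attested predicate. A trimmed, illustrative sketch of such a cosign vulnerability attestation predicate (field names follow the cosign vuln attestation spec; values are made up):
```
{
  "scanner": {
    "uri": "pkg:github/aquasecurity/trivy",
    "version": "0.31.2"
  },
  "metadata": {
    "scanStartedOn": "2022-09-15T17:00:00Z",
    "scanFinishedOn": "2022-09-15T17:00:30Z"
  }
}
```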
### Apply the policy to your Kubernetes cluster
Ensure that you have Kyverno already deployed and running on your cluster, for instance through the Kyverno Helm chart.
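If Kyverno is not installed yet, a minimal Helm-based install sketch looks like this (repository URL and chart name as documented by Kyverno; verify against the current Kyverno docs):
```
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```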
Next, apply the above policy:
```
kubectl apply -f vuln-attestation.yaml
```
To ensure that the policy works, we can deploy an example deployment file referencing our container image:
`deployment.yaml`
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: cns-website
namespace: app
spec:
replicas: 2
selector:
matchLabels:
run: cns-website
template:
metadata:
labels:
run: cns-website
spec:
containers:
- name: cns-website
image: docker.io/anaisurlichs/cns-website:0.0.6
ports:
- containerPort: 80
imagePullPolicy: Always
resources:
limits:
memory: 512Mi
cpu: 200m
securityContext:
allowPrivilegeEscalation: false
```
Once we apply the deployment, it should pass since our attestation is available:
```
kubectl apply -f deployment.yaml -n app
deployment.apps/cns-website created
```
However, if we try to deploy any other container image, our deployment will fail. We can verify this by replacing the image referenced in the deployment with `docker.io/anaisurlichs/cns-website:0.0.5` and applying the deployment:
```
kubectl apply -f deployment-two.yaml
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "cns-website", Namespace: "app"
for: "deployment-two.yaml": admission webhook "mutate.kyverno.svc-fail" denied the request:
resource Deployment/app/cns-website was blocked due to the following policies
check-image:
autogen-check-image: |
failed to verify signature for docker.io/anaisurlichs/cns-website:0.0.5: .attestors[0].entries[0].keys: no matching signatures:
```
[vuln-attestation]: ../signing/vuln-attestation.md