Kyverno is a Kubernetes-native policy management tool that lets teams validate, mutate, and generate Kubernetes configurations using policies, all without writing any code. It can help us enforce policies in the cluster, such as requiring pods to run with non-root permissions, adding a nodeSelector based on labels, and much more.
In this post, I’ll show you how to install and configure Kyverno in your Kubernetes cluster.
Requirements
- A running Kubernetes cluster. You can use any cloud provider’s Kubernetes service (such as AKS, EKS, or GKE) or even Minikube for local testing.
- kubectl – the Kubernetes CLI tool
- helm – the package manager for Kubernetes
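Before starting, you can quickly confirm both tools are available (a minimal sanity check; any recent kubectl release and Helm 3 should work):
# Print client versions to confirm the tools are on the PATH
kubectl version --client
helm version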
Installation Steps
- First, add the Kyverno Helm repository:
# Add helm repo
helm repo add kyverno https://kyverno.github.io/kyverno/
# Update helm repo
helm repo update
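Optionally, you can list the chart versions published in the repository to confirm the release you are about to install (the -l flag shows all versions, not just the latest):
# List available Kyverno chart versions (optional)
helm search repo kyverno -l | head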
- Install the Kyverno Helm chart. There are two ways to do it. The first option, without replication, is suited for testing:
helm upgrade --install --wait --timeout 15m --atomic \
--version 3.0.0-rc.1 \
--namespace kyverno --create-namespace \
--repo https://kyverno.github.io/kyverno kyverno kyverno
- The second option, with replication, is suited for production environments:
helm upgrade --install --wait --timeout 15m --atomic \
--version 3.0.0-rc.1 \
--namespace kyverno --create-namespace \
--repo https://kyverno.github.io/kyverno kyverno kyverno \
--set admissionController.replicas=3 \
--set backgroundController.replicas=2 \
--set cleanupController.replicas=2 \
--set reportsController.replicas=2
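If you prefer to keep these settings in version control instead of repeating --set flags, the same replica counts can go into a values file; the file name kyverno-values.yaml below is just an example:
# kyverno-values.yaml (example file name)
admissionController:
  replicas: 3
backgroundController:
  replicas: 2
cleanupController:
  replicas: 2
reportsController:
  replicas: 2
You would then replace the --set flags in the command above with -f kyverno-values.yaml.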
- Verify that the installation was successful:
kubectl get pods -n kyverno
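Once the pods are Running, you can also check that Kyverno registered its admission webhooks (the exact webhook names vary between chart versions):
# Kyverno registers validating and mutating webhook configurations at install time
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i kyverno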
Policies
We are going to create a couple of policies to test how Kyverno works:
- Force all pods running in the non-root namespace to run with non-root permissions:
cat << 'EOF' > non-root-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-nonroot
  annotations:
    policies.kyverno.io/title: Require runAsNonRoot
    policies.kyverno.io/category: Pod Security Standards (Restricted)
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    kyverno.io/kyverno-version: 1.6.0
    kyverno.io/kubernetes-version: "1.22-1.23"
    policies.kyverno.io/description: >-
      Containers must be required to run as non-root users. This policy ensures
      `runAsNonRoot` is set to `true`. A known issue prevents a policy such as this
      using `anyPattern` from being persisted properly in Kubernetes 1.23.0-1.23.2.
spec:
  validationFailureAction: audit
  background: true
  rules:
  - name: run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: "non-root"
    validate:
      message: >-
        Running as root is not allowed. Either the field spec.securityContext.runAsNonRoot
        must be set to `true`, or the fields spec.containers[*].securityContext.runAsNonRoot,
        spec.initContainers[*].securityContext.runAsNonRoot, and spec.ephemeralContainers[*].securityContext.runAsNonRoot
        must be set to `true`.
      anyPattern:
      - spec:
          securityContext:
            runAsNonRoot: "true"
          =(ephemeralContainers):
          - =(securityContext):
              =(runAsNonRoot): "true"
          =(initContainers):
          - =(securityContext):
              =(runAsNonRoot): "true"
          containers:
          - =(securityContext):
              =(runAsNonRoot): "true"
      - spec:
          =(ephemeralContainers):
          - securityContext:
              runAsNonRoot: "true"
          =(initContainers):
          - securityContext:
              runAsNonRoot: "true"
          containers:
          - securityContext:
              runAsNonRoot: "true"
EOF
# Create non-root namespace
kubectl create ns non-root
# Apply non-root-policy
kubectl apply -f non-root-policy.yaml
# Deploy a simple pod
kubectl -n non-root run nginx-example --image=nginx
# Inspect the pod's security context (the nginx image runs as root by default)
kubectl -n non-root get pod nginx-example -o yaml | less
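Because the policy sets validationFailureAction: audit, the pod is admitted anyway and the violation is only recorded. Assuming the reports controller is running (it is enabled by default in the chart), the result shows up in the namespace's policy report:
# List policy reports for the namespace; the FAIL counter reflects the pod running as root
kubectl get policyreport -n non-root
# Change validationFailureAction to Enforce if violating pods should be rejected at admission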
- Inject a nodeSelector based on namespace labels so pods land on nodes provisioned by Karpenter:
cat << 'EOF' > add-karpenter-nodeselector-policy.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-karpenter-nodeselector
  annotations:
    policies.kyverno.io/title: Add Karpenter nodeSelector labels
    policies.kyverno.io/category: Karpenter, EKS Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    kyverno.io/kyverno-version: 1.7.1
    policies.kyverno.io/minversion: 1.6.0
    kyverno.io/kubernetes-version: "1.27"
    policies.kyverno.io/description: >-
      Selecting the correct Node(s) provisioned by Karpenter is a way to specify
      the appropriate resource landing zone for a workload. This policy injects a
      nodeSelector map into the Pod based on the Namespace type where it is deployed.
spec:
  rules:
  - name: add-node-selector
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaceSelector:
            matchLabels:
              node: "spot-instance"
    mutate:
      patchStrategicMerge:
        spec:
          nodeSelector:
            kubernetes.io/arch: amd64
            karpenter.sh/capacity-type: spot
EOF
# Apply add-karpenter-nodeselector-policy
kubectl apply -f add-karpenter-nodeselector-policy.yaml
# Create a test namespace (any name works) and label it so the policy's namespaceSelector matches it
kubectl create ns karpenter-test
kubectl label ns karpenter-test node=spot-instance
# Deploy a simple test pod in that namespace
kubectl -n karpenter-test run nginx-example --image=nginx
# Check that the nodeSelector was injected into the pod spec
kubectl -n karpenter-test get pod nginx-example -o jsonpath='{.spec.nodeSelector}'
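Finally, you can confirm that both cluster policies were admitted and report as ready (the status columns vary slightly between Kyverno versions):
# Both policies should be listed here
kubectl get clusterpolicy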
You can find many more policy examples on the official Kyverno website.
Kyverno offers a powerful yet straightforward way to manage policies in Kubernetes. Through its native approach, it provides seamless integration without the need for additional components or extensive configurations. By incorporating Kyverno into your cluster, you can ensure a more secure and compliant Kubernetes environment.