Kyverno for Kubernetes Policy Management: Part 1

Asish M Madhu
Published in DevOps for you
7 min read · Jul 5, 2023


“Policies: The Blueprint of Kubernetes Mindsets”


Kyverno is a CNCF incubating project designed to define policies as Kubernetes resources. It can validate, mutate, generate, or clean up any resource in a k8s cluster. Key features include verifying container images and inspecting image metadata, blocking non-conformant resources using admission controllers, reporting policy violations, defining policy exceptions, and more.

Admission Controllers

Before we deep dive into Kyverno, let’s understand what an admission controller is. An admission controller is a component of the k8s API server that intercepts and processes requests to the API server after authentication and authorization, but before the object is persisted to the etcd data store. Admission controllers can thus enforce custom validation rules and policies to ensure that objects such as pods, services, and deployments being created, updated, or deleted comply with the cluster’s configuration and security settings. There are two types of admission controllers:

  1. Validating admission controller: These controllers validate incoming requests and accept or reject them based on the predefined rules. If a request fails validation, it is not allowed to proceed further.
  2. Mutating admission controller: These controllers intercept requests coming to the API server and can modify the content of the object being created or updated. They can add, remove, or modify fields in the request to ensure it conforms to specific policies or defaults.

To enable admission controllers, the Kubernetes API server should be started with the --enable-admission-plugins flag.
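For example, a kube-apiserver invocation might enable a set of plugins like this (the plugin list below is illustrative; the defaults vary by Kubernetes version and distribution):

```shell
kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
  ...
```

The MutatingAdmissionWebhook and ValidatingAdmissionWebhook plugins are the ones that make dynamic admission controllers such as Kyverno possible.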

As I mentioned, the admission controller is part of the API server component itself. Kubernetes extends this capability through dynamic admission controllers. With dynamic admission controllers we can define our own custom policies in addition to the static admission controllers baked into the API server. A dynamic admission controller registers itself with the API server through webhook configurations and typically uses the Custom Resource Definition (CRD) mechanism to define its own configuration in a k8s cluster. This enables dynamic admission controllers to be deployed and managed as separate Kubernetes resources.

Below is a basic workflow for a dynamic admission controller.

  1. User sends modification requests to the API server.
  2. API server receives the user request, performs authentication, authorization logic and then runs through the static admission controller that is compiled into the API server. These controllers handle standard validation and checks such as ensuring the resource’s fields are correctly formatted and required fields are present. At this stage, static admission controllers have the first opportunity to approve or reject the request.
  3. The API server then checks whether any dynamic admission controllers are enabled. If they are, the API server looks up the registered webhook configurations (and any associated custom resources) that describe the dynamic admission controllers.
  4. If a matching registration is found, the request is forwarded to the dynamic admission controller via a webhook call, and the controller can perform additional validation, mutation, or other processing based on the custom configuration defined in its CRDs.
  5. The dynamic admission controller decides to accept or reject the user’s request based on its logic and configured rules. If it is a mutating admission controller, it can also modify the incoming request payload.
  6. After the dynamic admission controller has made its decision, the API server proceeds to post-admission processing. At this stage, any remaining built-in, static admission controllers and other core Kubernetes components validate the request. If the request passes all checks and validations, it is persisted to the etcd data store, and the desired resource is created, updated, or deleted within the Kubernetes cluster.
  7. The API server sends a response back to the user, indicating whether the resource change was successful or not. If the request was rejected, the response includes an error message explaining the reason for rejection.
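To make step 3 concrete, a dynamic admission controller registers itself via a webhook configuration resource. A minimal sketch of a ValidatingWebhookConfiguration (all names, namespaces, and paths here are hypothetical) looks like this:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook    # hypothetical name
webhooks:
  - name: validate.example.com
    clientConfig:
      service:
        name: example-webhook-svc     # service fronting the controller
        namespace: example
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Kyverno creates and manages resources like this for you during installation, which is why we never have to write one by hand.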

About Kyverno

Kyverno is a dynamic admission controller that receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server, and applies matching policies to mutate resources or reject requests.

Kyverno can create policy reports in the cluster and generates events during policy enforcement.
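Once Kyverno is running and policies are in place, these reports and events can be inspected with kubectl (the report resources come from the wgpolicyk8s.io CRDs that the Kyverno install manifest creates):

```shell
kubectl get policyreports -A        # namespaced policy reports
kubectl get clusterpolicyreports    # cluster-scoped policy reports
kubectl get events -n kyverno       # events emitted during enforcement
```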

Installing Kyverno

We can install Kyverno from the latest release manifest:

kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml
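Alternatively, Kyverno can be installed via its Helm chart; a typical invocation looks like the following (verify the chart details against the current Kyverno docs):

```shell
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```

The manifest approach is simpler for a quick test cluster like ours, so that is what we will use here.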

I am going to install Kyverno on a local k8s kind cluster. [ Refer: Article on Kind ]

Kind config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  PodSecurity: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker

➜ kind create cluster --config kind-config --name=k8s-sample
➜ kind get clusters
k8s-sample

➜ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s-sample-control-plane   Ready    control-plane   16d   v1.26.3
k8s-sample-worker          Ready    <none>          16d   v1.26.3
k8s-sample-worker2         Ready    <none>          16d   v1.26.3
k8s-sample-worker3         Ready    <none>          16d   v1.26.3

Let’s install Kyverno.

➜ kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml
namespace/kyverno created
serviceaccount/kyverno-admission-controller created
serviceaccount/kyverno-background-controller created
serviceaccount/kyverno-cleanup-controller created
serviceaccount/kyverno-cleanup-jobs created
serviceaccount/kyverno-reports-controller created
configmap/kyverno created
configmap/kyverno-metrics created
customresourcedefinition.apiextensions.k8s.io/admissionreports.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/backgroundscanreports.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/cleanuppolicies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusteradmissionreports.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusterbackgroundscanreports.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clustercleanuppolicies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policyexceptions.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/updaterequests.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusterpolicyreports.wgpolicyk8s.io created
customresourcedefinition.apiextensions.k8s.io/policyreports.wgpolicyk8s.io created
clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller created
clusterrole.rbac.authorization.k8s.io/kyverno:admission-controller:core created
clusterrole.rbac.authorization.k8s.io/kyverno:background-controller created
clusterrole.rbac.authorization.k8s.io/kyverno:background-controller:core created
clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller created
clusterrole.rbac.authorization.k8s.io/kyverno:cleanup-controller:core created
clusterrole.rbac.authorization.k8s.io/kyverno-cleanup-jobs created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policies created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policies created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:policyreports created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:policyreports created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:reports created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:reports created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:admin:updaterequests created
clusterrole.rbac.authorization.k8s.io/kyverno:rbac:view:updaterequests created
clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller created
clusterrole.rbac.authorization.k8s.io/kyverno:reports-controller:core created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:background-controller created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller created
clusterrolebinding.rbac.authorization.k8s.io/kyverno-cleanup-jobs created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:reports-controller created
role.rbac.authorization.k8s.io/kyverno:admission-controller created
role.rbac.authorization.k8s.io/kyverno:background-controller created
role.rbac.authorization.k8s.io/kyverno:cleanup-controller created
role.rbac.authorization.k8s.io/kyverno:reports-controller created
rolebinding.rbac.authorization.k8s.io/kyverno:admission-controller created
rolebinding.rbac.authorization.k8s.io/kyverno:background-controller created
rolebinding.rbac.authorization.k8s.io/kyverno:cleanup-controller created
rolebinding.rbac.authorization.k8s.io/kyverno:reports-controller created
service/kyverno-svc created
service/kyverno-svc-metrics created
service/kyverno-background-controller-metrics created
service/kyverno-cleanup-controller created
service/kyverno-cleanup-controller-metrics created
service/kyverno-reports-controller-metrics created
deployment.apps/kyverno-admission-controller created
deployment.apps/kyverno-background-controller created
deployment.apps/kyverno-cleanup-controller created
deployment.apps/kyverno-reports-controller created
cronjob.batch/kyverno-cleanup-admission-reports created
cronjob.batch/kyverno-cleanup-cluster-admission-reports created

The manifest creates Kyverno’s CRDs, RBAC objects, services, and controller deployments in the “kyverno” namespace.

➜ kubectl get pods -n kyverno
NAME                                             READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-6bbdc7db58-jkdmh    1/1     Running   0          2m59s
kyverno-background-controller-58b95559d7-pvfpg   1/1     Running   0          2m59s
kyverno-cleanup-controller-bfffd8845-x56lx       1/1     Running   0          2m59s
kyverno-reports-controller-69c88b4c8f-x6cxg      1/1     Running   0          2m59s

Let’s start using it. Imagine a common security concern in our k8s clusters: thousands of apps run in our organisation, and we have no control over which image registries their images are pulled from. Of course we can run image scanning tools in the cluster, but wouldn’t it be nice to enforce a policy that all container images must be pulled from an allowed list of image registries?

We will create a policy for this as below:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: allowed-image-repos
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: permitted-repos
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: >-
          All images in this Pod must come from an authorized repository.
        deny:
          conditions:
            all:
              #- key: "{{ images.[containers, initContainers, ephemeralContainers][].*.name[] }}"
              - key: "{{ request.object.spec.containers[*].image }}"
                operator: AnyNotIn
                value:
                  - quay.io/*
                  - gcr.io
                  - myregistry.azurecr.io

The policy validates Pod resources and denies a Pod if any value of its “spec.containers[*].image” field does not match one of the registries I have listed. Let’s apply this policy and try to deploy some nginx pods.
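To see how the AnyNotIn operator behaves, here is a small Python emulation of the deny logic. This is a simplified sketch only: Kyverno actually evaluates JMESPath expressions with its own wildcard semantics, so fnmatch is an approximation.

```python
from fnmatch import fnmatch

# Allowed registry patterns from the ClusterPolicy above.
ALLOWED = ["quay.io/*", "gcr.io", "myregistry.azurecr.io"]

def any_not_in(images, allowed):
    """Deny when ANY image fails to match EVERY allowed pattern (AnyNotIn)."""
    return any(
        not any(fnmatch(image, pattern) for pattern in allowed)
        for image in images
    )

print(any_not_in(["nginx"], ALLOWED))                                 # True  -> denied
print(any_not_in(["quay.io/nginx/nginx-ingress-operator"], ALLOWED))  # False -> allowed
```

A bare image name like “nginx” matches none of the patterns, so the request is denied; an image under quay.io matches “quay.io/*” and passes.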

➜  kubectl create -f allowed-registry.yaml
clusterpolicy.kyverno.io/allowed-image-repos created

➜ kubectl get clusterpolicy
NAME                  BACKGROUND   VALIDATE ACTION   READY   AGE   MESSAGE
allowed-image-repos   false        Enforce           True    29s   Ready

➜ kubectl create namespace policy-test
namespace/policy-test created

➜ kubectl create deploy -n policy-test nginx --image=nginx
error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/policy-test/nginx was blocked due to the following policies

allowed-image-repos:
autogen-permitted-repos: All images in this Pod must come from an authorized repository.

➜ kubectl get pods -n policy-test
No resources found in policy-test namespace.

It did not allow me to create the Deployment, since the bare “nginx” image resolves to Docker Hub (docker.io), which is not in our allowed list. Note that although the policy matches Pods, Kyverno auto-generates equivalent rules for Pod controllers such as Deployments, which is why the Deployment itself was blocked (hence the “autogen-permitted-repos” rule in the message).

Now let’s try to install an image from quay.io, which is allowed as per the policy.

➜  kubectl create deploy -n policy-test nginx --image=quay.io/nginx/nginx-ingress-operator
deployment.apps/nginx created
➜ kubectl get deploy -n policy-test
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           15s
➜ kubectl get pods -n policy-test
NAME                     READY   STATUS    RESTARTS   AGE
nginx-68c4ffd545-m9bln   1/1     Running   0          21s

Okay, so it permitted creation of the Deployment, ReplicaSet, and Pods. This was an example of a validating policy. The other type is a mutating policy, which can apply defaults instead of denying a request, and can even add business logic based on conditions. I am going to cover that in Part 2 of this series.

Conclusion

Kyverno is a policy engine designed specifically for Kubernetes. In this article, I shared some details about admission controllers and their workflow. Dynamic admission controllers like Kyverno extend the native Kubernetes policy capabilities. I shared a practical example of a validating policy, but the real fun comes with mutating policies, which we will cover in Part 2 of this Kyverno series. We will go through matching k8s resources, conditions on those resources, managing loops, validation actions, and more in the policy definition, along with mutating policies.

I hope this article was helpful and adds value to your journey to implement security policies. If you liked my article, you can follow my publication for future articles; that gives me the motivation to write more: https://devopsforyou.com/


I enjoy exploring various open-source tools, technologies, and ideas related to Cloud Computing, DevOps, and SRE, and sharing my experience and understanding of the subject.