Kyverno for Kubernetes Policy Management: Part 2

Asish M Madhu
Published in DevOps for you
8 min read · Mar 31, 2024

“Policies: Sculpting Kubernetes Outlooks”


This is a continuation of my previous article (Part 1) on policy management using Kyverno. You can read that article here.

Before we start, I would like to refresh the concept of Kyverno. Kyverno is a project designed to define policies as Kubernetes resources. It acts as a dynamic admission controller that receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to mutate, validate, or reject requests. Refer to Part 1 for more details.

A Kyverno policy is basically a collection of rules. A rule consists of the following two items:

  1. A match declaration, with an optional exclude declaration. In this declaration we can cover K8s Resource Kinds, Resource Names, Labels, Annotations, Namespaces, Roles, Users, Groups, Service Accounts etc.
  2. Exactly one of the following declarations:
    a. Validate
    b. Mutate
    c. Generate
    d. Verify Images

We will cover one interesting policy for each of these. Policies can be defined at either the namespace level (Policy) or the cluster level (ClusterPolicy).

Settings

Some of the major policy settings are as follows:

  • background: Use this setting to control scanning of existing resources for potential violations and the generation of policy reports. The default value is “true”.
  • applyRules: Use this setting to define how many rules of the parent policy are applied to a matching resource. The default value is “All”, which applies all matching rules. If set to “One”, only the first matching rule is applied.
  • validationFailureAction: This setting defines the action taken on a policy violation. It can be either “Audit” or “Enforce”. The default value is “Audit”.
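As a minimal sketch, these settings sit at the top level of the policy spec. The policy name is hypothetical, and the rules list is omitted for brevity:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: example-settings          # hypothetical policy name
spec:
  background: true                # also scan existing resources and report violations
  applyRules: One                 # stop after the first matching rule
  validationFailureAction: Audit  # report violations without blocking requests
  rules: []                       # rules omitted for brevity
```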

Resources

Selecting a resource is controlled using match and exclude filters. A rule can use either of these two elements:

any: A logical OR is performed on resources.
all: A logical AND is performed on resources.

After the logical operator is chosen, we define how resources are filtered. Below are the resource filters:

resources: Select resources by name, namespace, kind, labels, annotations, etc.
subject: Filter based on users, groups, service accounts, etc.
roles: Filter based on roles in a namespace.
clusterRoles: Filter based on cluster-wide roles.

We can also use wildcards in API group/version/kind expressions, for example:
networking.k8s.io/v1/*Policy

Subresources can also be specified using either a / or a . separator, for example pods/log or pods.log.

A match declaration is where we define the resources. There must be at least one match declaration, backed with the logical any/all expression.

spec:
  rules:
  - name: no-LoadBalancer
    match:
      any:
      - resources:
          kinds:
          - Service
          names:
          # Here it performs OR between dev1 and staging-*
          - dev1
          - "staging-*"
          operations:
          - CREATE
      # An example of a combination of logical OR and AND. Here it performs
      # OR between any[0].resources and any[1].resources, and
      # logical AND between kinds and selector.
      - resources:
          kinds:
          # Logical OR
          - Deployment
          - StatefulSet
          namespaces:
          - prod
          operations:
          - CREATE
          # Logical AND between kinds and selector, since they are peers
          selector:
            matchLabels:
              app: critical
      # Example of a resource defined in group/version/kind format
      - resources:
          kinds:
          - networking.k8s.io/v1/NetworkPolicy
          selector:
            matchLabels:
              app: critical
      # Example of a resource with a namespace selector
      - resources:
          kinds:
          - DaemonSet
          namespaceSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - filebeat
              - fluentd
    # The matched resources are validated to contain a label 'owner'
    validate:
      message: "The label `owner` is required."
      pattern:
        metadata:
          labels:
            owner: "?*"
    exclude:
      any:
      - clusterRoles:
        - cluster-admin

Validating Rules Policy

This is the most common use case for Kyverno: validating a given set of rules for a k8s resource before it is created or updated. How Kyverno responds to a violation is determined by the validationFailureAction field (Enforce/Audit).

We frequently encounter situations where users create pods without appropriate labels, resulting in chaos and difficulties in pod management. Let’s establish a validation rule requiring pods to be created only if proper labels are defined in their definitions. In this instance, we enforce a rule stipulating that for any created pod, a label with the key ‘team’ must be present.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            team: "?*"

Let’s test this policy

➜   kubectl apply -f custom-cluster-policy.yaml
clusterpolicy.kyverno.io/require-labels created

First we try creating a pod without the label

➜  kubectl run nginx --image=nginx
Error from server: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Pod/default/nginx was blocked due to the following policies

require-labels:
check-team: 'validation error: label ''team'' is required. rule check-team failed
at path /metadata/labels/team/'

Then we try with the label as per our policy.

➜  kubectl run nginx --image=nginx -l team=dev
pod/nginx created

➜  kubectl get pods -l team=dev
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 56s

The pod creation is allowed after the label is included. Now let’s go to the next type of Policy in Kyverno.

Mutate Rules Policy

Mutate rules offer the capability to adjust a resource creation or update request based on the security standards outlined in the policy. Resource mutation takes place prior to validation, so it’s essential that the validation rules align with the changes made by the mutation section. To mutate existing resources in addition to those subject to AdmissionReview requests, use mutateExisting policies.
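As a minimal sketch of a mutate rule (the policy name and the default label value are hypothetical), a strategic merge patch can add a default label to pods that are missing it:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label   # hypothetical policy name
spec:
  rules:
  - name: add-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # The +( ) "add-if-not-present" anchor applies the value
            # only when the label is not already set on the resource.
            +(team): unknown
```

With this in place, a pod created without a team label would get team=unknown, while pods that already carry the label are left untouched.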

We are going to discuss an interesting scenario involving the misuse of emptyDir with medium set to Memory. This is commonly employed when a pod needs space for writing temporary files. If a pod needs better I/O response than disk can provide, it can use Memory as the medium. But we should have policy restrictions to limit such usage. We will first demo the issues with an unrestricted memory-backed emptyDir volume.
Below is a sample pod definition for this scenario.

# sample_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-emptydir
  labels:
    team: dev
spec:
  containers:
  - image: alpine
    imagePullPolicy: IfNotPresent
    name: sample-emptydir
    command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
    volumeMounts:
    - mountPath: /write
      name: write
  volumes:
  - name: write
    emptyDir:
      medium: Memory

Let’s create this pod

➜  kubectl apply -f sample_pod.yaml
pod/sample-emptydir created
➜  kubectl exec -it sample-emptydir -- sh -c "df -h /write"
Filesystem Size Used Available Use% Mounted on
tmpfs 7.8G 0 7.8G 0% /write

As you can see, the volume has been allocated 7.8G, which is 50% of the actual memory of the host system. If this pod starts writing to this volume, it might impact other services on the host.

Let’s test that.

➜  kubectl exec -it sample-emptydir -- sh -c "dd if=/dev/random of=/write/samplefile1 bs=100 count=8000000"

I was running the cluster with Kind on my laptop, and the activity monitor showed the “qemu-system-*” process consuming almost all the memory.

My Kind k8s cluster started timing out and became unresponsive. I had to kill the process and reconnect to the cluster.

Now let’s define a policy that sets a default size limit for pods that use Memory as the medium for their emptyDir volumes.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-emptydir-sizelimit
spec:
  rules:
  - name: mutate-emptydir
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      foreach:
      - list: "request.object.spec.volumes[]"
        preconditions:
          all:
          - key: "{{element.keys(@)}}"
            operator: AnyIn
            value: emptyDir
          - key: "{{element.emptyDir.sizeLimit || ''}}"
            operator: Equals
            value: ''
          - key: "{{element.emptyDir.medium}}"
            operator: AnyIn
            value: Memory
        patchesJson6902: |-
          - path: "/spec/volumes/{{elementIndex}}/emptyDir/sizeLimit"
            op: add
            value: 100Mi

The foreach step loops through the list “request.object.spec.volumes[]” and selects each volume that has an emptyDir key, no sizeLimit, and a medium of “Memory”. For each such volume, it patches sizeLimit to 100Mi. Recreating the sample pod (here named sample-emptydir2) with the policy in place shows the mounted volume is now capped:

➜  kubectl exec -it sample-emptydir2 -- sh -c "df -h /write"
Filesystem Size Used Available Use% Mounted on
tmpfs 100.0M 0 100.0M 0% /write

But if we create a pod with a predefined sizeLimit, the policy will honor it.
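For example, a volume fragment like the following (a sketch, with an illustrative 256Mi limit) already carries a sizeLimit, so the rule's precondition does not match and no patch is applied:

```yaml
# Volume fragment with an explicit sizeLimit; the mutate rule's
# precondition ("sizeLimit is empty") fails, so the value is kept as-is.
volumes:
- name: write
  emptyDir:
    medium: Memory
    sizeLimit: 256Mi
```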

Moving on, let’s try the next type of policy with Kyverno — Generate Rules.

Generate Rules Policy

This type of policy is very interesting, as it provides the capability to generate additional resources in response to conditions or events, such as the creation, update, or deletion of resources or policies. It’s helpful for setting up extra resources, such as RoleBindings or NetworkPolicies for a Namespace, or for automating tasks that would otherwise need separate tools or scripts.

There are two types of generate rules:
a. Data Source: The source resource is defined inline in the policy as a template, and it is created whenever the rule's conditions are met.

In the below example, we create a default ResourceQuota in every newly created namespace.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota
spec:
  rules:
  - name: generate-resourcequota
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-resourcequota
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:
            requests.cpu: '4'
            requests.memory: '16Gi'
            limits.cpu: '4'
            limits.memory: '16Gi'

b. Clone Source: When the source needs to be taken from an already existing resource in the cluster, a clone object is used. For example, we may want to clone a secret (e.g. an image pull secret) to all namespaces, so that any change to the source secret is automatically propagated to the copies.

In the below example, we have an admin namespace where we store credentials for image repositories. The secret needs to be copied into every application namespace carrying the label application: "true".

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-mysecret
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
          selector:
            matchLabels:
              application: "true"
    generate:
      apiVersion: v1
      kind: Secret
      name: registry-credentials
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: admin
        name: regcred

Some other use cases for generate rules are as follows:

  1. I want to create default network policies, resource quotas, role bindings, etc., when a new namespace is created by a user.
  2. When a secret or a config map is updated, I want pods that consume these in their volumes to reload the files to incorporate the changes.

Now let’s jump to the next type of Kyverno Policy which can help verify source images.

Verify Image Rules

The purpose of this type of policy is to check and verify image signatures and software supply chain attestations. These can be verified using third-party tools like Notary or Sigstore. Notary verifies the X.509 certificate of the image against the CA certificate used to sign it. Similarly, with Sigstore, the policy rule fails if the required signatures are not found in the OCI registry, or if the image was not signed by the matching attestors.
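As a minimal sketch of a verifyImages rule (the policy name, image repository, and the Sigstore/cosign public key below are all placeholders), a rule can require a valid signature before a pod is admitted:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature     # hypothetical policy name
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "ghcr.io/myorg/*"           # placeholder image repository
      attestors:
      - entries:
        - keys:
            # Placeholder cosign public key; replace with your own
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              ...
              -----END PUBLIC KEY-----
```

With this rule in place, pods pulling images from the matched repository are admitted only if the image signature verifies against the configured key.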

Conclusion

In conclusion, Kyverno stands out as a powerful and flexible policy engine that empowers organizations to enforce policies declaratively within Kubernetes, promoting security, compliance, and operational efficiency. Its intuitive design, policy-as-code approach, and seamless integration make it a valuable tool for teams seeking to streamline their Kubernetes management and ensure consistent best practices across their clusters.

Reference: https://kyverno.io/

I hope this article was helpful and adds value to your journey to implement security policies. If you liked my article, you can follow my publication for future articles, which give me the motivation to write more. — https://devopsforyou.com/


I enjoy exploring various open-source tools, technologies, and ideas related to cloud computing, DevOps, and SRE, and sharing my experience and understanding of these subjects.