In my How to Install and Use Starboard to Protect Your Kubernetes Cluster post, I first installed Starboard and reviewed a vulnerability scan report. After reviewing the results, I then tried to address some of the vulnerabilities in my MySQL deployment. In addition to vulnerability scans, Starboard can also conduct configuration audits of your Kubernetes deployment. I wanted to review those next and walk through a Kubernetes audit report action plan.
In working through an action plan, you should first address any CRITICAL results. After mitigating those, work through the HIGH results, then MEDIUM, then LOW, and so on.
Getting the Configuration Audit Report
You can generate a summary report by running the command below and reviewing the results:
% kubectl get configauditreports --all-namespaces -o wide
NAMESPACE   NAME                               SCANNER     AGE   CRITICAL   HIGH   MEDIUM   LOW
ctesting    statefulset-prod-mysql             Starboard   14d   0          0      5        8
ctesting    statefulset-psql-test-postgresql   Starboard   14d   0          0      6        6
...
default     replicaset-ubuntu-65b8d8bdcb       Starboard   14d   0          0      6        8
...
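If you only care about a single namespace, the same command can be scoped with kubectl's standard -n flag instead of --all-namespaces:
% kubectl get configauditreports -n ctesting -o wide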
Let’s assume you have a much larger Kubernetes deployment and you want to start by finding any CRITICAL items in the report. I suggest switching to JSON output so that you can use jq to parse the results for specifics. Let’s first look at a single result to figure out how to search for what we want.
I start by running my previous kubectl command, changing the output format from wide to json, and then piping (|) the output into jq.
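In other words, something like this, where jq’s '.' filter simply pretty-prints the whole document (output truncated below):
% kubectl get configauditreports --all-namespaces -o json | jq '.'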
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "aquasecurity.github.io/v1alpha1",
"kind": "ConfigAuditReport",
"metadata": {
"creationTimestamp": "2023-11-09T15:52:39Z",
"generation": 7,
"labels": {
"plugin-config-hash": "669cfcf6ff",
"resource-spec-hash": "66865f76d9",
"starboard.resource.kind": "StatefulSet",
"starboard.resource.name": "prod-mysql",
"starboard.resource.namespace": "ctesting"
},
"name": "statefulset-prod-mysql",
"namespace": "ctesting",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": false,
"controller": true,
"kind": "StatefulSet",
"name": "prod-mysql",
"uid": "0a7f9054-2e12-44c3-9b25-7f7e06c42d92"
}
...
It looks like all of the results are stored in the .items JSON path, so I’ll add that path to my jq command and simply pull back the first element.
% kubectl get configauditreports --all-namespaces -o json | jq '.items[0]'
{
"apiVersion": "aquasecurity.github.io/v1alpha1",
"kind": "ConfigAuditReport",
"metadata": {
"creationTimestamp": "2023-11-09T15:52:39Z",
"generation": 7,
"labels": {
"plugin-config-hash": "669cfcf6ff",
"resource-spec-hash": "66865f76d9",
"starboard.resource.kind": "StatefulSet",
"starboard.resource.name": "prod-mysql",
"starboard.resource.namespace": "ctesting"
},
"name": "statefulset-prod-mysql",
"namespace": "ctesting",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": false,
"controller": true,
"kind": "StatefulSet",
"name": "prod-mysql",
"uid": "0a7f9054-2e12-44c3-9b25-7f7e06c42d92"
}
],
"resourceVersion": "222254035",
"uid": "b18b56d4-0555-40da-b98c-ef67aa01be83"
},
"report": {
"checks": [
{
"category": "Kubernetes Security Check",
"checkID": "KSV013",
"description": "It is best to avoid using the ':latest' image tag when deploying containers in production. Doing so makes it hard to track which version of the image is running, and hard to roll back the version.",
"severity": "LOW",
"success": true,
"title": "Image tag ':latest' used"
}
],
"scanner": {
"name": "Starboard",
"vendor": "Aqua Security",
"version": "0.15.13"
},
"summary": {
"criticalCount": 0,
"highCount": 0,
"lowCount": 8,
"mediumCount": 5
},
"updateTimestamp": null
}
}
Looking at the results, I can see a few fields of interest that identify the Kubernetes resource in each report:
- .items[].metadata.name
- .items[].metadata.namespace
It also looks like the results are summarized in the following path with counts for each severity type:
- .items[].report.summary
With all of this knowledge, I can use jq to quickly search through all of the items.
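For example, here’s a quick sketch that projects just those fields into a small summary object per report (same data, just easier to skim):
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| {name: .metadata.name, namespace: .metadata.namespace, summary: .report.summary}'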
A Deeper Explanation of My jq Syntax
Before we go much further, let me also explain my usage of jq. I’m first going to have jq return the items[] array from the results, like this:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]'
{
"apiVersion": "aquasecurity.github.io/v1alpha1",
"kind": "ConfigAuditReport",
"metadata": {
"creationTimestamp": "2023-11-09T15:52:39Z",
"generation": 7,
"labels": {
"plugin-config-hash": "669cfcf6ff",
"resource-spec-hash": "66865f76d9",
"starboard.resource.kind": "StatefulSet",
"starboard.resource.name": "prod-mysql",
"starboard.resource.namespace": "ctesting"
},
"name": "statefulset-prod-mysql",
"namespace": "ctesting",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": false,
"controller": true,
"kind": "StatefulSet",
"name": "prod-mysql",
"uid": "0a7f9054-2e12-44c3-9b25-7f7e06c42d92"
}
],
"resourceVersion": "222254035",
"uid": "b18b56d4-0555-40da-b98c-ef67aa01be83"
},
"report": {
"checks": [
{
"category": "Kubernetes Security Check",
"checkID": "KSV024",
"description": "HostPorts should be disallowed, or at minimum restricted to a known list.",
"severity": "HIGH",
...
The next step is to pipe that into jq’s select() function so that we can filter for results matching certain criteria. In the examples below, I’m looking for Kubernetes resources whose reports contain findings at various severity levels. In this first example, you can see that I’m doing a select() where .report.summary.mediumCount > 0.
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.mediumCount > 0)'
{
"apiVersion": "aquasecurity.github.io/v1alpha1",
"kind": "ConfigAuditReport",
"metadata": {
"creationTimestamp": "2023-11-09T15:52:39Z",
"generation": 7,
"labels": {
"plugin-config-hash": "669cfcf6ff",
"resource-spec-hash": "66865f76d9",
"starboard.resource.kind": "StatefulSet",
"starboard.resource.name": "prod-mysql",
"starboard.resource.namespace": "ctesting"
},
"name": "statefulset-prod-mysql",
"namespace": "ctesting",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": false,
"controller": true,
"kind": "StatefulSet",
"name": "prod-mysql",
"uid": "0a7f9054-2e12-44c3-9b25-7f7e06c42d92"
}
],
"resourceVersion": "222254035",
"uid": "b18b56d4-0555-40da-b98c-ef67aa01be83"
},
"report": {
"checks": [
{
...
This select statement searches through all elements of the items[] array and keeps any that match my criteria. Finally, I pull out only the fields of interest for my search: .metadata.name and .metadata.namespace.
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.mediumCount > 0)| [.metadata.name, .metadata.namespace]'
[
"statefulset-prod-mysql",
"ctesting"
]
[
"statefulset-psql-test-postgresql",
"ctesting"
]
...
This is somewhat of a pain to read, so I end the command with a join that merges the fields into a single line using a . as the separator:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.mediumCount > 0)| [.metadata.name, .metadata.namespace]| join(".")'
"statefulset-prod-mysql.ctesting"
"statefulset-psql-test-postgresql.ctesting"
...
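As a small aside, jq’s -r flag prints raw strings instead of JSON-quoted ones, which makes the output a little cleaner to copy around:
% kubectl get configauditreports --all-namespaces -o json | jq -r '.items[]| select(.report.summary.mediumCount > 0)| [.metadata.name, .metadata.namespace]| join(".")'
statefulset-prod-mysql.ctesting
statefulset-psql-test-postgresql.ctesting
...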
Searching the Report for Results
Armed with my jq command, I’ll search for results that have critical severity levels with a command like:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.criticalCount > 0)| [.metadata.name, .metadata.namespace]| join(".")'
%
Woo hoo, no critical results! Let’s look for results with a high severity:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.highCount > 0)| [.metadata.name, .metadata.namespace]| join(".")'
%
None of those either! The next step is to check for any medium severity results:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.report.summary.mediumCount > 0)| [.metadata.name, .metadata.namespace]| join(".")'
"statefulset-prod-mysql.ctesting"
"statefulset-psql-test-postgresql.ctesting"
"statefulset-k8-pg.database-pipeline"
%
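As another aside, you could fold those three searches into a single loop. Here’s a minimal sketch, assuming a bash/zsh shell and the <severity>Count field names we saw in the summary block earlier; it queries the API once per severity, which is fine for a quick look:
% for sev in critical high medium low; do
    echo "== ${sev} =="
    kubectl get configauditreports --all-namespaces -o json \
      | jq -r --arg key "${sev}Count" '.items[]| select(.report.summary[$key] > 0)| [.metadata.name, .metadata.namespace]| join(".")'
  done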
I’ve got some items with medium severity, so let’s look at what those are by doing some additional filtering (I won’t belabor my usage of jq here because it “should” make sense based on what I explained above):
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.metadata.name=="statefulset-prod-mysql" and .metadata.namespace=="ctesting")|.report.checks[]| select(.severity=="MEDIUM")'
{
"category": "Kubernetes Security Check",
"checkID": "KSV027",
"description": "The default /proc masks are set up to reduce attack surface, and should be required.",
"severity": "MEDIUM",
"success": true,
"title": "Non-default /proc masks set"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV021",
"description": "Force the container to run with group ID > 10000 to avoid conflicts with the host’s user table.",
"messages": [
"Container 'git-sync' of StatefulSet 'prod-mysql' should set 'securityContext.runAsGroup' > 10000",
"Container 'mysql' of StatefulSet 'prod-mysql' should set 'securityContext.runAsGroup' > 10000"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs with low group ID"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV036",
"description": "ensure that Pod specifications disable the secret token being mounted by setting automountServiceAccountToken: false",
"severity": "MEDIUM",
"success": true,
"title": "Protecting Pod service account tokens"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV001",
"description": "A program inside the container can elevate its own privileges and run as root, which might give the program control over the container and node.",
"severity": "MEDIUM",
"success": true,
"title": "Process can elevate its own privileges"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV023",
"description": "HostPath volumes must be forbidden.",
"severity": "MEDIUM",
"success": true,
"title": "hostPath volumes mounted"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV012",
"description": "'runAsNonRoot' forces the running image to run as a non-root user to ensure least privileges.",
"messages": [
"Container 'git-sync' of StatefulSet 'prod-mysql' should set 'securityContext.runAsNonRoot' to true"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs as root user"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV032",
"description": "Containers should only use images from trusted registries.",
"messages": [
"container git-sync of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. For Azure any domain ending in 'azurecr.io'",
"container mysql of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. For Azure any domain ending in 'azurecr.io'"
],
"severity": "MEDIUM",
"success": false,
"title": "All container images must start with the *.azurecr.io domain"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV020",
"description": "Force the container to run with user ID > 10000 to avoid conflicts with the host’s user table.",
"messages": [
"Container 'mysql' of StatefulSet 'prod-mysql' should set 'securityContext.runAsUser' > 10000"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs with low user ID"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV037",
"description": "ensure that User pods are not placed in kube-system namespace",
"severity": "MEDIUM",
"success": true,
"title": "User Pods should not be placed in kube-system namespace"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV025",
"description": "Setting a custom SELinux user or role option should be forbidden.",
"severity": "MEDIUM",
"success": true,
"title": "SELinux custom options set"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV022",
"description": "Adding NET_RAW or capabilities beyond the default set must be disallowed.",
"severity": "MEDIUM",
"success": true,
"title": "Non-default capabilities added"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV002",
"description": "A program inside the container can bypass AppArmor protection policies.",
"severity": "MEDIUM",
"success": true,
"title": "Default AppArmor profile not set"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV026",
"description": "Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed 'safe' subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node.",
"severity": "MEDIUM",
"success": true,
"title": "Unsafe sysctl options set"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV033",
"description": "Containers should only use images from trusted GCR registries.",
"messages": [
"container git-sync of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. See the full GCR list here: https://cloud.google.com/container-registry/docs/overview#registries",
"container mysql of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. See the full GCR list here: https://cloud.google.com/container-registry/docs/overview#registries"
],
"severity": "MEDIUM",
"success": false,
"title": "All container images must start with a GCR domain"
}
This looks like fun!
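Before digging in, here’s a more compact view of the same findings that projects just the check ID and title (a sketch using the same filters as above):
% kubectl get configauditreports --all-namespaces -o json | jq -r '.items[]| select(.metadata.name=="statefulset-prod-mysql" and .metadata.namespace=="ctesting")|.report.checks[]| select(.severity=="MEDIUM")| [.checkID, .title]| join(" - ")'
KSV027 - Non-default /proc masks set
KSV021 - Runs with low group ID
...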
Addressing Audit Results
Let’s take a look at one of the results and attempt to fix it. I’m starting with something “easy” to fix:
{
"category": "Kubernetes Security Check",
"checkID": "KSV012",
"description": "'runAsNonRoot' forces the running image to run as a non-root user to ensure least privileges.",
"messages": [
"Container 'git-sync' of StatefulSet 'prod-mysql' should set 'securityContext.runAsNonRoot' to true"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs as root user"
}
I fixed this by updating my YAML for this deployment to include the securityContext.runAsNonRoot setting:
...
primary:
  initContainers:
    - env:
        - name: GIT_SYNC_REPO
          value: [email protected]:salgattcy/sample_dbs
        - name: GIT_SYNC_BRANCH
          value: master
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_DEST
          value: sample_dbs
        - name: GIT_SYNC_ROOT
          value: /git
        - name: GIT_SYNC_ONE_TIME
          value: "true"
      name: git-sync
      image: registry.k8s.io/git-sync/git-sync:v3.6.5
      securityContext:
        runAsUser: 65533 # git-sync user
        allowPrivilegeEscalation: false
        runAsNonRoot: true
      volumeMounts:
...
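Once this change is committed and rolled out, Starboard should notice the updated StatefulSet spec and regenerate the ConfigAuditReport on its own (that change is what the resource-spec-hash label in the report metadata reflects). If you’d rather watch for the new report than keep re-running the get command, kubectl’s standard -w flag works:
% kubectl get configauditreports statefulset-prod-mysql -n ctesting -w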
After committing the change, I checked the report again to make sure it was fixed:
{
"category": "Kubernetes Security Check",
"checkID": "KSV012",
"description": "'runAsNonRoot' forces the running image to run as a non-root user to ensure least privileges.",
"severity": "MEDIUM",
"success": true,
"title": "Runs as root user"
}
It’s still showing up in my report... or is it? I did some quick checking and found that it does appear to be fixed.
% kubectl get configauditreports --all-namespaces -o wide
NAMESPACE   NAME                     SCANNER     AGE   CRITICAL   HIGH   MEDIUM   LOW
ctesting    statefulset-prod-mysql   Starboard   29d   0          0      4        8
If you compare the wide results now to when I started, I’ve gone from 5 MEDIUM alerts to 4 MEDIUM.
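If you’d rather verify the specific finding than compare counts, you can pull that one check back out by its ID, scoped to the single report:
% kubectl get configauditreports statefulset-prod-mysql -n ctesting -o json | jq '.report.checks[]| select(.checkID=="KSV012")'
This returns the KSV012 block shown above, now with success set to true.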
Wrapping Up and Moving Forward
Insert facepalm here. It looks like I need to add one more filter to my jq to make sure I get what I want. When you use the JSON view, you get ALL of the audit checks that were run, so you will want to add an additional filter for success==false like the one below:
% kubectl get configauditreports --all-namespaces -o json | jq '.items[]| select(.metadata.name=="statefulset-prod-mysql" and .metadata.namespace=="ctesting")|.report.checks[]| select(.severity=="MEDIUM" and .success==false)'
{
"category": "Kubernetes Security Check",
"checkID": "KSV033",
"description": "Containers should only use images from trusted GCR registries.",
"messages": [
"container git-sync of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. See the full GCR list here: https://cloud.google.com/container-registry/docs/overview#registries",
"container mysql of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. See the full GCR list here: https://cloud.google.com/container-registry/docs/overview#registries"
],
"severity": "MEDIUM",
"success": false,
"title": "All container images must start with a GCR domain"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV032",
"description": "Containers should only use images from trusted registries.",
"messages": [
"container git-sync of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. For Azure any domain ending in 'azurecr.io'",
"container mysql of statefulset prod-mysql in ctesting namespace should restrict container image to your specific registry domain. For Azure any domain ending in 'azurecr.io'"
],
"severity": "MEDIUM",
"success": false,
"title": "All container images must start with the *.azurecr.io domain"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV020",
"description": "Force the container to run with user ID > 10000 to avoid conflicts with the host’s user table.",
"messages": [
"Container 'mysql' of StatefulSet 'prod-mysql' should set 'securityContext.runAsUser' > 10000"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs with low user ID"
}
{
"category": "Kubernetes Security Check",
"checkID": "KSV021",
"description": "Force the container to run with group ID > 10000 to avoid conflicts with the host’s user table.",
"messages": [
"Container 'git-sync' of StatefulSet 'prod-mysql' should set 'securityContext.runAsGroup' > 10000",
"Container 'mysql' of StatefulSet 'prod-mysql' should set 'securityContext.runAsGroup' > 10000"
],
"severity": "MEDIUM",
"success": false,
"title": "Runs with low group ID"
}
The important item here is that the success field indicates whether the resource passed the audit check. If true, the check passed. If false, the check failed and the resource is considered vulnerable or out of compliance.
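Since success is the field that actually matters, here’s one last sketch that counts failed checks per resource across the whole cluster, built from the same pieces as before:
% kubectl get configauditreports --all-namespaces -o json | jq -r '.items[]| [.metadata.namespace, .metadata.name, ([.report.checks[]| select(.success==false)]| length)]| @tsv'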
It looks like I have my work cut out for me, so I guess I’ll get moving!