From Vulnerability to Visibility: Demystifying Starboard Infrastructure Scan Reports

In previous posts, How to Install and Use Starboard to Protect Your Kubernetes Cluster and Enhancing Kubernetes Security and Compliance with Starboard Audit Reports: A Practical Guide, I started working through the different security reports available from the Starboard security scanner. The next step is to review Starboard infrastructure scans for security insights.

Getting an Infrastructure Report

After installing Starboard, I waited for it to run and generate its reports. My previous posts worked through the vulnerability and audit reports; with my Kubernetes deployments in better shape, I now want to focus on the underlying infrastructure. The infrastructure scans run against the nodes in the cluster, and you can get a summary with the following command:

 % kubectl get ciskubebenchreports -o wide
NAME                   SCANNER      AGE   FAIL   WARN   INFO   PASS
pool-cfch5xp3i-xonc2   kube-bench   15d   4      35     0      14
pool-cfch5xp3i-xoncp   kube-bench   15d   4      35     0      14
pool-cfch5xp3i-xoncs   kube-bench   15d   4      35     0      14
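
Each row is a CISKubeBenchReport resource named after the node it was scanned on, so if you want to dig into a single node before pulling everything, you should be able to fetch that node's report directly by name (using one of my node names as an example):

 % kubectl get ciskubebenchreport pool-cfch5xp3i-xonc2 -o yaml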

As you can see in the output, the scanner used to generate these reports is kube-bench, a tool that checks the nodes against the CIS Kubernetes Benchmark.

We can also see that each node has 4 failures. That's concerning, so I'll focus on those first.

Finding the FAIL Details in the Report

As in my previous post, Enhancing Kubernetes Security and Compliance with Starboard Audit Reports: A Practical Guide, I'm going to use the JSON-based results so that I can easily parse them with jq.

% kubectl get ciskubebenchreports -o json     
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "aquasecurity.github.io/v1alpha1",
            "kind": "CISKubeBenchReport",
            "metadata": {
                "creationTimestamp": "2023-11-25T11:53:31Z",
                "generation": 1,
                "labels": {
                    "starboard.resource.kind": "Node",
                    "starboard.resource.name": "pool-cfch5xp3i-xonc2"
                },
                "name": "pool-cfch5xp3i-xonc2",
                "ownerReferences": [
                    {
                        "apiVersion": "v1",
                        "blockOwnerDeletion": false,
                        "controller": true,
                        "kind": "Node",
                        "name": "pool-cfch5xp3i-xonc2",
                        "uid": "9b085a4b-d9e8-4adb-be69-c8fdbc9d8293"
                    }
                ],
                "resourceVersion": "224607245",
                "uid": "f397d9ad-b01d-47f4-946a-6b2298692bf8"
            },
            "report": {
                "scanner": {
                    "name": "kube-bench",
                    "vendor": "Aqua Security",
                    "version": "v0.6.9"
                },
                "sections": [
                    {
                        "id": "4",
                        "node_type": "node",
                        "tests": [
                            {
                                "desc": "Worker Node Configuration Files",
                                "fail": 0,
                                "info": 0,
                                "pass": 8,
                                "results": [
                                    {
                                        "remediation": "Run the below command (based on the file location on your system) on the each worker node.\nFor example, chmod 644 /etc/systemd/system/kubelet.service\n",
                                        "scored": true,
                                        "status": "PASS",
                                        "test_desc": "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)",
                                        "test_number": "4.1.1"
                                    },
...
                                ],
                                "section": "4.1",
                                "warn": 2
                            },
                            {
                                "desc": "Kubelet",
                                "fail": 4,
                                "info": 0,
                                "pass": 6,
                                "results": [
                                    {
                                        "remediation": "If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to\n`false`.\nIf using executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service on each worker node and\nset the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.\n`--anonymous-auth=false`\nBased on your system, restart the kubelet service. For example,\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
                                        "scored": true,
                                        "status": "FAIL",
                                        "test_desc": "Ensure that the --anonymous-auth argument is set to false (Automated)",
                                        "test_number": "4.2.1"
                                    },
...
                    }
                ],
                "summary": {
                    "failCount": 4,
                    "infoCount": 0,
                    "passCount": 14,
                    "warnCount": 35
                },
                "updateTimestamp": "2023-11-25T11:53:31Z"
            }
        },
        {
            "apiVersion": "aquasecurity.github.io/v1alpha1",
            "kind": "CISKubeBenchReport",
            "metadata": {
                "creationTimestamp": "2023-11-25T11:53:41Z",
                "generation": 1,
                "labels": {
                    "starboard.resource.kind": "Node",
                    "starboard.resource.name": "pool-cfch5xp3i-xoncp"
                },
                "name": "pool-cfch5xp3i-xoncp",
                "ownerReferences": [
                    {
                        "apiVersion": "v1",
                        "blockOwnerDeletion": false,
                        "controller": true,
                        "kind": "Node",
                        "name": "pool-cfch5xp3i-xoncp",
...

Looking through the JSON report, we can see that the results are contained within the items[] array. Within each item, there are two paths we'll want to focus on:

  • .metadata.name
  • .report.sections[].tests[].results[]

The .metadata.name field contains the name of the node the tests ran on. While there are other details that could be retrieved from the sections and tests, I'm focused on just getting the failures for now. Within each entry in results[], the status field indicates whether the test resulted in a PASS, WARN, or FAIL. I'm going to pull out only the ones tagged as FAIL with the command below:

 % kubectl get ciskubebenchreports -o json | jq '.items[0].report.sections[].tests[].results[]|select(.status=="FAIL")'                  
{
  "remediation": "If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to\n`false`.\nIf using executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service on each worker node and\nset the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.\n`--anonymous-auth=false`\nBased on your system, restart the kubelet service. For example,\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
  "scored": true,
  "status": "FAIL",
  "test_desc": "Ensure that the --anonymous-auth argument is set to false (Automated)",
  "test_number": "4.2.1"
}
{
  "remediation": "If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If\nusing executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service on each worker node and\nset the below parameter in KUBELET_AUTHZ_ARGS variable.\n--authorization-mode=Webhook\nBased on your system, restart the kubelet service. For example,\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
  "scored": true,
  "status": "FAIL",
  "test_desc": "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)",
  "test_number": "4.2.2"
}
{
  "remediation": "If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to\nthe location of the client CA file.\nIf using command line arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service on each worker node and\nset the below parameter in KUBELET_AUTHZ_ARGS variable.\n--client-ca-file=<path/to/client-ca-file>\nBased on your system, restart the kubelet service. For example,\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
  "scored": true,
  "status": "FAIL",
  "test_desc": "Ensure that the --client-ca-file argument is set as appropriate (Automated)",
  "test_number": "4.2.3"
}
{
  "remediation": "If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.\nIf using command line arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service on each worker node and\nset the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.\n--protect-kernel-defaults=true\nBased on your system, restart the kubelet service. For example:\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
  "scored": true,
  "status": "FAIL",
  "test_desc": "Ensure that the --protect-kernel-defaults argument is set to true (Automated)",
  "test_number": "4.2.6"
}

I'm cheating a little with my jq command by only looking at items[0]. Since I'm running on managed infrastructure, I know all three nodes should have the same results. If your nodes might differ, you can iterate over every report instead, as in the sketch below.
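
Here's a rough variant of that query (a sketch built from the same paths shown in the report above) that walks every node's report and prints the node name next to each failing check:

 % kubectl get ciskubebenchreports -o json | jq -r '
     .items[]
     | .metadata.name as $node
     | .report.sections[].tests[].results[]
     | select(.status=="FAIL")
     | "\($node)  \(.test_number)  \(.test_desc)"'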

These remediations are mostly out of reach for me since I'm on a managed Kubernetes cluster and don't control the kubelet configuration on the nodes. The report itself was easy to pull details from; the findings just aren't actionable in my particular environment.
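
For anyone who does manage their own worker nodes, the four failing checks all map to kubelet configuration settings. A rough sketch of the relevant fields in a kubelet config file would look something like this (the file path and CA file location below are assumptions; adjust them for your environment):

# e.g. /var/lib/kubelet/config.yaml (path is an assumption)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                              # 4.2.1: --anonymous-auth=false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt    # 4.2.3: --client-ca-file (assumed location)
authorization:
  mode: Webhook                                 # 4.2.2: not AlwaysAllow
protectKernelDefaults: true                     # 4.2.6: --protect-kernel-defaults=true

You'd then restart the kubelet (systemctl daemon-reload && systemctl restart kubelet), as the remediation text in the report suggests.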

Conclusion

Reviewing the Starboard infrastructure scans for security insights fell a little short for me. Since my cluster is managed by DigitalOcean, there isn't much about the nodes that I can change. Hopefully you can apply this to your own environment, especially if you manage your own nodes.