How to Install and Use Starboard to Protect Your Kubernetes Cluster

In my Securing Your CI/CD Pipeline: A Beginner’s Guide to Implementing Essential Security Measures post, I started to tinker with SecOps a little using Terrascan. I also stumbled upon another tool called Starboard from Aqua Security. In this post, I’m going to focus on installing and using Starboard as a Kubernetes Operator to see how it works.

Getting Started

The installation of Starboard seems pretty easy, as you can use either kubectl or helm. I decided to go the helm route for my installation.

Changing Installation Defaults

Before performing the install, I looked over the available values for the helm chart and decided to change the following from their defaults:

  • operator.vulnerabilityScannerScanOnlyCurrentRevisions
  • trivy.ignoreUnfixed

These both default to false, so I set them to true using the following starboard_values.yaml file:

operator:
  vulnerabilityScannerScanOnlyCurrentRevisions: true
trivy:
  ignoreUnfixed: true

I had considered tinkering with configAuditScannerEnabled and configAuditScannerScanOnlyCurrentRevisions, but I wanted to focus on one thing at a time first. I’m also running in a managed cluster, so I’m not sure how much I could actually change based on an audit.

I changed the trivy.ignoreUnfixed default value since that was suggested in the example install command.
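For reference, the same settings can be passed with --set flags instead of a values file. A sketch of what that would look like (the flag-based form is my assumption from the chart’s documented example, not a command from this post):

```shell
# Equivalent to the starboard_values.yaml approach, using --set flags.
# Flag names mirror the chart values changed above.
helm install starboard-operator aquasecurity/starboard-operator \
  --namespace starboard-system \
  --create-namespace \
  --set operator.vulnerabilityScannerScanOnlyCurrentRevisions=true \
  --set trivy.ignoreUnfixed=true \
  --version 0.10.13
```

A values file is still the better choice here, since it keeps the overrides in version control alongside the rest of the cluster config.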

Deploying to My Cluster

With my default settings changed in the starboard_values.yaml file, I deployed with helm:

% helm repo add aquasecurity https://aquasecurity.github.io/helm-charts/
"aquasecurity" has been added to your repositories

% helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aquasecurity" chart repository
Update Complete. ⎈Happy Helming!⎈

% helm install starboard-operator aquasecurity/starboard-operator \
  --namespace starboard-system \
  --create-namespace \
  -f starboard_values.yaml \
  --version 0.10.13
NAME: starboard-operator
LAST DEPLOYED: Thu Nov  9 10:52:24 2023
NAMESPACE: starboard-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have installed Starboard Operator in the starboard-system namespace.
It is configured to discover Kubernetes workloads and resources in
all namespace(s).

Inspect created VulnerabilityReports by:

    kubectl get vulnerabilityreports --all-namespaces -o wide

Inspect created ConfigAuditReports by:

    kubectl get configauditreports --all-namespaces -o wide

Inspect created CISKubeBenchReports by:

    kubectl get ciskubebenchreports -o wide

Inspect the work log of starboard-operator by:

    kubectl logs -n starboard-system deployment/starboard-operator

After running the helm install, I checked the status of the deployment:

% helm --namespace starboard-system status starboard-operator
NAME: starboard-operator
LAST DEPLOYED: Thu Nov  9 10:52:24 2023
NAMESPACE: starboard-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have installed Starboard Operator in the starboard-system namespace.
It is configured to discover Kubernetes workloads and resources in
all namespace(s).

Inspect created VulnerabilityReports by:

    kubectl get vulnerabilityreports --all-namespaces -o wide

Inspect created ConfigAuditReports by:

    kubectl get configauditreports --all-namespaces -o wide

Inspect created CISKubeBenchReports by:

    kubectl get ciskubebenchreports -o wide

Inspect the work log of starboard-operator by:

    kubectl logs -n starboard-system deployment/starboard-operator

It looks like everything is up and running. I decided to let everything run without checking on anything to see the impact of having this installed on my cluster, and I did notice a resource usage spike right after the installation.

Reviewing the Results

The helm output above provides various commands for viewing the reports generated by Starboard. The first command, kubectl get vulnerabilityreports --all-namespaces -o wide, lists the results of the vulnerability scanner across all namespaces. I decided to focus on just my ctesting namespace with the example below:

 % kubectl get vulnerabilityreports -n ctesting -o wide
NAME                                          REPOSITORY           TAG                   SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
statefulset-prod-mysql-mysql                  bitnami/mysql        8.0.30-debian-11-r6   Trivy     8d    8          30     38       7     0
statefulset-psql-test-postgresql-postgresql   bitnami/postgresql   14.5.0-debian-11-r6   Trivy     8d    8          37     40       7     0

It looks like I have a bunch of problems in these, but what are they? If we change the output from wide to json, we can get the details:

% kubectl get vulnerabilityreports -n ctesting -o json                             
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "aquasecurity.github.io/v1alpha1",
            "kind": "VulnerabilityReport",
            "metadata": {
                "creationTimestamp": "2023-11-09T15:53:18Z",
                "generation": 1,
                "labels": {
                    "resource-spec-hash": "56b8c879f7",
                    "starboard.container.name": "mysql",
                    "starboard.resource.kind": "StatefulSet",
                    "starboard.resource.name": "prod-mysql",
                    "starboard.resource.namespace": "ctesting"
                },
                "name": "statefulset-prod-mysql-mysql",
                "namespace": "ctesting",
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "blockOwnerDeletion": false,
                        "controller": true,
                        "kind": "StatefulSet",
                        "name": "prod-mysql",
                        "uid": "0a7f9054-2e12-44c3-9b25-7f7e06c42d92"
                    }
                ],
                "resourceVersion": "219232046",
                "uid": "38aadeed-292b-4056-b1a0-cc686c188c0e"
            },
            "report": {
                "artifact": {
                    "repository": "bitnami/mysql",
                    "tag": "8.0.30-debian-11-r6"
                },
                "registry": {
                    "server": "index.docker.io"
                },
                "scanner": {
                    "name": "Trivy",
                    "vendor": "Aqua Security",
                    "version": "0.25.2"
                },
                "summary": {
                    "criticalCount": 8,
                    "highCount": 30,
                    "lowCount": 7,
                    "mediumCount": 38,
                    "noneCount": 0,
                    "unknownCount": 0
                },
                "updateTimestamp": "2023-11-09T15:53:18Z",
                "vulnerabilities": [
                    {
                        "fixedVersion": "7.74.0-1.3+deb11u5",
                        "installedVersion": "7.74.0-1.3+deb11u2",
                        "links": [],
                        "primaryLink": "https://avd.aquasec.com/nvd/cve-2022-32221",
                        "resource": "curl",
                        "score": 4.8,
                        "severity": "CRITICAL",
                        "title": "POST following PUT confusion",
                        "vulnerabilityID": "CVE-2022-32221"
                    },
                    {
                        "fixedVersion": "7.74.0-1.3+deb11u10",
                        "installedVersion": "7.74.0-1.3+deb11u2",
                        "links": [],
                        "primaryLink": "https://avd.aquasec.com/nvd/cve-2023-38545",
                        "resource": "curl",
                        "score": 7.5,
                        "severity": "CRITICAL",
                        "title": "heap based buffer overflow in the SOCKS5 proxy handshake",
                        "vulnerabilityID": "CVE-2023-38545"
                    },
...
                    {
                        "fixedVersion": "0.0.0-20220412211240-33da011f77ad",
                        "installedVersion": "v0.0.0-20210817142637-7d9622a276b7",
                        "links": [],
                        "primaryLink": "https://avd.aquasec.com/nvd/cve-2022-29526",
                        "resource": "golang.org/x/sys",
                        "score": 5.3,
                        "severity": "MEDIUM",
                        "title": "faccessat checks wrong group",
                        "vulnerabilityID": "CVE-2022-29526"
                    }
                ]
            }
        },
        {
            "apiVersion": "aquasecurity.github.io/v1alpha1",
            "kind": "VulnerabilityReport",
            "metadata": {
                "creationTimestamp": "2023-11-09T15:53:36Z",
                "generation": 1,
                "labels": {
                    "resource-spec-hash": "85b6d54d77",
                    "starboard.container.name": "postgresql",
                    "starboard.resource.kind": "StatefulSet",
                    "starboard.resource.name": "psql-test-postgresql",
                    "starboard.resource.namespace": "ctesting"
                },
                "name": "statefulset-psql-test-postgresql-postgresql",
                "namespace": "ctesting",
                "ownerReferences": [
                    {
                        "apiVersion": "apps/v1",
                        "blockOwnerDeletion": false,
                        "controller": true,
                        "kind": "StatefulSet",
                        "name": "psql-test-postgresql",
                        "uid": "1bbd1644-b4ac-4bc3-94ee-8fccedaf9d82"
                    }
                ],
                "resourceVersion": "219232211",
                "uid": "11cb79a2-ebe8-49ce-ad1f-5087607d5cd4"
            },
            "report": {
                "artifact": {
                    "repository": "bitnami/postgresql",
                    "tag": "14.5.0-debian-11-r6"
                },
                "registry": {
                    "server": "index.docker.io"
                },
                "scanner": {
                    "name": "Trivy",
                    "vendor": "Aqua Security",
                    "version": "0.25.2"
                },
                "summary": {
                    "criticalCount": 8,
                    "highCount": 37,
                    "lowCount": 7,
                    "mediumCount": 40,
                    "noneCount": 0,
                    "unknownCount": 0
                },
                "updateTimestamp": "2023-11-09T15:53:36Z",
                "vulnerabilities": [
                    {
                        "fixedVersion": "7.74.0-1.3+deb11u5",
                        "installedVersion": "7.74.0-1.3+deb11u2",
                        "links": [],
                        "primaryLink": "https://avd.aquasec.com/nvd/cve-2022-32221",
                        "resource": "curl",
                        "score": 4.8,
                        "severity": "CRITICAL",
                        "title": "POST following PUT confusion",
                        "vulnerabilityID": "CVE-2022-32221"
                    },
...
                    {
                        "fixedVersion": "0.0.0-20220412211240-33da011f77ad",
                        "installedVersion": "v0.0.0-20210817142637-7d9622a276b7",
                        "links": [],
                        "primaryLink": "https://avd.aquasec.com/nvd/cve-2022-29526",
                        "resource": "golang.org/x/sys",
                        "score": 5.3,
                        "severity": "MEDIUM",
                        "title": "faccessat checks wrong group",
                        "vulnerabilityID": "CVE-2022-29526"
                    }
                ]
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": "",
        "selfLink": ""
    }
}

This is great! I can now see that I have a bunch of vulnerabilities within the containers in my ctesting namespace. I see that I’m running 8.0.30-debian-11-r6 of Bitnami MySQL.
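Scrolling through the full JSON gets unwieldy quickly. A jq filter can narrow it down to just the critical findings; a sketch, assuming jq is installed (the field paths match the report JSON shown above):

```shell
# List only the CRITICAL findings from every report in the namespace,
# one tab-separated line per finding: report, package, CVE, fixed version.
kubectl get vulnerabilityreports -n ctesting -o json \
  | jq -r '.items[]
      | .metadata.name as $report
      | .report.vulnerabilities[]
      | select(.severity == "CRITICAL")
      | [$report, .resource, .vulnerabilityID, .fixedVersion]
      | @tsv'
```

Swapping the select() to match on HIGH (or on a specific .resource like curl) works the same way.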

Addressing the Vulnerability Results

In checking Docker Hub, it looks like the latest version tag is 8.2.0-debian-11-r0. I updated my values.yaml file to include the image tag:

image:
  tag: 8.2.0-debian-11-r0
auth:
  rootPassword: "blah"
  username: "app_user"
  password: "some_password"
primary:
  persistence:
    enabled: false
initdbScripts:
  my_init_script.sh: |
    #!/bin/sh
    echo "Adding Sample Data"
    curl https://raw.githubusercontent.com/salgattcy/sample_dbs/master/sampleData/sakila-schema.sql |mysql -P 3306 -uroot -p'blah'
    curl https://raw.githubusercontent.com/salgattcy/sample_dbs/master/sampleData/sakila-data.sql |mysql -P 3306 -uroot -p'blah'

I committed the change and waited for the magic of GitHub Actions to take over, and it looks like my pod is now crash looping:

% kubectl -n ctesting get pod
NAME                     READY   STATUS             RESTARTS      AGE
prod-mysql-0             0/1     CrashLoopBackOff   4 (60s ago)   4m5s
psql-test-postgresql-0   1/1     Running            0             50d

I decided to check the logs:

 % kubectl -n ctesting logs prod-mysql-0
mysql 12:28:29.13 
mysql 12:28:29.13 Welcome to the Bitnami mysql container
mysql 12:28:29.13 Subscribe to project updates by watching https://github.com/bitnami/containers
mysql 12:28:29.13 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mysql 12:28:29.13 
mysql 12:28:29.14 INFO  ==> ** Starting MySQL setup **
mysql 12:28:29.16 INFO  ==> Validating settings in MYSQL_*/MARIADB_* env vars
mysql 12:28:29.16 INFO  ==> Initializing mysql database
mysql 12:28:29.18 WARN  ==> The mysql configuration file '/opt/bitnami/mysql/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mysql 12:28:29.18 INFO  ==> Using persisted data
mysql 12:28:29.22 INFO  ==> Running mysql_upgrade
mysql 12:28:29.22 INFO  ==> Starting mysql in background
2023-11-18T12:28:29.244493Z 0 [System] [MY-015015] [Server] MySQL Server - start.
2023-11-18T12:28:29.574625Z 0 [Warning] [MY-011068] [Server] The syntax 'skip_slave_start' is deprecated and will be removed in a future release. Please use skip_replica_start instead.
2023-11-18T12:28:29.574770Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2023-11-18T12:28:29.574794Z 0 [System] [MY-010116] [Server] /opt/bitnami/mysql/bin/mysqld (mysqld 8.2.0) starting as process 45
2023-11-18T12:28:29.578426Z 0 [Warning] [MY-013242] [Server] --character-set-server: 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.
2023-11-18T12:28:29.585691Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2023-11-18T12:28:29.829658Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2023-11-18T12:28:30.010995Z 4 [System] [MY-013381] [Server] Server upgrade from '80200' to '80200' started.
2023-11-18T12:28:37.297819Z 4 [System] [MY-013381] [Server] Server upgrade from '80200' to '80200' completed.
2023-11-18T12:28:37.443271Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2023-11-18T12:28:37.443413Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2023-11-18T12:28:37.468955Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /tmp/mysqlx.sock
2023-11-18T12:28:37.469804Z 0 [System] [MY-010931] [Server] /opt/bitnami/mysql/bin/mysqld: ready for connections. Version: '8.2.0'  socket: '/opt/bitnami/mysql/tmp/mysql.sock'  port: 3306  Source distribution.
mysql 12:28:39.25 INFO  ==> Loading user's custom files from /docker-entrypoint-initdb.d
mysql 12:28:39.26 WARN  ==> Sourcing /docker-entrypoint-initdb.d/my_init_script.sh as it is not executable by the current user, any error may cause initialization to fail
Adding Sample Data
/docker-entrypoint-initdb.d/my_init_script.sh: line 3: curl: command not found
mysql: [Warning] Using a password on the command line interface can be insecure.
2023-11-18T12:28:39.272017Z 10 [Warning] [MY-013360] [Server] Plugin mysql_native_password reported: ''mysql_native_password' is deprecated and will be removed in a future release. Please use caching_sha2_password instead'
mysql 12:28:39.28 INFO  ==> Stopping mysql
2023-11-18T12:28:39.285729Z 0 [System] [MY-013172] [Server] Received SHUTDOWN from user <via user signal>. Shutting down mysqld (Version: 8.2.0).
2023-11-18T12:28:42.137563Z 0 [System] [MY-010910] [Server] /opt/bitnami/mysql/bin/mysqld: Shutdown complete (mysqld 8.2.0)  Source distribution.
2023-11-18T12:28:42.138929Z 0 [System] [MY-015016] [Server] MySQL Server - end.

It looks like the new version doesn’t contain curl, so my init script isn’t able to download the sample data. After committing a change to work around that, the pod came online successfully:
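The exact fix isn’t shown here, but since the failure is the curl-based init script, one plausible change is simply dropping initdbScripts from values.yaml and loading the sample data some other way later. A sketch (this is my assumption, not the committed diff):

```yaml
# Hypothetical slimmed-down values.yaml: same image bump as before,
# with the curl-based initdbScripts block removed since the new
# bitnami/mysql image no longer ships curl.
image:
  tag: 8.2.0-debian-11-r0
auth:
  rootPassword: "blah"
  username: "app_user"
  password: "some_password"
primary:
  persistence:
    enabled: false
```
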

% kubectl -n ctesting get pod          
NAME                     READY   STATUS    RESTARTS        AGE
prod-mysql-0             1/1     Running   6 (3m21s ago)   8m8s
psql-test-postgresql-0   1/1     Running   0               50d

Now we can take a look at our vulnerability report to see how we did with the new version:

% kubectl get vulnerabilityreports -n ctesting -o wide                             
NAME                                          REPOSITORY           TAG                   SCANNER   AGE   CRITICAL   HIGH   MEDIUM   LOW   UNKNOWN
statefulset-prod-mysql-mysql                  bitnami/mysql        8.2.0-debian-11-r0    Trivy     8d    0          0      0        0     0
statefulset-psql-test-postgresql-postgresql   bitnami/postgresql   14.5.0-debian-11-r6   Trivy     8d    8          37     40       7     0

It looks like there are no vulnerabilities found, but how do we know the scan has actually been run against the new image? If we output the report as JSON again, we can check its updateTimestamp:

% kubectl get vulnerabilityreports -n ctesting statefulset-prod-mysql-mysql -o json
{
    "apiVersion": "aquasecurity.github.io/v1alpha1",
    "kind": "VulnerabilityReport",
    "metadata": {
        "creationTimestamp": "2023-11-09T15:53:18Z",
        "generation": 2,
        "labels": {
            "resource-spec-hash": "7b87bd54bf",
            "starboard.container.name": "mysql",
            "starboard.resource.kind": "StatefulSet",
            "starboard.resource.name": "prod-mysql",
            "starboard.resource.namespace": "ctesting"
        },
        "name": "statefulset-prod-mysql-mysql",
...
        "summary": {
            "criticalCount": 0,
            "highCount": 0,
            "lowCount": 0,
            "mediumCount": 0,
            "noneCount": 0,
            "unknownCount": 0
        },
        "updateTimestamp": "2023-11-18T12:27:10Z",
        "vulnerabilities": []
    }
}

It looks like the report has been updated and the new image has no reported vulnerabilities. (Keep in mind trivy.ignoreUnfixed is set, so vulnerabilities without an available fix won’t show up here.)
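If all you want is that one timestamp, a jsonpath query is quicker than dumping the whole report:

```shell
# Print only the report's last-update timestamp.
kubectl get vulnerabilityreports -n ctesting statefulset-prod-mysql-mysql \
  -o jsonpath='{.report.updateTimestamp}'
```

Comparing that value against the pod’s rollout time confirms the scan ran after the image change.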

Wrapping Up

So far I’ve only looked at the vulnerability scanner, and only on one of the pods in my cluster; it seems I’ll need to address a few more. After that, I’ll need to look over the other reports available from Starboard and see how I can address the items found in those. More to come on that…

Meanwhile, it looks like I’ll need to find a new way to get sample data onto my pod. I suspect I’ll end up using git-sync as an initContainer, like I did in this post, to pull the data onto a mount in the pod and then load it from there.