
sbomreport-to-dependencytrack - Handle SBOMReport Deletion

Dependency Track is a platform for storing and analyzing SBOMs; however, it is not able to generate SBOMs for applications itself. In a Kubernetes cluster, one obvious choice for generating them is the Trivy Operator: it observes workloads on the cluster and automatically creates an SBOM as a custom resource called “SBOMReport”.

What’s missing is a bridge that imports those SBOMReport resources into Dependency Track. That’s where sbomreport-to-dependencytrack comes in.

While using it as that bridge, it becomes clear that importing SBOMs into Dependency Track works as expected, but every version of an application is set as active and stays active, even when the application is no longer running in the cluster. As a consequence, the number of vulnerabilities in Dependency Track keeps growing, while in reality many of them may already be fixed by subsequent deployments. In this writeup we’re looking at Issue #22, which addresses this by marking projects whose SBOMReport was deleted as inactive. This is not a user guide but a reconstruction of my investigation and contribution process.

My expectation is that sbomreport-to-dependencytrack is a small Go application, possibly using the operator framework to watch for SBOMReports, converting them to JSON and sending them to Dependency Track via its API. I’m wondering how well this can be tested in a local setup, as I doubt that I will be able to test the process with the local binary alone. My current guess is that I’ll need to create a cluster (or use Minikube) for that purpose, but let’s dive in first…

…and the first expectation was already wrong. It’s a Go application, but not an operator. Instead, SBOMReports are sent by trivy-operator as webhooks and processed from there. At the current point in time sbomreport-to-dependencytrack only listens for POST requests on the root path and processes the SBOM, so we’ll test the process from end to end on a cluster first, to see what happens.

func (u *Upload) Run(ctx context.Context, input []byte) error {
    ...
    if !sbom.IsVerbUpdate() {
        return errors.New("only support verb is update")
    }
    ...
}

Source: uploader/uploader.go
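
To make that concrete, here is a minimal, hypothetical sketch of such a receiver, using nothing but net/http: a single handler that accepts POST requests on the root path and reads the payload. The handler body and the port are assumptions for illustration, not the project’s actual code.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // In net/http the "/" pattern is a catch-all, so this single
    // handler receives every request the webhook sender makes.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPost {
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            return
        }
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "cannot read body", http.StatusBadRequest)
            return
        }
        // A real receiver would parse the SBOMReport here and forward
        // it to Dependency Track; this sketch only acknowledges it.
        fmt.Printf("received %d bytes\n", len(body))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}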

As the number of required applications and the general complexity seem quite low, I’ve decided to use Minikube as the testing cluster. After installing Dependency Track, Trivy Operator and sbomreport-to-dependencytrack, I tried the happy path by deploying a pod and waiting for an SBOM upload.

…except it didn’t work. I could see that the webhook was being called, but sbomreport-to-dependencytrack greeted me with the following error message:

Error: 'artifact' key not found inside report

Unfortunate. But as I had already been running this setup successfully with the same versions before, I had a hunch that the reason for this failure was the fact that I had decided to set

operator:
  webhookSendDeletedReports: true

as preparation for the task. After digging a bit in the code it became clear why this happens. With “webhookSendDeletedReports” enabled, the payload of the webhook looks like:

verb: Update
operatorObject:
  apiVersion: aquasecurity.github.io/v1alpha1
  kind: SbomReport
  report:
    artifact:
      repository: library/alpine
      tag: latest

While with the option not set it looks like:

apiVersion: aquasecurity.github.io/v1alpha1
kind: SbomReport
report:
  artifact:
    repository: library/alpine
    tag: latest

The change in behavior is visible here:

func (r *WebhookReconciler) reconcileReport(reportType client.Object) reconcile.Func {
    return func(ctx context.Context, req reconcile.Request) (ctrl.Result, error) {
        ...
        if r.WebhookSendDeletedReports {
            msg := WebhookMsg{OperatorObject: reportType, Verb: verb}
            return ctrl.Result{}, sendReport(msg, r.WebhookBroadcastURL, *r.WebhookBroadcastTimeout, webhookBroadcastCustomHeaders)
        }
        return ctrl.Result{}, sendReport(reportType, r.WebhookBroadcastURL, *r.WebhookBroadcastTimeout, webhookBroadcastCustomHeaders)
    }
}
Source: [pkg/webhook/webhookreporter.go](https://github.com/aquasecurity/trivy-operator/blob/1caa4d4f6807815eda6f5c2997362964129bf4bb/pkg/webhook/webhookreporter.go)
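
A consumer of the webhook therefore has to cope with both shapes. As a minimal illustration (not the project’s actual parser; the struct and function names are mine, and I assume the webhook body arrives as JSON), unwrapping could look like this:

package main

import (
    "encoding/json"
    "fmt"
)

// envelope matches the wrapped payload sent when
// webhookSendDeletedReports is enabled.
type envelope struct {
    Verb           string          `json:"verb"`
    OperatorObject json.RawMessage `json:"operatorObject"`
}

// unwrap returns the raw SbomReport object and the verb, regardless of
// which of the two payload shapes was received.
func unwrap(payload []byte) ([]byte, string, error) {
    var env envelope
    if err := json.Unmarshal(payload, &env); err != nil {
        return nil, "", err
    }
    if len(env.OperatorObject) > 0 {
        // Wrapped shape: the report sits under operatorObject.
        return env.OperatorObject, env.Verb, nil
    }
    // Unwrapped shape: the payload itself is the SbomReport.
    return payload, "", nil
}

func main() {
    wrapped := []byte(`{"verb":"Update","operatorObject":{"kind":"SbomReport"}}`)
    report, verb, _ := unwrap(wrapped)
    fmt.Println(verb, string(report))
}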

I will have to make sure to add that information to the README of sbomreport-to-dependencytrack. Thankfully the parsing of SBOMReports is adjustable in the tool’s configuration. After adjusting it accordingly, the SBOM is successfully uploaded to Dependency Track, so let’s look at the current test structure and check how to extend it with the desired behavior.

The relevant logic is in uploader.go: test cases are constructed, and during the test it is checked whether the expected function is called on a mock instance of the dependencytrack-client. There’s also a test which verifies that the “Delete” verb is not supported, so we might just adapt that test case.
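
Adapted to the new behavior, such a table-driven test could look roughly like this. Everything here is a hypothetical sketch: the run function, the client interface, and the method names are placeholders, not the project’s actual code.

package uploader

import (
    "context"
    "testing"
)

// client abstracts the Dependency Track operations the uploader needs.
type client interface {
    UploadBOM(ctx context.Context, bom []byte) error
    DeactivateProject(ctx context.Context, name, version string) error
}

// mockClient records which call was made.
type mockClient struct {
    uploaded, deactivated bool
}

func (m *mockClient) UploadBOM(ctx context.Context, bom []byte) error {
    m.uploaded = true
    return nil
}

func (m *mockClient) DeactivateProject(ctx context.Context, name, version string) error {
    m.deactivated = true
    return nil
}

// run stands in for the uploader's entry point.
func run(ctx context.Context, c client, verb string) error {
    if verb == "Delete" {
        return c.DeactivateProject(ctx, "library/alpine", "latest")
    }
    return c.UploadBOM(ctx, []byte("{}"))
}

func TestRun(t *testing.T) {
    tests := []struct {
        name            string
        verb            string
        wantDeactivated bool
    }{
        {"update uploads the SBOM", "Update", false},
        {"delete deactivates the project", "Delete", true},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            m := &mockClient{}
            if err := run(context.Background(), m, tc.verb); err != nil {
                t.Fatal(err)
            }
            if m.deactivated != tc.wantDeactivated {
                t.Errorf("deactivated = %v, want %v", m.deactivated, tc.wantDeactivated)
            }
        })
    }
}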

From a use case perspective there are two ways we could handle the deletion of an SBOMReport.

  1. Set the project as inactive in Dependency-Track, keeping the history but not counting it toward the vulnerability statistics.
  2. Delete the project entirely, removing the history.

Keeping an SBOM for projects which were once deployed can matter for audits, so my initial thought was to just add that. But both behaviors have their uses, so I’m thinking of adding a configuration option that either keeps the previous behavior (the default), deactivates the project, or deletes it, and of extending the tests accordingly.
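
As a sketch of that dispatch (the option name, its values, and the projectClient interface are assumptions; in the real implementation the two calls would go through dependencytrack-client-go):

package uploader

import (
    "context"
    "fmt"
)

// deletionBehavior is a hypothetical configuration value.
type deletionBehavior string

const (
    behaviorIgnore     deletionBehavior = "ignore"     // previous behavior, the default
    behaviorDeactivate deletionBehavior = "deactivate" // keep history, mark the project inactive
    behaviorDelete     deletionBehavior = "delete"     // remove the project and its history
)

// projectClient abstracts the two calls that would go through
// dependencytrack-client-go in the real implementation.
type projectClient interface {
    DeactivateProject(ctx context.Context, name, version string) error
    DeleteProject(ctx context.Context, name, version string) error
}

// handleDeletion reacts to a deleted SBOMReport according to the
// configured behavior.
func handleDeletion(ctx context.Context, c projectClient, b deletionBehavior, name, version string) error {
    switch b {
    case behaviorDeactivate:
        return c.DeactivateProject(ctx, name, version)
    case behaviorDelete:
        return c.DeleteProject(ctx, name, version)
    case behaviorIgnore:
        return nil
    default:
        return fmt.Errorf("unknown deletion behavior %q", b)
    }
}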

At the time of writing, the tests call the upload function with different parameters and assert whether the required function was called on a mock object. A future improvement could be to use a testcontainer for Dependency Track and perform integration tests against it, but this seems like overkill for the current task. After the tests have been adjusted, the implementation is just a matter of calling the correct endpoints via dependencytrack-client-go and testing the behavior on my Minikube cluster.

At the current point in time sbomreport-to-dependencytrack fills an important gap for a Kubernetes setup with Trivy Operator and Dependency Track, so it’s surprising that the project has gained so little traction. A few improvements became apparent during the implementation:

  • Updating Go & dependencies
  • Switching logging from fmt.Println to log/slog and adding log levels (see the sketch after this list)
  • Adding Helm docs and a values schema so that the chart feels more mature
  • Build & release automation
  • Switching the Dockerfile to a scratch-based image, as alpine does not seem necessary there
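
For the logging item, the switch is mostly mechanical with the standard library’s log/slog (available since Go 1.21); a minimal sketch:

package main

import (
    "log/slog"
    "os"
)

func main() {
    // Replace fmt.Println-style output with structured, leveled logging.
    logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
        Level: slog.LevelDebug, // the log level becomes configurable
    }))
    slog.SetDefault(logger)

    slog.Info("sbom uploaded", "project", "library/alpine", "version", "latest")
    slog.Debug("webhook payload received", "bytes", 2048)
}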

I’ve created issues for a few of these. Before implementing more, I will wait until the maintainer reacts to my pull requests, as it is possible that the project is inactive.

As the project and the task were relatively small, there aren’t too many learnings in this process. My key takeaway is that supply chain security in Kubernetes still seems to need a fair bit of work. Dependency Track as a tool does not ‘feel’ cloud native yet, its config is still mostly managed by clicking through UIs, and ingesting SBOMs from a cluster should definitely be a workflow that is supported out of the box, either in Dependency Track or in the Trivy Operator. However, sbomreport-to-dependencytrack fills this gap neatly and would therefore deserve more support from the community.