LinearB On-Prem Agent v4 Upgrade Guide
This guide outlines the steps to upgrade the LinearB On-Prem Agent to version 4.x.x. The process includes backing up and restoring critical data, updating configurations, and deploying the updated version of the agent.
Prerequisites
Before starting the upgrade, ensure you have the following:
- Access to the Kubernetes cluster with the LinearB On-Prem Agent deployed.
- `kubectl` installed and configured.
- Permissions to manage Kubernetes resources.
- Review README-prerequisites.md to ensure your environment meets the requirements for the LinearB On-Prem Agent.
Upgrade Process
To upgrade the agent from an existing installation to version 4.x.x, follow these steps:
- Back up the file that holds all the integrations and sensitive keys, which resides in the MinIO pod (the file is named `db.json`).
- Make sure you are in the correct context and namespace where the LinearB On-Prem Agent is deployed. For example:
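  A sketch of the context check and the backup copy, assuming the default `linearb` namespace, a MinIO pod labelled `app=minio`, and `db.json` stored under `/data` (the label selector and in-pod path are assumptions; adjust them to your deployment):

  ```shell
  # Confirm you are pointed at the right cluster and namespace
  kubectl config current-context
  kubectl config set-context --current --namespace=linearb

  # Copy db.json out of the MinIO pod into the current directory
  MINIO_POD=$(kubectl get pods -n linearb -l app=minio -o jsonpath='{.items[0].metadata.name}')
  kubectl cp "linearb/${MINIO_POD}:/data/db.json" ./db.json
  ```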
Afterwards, verify that `db.json` was created in the current directory and that it contains the expected data.
If you were using Kubernetes native secrets to store integration information (`INTEGRATIONS_STORAGE_BACKEND=secrets`), preserve the secret's contents and remove its protective finalizer:

```shell
# Export the integrations secret so it can be re-created in the new namespace
kubectl get secret integrations -n linearb -o yaml > integrations-secret-backup.yaml
# Remove the protective finalizer so the secret can be deleted along with the namespace
kubectl patch secret integrations -n linearb -p '{"metadata":{"finalizers":null}}' --type=merge
```
- Delete the existing LinearB On-Prem Agent namespace:
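  For example, assuming the default namespace name:

  ```shell
  # Delete the old agent namespace (assumes the default name "linearb")
  kubectl delete namespace linearb
  ```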
- Migrate the existing `local-values.yaml` to the new format:
  Notes:
  - Datadog is now managed by the on-prem-agent chart, so make sure you also update the `datadog-agent` section in `local-values.yaml`; it is commented out by default in `local-values.yaml.template`.
  - If you have `JFROG_HOST` set, change it to `linearb-on-prem-dist.jfrog.io/artifactory/on-prem-oci`, as this is the default registry for Helm and Docker.
- In OPA v3, `local-values.yaml` is used as input for all of the Helm releases that the OPA consists of. If you don't have a `local-values.yaml`, you can get the effective values from your current cluster by running:
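  A sketch using `helm get values`, assuming the release is named `linearb` in the `linearb` namespace (both names are assumptions; use `helm list` to find yours):

  ```shell
  # List the releases in the namespace, then dump the user-supplied values of the one you need
  helm list -n linearb
  helm get values linearb -n linearb -o yaml > local-values.yaml
  ```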
You should get all the relevant values.
- Migrate service requests and limits: in OPA v3, service requests and limits were optionally supplied via `local-values.yaml`. For example, you could change the resource configuration of the batch pods (scheduler-worker, scheduler-sensors-worker, scheduler-pm-worker) by creating a `local-values.yaml` with the following content:

  ```yaml
  image:
    env:
      chartValuesV2:
        - name: WORKER_REQUEST_CPU
          value: "400m"
        - name: WORKER_REQUEST_MEMORY
          value: "100Mi"
        - name: WORKER_LIMIT_CPU
          value: "1000"
        - name: WORKER_LIMIT_MEMORY
          value: "1000"
  ```
  This can now be achieved by appending the following to `local-values.yaml`:

  ```yaml
  scheduler-sensors-worker:
    image:
      env:
        plain:
          - name: WORKER_REQUEST_CPU
            value: "400m"
          - name: WORKER_REQUEST_MEMORY
            value: "200Mi"
          - name: WORKER_LIMIT_CPU
            value: "1000m"
          - name: WORKER_LIMIT_MEMORY
            value: "500Mi"
  ```
- Migrate custom root certificates: custom root certificates were previously placed in `~/linearb-onprem/deploy/certs`. They are now part of the input for the on-prem-agent chart, under the `.global.CUSTOM_SSL_CERT` path; enter the certificate as multiline YAML starting with `|-`.
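  For example (the certificate body below is a placeholder):

  ```yaml
  global:
    CUSTOM_SSL_CERT: |-
      -----BEGIN CERTIFICATE-----
      ...your root CA certificate...
      -----END CERTIFICATE-----
  ```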
- Ingress configuration (new in v4.0.11+): starting from v4.0.11, ingress configuration has been significantly enhanced with the following new features:
  - Controller installation control: use `global.ingress.installController` or `ingress.installController` to control whether the ingress-nginx controller is installed.
  - Chart-specific override: use `onprem-receiver.ingress.enabled` to override the global `RECEIVER_INGRESS` setting.
  - Multi-host support: configure single or multiple hostnames for your ingress.
  - TLS configuration: secure your ingress with TLS certificates.
  - Existing controller support: set `installController: false` to use an existing ingress controller in your namespace.

  The `global.RECEIVER_INGRESS` flag is still supported for backward compatibility. For detailed configuration examples and advanced scenarios, refer to Ingress Usage Scenarios.
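  A minimal sketch in `local-values.yaml`, reusing an existing controller and enabling the receiver ingress (only `global.ingress.installController` and `onprem-receiver.ingress.enabled` are taken from this guide; treat the overall layout as an assumption and see Ingress Usage Scenarios for the authoritative schema):

  ```yaml
  global:
    ingress:
      installController: false   # use an existing ingress controller in the namespace
  onprem-receiver:
    ingress:
      enabled: true              # overrides the global RECEIVER_INGRESS setting
  ```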
- Deploy version 4.x.x of the LinearB On-Prem Agent: refer to the Agent Installation Guide (README-deploy.md). If installing on the same cluster, it is advisable to install into a new namespace.
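  A hypothetical sketch, assuming the chart is pulled from the default registry mentioned above (the chart name, release name, and namespace are assumptions; README-deploy.md is the authoritative reference):

  ```shell
  helm install linearb oci://linearb-on-prem-dist.jfrog.io/artifactory/on-prem-oci/on-prem-agent \
    --namespace linearb-v4 --create-namespace \
    -f local-values.yaml
  ```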
- Once the deployment is complete and all pods are in a Running state, restore the backup data by running the following command:
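  A sketch of the restore, mirroring the backup from step 1 (the namespace, label selector, and in-pod path are assumptions; adjust them to your deployment):

  ```shell
  # Copy the backed-up db.json into the new MinIO pod
  MINIO_POD=$(kubectl get pods -n linearb -l app=minio -o jsonpath='{.items[0].metadata.name}')
  kubectl cp ./db.json "linearb/${MINIO_POD}:/data/db.json"
  ```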
If you were using secrets as the backend, create the integrations secret mentioned in step 1 manually in the new namespace after deployment.
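  If you exported the secret in step 1 (e.g. to `integrations-secret-backup.yaml`, a filename assumed here), strip the cluster-specific metadata (`uid`, `resourceVersion`, `creationTimestamp`, and the old finalizers) from the file, then re-apply it:

  ```shell
  kubectl apply -n linearb -f integrations-secret-backup.yaml
  ```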
- To verify that the integrations' data has been restored successfully, run the following command:
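  A hypothetical check: print the restored file and confirm it lists your integrations (the pod selector and in-pod path are assumptions; adjust them to your deployment):

  ```shell
  MINIO_POD=$(kubectl get pods -n linearb -l app=minio -o jsonpath='{.items[0].metadata.name}')
  kubectl exec -n linearb "$MINIO_POD" -- cat /data/db.json
  ```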
Note: if the namespace differs from the default `linearb`, replace `linearb` in the above command with the appropriate namespace. The output should show the same list of integrations you saw in the previous version.