
OPA Diagnostics

A comprehensive diagnostics tool for OPA on-premises installations that validates configuration, tests connectivity, and provides detailed system information.

What This Tool Checks

  • Network Connectivity - Access to LinearB APIs, DataDog, and integrations
  • Kubernetes Cluster - Node resources, storage classes, and version compatibility
  • Memory & Resource Analysis - Cluster resource utilization
  • API Authentication - LinearB token validation
  • Custom Certificates - SSL certificate validation if required
  • Integration Connectivity - Source control and project management systems
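Under the hood, the connectivity checks amount to reachability probes against each configured endpoint. A minimal sketch of that kind of probe (illustrative only, not the tool's actual implementation):

```python
import socket

def check_tcp(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts alike.
        return False
```

A call like `check_tcp("api.datadoghq.com")` would mirror the `datadog_api` check in the output below; the real tool also validates TLS and authentication, which a bare TCP probe does not.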

Integration with Main Chart

OPA Diagnostics is integrated into the main on-prem-agent Helm chart and runs as a Kubernetes Job during deployment. It is installed automatically alongside other OPA components.

Configuration

Diagnostics is enabled by default and runs automatically during every deployment. No additional configuration is required for basic operation.

Optional: Customize Diagnostics

To customize diagnostics behavior or disable it, add the following to your local-values.yaml file:

# ... existing configuration (JFROG_USER, ORG_ID, LINEARB_PUBLIC_API_KEY, etc.) ...

diagnostics:
  enabled: true  # Set to false to disable diagnostics (enabled by default)
  image:
    env:
      plain:
        - name: "ACCOUNT_ID"
          value: "310295"  # Optional: Your LinearB account ID
        - name: "DATADOG_HOST"
          value: "api.datadoghq.com"  # Optional: DataDog host for connectivity testing
  integrations:
    sourceControl:
      - name: "GitHub"
        url: "github.com"
      - name: "GitLab"
        url: "gitlab.com"
    projectManagement:
      name: "Jira"
      url: "your-company.atlassian.net"

Notes:

  • Diagnostics runs automatically on every helm install or helm upgrade
  • To disable diagnostics, set diagnostics.enabled: false in your values file
  • All configuration fields are optional - diagnostics will use sensible defaults

How It Works

When you deploy the on-prem agent using the standard deployment scripts (./set-up-k8s.sh and ./deploy.sh), the diagnostics Job will automatically run if enabled. The Job:

  1. Validates your configuration
  2. Tests network connectivity to required endpoints
  3. Checks API key authentication
  4. Gathers cluster information
  5. Completes and shows results in the logs

The main deployment continues after diagnostics completes.

Viewing Results

# Check diagnostics job status
kubectl get jobs -n linearb -l app.kubernetes.io/name=opa-diagnostics

# Get the latest diagnostics pod
kubectl get pods -n linearb -l app.kubernetes.io/name=opa-diagnostics --sort-by=.metadata.creationTimestamp

# View diagnostics output
kubectl logs -n linearb <diagnostics-pod-name>

# Example:
kubectl logs -n linearb on-prem-agent-diagnostics-48-jgvzt

Configuration Reference

Inherited Values

The following values are automatically inherited from your main local-values.yaml configuration:

  • LINEARB_PUBLIC_API_HOST - LinearB API endpoint
  • LINEARB_PUBLIC_API_KEY - Your LinearB API token
  • ORG_ID - Your LinearB organization ID
  • LINEARB_DATA_LAKE_S3_BUCKET - S3 bucket for data storage

These values are used automatically by diagnostics - you don't need to configure them separately.
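For orientation, the inherited keys might look like this in local-values.yaml. This is a hypothetical excerpt with placeholder values; the key names match the list above, but exact placement can vary by chart version:

```yaml
# Placeholder values - replace with your own; placement may differ per chart version.
ORG_ID: "12345"
LINEARB_PUBLIC_API_HOST: "on-prem-api.example.linearb.io"
LINEARB_PUBLIC_API_KEY: "<your-api-token>"
LINEARB_DATA_LAKE_S3_BUCKET: "my-opa-data-lake"
```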

Diagnostics-Specific Configuration

All diagnostics settings are optional. Set these values in your local-values.yaml file under the diagnostics: section if you want to customize behavior:

  • enabled - Enable or disable diagnostics. Default: true (enabled by default)
  • ACCOUNT_ID - LinearB account ID for validation. Default: not set
  • DATADOG_HOST - Datadog API endpoint for connectivity testing. Default: not set
  • integrations.sourceControl - List of Git providers to test (GitHub, GitLab, etc.). Default: common providers (GitHub, GitLab, Bitbucket)
  • integrations.projectManagement - Project management tool to test (Jira, etc.). Default: Jira

Example Configuration

diagnostics:
  enabled: true
  image:
    env:
      plain:
        - name: "ACCOUNT_ID"
          value: "310295"
        - name: "DATADOG_HOST"
          value: "api.datadoghq.com"
  integrations:
    sourceControl:
      - name: "GitHub Enterprise"
        url: "github.company.com"
      - name: "GitLab"
        url: "gitlab.company.com"
    projectManagement:
      name: "Jira"
      url: "jira.company.com"

Output Example

🔍 Running OPA 4 Installation Diagnostics...
🌐 API Endpoint: on-prem-api.linearb-dev-01.io

📡 Network Connectivity Checks
✅ on_prem_api: on-prem-api.linearb-dev-01.io - SUCCESS
✅ datadog_api: https://api.datadoghq.com - SUCCESS

🔌 Integrations Connectivity Checks
✅ Source Control GitHub: Connection successful
✅ Source Control GitLab: Connection successful

🔑 API Key Validation
✅ LinearB API Token: SUCCESS
✅ LinearB Org ID: SUCCESS

☸️ Cluster Information
✅ Kubernetes Version: v1.31.5+k3s1
✅ Cluster Nodes: 2 nodes detected
✅ Storage Classes: 1 available

📊 DIAGNOSTIC SUMMARY
Overall Status: ⚠️ WARNING
Errors: 1 | Warnings: 2

Troubleshooting

Common Issues

  • Environment variables not set: Ensure global values are configured in your main values file
  • API connectivity failures: Check network policies and firewall rules
  • Certificate validation errors: Verify custom certificate configuration
  • RBAC permission errors: Service account needs cluster-reader permissions (automatically configured)

Understanding Results

  • ✅ SUCCESS - Everything is working correctly
  • ⚠️ WARNING - May need attention but not blocking (e.g., RBAC permission issues for optional features)
  • ❌ ERROR - Must be resolved for optimal operation
  • ℹ️ INFO - Informational messages or skipped checks
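The overall status in the diagnostic summary can be read as the worst result across all checks. A rough sketch of that roll-up logic (illustrative; the tool's actual rules may differ - note the example output above reports WARNING even with one error):

```python
def overall_status(results: list[str]) -> str:
    """Roll per-check outcomes ("SUCCESS", "WARNING", "ERROR", "INFO")
    up into a single overall status, worst result wins."""
    if "ERROR" in results:
        return "ERROR"
    if "WARNING" in results:
        return "WARNING"
    return "SUCCESS"

print(overall_status(["SUCCESS", "WARNING", "SUCCESS"]))  # → WARNING
```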

Common Warnings (Safe to Ignore)

The following warnings are expected and safe to ignore in most deployments:

⚠️ Persistent Volumes: Unable to list persistent volumes (insufficient permissions)
⚠️ Pods: Unable to list pods (insufficient permissions)

These warnings occur when the diagnostics service account lacks certain optional RBAC permissions. They do not affect the core functionality of the diagnostics or the OPA installation.
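If you nevertheless want those optional checks to succeed, one approach is to grant list permissions on pods and persistent volumes to the diagnostics service account. A hypothetical sketch - the ClusterRole/Binding names are made up, and the service account name is an assumption you should verify against your release:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opa-diagnostics-extra-read
rules:
  - apiGroups: [""]
    resources: ["pods", "persistentvolumes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opa-diagnostics-extra-read
subjects:
  - kind: ServiceAccount
    name: on-prem-agent-diagnostics   # assumption: check the actual SA name in your release
    namespace: linearb
roleRef:
  kind: ClusterRole
  name: opa-diagnostics-extra-read
  apiGroup: rbac.authorization.k8s.io
```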

Advanced Configuration

Detailed Pod Analysis

Enable comprehensive pod information collection:

diagnostics:
  cluster:
    collectDetailedPodInfo: true

Job Configuration

Diagnostics runs as a Kubernetes Job with:

  • Restart Policy: Never
  • Backoff Limit: 1 retry
  • Active Deadline: 300s (5 minutes timeout)
  • TTL After Finished: 1800s (results kept for 30 minutes)
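In Kubernetes Job terms, those settings correspond to the following spec fields (a sketch, not the chart's rendered manifest - the Job name and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: on-prem-agent-diagnostics   # placeholder name
spec:
  backoffLimit: 1                # one retry on failure
  activeDeadlineSeconds: 300     # terminate the Job after 5 minutes
  ttlSecondsAfterFinished: 1800  # keep results for 30 minutes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: diagnostics
          image: <diagnostics-image>   # placeholder
```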

Support

For issues or questions:

  1. Capture Complete Output - Save the full diagnostics log
  2. Check Prerequisites - Verify Kubernetes and network requirements
  3. Contact LinearB Support - Provide diagnostics output and environment details