Integration Problems
This page covers common issues with connecting monitoring platforms to Overwatch, receiving webhooks, and parsing alerts. Each section identifies the problem, explains the cause, and provides a resolution path.
Webhooks Not Arriving
Problem: You configured a webhook in your monitoring platform but Overwatch is not receiving any alerts.
Cause: The webhook URL may be misconfigured, a firewall or network policy may be blocking outbound requests from the monitoring platform, or the webhook requires HTTPS and the URL is using HTTP.
Solution:
- Copy the webhook URL exactly from Settings > Integrations in the Overwatch dashboard --- trailing slashes and path segments matter
- Verify the URL uses HTTPS --- most monitoring platforms reject HTTP webhook destinations
- Check that your monitoring platform can reach the Overwatch API endpoint (no firewall, VPN, or IP allowlist blocking the connection)
- Send a test webhook from the monitoring platform’s configuration page and check the Overwatch integration status for delivery confirmation
- Test manually with curl to isolate the issue:
```shell
# Send a minimal test payload to your webhook endpoint
curl -X POST https://api.your-overwatch-instance.com/api/v1/webhooks/datadog \
  -H "Content-Type: application/json" \
  -d '{"title": "Test Alert", "text": "Manual webhook test", "alert_type": "info"}'
```

Tip: If the curl request succeeds but the monitoring platform’s webhook fails, the issue is on the platform side --- check the platform’s webhook delivery logs for error details.
Alerts Not Parsing Correctly
Problem: Webhooks arrive in Overwatch but alerts are not created, or alert fields (title, severity, description) are missing or incorrect.
Cause: The webhook payload format may not match what Overwatch’s alert parser expects. This can happen when the monitoring platform sends a non-standard format, required fields are missing, or a platform update changed the payload structure.
Solution:
- Check which alert parser is configured for your integration --- each platform (Datadog, Grafana, PagerDuty, etc.) has a dedicated parser
- Review the webhook payload in your monitoring platform’s delivery logs and compare it to the expected format documented in the Integration Guides
- Ensure required fields are present in the payload: at minimum, a title or alert name and a severity or priority level
- For self-hosted deployments, check the backend logs for parsing errors:
```shell
docker compose logs backend -f | grep -i "parser\|webhook\|alert"
```

Note: If your monitoring platform recently updated its webhook format, Overwatch may need a parser update. Report the issue with a sample payload to support@overwatch-observability.com.
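Before digging into parser configuration, it can help to confirm locally that a candidate payload is well-formed JSON and carries the minimum fields. The field names below mirror the Datadog-style examples used elsewhere on this page; other platforms' parsers expect different keys, so treat this as an illustrative sketch:

```shell
# Minimal payload: a title plus a severity-like field.
# Field names follow the Datadog webhook shape shown on this
# page; adjust the keys for your platform's parser.
payload='{"title": "Disk space low on db-01", "alert_type": "warning"}'

# Validate the JSON locally before posting it to the webhook endpoint.
# json.tool fails with a non-zero exit code on malformed JSON.
echo "$payload" | python3 -m json.tool
```

If this validation fails, the monitoring platform is likely sending malformed JSON and the parser never sees usable fields; if it passes but the alert still does not appear, compare the key names against the expected format in the Integration Guides.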
Duplicate Alerts
Problem: The same alert appears multiple times in Overwatch.
Cause: Monitoring platforms retry webhook delivery when they do not receive a timely acknowledgment. If the Overwatch endpoint responds slowly or the network drops the acknowledgment, the platform sends the same payload again.
Solution:
- Check the deduplication settings for your integration in Settings > Integrations > [Platform] > Advanced
- Overwatch deduplicates by alert fingerprint (a hash of key fields like title, source, and timestamp) --- verify these fields are consistent across retries
- If duplicates persist, check the monitoring platform’s retry configuration and increase the timeout or reduce retry attempts
- Review whether the platform is sending distinct payloads for what appears to be the same alert (some platforms include changing timestamps or request IDs that defeat deduplication)
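To see why changing fields defeat deduplication, you can compute a fingerprint-style hash by hand. This is an illustrative sketch only --- the exact fields and hash function Overwatch uses may differ:

```shell
# Illustrative fingerprint: hash only the stable alert fields.
# (The fields and hash Overwatch actually uses may differ.)
title="High CPU on web-server-01"
source="datadog"
printf '%s|%s' "$title" "$source" | sha256sum | awk '{print $1}'
```

Hashing only stable fields yields the same fingerprint on every retry, so duplicates collapse into one alert. If a per-delivery timestamp or request ID were included in the hashed fields, each retry would produce a different fingerprint and appear as a new alert.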
Severity Mapping Is Wrong
Problem: Alerts appear in Overwatch with the wrong severity level (e.g., a critical alert shows as low severity).
Cause: Different monitoring platforms use different severity scales and terminology. Datadog uses alert, warning, info; PagerDuty uses critical, high, low; Grafana uses alerting, ok. The platform-specific parser maps these to Overwatch’s four-level scale (Critical, High, Medium, Low), and the mapping may not match your expectations.
Solution:
- Review the severity mapping for your platform in the Integration Guides
- Default mappings follow this general pattern:
| Platform | Platform Value | Overwatch Severity |
|---|---|---|
| Datadog | error, alert | Critical |
| Datadog | warning | Medium |
| Datadog | info | Low |
| PagerDuty | critical | Critical |
| PagerDuty | high | High |
| PagerDuty | low | Low |
| Grafana | alerting | High |
- If the default mapping does not fit your workflow, contact your administrator about configuring a custom severity mapping for the integration
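If your deployment supports custom mappings, the configuration might look something like the sketch below. The schema and field names here are hypothetical --- consult your administrator or the Integration Guides for the actual format:

```yaml
# Hypothetical custom severity mapping (schema illustrative only)
integrations:
  datadog:
    severity_map:
      error: Critical
      alert: Critical
      warning: High   # override the default mapping to Medium
      info: Low
```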
Integration Shows “Disconnected”
Problem: A previously working integration shows a “Disconnected” or “Error” status on the Integrations page.
Cause: The API key or credentials used for the integration have expired or been revoked, or the monitoring platform changed its permissions model.
Solution:
- Go to Settings > Integrations and click the affected integration
- Check the error message --- it typically indicates an authentication or permissions failure
- Re-authenticate the integration by updating the API key or credentials
- Verify that the API key has the required permissions on the monitoring platform side (read access to alerts and monitors at minimum)
- Send a test webhook after re-authenticating to confirm the connection is restored
Testing Webhooks
When setting up or debugging an integration, use these steps to verify webhook delivery end-to-end:
- Get your webhook URL from Settings > Integrations > [Platform]
- Send a test payload using curl:
```shell
# Datadog test payload
curl -X POST https://api.your-overwatch-instance.com/api/v1/webhooks/datadog \
  -H "Content-Type: application/json" \
  -d '{
    "title": "[TEST] High CPU on web-server-01",
    "text": "CPU usage exceeded 90% for 5 minutes",
    "alert_type": "error",
    "source_type_name": "datadog",
    "tags": "env:production,service:api"
  }'
```

- Check the dashboard for the new alert in the Incidents list
- Verify parsing by confirming the alert title, severity, and description match your expectations
- Check logs if the alert does not appear:
```shell
docker compose logs backend --tail=50 | grep -i "webhook"
```

Tip: Keep a set of known-good test payloads for each integration so you can quickly validate webhook delivery after configuration changes.
Still Stuck?
- Review the platform-specific setup guide in the Integration Guides
- Check the API Errors page if webhook requests return HTTP errors
- Contact support at support@overwatch-observability.com with the platform name, webhook URL (redacted), and a sample payload