# Alerts & Alert Rules
Configure automated alerting for CPU, RAM, disk, AV, service, and offline conditions.
## How Alerts Work
MaxRMM evaluates incoming telemetry against your alert rules on every report cycle. When a rule condition is met, an alert is created with:
- Title — Description of what triggered (e.g., "CPU above 90% on WORKSTATION-42")
- Severity — info, warning, or critical
- Category — The metric type: cpu, ram, disk, av, service, or offline
- Agent — Which endpoint triggered the alert
- Timestamp — When the condition was first detected
Alerts remain open until explicitly acknowledged or resolved by a technician.
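The evaluation loop described above can be sketched as follows. This is an illustrative model only; the field names (`category`, `condition`, `threshold`, `enabled`) mirror the rule parameters documented below, but the actual MaxRMM internals and schema are assumptions.

```python
def rule_matches(rule: dict, value: float) -> bool:
    """Return True when a telemetry value satisfies the rule's condition."""
    cond, thr = rule["condition"], rule["threshold"]
    if cond == "above":
        return value > thr
    if cond == "below":
        return value < thr
    if cond == "equals":
        return value == thr
    return False

def evaluate(rules: list, telemetry: dict) -> list:
    """Create an alert record for every enabled rule its category's metric trips."""
    alerts = []
    for rule in rules:
        if not rule.get("enabled", True):
            continue
        value = telemetry.get(rule["category"])
        if value is not None and rule_matches(rule, value):
            cat, cond, thr = rule["category"], rule["condition"], rule["threshold"]
            alerts.append({
                "title": f"{cat} {cond} {thr}",
                "severity": rule.get("severity", "warning"),  # default per the table below
                "category": cat,
            })
    return alerts
```

For example, a rule `{"category": "cpu", "condition": "above", "threshold": 90}` applied to telemetry reporting `{"cpu": 95}` would yield one warning alert; the same telemetry at `{"cpu": 50}` yields none.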
## Creating Alert Rules
Navigate to Settings → Alert Rules and click Create Rule.
### Rule Parameters
| Field | Values | Description |
|---|---|---|
| Category | cpu, ram, disk, av, service, offline | What metric to monitor |
| Condition | above, below, equals | Comparison operator |
| Threshold | Numeric value | The trigger value (e.g., 90 for CPU above 90%) |
| Severity | info, warning, critical | Alert priority level (default: warning) |
| Enabled | true / false | Toggle the rule on or off |
### Example Rules
| Rule | Category | Condition | Threshold | Severity |
|---|---|---|---|---|
| High CPU | cpu | above | 90 | warning |
| Critical CPU | cpu | above | 98 | critical |
| Low disk space | disk | below | 10 | critical |
| High RAM usage | ram | above | 95 | warning |
| AV disabled | av | equals | 0 | critical |
| Agent offline | offline | above | 5 | warning |
## Severity Levels
| Level | Use Case |
|---|---|
| Info | Informational events that do not require immediate action (e.g., agent came online, patch scan completed) |
| Warning | Conditions that should be investigated soon (e.g., RAM above 90%, disk below 20%) |
| Critical | Conditions requiring immediate attention (e.g., disk below 5%, AV disabled, agent offline for extended period) |
## Managing Alerts
### Acknowledging an alert
Acknowledging an alert marks it as "seen" without resolving it. This is useful for alerts you are actively investigating.
### Resolving an alert
Resolving an alert marks it as fixed. Resolved alerts move to the history view and stop appearing in the active alerts count.
### Snoozing alerts for an agent
If an agent is generating noise (e.g., during a known maintenance window), you can snooze all alerts for that agent:

```
PUT /api/alerts/snooze/:agentId
{
  "durationMinutes": 120
}
```

This suppresses all unresolved alerts for the specified agent for the given duration.
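A minimal client-side sketch of the snooze call, using only the Python standard library. The base URL and any authentication headers are placeholders, not documented MaxRMM values; only the path and body shape come from the endpoint above.

```python
import json
import urllib.request

def build_snooze_request(base_url: str, agent_id: str, minutes: int) -> urllib.request.Request:
    """Build a PUT request for /api/alerts/snooze/:agentId with a JSON body."""
    body = json.dumps({"durationMinutes": minutes}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/alerts/snooze/{agent_id}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},  # add your auth header here
    )

# req = build_snooze_request("https://rmm.example.com", "agent-42", 120)
# urllib.request.urlopen(req)  # send when ready
```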
## Maintenance Mode
Maintenance mode suppresses all new alert notifications company-wide. Enable it during planned maintenance windows:

```
PUT /api/alerts/maintenance
{
  "enabled": true,
  "durationMinutes": 60
}
```

Alerts are still recorded during maintenance mode, but notifications (Slack, Teams, email) are suppressed. Maintenance mode automatically disables after the specified duration.
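The auto-disable behavior amounts to an expiry check: maintenance mode counts as active only until `enabled_at + durationMinutes`. A sketch under that assumption (the names here are illustrative, not MaxRMM internals):

```python
from datetime import datetime, timedelta

def maintenance_active(enabled_at: datetime, duration_minutes: int, now: datetime) -> bool:
    """True while the maintenance window is still open; no explicit disable call needed."""
    return now < enabled_at + timedelta(minutes=duration_minutes)
```

With a 60-minute window started at 12:00, notifications are suppressed at 12:30 but flow again from 13:00 onward.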
## Alert Deduplication
MaxRMM deduplicates alerts automatically. If the same condition triggers on the same agent while an existing alert is still open, a new alert is not created. Instead, the existing alert's "last seen" timestamp is updated. This prevents alert storms when a persistent condition (like high CPU) triggers on every telemetry cycle.
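The dedup behavior can be modeled as an upsert keyed by agent and category: an open alert gets its "last seen" timestamp refreshed instead of being duplicated. A minimal sketch (the keying and field names are assumptions for illustration):

```python
from datetime import datetime

def record_alert(open_alerts: dict, agent_id: str, category: str, now: datetime) -> dict:
    """Refresh the existing open alert's last_seen, or create a new alert record."""
    key = (agent_id, category)
    if key in open_alerts:
        open_alerts[key]["last_seen"] = now  # same open condition: update, don't duplicate
    else:
        open_alerts[key] = {"first_seen": now, "last_seen": now}
    return open_alerts[key]
```

Repeated triggers for the same agent and category therefore leave exactly one open alert, whose `last_seen` tracks the most recent telemetry cycle.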
## Alert Notifications
When an alert fires, MaxRMM can notify you through:
- Slack — real-time messages to a channel
- Microsoft Teams — adaptive cards via incoming webhook
- Webhooks — POST to any URL for custom integrations
Configure notification integrations under Settings → Integrations.
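For a custom webhook integration, your endpoint receives a POST and formats or routes the alert. The payload fields used here (`severity`, `title`, `agent`) are assumptions about what MaxRMM sends, chosen to match the alert fields described earlier; check the actual payload before relying on them.

```python
import json

def format_alert_message(raw_body: bytes) -> str:
    """Turn a (hypothetical) alert webhook payload into a one-line message."""
    payload = json.loads(raw_body)
    return f'[{payload["severity"].upper()}] {payload["title"]} ({payload["agent"]})'
```

A handler would call this on the request body and forward the string to a pager, chat channel, or ticketing system.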