RankPlus monitors site availability from external probes, sampling every few minutes and computing a 30-day uptime percentage. A number under 99.9% looks small - but it means more than 43 minutes during which the site was unreachable, and those are usually the most critical minutes.
Why this matters
99.9% monthly availability = 43 minutes of downtime. 99.5% = 3.6 hours. 99% = 7.2 hours. Downtime is rarely uniformly distributed - it usually clusters into short, critical events: the failure mid-checkout, the timeout while Googlebot is indexing, the 500 error exactly when an ad click lands.
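The figures above are simple arithmetic over a 30-day month (43,200 minutes). A quick shell sketch of the conversion:

```shell
# 30 days = 43,200 minutes, so the monthly downtime allowed by a given
# availability percentage is 43200 * (100 - pct) / 100.
downtime_minutes() {
  awk -v pct="$1" 'BEGIN { printf "%.1f\n", 43200 * (100 - pct) / 100 }'
}

downtime_minutes 99.9   # 43.2
downtime_minutes 99.5   # 216.0  (3.6 hours)
downtime_minutes 99.0   # 432.0  (7.2 hours)
```

Running it against your own target makes the budget concrete: every tenth of a percentage point is another 43 minutes of allowed downtime per month.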
Concrete business impact: direct revenue loss - every downtime minute on a commerce site is orders not placed. SEO damage - Google demotes sites with poor availability (a trust signal). Lost trust - a visitor who saw "This site can't be reached" rarely tries again tomorrow. Reputation damage - blogs, SaaS dashboards and service sites that went down once look unprofessional.
How to detect
The RankPlus dashboard's Uptime view shows 30-day availability, an hourly chart and a downtime log: when an outage started, how long it lasted, what response code returned (timeout / 500 / connection refused). That log is the starting point.
How to fix
- Walk through the RankPlus uptime log. Ask: is there a temporal pattern? Every day at the same time (a backup cron)? Every Tuesday night (provider maintenance window)? Right after a plugin update (regression)?
- Talk to the host. They see their side: CPU spikes, PHP-FPM kills, memory-quota overruns, DB replication failures. Request server-side logs for the specific downtime window. Most providers respond within 24-48 hours.
- Inspect server logs directly if you have access: `/var/log/nginx/error.log`, `/var/log/php-fpm.log`, `/var/log/mysql/error.log`. Errors logged at the moment of downtime are your strongest signal.
- Look for slow queries in MySQL: run `SHOW VARIABLES LIKE 'slow_query_log';` and inspect the file it points to. A repeated 30-second query is almost always a misbehaving plugin.
- If the cause is traffic load: enable page caching (W3 Total Cache, WP Rocket, LiteSpeed Cache) and a CDN (Cloudflare free tier, BunnyCDN). Page caching alone can cut server load by as much as 90% on a mostly static blog.
- If the cause is the database: find plugins generating huge tables (WooCommerce Action Scheduler, security-plugin logs). Run `wp action-scheduler clean` and prune old log tables. Consider Redis Object Cache.
- If a specific plugin is to blame: the error log will name it. Disable it and find an alternative.
- If the issue recurs without resolution: consider changing hosts. Cheap shared hosting buckles under load. Move to managed WordPress hosting (Kinsta, WP Engine, SiteGround), a VPS with cPanel, or Cloudways.
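The temporal-pattern check in the first step above can be scripted. A minimal sketch, assuming the downtime log can be exported as CSV; the file name and field layout here are made up, so adjust them to whatever your export actually contains:

```shell
# Bucket outage start times by hour of day to spot a schedule.
# downtime.csv and its layout (start_timestamp,duration_s,code) are
# illustrative sample data, not a real RankPlus export format.
cat > downtime.csv <<'EOF'
2024-05-01T03:15:02,310,timeout
2024-05-02T03:14:47,280,timeout
2024-05-04T03:15:11,295,500
2024-05-06T14:02:09,60,timeout
EOF

# Splitting on 'T', ':' and ',' puts the hour in field 2; a spike in
# one bucket points at a scheduled job (backup cron, maintenance window).
awk -F'[T:,]' '{ count[$2]++ } END { for (h in count) print h, count[h] }' downtime.csv | sort
# -> 03 3
#    14 1
```

Three of four outages landing in the 03:00 bucket is the signature of a nightly job - exactly the backup-cron pattern described above.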
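Pulling server-log lines for a specific downtime window can also be scripted. A sketch under assumptions: the `grep_window` helper name is invented, the paths are common defaults rather than guarantees, and timestamp formats vary by service, so the pattern is passed per file:

```shell
# Grep a log for lines near a known outage start. Timestamp formats
# differ per service (nginx: 2024/05/01 03:15:02, MySQL:
# 2024-05-01T03:15:02), so supply a matching prefix for each file.
grep_window() {  # usage: grep_window 'timestamp-prefix' /path/to/log...
  pattern=$1; shift
  for log in "$@"; do
    [ -r "$log" ] || continue   # skip paths that don't exist here
    echo "== $log =="
    grep -F "$pattern" "$log"
  done
}

# Example window 03:10-03:19 on May 1st, taken from the downtime log:
grep_window '2024/05/01 03:1' /var/log/nginx/error.log
grep_window '2024-05-01T03:1' /var/log/mysql/error.log
```

Whatever appears in that window (upstream timeouts, out-of-memory kills, refused connections) is the evidence to bring to your host.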
Common mistakes
- Trusting only host-side monitoring: "my site was down but the host claims 99.99%" - the host measures the server, not the application. RankPlus monitors from the outside, the way a visitor experiences the site.
- Switching hosts without finding the cause: if the issue is a bad plugin, the new host will see the same problem.
- Ignoring short outages: 5 minutes once a week is roughly 20 minutes a month - nearly half of the entire 99.9% downtime budget. It compounds.
- Disabling monitoring to silence alerts: changes nothing - visitors still hit the failures.
- No fallback: running a large site without a maintenance mode or static error page. A static cache layer can serve stale content while the backend recovers.
Verifying the fix
Wait 30 days. In RankPlus you should see uptime climb. Target: 99.9% (under 43 downtime minutes per month). If you reach it, keep monitoring to confirm it holds. If not, the root cause is not fixed. External monitors (UptimeRobot, Better Uptime) can serve as a corroborating second opinion.
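As a cross-check against any dashboard, the percentage itself is easy to recompute from raw probe counts. A sketch with made-up numbers, assuming one probe every 5 minutes (8,640 samples in 30 days):

```shell
# Recompute a monthly uptime percentage from probe counts; the counts
# below are illustrative, not real monitoring data.
uptime_pct() {  # usage: uptime_pct total_probes failed_probes
  awk -v t="$1" -v f="$2" 'BEGIN { printf "%.2f\n", 100 * (t - f) / t }'
}

uptime_pct 8640 9   # 99.90
uptime_pct 8640 4   # 99.95
```

If your own count of failed probes disagrees with the reported percentage, the discrepancy itself is worth investigating - it usually means the two monitors probe from different networks or at different intervals.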