This isn't exactly an answer to your question, but an alternative monitoring architecture that sidesteps this problem entirely is to run netdata on every server you operate.
- It appears to collect WAY more useful data than Uptime Kuma, and requires basically no config. It also collects data on Docker containers running on the server, so you automatically get per-service metrics as well.
- Health probes for several protocols, including ping and HTTP, can be custom-defined in config files if you want them.
- There's no cross-server config or discovery required; it just collects data from the system it's running on (though health probes can hit remote systems if you wish).
- If any individual service, or a whole collection of them, is down, I see it immediately in the metrics.
- If the server itself is down, that's obvious, and I don't need a monitoring system to show a red streak for me to know. I've never wasted more than a minute differentiating between a broken service and a broken server.
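To give a sense of what a custom health probe looks like, here's a sketch of a job for netdata's go.d `httpcheck` collector. The job name, URL, and port are hypothetical placeholders; check your installed version's stock config (typically under `/etc/netdata/go.d/`) for the exact options it supports:

```yaml
# Sketch of /etc/netdata/go.d/httpcheck.conf (job name and URL are examples)
jobs:
  - name: my_api            # hypothetical service name
    url: http://127.0.0.1:8080/health
    status_accepted: [200]  # any other status is reported as a failed check
```

Netdata then charts the check results per job, and its health engine can alert on failures without any separate monitoring host.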
This approach needs no external monitoring hosts. It's not as elegant as a remote monitoring host that shows everything from a third-party perspective, but it also has the benefit of never false-positiving because the monitoring host went down or lost its network path to the monitored host. Netdata can always see what's happening because it's right there when it happens.