So recently I’ve been spending time configuring my selfhosted services with notifications using ntfy. I’ve added ntfy to report status on containers and my system using Beszel. However, only 12 out of my 44 containers seem to have a healthcheck “enabled” or built in as a feature. So I’m now wondering what is considered best practice for monitoring the uptime/health of my containers. I am already using Uptime Kuma with the “docker container” option for each of the containers I deem necessary to monitor; I do not monitor all 44 of them 😅
So I’m left with these questions:
- How do you notify yourself about the status of a container?
- Is there a “quick” way to know if a container has a healthcheck as a feature?
- Does the healthcheck feature simply depend on the developer of each app, or on the person building the container?
- Is it better to simply monitor the http(s) request to each service? (I believe this in my case would make Caddy a single point of failure for this kind of monitor).
Thanks for any input!
Dozzle will tell you just about everything you want to know about the health of a container. Sadly, to my knowledge, it does not integrate with any notification platforms like ntfy, even though there is a long-standing request for that feature.
Jupp, running that too 😅 Was not aware of the pending feature; I’ll keep my eyes open for that in the future!
I just put a healthcheck in my compose files and then run an autoheal container that will automatically restart them if they are “unhealthy”.
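A minimal sketch of that setup; the app image and check command are placeholders, and autoheal here refers to the commonly used willfarrell/autoheal image, which restarts containers it sees go unhealthy:

```yaml
services:
  myapp:
    image: ghcr.io/example/myapp:latest   # placeholder image
    labels:
      - autoheal=true                     # autoheal only watches labelled containers
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]  # assumes curl exists in the image
      interval: 30s
      timeout: 5s
      retries: 3

  autoheal:
    image: willfarrell/autoheal
    restart: always
    environment:
      - AUTOHEAL_CONTAINER_LABEL=autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # needed so autoheal can restart containers
```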
- I don’t, in general all of them are restarted automatically. I have monitoring configured for services, not containers themselves.
- Yes, look at the original Dockerfile/Containerfile; if it has the HEALTHCHECK keyword, you can assume it does something.
- The person building the container; often it doesn’t make sense to create a healthcheck at all. Sometimes the healthcheck logic is provided by the application as well, but it still needs to be wired into the Containerfile.
- Better to monitor your service (application+db+proxy+queue+whatever), not containers in isolation.
> How do you notify yourself about the status of a container?
I usually notice if a container or application is down because that usually results in something in my house not working. Sounds stupid, but I’m not hosting a hyper available cluster at home.
> Is there a “quick” way to know if a container has a healthcheck as a feature?
Check the documentation
> Does the healthcheck feature simply depend on the developer of each app, or on the person building the container?
If the developer adds a healthcheck feature, you should use that. If there is none, you can always build one yourself. If it’s a web app, a simple HTTP request does the trick; just validate the returned HTML: if the status code is 200 and the output contains a certain string, it seems to be up. If it’s not a web app, like a database, a simple `SELECT 1` on the database could tell you if it’s reachable or not.

> Is it better to simply monitor the http(s) request to each service? (I believe this in my case would make Caddy a single point of failure for this kind of monitor.)
If you only run a bunch of web services that you use on demand, monitoring the HTTP requests to each service is more than enough. Caddy being a single point of failure is not a problem, because Caddy being dead still results in the service being unusable. And you will immediately know whether Caddy died or the service behind it, because the error message looks different: if the upstream is dead, Caddy returns a 502; if Caddy is dead, you’ll get a “Connection timed out”.
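To make the database case above concrete, a `SELECT 1` compose healthcheck might look like this (a minimal sketch, assuming the official postgres image, where local socket connections as the postgres user need no password):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder
    healthcheck:
      # full query round-trip; pg_isready (mentioned further down the thread) is a cheaper alternative
      test: ["CMD-SHELL", "psql -U postgres -c 'SELECT 1' || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```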
Yeah, fair enough. Personally I want to monitor backend services too, just for good measure. Also to prove to my friends and family that I can maintain a higher uptime % than Cloudflare 🤣
If you’re looking for this, you can use something like Uptime Kuma, which pings each service and looks for a specific response, and notifies you if it doesn’t get one.
I doubled down recently and now have Grafana dashboards + alerts for all of my proxmox hosts, their containers etc.
Alerts are mainly mean CPU, memory or disk utilization > 80% over 5 minutes
I also get all of my notifications via a self hosted ntfy instance :~)
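As a rough idea of what such an alert looks like in Prometheus rule syntax (a sketch only, assuming node_exporter-style metrics; Grafana-managed alerts express the same query through the UI):

```yaml
groups:
  - name: host-alerts
    rules:
      - alert: HighCpuUsage
        # mean CPU busy % over 5 minutes, per host
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "CPU above 80% for 5 minutes on {{ $labels.instance }}"
```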
As I wrote in my post, I’m already using Uptime Kuma to monitor my services. However, if I choose the “docker container” mode, Uptime Kuma can’t actually monitor it, as there is no health feature in most containers, so this results in 100% downtime 🙃 The other way to do it would be to just check the URL of the service, which of course works too, but it’s not a “true” health check.
For databases, many like postgres have a ping / ready command you can use to ensure it’s up and not have the overhead of an actual query! Redis is the same way (I feel like pg and redis health checks covers a lot of the common stack patterns)
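For illustration, Redis’s ping as a compose healthcheck could look like this (a minimal sketch; `redis-cli ping` exits non-zero when the server doesn’t answer):

```yaml
services:
  redis:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]  # replies PONG and exits 0 when up
      interval: 30s
      timeout: 5s
      retries: 3
```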
```sh
docker inspect --format='{{json .State.Health}}' <container_name>
```

`HEALTHCHECK` is part of the Dockerfile syntax and ought to be supported by all your container runtimes: https://docs.docker.com/reference/dockerfile/#healthcheck
You could extend all the dockerfiles that don’t have a health check to implement this feature with whatever health check makes sense for the application, even if for now it’s just a curl of an endpoint.
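A minimal sketch of that, assuming a hypothetical upstream image and that curl is available inside it (add an install step if not):

```dockerfile
# extend the upstream image (placeholder name) with a basic healthcheck
FROM ghcr.io/example/some-app:latest

# assumes curl is present in the image; add a RUN step to install it if missing
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://localhost:8080/ || exit 1
```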
This is a neat little inspect command indeed!
I use uptimekuma with notifications through home assistant. I get notifications on my phone and watch. I had notifications set up to go to a room on my matrix homeserver but recently migrated it and don’t feel like messing with the room.
I assume you then also use apprise as middleman here or?
You can read the Dockerfile for the HEALTHCHECK clause; however, not all images have it, as it was introduced in later Docker versions.
You can also write your own using things like curl.
2, no, just check the docs.
3, yup
You can make your own health checks in docker compose, so for instance, I had etcd running provided by another company, and I just set up the check in compose using the etcdctl commands (etcdctl endpoint health).
https://docs.docker.com/reference/compose-file/services/#healthcheck
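In a compose file, that check would look roughly like this (a sketch; the image tag is just an example):

```yaml
services:
  etcd:
    image: quay.io/coreos/etcd:v3.5.16
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]  # exits non-zero if the endpoint is unhealthy
      interval: 30s
      timeout: 10s
      retries: 3
```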
i just let kubernetes handle it for me. k3s specifically.
Maybe a transition to a cluster homelab should be the goal of 2026, would be fun.
maybe! three raspis and k3s have served me mostly well for years, tho with usb/sata adapters cuz the microsd was getting rather unreliable after a while
Nice one! Fortunately I just rebuilt my server with an i5-12400 and a fancy new case, and I’m slowly transitioning to an all-SSD build! I would probably lean towards a single-node cluster using Talos.
- Some kind of monitoring software, like the Grafana stack. I like email and Discord notifications.
- The Dockerfile will have a HEALTHCHECK statement, but in my experience this is pretty rare. Most of the time I set up a health check in the docker compose file, or I extend the Dockerfile and add my own. You sometimes need to add a tool (like curl) to do the health check anyway.
- It’s a feature of the container, but the app needs to support some way of signaling “health”, such as through a web API.
- It depends on your needs. You can do all of the above. You can do so-called black box monitoring where you’re just monitoring whether your webapp is up or down. Easy. However, for a business you may want to know about problems before they happen, so you add white box monitoring for sub-components (database, services), timing, error counts, etc.
To add to that: health checks in Docker containers are mostly for self-healing purposes. Think about a system where you have a web app running in many separate containers across some number of nodes. You want to know if one container has become too slow or non-responsive so you can restart it before the rest of the containers are overwhelmed, causing more serious downtime. So, a health check allows Docker to restart the container without manual intervention. You can configure it to give up if it restarts too many times, and then you would have other systems (like a load balancer) to direct traffic away from the failed subsystems.
It’s useful to remember that containers are “cattle not pets”, so a restart or shutdown of a container is a “business as usual” event and things should continue to run in a distributed system.
Thanks for your input 👍
If I go to its web interface (because everything is a web interface) and it’s down, then I know it has a problem.
I could set up monitoring, but I wouldn’t care enough to fix it until I had free time to use it either.
Same here. I’m the only user of my services, so if I try visiting the website and it’s down, that’s how I know it’s down.
I prefer phrasing it differently, though. “With my current uptime monitoring strategy, all endpoints serve as an on-demand healthcheck endpoint.”
One legitimate thing I do, though, is have a systemd service that starts each docker compose file. If a container crashes, systemd will notice (I think it keeps an eye on the PIDs automatically) and restart them.
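A sketch of such a unit, assuming the compose project lives in /opt/myapp. Running `docker compose up` in the foreground lets systemd supervise it; note that by default a single crashed container won’t stop compose, so `--abort-on-container-exit` is what actually makes a crash propagate to systemd:

```ini
[Unit]
Description=myapp compose stack
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/myapp
# foreground so systemd tracks the process; abort flag so one dead container fails the unit
ExecStart=/usr/bin/docker compose up --abort-on-container-exit
ExecStop=/usr/bin/docker compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```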
A superb image will have a health check endpoint set up in the dockerfile.
A good image will have a health check endpoint on either the service or another port that you can set up manually.
Most images will require you to manually devise some convoluted health check procedure using automated auth tokens.
All of my images fall into that latter category. You’re welcome.
(Ok, ok, I’m sorry. But you did just remind me that I need to code a health check endpoint and put it in the dockerfile.)
So many upvotes without a comment :/ Sadly I don’t have much useful info to add either, I’m looking forward to how others do it as well, since I recently noticed this panel in Beszel too.
Honestly, I use the status icons in Homepage dashboard as a health check, since I always use my dashboard to navigate to apps. Red status indicator -> I have to go fix it. Nothing more severe.
But for point 3 I do have a strong hunch that it depends on the container image creator: a health check is usually just a command that either succeeds or not (or an HTTP request that gets a 200 or not), so it can be as simple as pointing a request at the root URL of the app. Of course, this is not the most performant way to check, which is why app makers may also put in explicit liveness/readiness or similar endpoints that return a really short JSON payload to indicate their status. But for the containers that do have a healthcheck, it must be implemented in the image (too), I think.
So I’m also using Beszel and Ntfy to track my systems because it’s lightweight and very very easy. Coming from having tried Grafana and Prometheus and different TSDBs I felt like I was way better off.
I’ve been following Beszel’s development closely because it was previously missing features like container monitoring and systemd monitoring, which I’m very thankful they added recently, since containers are my primary way of hosting all my applications. The “Healthy” or “Unhealthy” status is directly reported by Docker itself and not something Beszel monitors directly, so it has to be configured, either in the Dockerfile of the container image or afterwards using the healthcheck options when running a container.
As some other comments mentioned, some containers do come with a healthcheck built in which makes docker auto-configure and enable that healthcheck endpoint. Some containers don’t have a healthcheck built into the container build file and some have documentation for adding a healthcheck to the docker run command or compose file. Some examples are Beszel and Ntfy themselves.
For containers that do not have a healthcheck built into the build file it is either documented how to add it to the compose or you have to figure out a way to do it yourself. For docker images that are built using a more standard image like Alpine, Debian or others you usually have something like curl installed. If the service you are running has a webpage going you can use that. Some programs have a healthcheck command built into it that you can also use.
As an example, the postgresql program has a built-in healthcheck command you can use that’ll check if the database is ready. The easiest way to add it would be to do

```yaml
healthcheck:
  test: ["CMD", "pg_isready", "-U", "root", "-d", "db_name"]
  interval: 30s
  retries: 5
  start_period: 60s
```

That’ll run the command `pg_isready -U root -d db_name` inside the container every 30 seconds, but not before 60 seconds to give the container time to get up and running. Options can be changed depending on the speed of the system.

Another example: for a container that has the curl program available inside it, you can add something like

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/"]
  interval: 1m
  retries: 3
```

This will run `curl -f http://localhost:3000/` every minute. If either of the above examples exits with an exit code higher than 0, Docker will report the container as unhealthy. Beszel will then read that data and report back that the container is not healthy. Some web apps have something along the lines of a `/health` endpoint you can use the curl command with as well.

Unless the developer has spent some extra time on the healthchecks, it is often just a basic way to see that the program inside the container is running. However, usually the container itself exits if the program it is running crashes or quits, so a healthcheck isn’t always necessary; the “healthcheck” will be that the container has abruptly stopped. This is why something like Uptime Kuma is worth considering alongside Beszel: it can also notice when a web address or similar is down because a container exited, which as of now Beszel is still sadly lacking.
I would recommend you read up on the Docker Compose spec for healthchecks, since the other options let you do things like timeouts and whatnot; combining that with whatever program you’re running in the healthcheck, you can get very creative if you must.
My personal recommendation would be to stick with Uptime Kuma for proper service-availability healthchecks, since it’ll be easier to configure and get an overview of things like slow page load times and stopped containers, while using Beszel to monitor performance and resource usage.
Thanks for this very in depth answer, learned a lot from this 🫶
I rely on the developers putting in a health check, but few do.
I’ve also got Uptime Kuma set up, which is kinda like an external healthcheck.