Exec monitor #1117
I concur. I'd like to use something like this to monitor my various Syncthing/Restic backups, and show the results on my status page. That way at a glance I could see when the latest backup was without having to SSH into the storage box and check it by hand. Straight exec or even a shell script (either or) would be great. |
This would be great, I use Duplicati and would love to see that the backup completed. |
+1 this feature request! Would help to migrate away from nagios for my custom checks |
Just circling back to say the way I solved this for myself in the interim is having a scheduled job run in my Nomad cluster that hits a Push type monitor with a high heartbeat interval. I chose 90,000 seconds since it's a bit more than 24 hours (daily backups). Then in Nomad I have a periodic batch job that just executes my desired task every morning - here it's PostgreSQL dumps to a folder that Restic picks up later (this could be a cronjob or scheduled Kubernetes task or whatever). After the task succeeds (or fails) I hit the Uptime Kuma push endpoint via wget with the corresponding status:

```bash
#!/bin/bash
umask 0077
timestamp=$(date "+%Y%m%d")
[[ range $db := .app.postgres.backup.databases ]]
output="/dump/$timestamp-[[ $db ]].sql"
echo -n "Backing up [[ $db ]] to: $output"
if pg_dump [[ $db ]] > "$output"; then
    echo " - success"
    wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=up&msg=IT%27S%2012%20O%27CLOCK%20AND%20ALL%27S%20WELL%21'
else
    echo " - failed!"
    wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=failed&msg=Postgres%20backups%20failed'
fi
[[ end ]]
exit 0
```

Not an exact solution, since you only find out an operation failed on a 24 hour lag when the check-in doesn't happen. So still 👍 for this being added natively. :) |
Such a feature should be considered high-priority because it would immediately expand the supported monitor types without requiring @louislam to write explicit support for them. Before adding a single new monitor, add this, because it implicitly adds every protocol under the sun. For example, let's say I wanted to use SNMP monitoring for an old router (this is just an example; it could be any protocol that has command-line tooling). Instead of asking "please add SNMP support", "oh Louis, I need SNMPv3, you only added v2", I'd just install net-snmp on Linux and call snmpget; Kuma checks the exit code, and the problem is solved:
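A minimal sketch of such a check (host, community string and OID are placeholders; assumes net-snmp is installed):

```bash
#!/bin/bash
# The proposed exec monitor would run this and treat a non-zero exit code as "down".
# snmpget already exits non-zero on timeout or error, so no extra logic is needed.
snmpget -v2c -c public -t 5 -r 1 192.0.2.1 SNMPv2-MIB::sysUpTime.0 > /dev/null
```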
I could even do all the advanced stuff I want in a bash script. |
It is not that easy. Please read my comment from #3178 (comment). |
Frankly, you're a developer of a tool, you can't stop a user from using the tool to destroy his system. You already require the admin to authenticate to add checks, what more can you do? GNOME Terminal doesn't stop the user from doing "rm -rf *" :) If you really want to hand-hold the user you could restrict the execution to a list of scripts defined in a specific directory. So for example:
Let's say the uptime-kuma-custom-scripts directory contains:
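Something like the following, say (only snmp-check.sh comes from this comment; the other file names are made up for illustration):

```bash
$ ls /app/custom-monitors
backup-age.sh  ldap-bind.sh  snmp-check.sh
```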
When the user is adding a new monitor, if they select Custom Monitor as the type, you ls /app/custom-monitors, show every file as an option in a dropdown selection. So in my case I would select snmp-check.sh. And then you run this pre-defined task. No concerns here, right? |
I really like the idea of being able to execute commands. I get the security risk - it's definitely concerning. One way a vendor would address this is by only allowing the execution of trusted/predefined scripts within a folder.
Yes, this is the way that most vendors would address this type of concern. You could even require the scripts to be signed, hashed, or pulled from a trusted source. Then, within the UI, you would simply specify the script and any parameters or variables. |
Definitely feeling the shortcomings of Uptime Kuma without this feature. I made a list of the services I want to monitor and I have more protocols unsupported by Kuma than supported ones. 😆 Now my choices are:
Btw, some features on @louislam's todo list, like domain expiration warning, are trivially implementable with the feature we're requesting:
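A rough sketch of such a check (the whois field name and GNU date are assumptions; whois output varies by TLD/registrar):

```bash
#!/bin/bash
# Exit non-zero if example.com expires within 30 days.
expiry=$(whois example.com | grep -i 'Registry Expiry Date:' | head -n1 | awk '{print $NF}')
[ -n "$expiry" ] || exit 2                     # expiry field not found
days_left=$(( ($(date -d "$expiry" +%s) - $(date +%s)) / 86400 ))
echo "example.com expires in $days_left days"
[ "$days_left" -gt 30 ]
```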
That said, I realize Kuma is trying to be multiplatform (but who is doing their IT on Windows?), and Louis would probably prefer a cross-platform solution. Although bash is multiplatform too, if the Windows user installs Cygwin. |
If the services you are monitoring are not web based, and you are comfortable writing custom scripts, the Push type monitor should work well enough for you. |
Hi @procheeseburger, I also use Duplicati, and for monitoring completed backups I use www.duplicati-monitoring.com. It's a free service that alerts you whether backups completed or not (it actually reads the report from Duplicati) and sends a daily email summarizing the backups that completed. Not sure how you would use gRPC for this, but you can already use the Push monitor type and a heartbeat alert from Duplicati to monitor this as well. |
I'm sorry, but if you are so good with bash scripts, can't you simply implement your monitoring requirements with a simple bash script on a cron job, and do a wget/curl call to the Push notification URL with an up or down status depending on the exit code? This is what I do for my Windows PowerShell scripts on a Windows scheduled task (yes, we use Windows-based hosting and monitoring). The important part here is that the service I'm monitoring is behind a NAT and firewall, and my Uptime Kuma instance is running at another, independent location. This way I can monitor anything anywhere (from different data centres) and my Uptime Kuma notifications are not dependent on the monitored locations' or services' internet access. My concern with gRPC would be that if your Kuma instance is compromised - it is an internet-facing service - and someone figures out they can execute arbitrary gRPC commands or scripts right from the monitor side, your infrastructure may get infiltrated this way. |
Just found this project and I am impressed by the usability. I would like to upvote the request for the execution of commands, possibly from a whitelist of user-given directories. With this implemented, the entire universe of monitoring-plugins from https://github.com/monitoring-plugins/monitoring-plugins would become available, and this is an enormous, tried-and-tested collection. It would allow Uptime Kuma to use SSH checks (see #2609) to monitor exotic things (snmp, ldap, smb, uptime, sensors, file age, ...). |
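For instance, a monitor could wrap one of those plugins directly (the plugin path varies by distro; Debian's is shown here):

```bash
#!/bin/bash
# Wraps a stock monitoring-plugins check. Nagios-style exit codes are
# 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN, so non-zero maps cleanly to "down".
/usr/lib/nagios/plugins/check_ssh -H storage.example.com
```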
Regarding the security risk of executing arbitrary commands, I think it is only a problem if uptime-kuma's account is shared with other users. (The server owner is already capable of executing arbitrary commands anyway.) One way to solve this would be disabling this feature by default unless some environment variable is set. |
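Something like the following, for example (the variable name is hypothetical, not an existing Uptime Kuma option):

```bash
# Hypothetical opt-in flag, set explicitly by the server owner at startup:
docker run -d -p 3001:3001 -e UPTIME_KUMA_ALLOW_EXEC=1 louislam/uptime-kuma:1
```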
As a workaround, there are some projects that expose commands as a REST API. See: That said, if this were directly integrated in uptime-kuma, it would be way better. |
I think with one of the web-to-shell bridges we would be able to retrieve an OK/NOK status based on the standard HTTPS monitor, but we would not be able to fetch the value of the corresponding metric, right? (e.g. CPU use, load, memory, disk space) |
Please have a look at #819 (comment) and further discussions in #819 |
Yup, I see. But I think it would be more interesting to have remote execs as first-class-citizen monitors which would grab a metric and plot it - just like happens, for example, with the HTTPS monitor. I used this intensively with Adagios + SSH. It would be very interesting to bring this to UK, because it has a mind-blowing UI. It would enable the use of the full monitoring-plugins package, which is available on Linux machines and gives you the parsing of the OS metrics for free (no need to do scripts by hand like mentioned in #819). These plugins have been distilled for many years, which is an advantage over the use of ad-hoc scripts. |
You should really look into this. As another user points out, there are many scenarios the Push monitor is not suitable for. I don't buy the security excuse someone posted here; you can always allow execution only of scripts from a specific path (as a whitelist) and the problem is gone. This would open up a whole world of monitoring opportunities: Docker logs, ssh, usb ports, etc... an infinite list. Kuma would be the definitive MONITOR. |
We just don't want people to get angry with us again. => If security folks tell us that this is not ok, then we try to listen. I am not working in security, but this is especially a security boundary where crossing might be risky, as this would essentially disable the multi-user (or auth in general!) features of Uptime Kuma. I would argue that such a feature would essentially only be viable without auth, as circumventing auth is trivial with console access to the machine. If you can come up with a design that fits into our security model, we can discuss it, but currently I don't see such a design.
I might have overlooked something, but after re-reading the thread I cannot see a comment that is not addressed. If you are asking about |
@CommanderStorm here you have an example. We are not talking about remote execution of arbitrary code, just about allowing users to load their own scripts and be happy.
Consider if you have to watch 18 servers, each running multiple Docker containers, and you only have SSH access; you can't change their systems to configure push monitors because that IT doesn't belong to you, and you don't want to change anything more than necessary. You have to monitor not only whether the Docker containers are running, but whether they are doing what they are supposed to, so you manually inspect the logs from each one, parsing them with a lot of logic... This is a custom scenario. I wouldn't expect someone else to code an official plugin for this, but at least let me do it myself. With Push monitors you have to open ports, change iptables, use tunnels/VPNs, etc. - a lot of complications for something so trivial to do if you have custom monitors. |
I think you are overcomplicating your life.
I think from a security standpoint the first one is preferable as there is more compartmentalisation. |
The same argument as with GHSA-7grx-f945-mj96 applies though.
@n-thumann (the researcher who discovered GHSA-7grx-f945-mj96) has better ideas on how to prevent such an attack. We really don't want to ship insecure software, and if the security community thinks something is not secure, we should likely listen. |
Great discussion. I'd like to add that security is not a topic of concern here and that arbitrary code execution by a user is neither necessary nor desirable. How I see this:
So, whatever code is executed is code that has been placed by the admin. The admin could just as well delete files, turn off the system, etc. There is no escalation here. In case the custom scripts need to connect via SSH to remote systems, the code that is executed runs with the privileges of the remote user, which has been provisioned with this in mind - usually a restricted user created for this single purpose. In this use case the SSH port is usually whitelisted by IP, the SSH users are usually whitelisted by name and have their keys auto-managed by a central configuration system. I am pretty obsessed about security but I do not see a problem here. |
If an admin of any system uploads a malicious/exploitable executable, then the system is already lost and there is nothing that can be done about it. The admin of a mail server can impersonate hosted domain users and send malware on their behalf. The admin of a web server can host malware discreetly in a subdir of a hosted website. The admin of a DNS server can hijack specific DNS records, and so on. In regards to GHSA-7grx-f945-mj96: the problem is that any user is able to install plugins via an API. You need to consider whether you really want any user to do so and whether an API endpoint is the right way to do it. But this is not the point of the present issue. |
A malicious admin can already inject code into U-K to be executed both client- and server-side. The possibilities are endless. Think displaying Google, GitHub or bank login pages, phishing credentials, OTPs, mining crypto, DDoS, ... you name it. They have the combined potential of the user's browser session and the server backend at their disposal. An admin without malicious intent running executables as monitors is not the problem. Letting unauthenticated or low-privileged users run or even install arbitrary or exploitable code is. There's no need to replace /bin/sh either. When someone with malicious intent gains shell access, they have hit the jackpot already. |
As someone who works in cyber, this is 1,000% correct. The proper way to restrict this is to restrict execution to specific scripts or paths. Yes, someone could replace that specific file, but doing so would require access to the file system, which is game over already. If you really want to get protective, then you could require approval when the file is modified, based on hash or modification date. But I think that's probably overkill. Specific path restriction is probably adequate - you wouldn't want to allow execution of anything anywhere on the system, because a website compromise will typically let an attacker upload files to the directories the website can write to. And all of this illustrates why the web service shouldn't be running as admin/root anyway... |
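A rough sketch of that hash-based gating (the pinned value is a placeholder recorded at approval time):

```bash
#!/bin/bash
# Refuse to run the monitor script if it changed since it was approved.
script=/app/custom-monitors/snmp-check.sh
approved="0000000000000000000000000000000000000000000000000000000000000000"  # placeholder sha256
actual=$(sha256sum "$script" | awk '{print $1}')
if [ "$actual" != "$approved" ]; then
    echo "script hash mismatch - re-approval required" >&2
    exit 2
fi
exec "$script"
```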
One possibility would be to provide the available "exec monitors" in a configuration file, maybe with the option to pass environment variables from the monitor's setup page. Anything beyond that is probably unnecessary. You can't hash all the binaries and libraries a process is going to open. Also, "plain" Uptime Kuma isn't perfect either. Compared to e.g. adding a shell script calling ldapsearch, exposing the GUI or API to the public is by far the bigger risk - no offense intended! Combine that with full access to the Docker socket and it's an incident waiting to happen. |
Having configuration files just to have double accounting does not seem worth it (or at least I don't see a benefit). We try to restrict what needs to be configured via environment variables to the least amount of options, as having to dig through a gigantic parameter dump is terrible for everyone involved. |
I know, and have socket proxies everywhere. But this is also "advanced stuff". Have you actually documented what access U-K needs? Until then people will probably use the same proxy for Portainer, U-K and others (i.e. almost fully exposed). And it's not just database credentials. There are authorization headers, some clear-text passwords maybe, API tokens, and so on. As you say, reducing these to the bare minimum is advanced stuff for most users. And even then, you probably don't want to leak the endpoints, error messages, etc. Some apps encrypt them "at rest", but as long as the key is readily available this doesn't change much. Anyway, let's end this here. I accept that this is only partially related. |
All of the security concerns can be addressed in a few ways when implementing this functionality. And yes, I do agree that some of those things might be a security concern, but every system that allows user input is vulnerable to some extent. To enable custom script monitoring it might be required to:
Also don't forget that the Docker container would need to be modified by the end user to actually utilize such custom monitoring options, as the image doesn't have all the packages pre-installed. That would greatly improve monitoring possibilities instead of requiring the developer to write specific checks. |
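For example, exposing the scripts could look like this (host path and image tag are illustrative):

```bash
# Expose a read-only directory of custom check scripts to the container:
docker run -d --name uptime-kuma -p 3001:3001 \
  -v "$(pwd)/custom-monitors:/app/custom-monitors:ro" \
  louislam/uptime-kuma:1
```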
🏷️ Feature Request Type
New Monitor
🔖 Feature description
Please add a monitor which executes user provided programs and checks the exit code.
✔️ Solution
The monitor executes programs with optional arguments as provided by the user and checks the exit code. Users of the docker image would need to mount a directory with static binaries and shell scripts in order to use them.
e.g. calling gRPCurl to properly check whether a gRPC service works. This is currently not possible and would mimic Kubernetes' exec probe or Monit's program status test.
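A minimal sketch of such a check (the address and the use of the standard gRPC health service are assumptions):

```bash
#!/bin/bash
# grpcurl exits non-zero when the call fails, so the monitor only needs the exit code.
grpcurl -plaintext grpc.example.com:50051 grpc.health.v1.Health/Check
```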
❓ Alternatives
No response
📝 Additional Context
No response