
Exec monitor #1117

Open · otbutz opened this issue Jan 3, 2022 · 39 comments

Labels: area:monitor (Everything related to monitors), feature-request (Request for new features to be added), type:new (proposing to add a new monitor)

Comments

@otbutz commented Jan 3, 2022

⚠️ Please verify that this feature request has NOT been suggested before.

  • I checked and didn't find a similar feature request

🏷️ Feature Request Type

New Monitor

🔖 Feature description

Please add a monitor which executes user-provided programs and checks the exit code.

✔️ Solution

The monitor executes programs, with optional arguments, as provided by the user and checks the exit code. Users of the Docker image would need to mount a directory with static binaries and shell scripts in order to use them.

E.g. calling gRPCurl to properly check whether a gRPC service works. This is currently not possible, and would mimic Kubernetes' exec probe or Monit's program status test.
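
For illustration, a minimal sketch of what such a check could look like, assuming grpcurl is installed and the target service exposes the standard gRPC health-checking protocol (host and port are placeholders):

#!/bin/bash
# Query the standard gRPC health endpoint (requires server reflection or a
# proto file); grpcurl exits non-zero on failure, so the monitor would map
# exit code 0 to "up" and anything else to "down".
grpcurl -plaintext localhost:50051 grpc.health.v1.Health/Check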

❓ Alternatives

No response

📝 Additional Context

No response

otbutz added the feature-request label on Jan 3, 2022
@sysr-q commented Mar 7, 2022

I concur. I'd like to use something like this to monitor my various Syncthing/Restic backups, and show the results on my status page. That way at a glance I could see when the latest backup was without having to SSH into the storage box and check it by hand.

Straight exec or even a shell script (either or) would be great.

@weirlive commented Mar 7, 2022

This would be great, I use Duplicati and would love to see that the backup completed.

@poblabs commented Jan 8, 2023

+1 for this feature request! It would help me migrate away from Nagios for my custom checks.

@sysr-q commented May 2, 2023

Just circling back to say the way I solved this for myself in the interim is having a scheduled job run in my Nomad cluster that hits a Push type monitor with a high heartbeat interval. I chose 90,000 seconds since it's a bit more than 24 hours (daily backups).

[Screenshot: Push monitor configured with a 90,000-second heartbeat interval]

Then in Nomad I have a periodic batch job that just executes my desired task every morning - here it's PostgreSQL dumps to a folder that Restic picks up later (this could be a cronjob or scheduled Kubernetes task or whatever). After the task succeeds (or fails) I hit the Uptime Kuma push endpoint via wget with "up" (has to be status=up exactly) or "failed" (can be anything else) accordingly.

#!/bin/bash
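# Note: the [[ ... ]] blocks below are Nomad template directives, rendered
# before the script runs, looping over the configured database names.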
umask 0077
timestamp=$(date "+%Y%m%d")

[[ range $db := .app.postgres.backup.databases ]]
output="/dump/$timestamp-[[ $db ]].sql"
echo -n "Backing up [[ $db ]] to: $output"
if pg_dump [[ $db ]] > "$output"; then
  echo " - success"
  wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=up&msg=IT%27S%2012%20O%27CLOCK%20AND%20ALL%27S%20WELL%21'
else
  echo " - failed!"
  wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=failed&msg=Postgres%20backups%20failed'
fi
[[ end ]]
exit 0

Not an exact solution, since you only find out an operation failed on a 24-hour lag, when the check-in doesn't happen. So still 👍 for this being added natively. :)

@jerkstorecaller commented:

Such a feature should be considered high-priority because it would immediately expand the supported monitor types without requiring @louislam to write explicit support for them. Before adding a single new monitor, add this, because it implicitly adds every protocol under the sun.

For example, let's say I wanted to use SNMP monitoring for an old router (this is just an example; it can be any protocol that has command-line packages). Instead of asking you "please add SNMP support", "oh Louis, I need SNMPv3, you only added v2", I'd just install net-snmp on Linux and call snmpget; Kuma checks the result code, and the problem is solved:

#!/bin/bash
snmpget -v2c -c public 192.168.1.1 .1.3.6.1.2.1.1.1.0

I could even do all the advanced stuff I want in a bash script.

@louislam (Owner) commented:

It is not that easy. Please read my comment from #3178 (comment).

@jerkstorecaller commented:

> It is not that easy. Please read my comment from #3178 (comment).

Frankly, you're a developer of a tool; you can't stop a user from using the tool to destroy their system. You already require the admin to authenticate to add checks; what more can you do? GNOME Terminal doesn't stop the user from doing "rm -rf *" :)

If you really want to hand-hold the user, you could restrict the execution to a list of scripts defined in a specific directory. So for example:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data -v uptime-kuma-custom-monitors:/app/custom-monitors louislam/uptime-kuma

Let's say the uptime-kuma-custom-monitors volume contains:

snmp-check.sh
email-check.sh
tftp-check.sh

When the user adds a new monitor and selects Custom Monitor as the type, you ls /app/custom-monitors and show every file as an option in a dropdown. So in my case I would select snmp-check.sh, and then you run this pre-defined task. No concerns here, right?
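
As a sketch, the enumeration could be as simple as the following (illustrative shell, not existing Uptime Kuma code):

#!/bin/bash
# List executable regular files in the whitelisted directory (symlinks are
# excluded by -type f); each resulting name becomes a dropdown entry.
find /app/custom-monitors -maxdepth 1 -type f -perm -u+x -printf '%f\n' | sort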

@stacksjb commented Aug 3, 2023

I really like the approach of being able to execute commands. I get the security risk; it's definitely concerning.

One way a vendor would address this is by allowing execution only of trusted/predefined scripts within a folder.

> > It is not that easy. Please read my comment from #3178 (comment).
>
> Frankly, you're a developer of a tool; you can't stop a user from using the tool to destroy their system. You already require the admin to authenticate to add checks; what more can you do? GNOME Terminal doesn't stop the user from doing "rm -rf *" :)
>
> If you really want to hand-hold the user, you could restrict the execution to a list of scripts defined in a specific directory. So for example:
>
> docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data -v uptime-kuma-custom-monitors:/app/custom-monitors louislam/uptime-kuma
>
> Let's say the uptime-kuma-custom-monitors volume contains:
>
> snmp-check.sh
> email-check.sh
> tftp-check.sh
>
> When the user adds a new monitor and selects Custom Monitor as the type, you ls /app/custom-monitors and show every file as an option in a dropdown. So in my case I would select snmp-check.sh, and then you run this pre-defined task. No concerns here, right?

Yes, this is the way that most vendors would address this type of concern. You could even require the scripts to be signed, hashed, or pulled from a trusted source.

Then, within the UI, you would simply specify the script and any parameters or variables.

@jerkstorecaller commented:

Definitely feeling the shortcomings of Uptime Kuma without this feature.

I made a list of the services I want to monitor and I have more protocols unsupported by Kuma than supported ones. 😆
As it is, Kuma seems designed primarily for web developers; all the supported monitors are web-adjacent.

Now my choices are:

  1. Find a worthy alternative to Kuma which allows running arbitrary scripts as your uptime check. Any recommendations?
  2. If there's no decent alternative, write an HTTP API which, when called, executes an arbitrary Linux command. Have Kuma call it, e.g. http://localhost/workaround/check-smtp/, which then calls check-smtp.sh and returns 404 (or whatever) if the exit code is not 0, to signify failure. It's nothing hard; it's more about adding an extra piece of custom complexity (see the sketch below).
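
For what it's worth, a rough sketch of option 2, assuming ncat is available and check-smtp.sh is your own script (paths and port are placeholders):

#!/bin/bash
# respond.sh: run the check and answer with an HTTP status that an Uptime
# Kuma HTTP monitor can interpret (200 = up, 404 = down).
if /opt/checks/check-smtp.sh >/dev/null 2>&1; then
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nOK'
else
  printf 'HTTP/1.1 404 Not Found\r\nContent-Length: 4\r\nConnection: close\r\n\r\nFAIL'
fi

Serve it with something like ncat -lk 8080 --sh-exec ./respond.sh and point a plain HTTP monitor at http://localhost:8080/.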

Btw, some features on @louislam's todo list, like certificate expiration warnings, are trivially implementable with the feature we're requesting:

#!/bin/bash
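# Pull the notAfter date from the site's TLS certificate and compare it with
# today's date (GNU date is assumed for the -d parsing below).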

expiration_date=$(echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -dates | grep notAfter | cut -d'=' -f2)
expiration_date_seconds=$(date -d "$expiration_date" +%s)
current_date_seconds=$(date +%s)
days_left=$(( (expiration_date_seconds - current_date_seconds) / 86400 ))

if [ "$days_left" -lt 10 ]; then
  echo "Less than 10 days left for certificate expiration. We must warn the user!"
  exit 1
else
  echo "Still plenty of time!"
  exit 0
fi

That said, I realize Kuma is trying to be multiplatform (but who is doing their IT on Windows?), and Louis would probably prefer a cross-platform solution. Although bash is multiplatform too, if the Windows user installs Cygwin.

@chakflying (Collaborator) commented:

If the services you are monitoring are not web-based and you are comfortable writing custom scripts, the Push type monitor should be more than good enough for you.

@bernarddt commented:

> This would be great, I use Duplicati and would love to see that the backup completed.

Hi @procheeseburger, I also use Duplicati. For monitoring completed backups I use www.duplicati-monitoring.com. It's a free service that alerts you when backups complete or fail (it actually reads the report from Duplicati), and it sends a daily email with the number of backups completed. Not sure how you would use gRPC for this, but you can already use the Push Monitor type and a heartbeat alert from Duplicati to monitor this.

@bernarddt commented:

> Definitely feeling the shortcomings of Uptime Kuma without this feature.

> Btw, some features on @louislam's todo list, like certificate expiration warnings, are trivially implementable with the feature we're requesting:

I'm sorry, but if you are so good with bash scripts, can't you simply implement your monitoring requirements with a simple bash script on a cron job, and do a wget/curl call to the Push notification URL with an up or down status depending on the exit code?
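
For the record, a minimal sketch of that approach (the push token, URL, and paths are placeholders):

#!/bin/bash
# push-wrap.sh: run whatever check is passed as arguments and forward its
# exit code to an Uptime Kuma push monitor ("up" must be spelled exactly;
# any other status counts as a failure).
PUSH_URL='https://uptime.example.com/api/push/TOKEN'
if "$@"; then
  curl -fsS -o /dev/null "${PUSH_URL}?status=up&msg=OK"
else
  curl -fsS -o /dev/null "${PUSH_URL}?status=down&msg=check+failed"
fi

A crontab entry then ties it together, e.g. */5 * * * * /opt/checks/push-wrap.sh /opt/checks/check-smtp.sh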

This is what I do for my Windows PS scripts via a Windows Task (yes, we use Windows-based hosting and monitoring). The important part here is that the service I'm monitoring is behind NAT and a firewall, and my Uptime Kuma instance is running at another independent location. This way I can monitor anything anywhere (from different data centres), and my Uptime Kuma notifications are not dependent on the monitored locations' or services' internet access.

My concern with gRPC would be that if your Kuma instance is compromised (it is an internet-facing service) and the attackers figure out they can execute gRPC commands or scripts right from your monitor side, your infrastructure may get infiltrated this way.

@ghomem commented Jan 29, 2024

Just found this project and I am impressed by the usability. I would like to upvote the request for the execution of commands, possibly from a whitelist of user-given directories. With this implemented, the entire universe of monitoring-plugins from here:

https://github.com/monitoring-plugins/monitoring-plugins

would become available. And this is an enormous, tried-and-tested collection. It would allow Uptime Kuma to use SSH checks (see #2609) to monitor exotic things (SNMP, LDAP, SMB, uptime, sensors, file age, ...).
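
As a concrete illustration, assuming the Debian/Ubuntu package layout and arbitrary thresholds, a whitelisted exec check could be as small as this; monitoring-plugins follow the Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN):

#!/bin/bash
# Warn when the root filesystem drops below 20% free space, critical below 10%.
/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /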

@maple3142 commented:

Regarding the security risk of executing arbitrary commands, I think it is only a problem if uptime-kuma accounts are shared with other users. (The server owner is already capable of executing arbitrary commands anyway.)

One way to solve this would be disabling the feature by default unless an environment variable is set (UPTIME_KUMA_EXEC_MONITOR_ENABLED=true).
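
A sketch of such an opt-in, where the variable name is the suggestion above and not an existing Uptime Kuma option:

docker run -d --restart=always -p 3001:3001 \
  -e UPTIME_KUMA_EXEC_MONITOR_ENABLED=true \
  -v uptime-kuma:/app/data louislam/uptime-kuma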

@redge76 commented Feb 29, 2024

As a workaround, there are some projects that expose commands as a REST API. See:
https://github.com/msoap/shell2http
https://github.com/adnanh/webhook
https://github.com/fdefelici/shellst

That said, if it were directly integrated in uptime-kuma, it would be way better.
jerkstorecaller's solution is quite elegant and safe: #1117 (comment)

@ghomem commented Apr 28, 2024

> As a workaround, there are some projects that expose commands as a REST API. See: https://github.com/msoap/shell2http https://github.com/adnanh/webhook https://github.com/fdefelici/shellst
>
> That said, if it were directly integrated in uptime-kuma, it would be way better. jerkstorecaller's solution is quite elegant and safe: #1117 (comment)

I think with one of the web-to-shell bridges we would be able to retrieve an OK/NOK status via the standard HTTPS monitor, but we would not be able to fetch the value of the corresponding metric, right? (e.g. CPU use, load, memory, disk space)

@CommanderStorm (Collaborator) commented:

> fetch the value of the corresponding metric, right? (e.g. CPU use, load, memory, disk space)

Please have a look at #819 (comment) and further discussions in #819

@ghomem commented Apr 29, 2024

> > fetch the value of the corresponding metric, right? (e.g. CPU use, load, memory, disk space)
>
> Please have a look at #819 (comment) and further discussions in #819

Yup, I see. But I think it would be more interesting to have remote execs as first-class-citizen monitors which would grab a metric and plot it, just like happens, for example, with the HTTPS monitor.

I used this intensively with Adagios + SSH. It would be very interesting to bring this to Uptime Kuma, because it has a mind-blowing UI. It would enable use of the full monitoring-plugins package, which is available on Linux machines and gives you the parsing of OS metrics for free (no need to write scripts by hand as mentioned in #819). These plugins have been distilled over many years, which is an advantage over the use of ad-hoc scripts.

https://github.com/monitoring-plugins/monitoring-plugins

@EmaX093 commented May 14, 2024

You should really consider this. As another user points out, there are many scenarios the Push Monitor is not suitable for.

I don't buy the security excuse someone posted here; you can always allow execution only of scripts from a specific path (as a whitelist), and the problem is gone.

This would open a whole world of opportunities to monitor. Docker logs, SSH, USB ports, etc... an infinite list. Kuma would be the definitive monitor.

@CommanderStorm (Collaborator) commented May 14, 2024

> I don't buy the security excuse

We just don't want people to get angry with us again.
See GHSA-7grx-f945-mj96 for another issue that went in the same direction: a feature which, if used maliciously, has security implications.
I would argue that if that is the level of security we are operating under, encouraging AUTHENTICATED ARBITRARY CODE EXECUTION allowing for privilege escalation is not something that we can allow.

=> If security folks tell us that this is not OK, then we try to listen. I am not working in security.

This is especially a security boundary where crossing might be risky, as this would essentially disable the multi-user (or auth in general!) features of Uptime Kuma.

I would argue that such a feature would essentially only be viable without auth, as circumventing auth is trivial with console access to the machine.

If you can come up with a design that fits into our security model, we can discuss this, but currently I don't see such a design.

> there are many scenarios the Push Monitor is not suitable for.

I might have overlooked something, but after re-reading the thread I cannot see a comment that is not addressed.
Could you please cite your sources? 😓

If you are asking about monitoring-plugins/monitoring-plugins, this can be another monitor or a set of monitors.

@EmaX093 commented May 14, 2024

> > It is not that easy. Please read my comment from #3178 (comment).
>
> Frankly, you're a developer of a tool; you can't stop a user from using the tool to destroy their system. You already require the admin to authenticate to add checks; what more can you do? GNOME Terminal doesn't stop the user from doing "rm -rf *" :)
>
> If you really want to hand-hold the user, you could restrict the execution to a list of scripts defined in a specific directory. So for example:
>
> docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data -v uptime-kuma-custom-monitors:/app/custom-monitors louislam/uptime-kuma
>
> Let's say the uptime-kuma-custom-monitors volume contains:
>
> snmp-check.sh
> email-check.sh
> tftp-check.sh
>
> When the user adds a new monitor and selects Custom Monitor as the type, you ls /app/custom-monitors and show every file as an option in a dropdown. So in my case I would select snmp-check.sh, and then you run this pre-defined task. No concerns here, right?

@CommanderStorm here you have an example. We're not talking about remote execution of arbitrary code, just allowing users to load their own scripts and be happy.

> I might have overlooked something, but after re-reading the thread I cannot see a comment that is not addressed.
> Could you please cite your sources? 😓

Consider having to watch 18 servers with multiple Docker containers running, where you only have SSH access and can't change their systems to configure push monitors, because that IT doesn't belong to you. You don't want to change anything more than necessary. You have to monitor not only whether the Docker containers are running, but whether they are doing what they should, so you manually inspect the logs from each one, parsing them with a lot of logic... This is a custom scenario. I wouldn't expect someone else to code an official plugin for this, but at least let me do it myself.

With Push Monitors you have to open ports, change iptables, use tunnels/VPNs, etc.; a lot of complications for something so trivial to do if you have custom monitors.

@CommanderStorm (Collaborator) commented:

I think you are overcomplicating your life.
You can either:

  • locate the push monitors on the 18 servers (=> no credentials leave the servers, but you might need to allow that port/..)
  • locate the push monitors on the same server Uptime Kuma is on (=> need for credential sharing, likely no need for networking adjustments, assuming that this machine can ssh but not http as suggested above)

I think from a security standpoint the first one is preferable, as there is more compartmentalisation.

@CommanderStorm (Collaborator) commented:

> We're not talking about remote execution of arbitrary code, just allowing users to load their own scripts and be happy.

The same argument as with GHSA-7grx-f945-mj96 applies, though.
Whether we call the arbitrary executable a plugin, a shell script, or sandboxed JS, I don't see a real difference.
(Please correct me if my security understanding is bad.)

  • Admin uploads a malicious/exploitable executable to said directory; let's call it sh.
  • (I think the rest is obvious: remove auth/install a backdoor/...)

@n-thumann (the researcher who discovered GHSA-7grx-f945-mj96) has better ideas on how to prevent such an attack.

We really don't want to ship insecure software, and if the security community thinks something is not secure, we should likely listen.
My reluctance to allow authenticated remote code execution comes especially from the liability that upcoming laws like the Product Liability Directive (and the Cyber Resilience Act, but that likely does not matter here) introduce:
https://fosdem.org/2024/schedule/event/fosdem-2024-3683-the-regulators-are-coming-one-year-on/

@ghomem commented May 14, 2024

Great discussion. I'd like to add that security is not a topic of concern here and that arbitrary code execution by a user is neither necessary nor desirable.

How I see this:

  • the admin of the Uptime Kuma system should place the custom scripts on the system (example: apt install monitoring-plugins, or manual copy of custom scripts)
  • normal users logged in via web should see them as available but would not be able to change them

So, whatever code is executed is code that has been placed by the admin. The admin could just as well delete files, turn off the system, etc. There is no escalation here.

In case the custom scripts need to connect via SSH to remote systems, the code that is executed runs with the privileges of the remote user, which has been provisioned with this in mind; usually a restricted user created for this single purpose. In this use case the SSH port is usually whitelisted by IP, the SSH users are whitelisted by name, and their keys are auto-managed by a central configuration system.
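
For illustration, a minimal sketch of pinning such a restricted user to a single check via a forced command in authorized_keys (the plugin, thresholds, and key are placeholders):

# ~/.ssh/authorized_keys on the monitored host: the key may only run the
# pinned check, with no shell, forwarding, or interactive access.
command="/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... kuma-monitor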

I am pretty obsessed with security, but I do not see a problem here.

@ghomem commented May 14, 2024

> Admin uploads a malicious/exploitable executable to said directory; let's call it sh.

If the admin of any system uploads a malicious/exploitable executable, then the system is already lost and there is nothing that can be done about it. The admin of a mail server can impersonate hosted domain users and send malware on their behalf. The admin of a web server can host malware discreetly in a subdirectory of a hosted website. The admin of a DNS server can hijack specific DNS records, and so on.

Regarding GHSA-7grx-f945-mj96:

the problem is that any user is able to install plugins via an API. You need to consider whether you really want any user to do so, and whether an API endpoint is the right way to do it. But this is not the point of the present issue.

@thielj commented May 17, 2024

> Admin uploads a malicious/exploitable executable to said directory; let's call it sh.

A malicious admin can already inject code into U-K to be executed both client- and server-side. The possibilities are endless: think displaying Google, GitHub, or bank login pages, phishing credentials, OTPs, mining crypto, DDoS... you name it. They have the combined potential of the user's browser session and the server backend at their disposal.

An admin without malicious intent running executables as monitors is not the problem.

Letting unauthenticated or low-privileged users run or even install arbitrary or exploitable code is.

There's no need to replace /bin/sh either. When someone with malicious intent gains shell access, they have hit the jackpot already.

@stacksjb commented:

> > Admin uploads a malicious/exploitable executable to said directory; let's call it sh.
>
> A malicious admin can already inject code into U-K to be executed both client- and server-side. The possibilities are endless: think displaying Google, GitHub, or bank login pages, phishing credentials, OTPs, mining crypto, DDoS... you name it. They have the combined potential of the user's browser session and the server backend at their disposal.
>
> An admin without malicious intent running executables as monitors is not the problem.
>
> Letting unauthenticated or low-privileged users run or even install arbitrary or exploitable code is.
>
> There's no need to replace /bin/sh either. When someone with malicious intent gains shell access, they have hit the jackpot already.

As someone who works in cyber, this is 1,000% correct.

The proper way to restrict this is to restrict execution to specific scripts or paths. Yes, someone could replace that specific file, but doing so would require access to the file system, which is game over already.

If you really want to get protective, you could require approval whenever the file is modified, based on hash or modification date. But I think that's probably overkill.

Specific path restriction is probably adequate. You wouldn't want to allow execution of anything anywhere on the system, because a compromised website will typically let an attacker upload files to website-writable directories.

And all of this illustrates why the web service shouldn't be running as an admin/root anyway...

@thielj commented May 17, 2024

One possibility would be to provide available "exec monitors" in a configuration file, maybe with the option to pass environment variables from the monitor's setup page. Anything beyond that is probably unnecessary. You can't hash all binaries and libraries a process is going to open.
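
A purely hypothetical shape such a configuration file could take (nothing like it exists in Uptime Kuma today; all names are made up):

# /app/data/exec-monitors.conf: each entry names a script the UI may offer
# and the environment variables the setup page is allowed to set.
[ldap-check]
command = /app/custom-monitors/ldap-check.sh
env     = LDAP_HOST, LDAP_BASE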

Also, "plain" Uptime Kuma isn't perfect either. Compared to e.g adding a shell script calling ldapsearch, exposing the GUI or API to the public is by far the bigger risk - no offense intended! Combine that with full access to the docker socket and it's an incident waiting to happen.

@CommanderStorm (Collaborator) commented:

> One possibility would be to provide available "exec monitors" in a configuration file

Having configuration files just to have double accounting does not seem worth it (or at least I don't see a benefit).
I'd expect a simple "list all executable files in the directory without following links" to be the same level of security.

> maybe with the option to pass environment variables from the monitor's setup page

We try to restrict what needs to be configured via environment variables to the smallest number of options, as having to dig through a gigantic parameter dump is terrible for everyone involved.

@thielj commented May 18, 2024

> @thielj I know this is off-topic:
>
> > Combine that with full access to the Docker socket

I know, and I have socket proxies everywhere. But this is also "advanced stuff". Have you actually documented what access U-K needs? Until then people will probably use the same proxy for Portainer, U-K and others (i.e. almost fully exposed).

And it's not just database credentials. There are authorization headers, some clear-text passwords maybe, API tokens, and so on. As you say, reducing these to the bare minimum is advanced stuff for most users. And even then, you probably don't want to leak the endpoints, error messages, etc. Some apps encrypt them "at rest", but as long as the key is readily available this doesn't change much.

Anyway, let's end this here. I accept that this is only partially related.

@pprazzi99 commented Jul 12, 2024

All of the security concerns can be addressed with a few requirements on how the functionality is implemented. And yes, I do agree that some of those things might be security concerns, but every system that allows user input is vulnerable to some extent.

To enable custom script monitoring it might be required to:

  • Set an environment variable (e.g. CUSTOM_MONITOR) to true
  • Enable the monitor type in settings (could only be done by a superadmin)
  • Only a superadmin can configure such checks

Also, don't forget that the Docker container would need to be modified by the end user to actually utilize such custom monitoring options, as the image doesn't come with all the needed packages pre-installed.
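
A minimal sketch of such a modification, building a derived image (the package choices are examples; the official image is Debian-based):

# Build a derived image that adds the tools the custom checks need.
cat > Dockerfile <<'EOF'
FROM louislam/uptime-kuma:1
RUN apt-get update && apt-get install -y --no-install-recommends \
    monitoring-plugins snmp && rm -rf /var/lib/apt/lists/*
EOF
docker build -t uptime-kuma-custom .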

That would greatly improve monitoring possibilities, instead of requiring the developer to write each specific check.
