
DietPi-LetsEncrypt | Support multiple domains #1622

Closed
LexiconCode opened this issue Mar 14, 2018 · 38 comments · Fixed by #3728 or #4220

Comments

@LexiconCode

Let's Encrypt now supports wildcard certificates. This would be the preferred way to integrate into the DietPi platform and would simplify web server setups.

@MichaIng
Owner

@LexiconCode
Thanks for the hint. If I understand it right, this feature is implemented on the Let's Encrypt server side, so it is supported with older Certbot (Debian repo) versions as well, simply by providing the domain with a wildcard.

  • This would need testing, e.g. via a dry run.

As we use the entered domain for the webserver configuration as well, we would need to introduce an additional field: one for the domain with wildcard (used for certbot -d ...) and one for the concrete subdomain that shall be used for the webserver.
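
For illustration, such a dry run might look like the following (example.com is a placeholder; note that wildcard certificates require the DNS-01 challenge, so manual DNS mode or a DNS plugin would be needed):

# Hypothetical sketch of a wildcard dry run against the staging server
certbot certonly --dry-run --manual --preferred-challenges dns \
    -d 'example.com' -d '*.example.com'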

@pedrom34

Wouldn't it be lighter to use acme.sh instead of certbot? acme.sh supports wildcard certificate generation.

@MichaIng
Owner

@pedrom34
Thanks for sharing this idea. For now, until some other DietPi tasks are done, I would stay with Certbot as the kind-of-official and well-known solution. Also, its auto-configuration of Apache + Nginx (not just the cert installation itself) saves us some work/maintenance.

But it's definitely worth trying, and maybe it even works better than Certbot? The problem with Certbot so far is that we use it in different versions: APT packages for Stretch and above, and GitHub master on Jessie, as there is no APT package available there. Cert renewal and some other things are not handled automatically without the APT package, which also leads to some regular work...
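
For illustration only (not necessarily how dietpi-letsencrypt handled it back then): without the APT package there is no packaged systemd timer or cron job, so a renewal call like the following has to be scheduled manually, e.g. via cron:

# Illustrative renewal call; the APT package normally ships a timer/cron job for this
certbot renew --quiet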

@pedrom34

pedrom34 commented Apr 18, 2018

@MichaIng
Yeah, I know there's other stuff to do :)
But well, I personally installed acme.sh on my router, and it works like a charm. There are a lot of options and features. I thought it was worth mentioning for the future of DietPi :) Or even for the OP, if he doesn't want to wait for an update to use wildcard certs!

@MichaIng MichaIng added this to the Planned for implementation milestone Oct 15, 2018
@MichaIng MichaIng changed the title DietPi-Software | Integrate let's encrypt wildcard certificates with web servers configurations. DietPi-LetsEncrypt | Integrate multiple+wildcard domains Oct 21, 2018
@FredericGuilbault
Contributor

FredericGuilbault commented Jun 30, 2019

It's well known that Debian prefers stability over new features, and therefore everything is outdated.

Let's Encrypt and Certbot have implemented wildcard support via ACMEv2 in version 0.22.0:
https://community.letsencrypt.org/t/certbot-0-22-0-release-with-acmev2-and-wildcard-support/55061

Debian now ships 0.28.0 and 0.31.0:
https://packages.debian.org/sid/certbot
https://packages.debian.org/stretch/certbot

So I think this issue is outdated as far as Certbot is concerned.
And I would not recommend adding acme.sh support, as Certbot is a useful wrapper and does the job now.
acme.sh is more for advanced users and edge cases of pure cert management.

@MichaIng
Owner

MichaIng commented Jun 30, 2019

@FredericGuilbault
Yeah, the Certbot version is not the issue. dietpi-letsencrypt simply does not allow entering a wildcard domain yet (or even multiple domains). It requires a single domain, as it also adds this as the server name to the webserver configs (required for Certbot again). So this requires a UI rewrite to allow adding a main domain (for the webserver configs) and separate subdomains and/or a wildcard subdomain. (Although I just checked, and Nginx allows and even requires all domains to be listed in its server_name directive.)

Furthermore, Certbot of course needs to verify that the domains indeed belong to you, which with the default authentication methods is done simply by Certbot placing a file into your webserver dir and the Let's Encrypt servers trying to access those files via the domains you added.
But what I did not think about at first is that it of course cannot test the infinite number of possible subdomains that a wildcard stands for 😉. Therefore Certbot/Let's Encrypt has the DNS authentication method, which allows verifying that these domains belong to you by checking DNS records (don't ask me for details, as I do not fully understand those 😄): https://certbot.eff.org/docs/using.html#manual
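
In a nutshell (hypothetical domain): with the DNS-01 challenge the ACME client asks you, or a DNS plugin, to publish a TXT record containing a validation token, which Let's Encrypt then looks up. Controlling the DNS zone this way proves ownership of all subdomains, which is why it works for wildcards:

# The record Certbot asks for looks roughly like:
#   _acme-challenge.example.com.  300  IN  TXT  "<token printed by Certbot>"
# Propagation can be checked before continuing the challenge with:
dig +short TXT _acme-challenge.example.com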

@MichaIng MichaIng modified the milestones: Planned for implementation, v6.32 Aug 24, 2020
@MichaIng MichaIng changed the title DietPi-LetsEncrypt | Integrate multiple+wildcard domains DietPi-LetsEncrypt | Support multiple domains Aug 24, 2020
@MichaIng
Owner

I downgraded this request to add multiple domains only, not wildcard domains. The latter requires the DNS authentication method, which means additional modules and custom methods/steps to follow. That can be done better manually, even from an external system, since 90% needs to be done manually anyway. I switched my private server and dietpi.com over to acme.sh already, and dietpi-letsencrypt will have the cert generation and webserver implementation clearly separated, so that any client can be used manually as well: our script will implement any given/chosen cert or allow creating one via known clients (acme.sh and Certbot for now).

Multiple domains are a quite simple request. The clients take a comma-separated list and simply check all domains the same way. Thanks to @Ruben-VV, initial support for Lighttpd is there: #3728
The Nginx and Apache2 plugins require the server name directive to be added, although this can be skipped when using webroot authentication, AFAIK, which is actually a quite generic and reliable approach as well.
So either we take the first given domain as the "main" domain and add it to the Apache2 + Nginx configs, or we switch to webroot authentication. I'm probably able to do this during the beta phase.
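
A rough sketch of such a multi-domain call with webroot authentication (domains and webroot path are placeholders); Certbot uses the first -d domain as the certificate name, which matches the idea of treating it as the "main" domain:

# Hypothetical multi-domain request via webroot authentication
certbot certonly --webroot -w /var/www/html \
    -d example.com,www.example.com,cloud.example.com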

@MichaIng MichaIng modified the milestones: v6.32, v6.33 Aug 27, 2020
@MichaIng MichaIng modified the milestones: v6.33, v6.34 Oct 2, 2020
@jcw

jcw commented Oct 10, 2020

It's unfortunate to see this getting pushed out. I run a personal server w/ Nginx + Nextcloud + Gitea + PiHole (and some static sites), and this appears to be the final hurdle to switch the whole thing over to a repurposed 2009 Mac Mini which is now running DietPi. With all that and Samba + BTRFS support out of the box, DietPi really comes as close to a dream home-setup as I've ever gotten ...

P.S. FWIW - I can help test multi-domain certs, if that's of any use.

P.P.S. Another option would be to support the Caddy webserver, which fully automates HTTPS & LetsEncrypt use.

@MichaIng
Owner

You'll get most flexibility by using DNS authentication. Depending on the DNS provider you might find a Certbot DNS plugin that will basically guide you through the required steps: https://certbot.eff.org/docs/using.html#dns-plugins
Those can be installed easily via APT: https://packages.debian.org/python3-certbot-dns
E.g.

apt install python3-certbot-dns-cloudflare

then follow the related guide above (differs with each DNS provider).
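
After installing such a plugin, the call typically looks like this (Cloudflare as an example; the credentials file path is just an illustration):

certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare.ini \
    -d 'example.com' -d '*.example.com'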

But the above is a bit difficult to put inside a script due to those differences; multiple domains (without a wildcard) are much easier in that regard. So yeah, I'll likely find time with the next release and let you know when it's ready for testing.

Caddy is indeed an interesting web server. The problem with web servers is that adding one basically requires adding support to all web applications as well: installing the web server is easy to do and to implement, but adding configurations for 10 - 20 applications is the hard part. Probably it can be done in waves across multiple DietPi versions 🙂.

@jcw

jcw commented Oct 11, 2020

Thx, my DNS provider is not in the certbot list alas (united-domains). I've been using Caddy for a few years (v2 for a few months now), and only switched to nginx because it was the next best option in DietPi. I probably know nothing about most of the 10-20 affected apps in DietPi, but would be delighted to help out on at least the apps I mentioned. Not sure how to start on this kind of work though - shall I just start browsing and hacking on some of the dietpi-* scripts? Perhaps it makes more sense to just set up and configure Caddy on my own, and then I can share the configs. I'll set up a spare RasPi 3B+ for this.

@jcw

jcw commented Oct 11, 2020

Here's a Caddyfile (I've placed it in /etc/caddy/) which serves static files in /var/www/ and proxies to Nextcloud and Gitea. By replacing the :80 on the first line with a fully-qualified domain, Caddy should switch to HTTPS and get/maintain the necessary certificate(s) from LetsEncrypt.

Caddyfile.gz

(the file can be split up into pieces by using the import directive)

P.S. You probably also need to set the email global option at the start of the file for LetsEncrypt.
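
For readers without the attachment, here is a minimal hypothetical sketch of such a setup (Caddy v2 syntax; domains, ports and paths are placeholders, not the attached file):

# Illustrative Caddyfile: a static site plus a reverse-proxied service
cat > /etc/caddy/Caddyfile << 'EOF'
{
    email admin@example.com
}

example.com {
    root * /var/www
    file_server
}

cloud.example.com {
    reverse_proxy localhost:8080
}
EOF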

@MichaIng MichaIng modified the milestones: v6.34, v6.35 Nov 28, 2020
@anselal

anselal commented Jan 7, 2021

Don't know if this is related here, but when I ran dietpi-letsencrypt it only recognized the default enabled site, although I had enabled others as well.

@jcw

jcw commented Jan 7, 2021

You know that Caddy does HTTPS and LetsEncrypt by default, and on its own, right? IOW, it does not use/need certbot.

@MichaIng
Owner

MichaIng commented Jan 7, 2021

Ah indeed, I forgot about that, even though I read it not too long ago 😄.

I have ideas to implement HTTPS automatically for all webservers, by allowing users to enter (via dietpi.txt and, if not done there, interactively) a main public domain name and secondary domains, then running Certbot automatically (although I plan to migrate to acme.sh, either optionally or completely) and applying the HTTPS vhost, and, if no public domain is available, applying a self-signed certificate. With the latter we're currently gathering some experience, as HTTPS is required to connect to Bitwarden_RS with all clients and the web UI. It's not that easy to satisfy all browsers and OSes, and quite a few annoying steps are needed to make the OS trust a self-signed certificate, but actually nowadays I think there is no reason to do any plain HTTP connection, even in isolated home networks. But this is a long-term idea, and we'll likely apply HTTPS via self-signed certificates first for a few other software titles with an internal webserver, to assure that it works with all modern and intermediate browsers 😉.
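
For reference, such a self-signed fallback certificate is roughly a one-liner (paths and common name are just illustrations; -addext needs OpenSSL 1.1.1 or newer):

# Hypothetical self-signed fallback certificate, valid for one year
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout /etc/ssl/private/dietpi-selfsigned.key \
    -out /etc/ssl/certs/dietpi-selfsigned.crt \
    -subj '/CN=dietpi.local' \
    -addext 'subjectAltName=DNS:dietpi.local'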

@jcw

jcw commented Jan 7, 2021

Well, I don't want to hijack this valid issue and turn it into a "let's Caddy" discussion, but after several years of use, here's how I tend to do things: all my domains are set up in Caddy, which then handles the static sites for me. For all other sites, I use Caddy as a reverse proxy and set up the servers as plain HTTP on a port other than 80 (since Caddy already occupies 80, and has to, to be able to automate Let's Encrypt). Given that adding a new domain/server to the Caddy config file is just a few lines, I really don't have a use for any other setup anymore: anything that wants to serve, can serve, as long as it can be configured as HTTP on some non-80 local port. Even within DietPi, I think this approach could have merit for other services that are activated.

I've cut back massively on the number of domains I manage; all I can say is that the above has made HTTPS and Let's Encrypt a no-brainer, even for quick temporary test (sub)domains.

If DietPi were to gain support for this approach, I'd switch to it as my front-end. FWIW. YMMV.

@Joulinar
Collaborator

Joulinar commented Jan 7, 2021

Sounds like an alternative to Nginx Proxy Manager: https://nginxproxymanager.com/

@jcw

jcw commented Jan 7, 2021

Interesting, thanks for the pointer - but it's not quite the same in terms of deployment: "Install Docker and Docker-Compose". Caddy is a single executable.

@Joulinar
Collaborator

Joulinar commented Jan 7, 2021

It's true that it requires more supporting software like Docker and Docker-Compose. While Docker is already available, Docker-Compose will follow shortly on DietPi. It might be easier for some folks to use a GUI instead of going down to the command line. But that's again a different discussion about personal preferences 😜

@MichaIng
Owner

MichaIng commented Jan 8, 2021

Isn't that Docker container just a wrapper/frontend around a simple Nginx proxy? I mean, every webserver can do simple proxy tasks; Nginx by nature with less effort and fewer resources than Apache2, as it is built from the ground up as a proxy, and Lighttpd might do similarly well. But at least performance/memory/simplicity-wise it only makes sense when running multiple websites/webservers on multiple machines, or if a web application that runs via an internal webserver on a different port needs to be available via 80/443 for convenience or to pass the firewall. So my current opinion is to not use proxies on a regular basis to enable HTTPS via our scripts, but to use a dedicated ACME client and make the issued cert/key available to each application. That also assures encryption and server authentication on every connection level, and it allows us full control over e.g. key type (RSA/ECC), size, OCSP and such. Caddy would then be a special case where, when it is installed and bound to port 80, dietpi-letsencrypt errors out with an info that Caddy issues its own certificate and that a separate certificate can be issued by stopping Caddy on port 80.

But the more I read about Caddy (currently doing so), the more I'd like to test it with certain web applications. I'm curious about relevant features like access control, headers, redirects, rewrites and such things, and also how it performs when serving dynamic content via PHP-FPM.

@Joulinar
Collaborator

Joulinar commented Jan 8, 2021

Isn't that Docker container just a wrapper/frontend around a simple Nginx proxy?

Nope, it's more. It's a stack of 2 containers: one running Nginx + the web application, while the other is a database container. As well, it does more than SSL handling (host management, certificate handling, access control, redirects, TCP/UDP traffic stream handling, 404 host definition) and it is multi-user capable. Probably oversized for simple certificate handling. 🤣

@anselal

anselal commented Jan 8, 2021

I tested https://nginxproxymanager.com/ on a VM and it is great, but it doesn't run on my RPi 2. It gives a bad gateway error. There are several issues open about that.

@Joulinar
Collaborator

Joulinar commented Jan 8, 2021

You need to use a different database image than the one stated in the install instructions; Docker downloads the incorrect architecture. I'm using Portainer to deploy the following stack:

version: "2"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      # Public HTTP Port:
      - '80:80'
      # Public HTTPS Port:
      - '443:443'
      # Admin Web Port:
      - '81:81'
    environment:
      # These are the settings to access your db
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
      DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: yobasystems/alpine-mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql

@anselal

anselal commented Jan 8, 2021

Tried that too but didn't work. Will give it a try again.

@Joulinar
Collaborator

Joulinar commented Jan 8, 2021

Did you use Portainer? It is good for getting some insight into the container. I had the experience that the database was the issue, usually resulting in the application container failing. Will do a test on my RPi 1.

@MichaIng are you OK if we continue on this topic here, or should we move it into a new issue?

@jcw

jcw commented Jan 8, 2021

Being the person responsible for mixing things up in the first place, I now propose to revert this issue back to its original topic: how to deal with multi-domain HTTPS support in DietPi, and then perhaps set up some new issues referring to this one: one could be about Nginx Proxy Manager, the other about the Caddy server.

I suspect that there are numerous trade-offs to deal with all this in DietPi.

@MichaIng
Owner

MichaIng commented Jan 8, 2021

What exactly does it store in the database 🤔? So when installing this to run, let's say, Nextcloud, you end up running two MariaDB servers. That is why I don't like containers for embedded devices/SBCs 😄.

Probably oversized for simple certificate handling. 🤣

Indeed, and all the features are basic webserver features as well, which of course need config file adjustments, aside from issuing TLS certificates.

Yeah, the problem with such solutions is always: either it runs and can be easily configured the way you need it, then it makes things easier (aside from the resource overhead), or it does not, and then you need to understand the depths of it, not only the container but probably the underlying software as well (MariaDB, Nginx, the ACME client), how they are invoked, and deal with bugs of the container in addition to those of the underlying software, which are probably fixed upstream already. On the other hand, with Debian packages it's sometimes similar, if they ship with a too specific setup that diverges from upstream. And then we come and put another layer on top (dietpi-software, dietpi-letsencrypt) 😄.

@MichaIng are you OK if we continue on this topic here, or should we move it into a new issue?

I'm fine with some ideas/discussion here, as the actual step that closes this feature request is already defined: allow defining multiple domains, comma-separated, with the main domain first (which shall be applied as the server name), and pass that to Certbot. Implementing/switching to acme.sh, standalone issuing, a Caddy implementation, and a new script for applying HTTPS directly to other web applications (not running on port 80/443) are a different topic.

@Joulinar
Collaborator

Joulinar commented Jan 8, 2021

So when installing this to run, let's say, Nextcloud, you end up running two MariaDB servers.

Not necessarily. I successfully tested with MariaDB installed directly on the DietPi host. Now I have only the application container running on Docker, while using MariaDB outside of the container/Docker.

What exactly does it store in the database 🤔

there you go

MariaDB [(none)]> SHOW TABLES FROM npm;
+--------------------+
| Tables_in_npm      |
+--------------------+
| access_list        |
| access_list_auth   |
| access_list_client |
| audit_log          |
| auth               |
| certificate        |
| dead_host          |
| migrations         |
| migrations_lock    |
| proxy_host         |
| redirection_host   |
| setting            |
| stream             |
| user               |
| user_permission    |
+--------------------+
15 rows in set (0.002 sec)

MariaDB [(none)]>

@anselal

anselal commented Jan 8, 2021

Did you use Portainer? It is good for getting some insight into the container. I had the experience that the database was the issue, usually resulting in the application container failing. Will do a test on my RPi 1.

@MichaIng are you OK if we continue on this topic here, or should we move it into a new issue?

I didn't use Portainer, but I used version 3; maybe that was the issue. I don't have access to my RPi right now because they shut it down at work. Will check it on Monday.

How do you install docker-compose on your RPi 1? Via pip?

@Joulinar
Collaborator

Joulinar commented Jan 9, 2021

@anselal

How do you install docker-compose on your RPi 1? Via pip?

Yes, I used pip3 install docker-compose -i https://www.piwheels.org/simple #3078 (comment)

This will pull pre-compiled wheels from piwheels.org, so you don't need to compile anything on your RPi 1.

But it looks like the NPM app container is not starting correctly on armv6l.

@anselal

anselal commented Jan 9, 2021

neither does it on my RPi2

@Joulinar
Collaborator

Joulinar commented Jan 9, 2021

You can have a look at your container logs. Just go to the directory where you stored the compose file (docker-compose.yml) and run:

docker-compose logs app
docker-compose logs db

@anselal

anselal commented Jan 11, 2021

You need to use a different database image than the one stated in the install instructions; Docker downloads the incorrect architecture. I'm using Portainer to deploy the following stack: [...]

I used your docker-compose.yml and I finally got it running on my RPi 2. I guess version: "2" did the trick ;)

Thank you very much !!

PS: If you have any insights on how to run it on the RPi 1, I would be glad to test it out and confirm that it works.

@MichaIng
Owner

The PR is up, to allow this for all webservers + an OCSP stapling switch + standalone certificates without a webserver installed: #4220
