Let's Encrypt wildcard certificates

I have been using Let’s Encrypt for a while now. Until recently I was using the standard acme package and creating CNAMEs for all of the subdomains I use (e.g. router.domain.com). However, about 6 months back I decided to start using wildcard certificates instead, but of course I was not able to auto-renew, as wildcards require a DNS challenge and that means creating TXT records. My domain registrar and DNS host is GoDaddy.

My current implementation has been to create an Ubuntu server, set up the acme package, and manually create the TXT records in order to generate the certificates for the 4 domains that I have. The downside, of course, is that it takes a good chunk of time every 90 days to renew the certificates, and I would like a more automated process. You see, once I have the certificates from LE I use them on my internally hosted reverse proxies to provide signed SSL certificates for all of my web-managed devices (e.g. pfSense, Proxmox, UniFi, etc.). I run multiple proxies so that outward-facing domains/subdomains are on one proxy and internal ones are on another; I will also be adding a third that will be just for infrastructure devices such as my managed switches, NAS units, etc.

I am open to ideas if someone knows a more secure and efficient way to accomplish my goal of having signed SSL certs for internal-only devices as well as my external-facing sites (I host about 8 different sites, each on its own VM, accessed via a reverse proxy).

Does anyone know a way to automate the renewal of wildcard certs, either with GoDaddy or with another DNS provider? Ideally at no cost, as I looked at Cloudflare and it’s $20 a month for their services. Or is there a self-hosted method to respond to the DNS challenges that would automate the process of renewing the certificates?

There are many DNS providers that support automatic renewal of wildcard certificates through the acme.sh client. We use Linode. There is a full list of supported DNS providers here: https://github.com/Neilpang/acme.sh/wiki/dnsapi and it looks like GoDaddy is supported, so you may be in luck.
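For reference, the dnsapi flow in acme.sh is just the provider credentials in environment variables plus an `--issue` call. A hedged sketch for the Linode v4 plugin we use (the token value and domain names are placeholders):

```shell
# Sketch only: issue a wildcard cert via acme.sh's Linode v4 DNS plugin.
# The API token and domain names below are placeholders, not real values.
export LINODE_V4_API_KEY="your-linode-api-token"
acme.sh --issue --dns dns_linode_v4 -d example.com -d '*.example.com'
```

acme.sh saves the credentials after the first run, so subsequent renewals from its cron job need no interaction.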

You could also try to set something up with nsupdate on a BIND nameserver that you own. I did this once when Let’s Encrypt first came out, and we needed certificates for an internal-only domain.
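For anyone curious, the nsupdate side of that looks roughly like this (the server, zone, and TSIG key path are placeholders, and the TXT value comes from your ACME client):

```shell
# Hedged sketch: publish a DNS-01 challenge record on a BIND server you control.
# Requires a TSIG key that the zone allows dynamic updates with.
nsupdate -k /etc/bind/acme-update.key <<'EOF'
server ns1.example.com
zone example.com
update delete _acme-challenge.example.com. TXT
update add _acme-challenge.example.com. 120 TXT "TOKEN_FROM_ACME_CLIENT"
send
EOF
```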

There is a package in pfSense called acme that can automate DNS renewal with many different providers; I think GoDaddy is one of them, but I’m not positive. If you’re not using pfSense you might still be able to use their script in some way. The project is on GitHub.

For internal certs you could set up your own CA. You’d have to deal with installing trusted certs on all the clients, but it is an option. I haven’t used it myself, but I know the smallstep project can use the ACME API to renew internal certs.
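As a sketch of how that could look with smallstep (I haven't run this; the hostnames and provisioner name are assumptions): you enable an ACME provisioner on step-ca and then point an ordinary ACME client at the internal directory URL.

```shell
# Hypothetical smallstep flow for internal certs.
step ca provisioner add acme --type ACME   # enable ACME on the internal CA
# Then any ACME client can renew against it, e.g. with certbot:
certbot certonly --standalone \
  --server https://ca.internal.example.com/acme/acme/directory \
  -d unifi.internal.example.com
```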

FWIW – I couldn’t get the acme client in pfSense to work with Cloudflare. YMMV.

I’ll be curious to see what @Astraea actually comes up with as a solution. I’m curious, for example, how you provide the SSL certs to your clients through the reverse proxy. What software are you using, and how did you actually set this up?

I will be trying the acme solution using the GoDaddy API to see if that works before trying on Cloudflare and will report back how that all works.

@kevdog the way I have propagated the certificates in the past has been to run a script that copies them from what I call the “security server”, where I obtain them, to the various internal reverse proxies that I have running as VMs. I have those certificates applied in the Nginx settings for each domain/subdomain that I have set up for reverse proxying.

For example, if I want to visit my UniFi Controller web GUI I would type https://unifi.domain.com. As all of the devices on my network use my pfSense install as their DNS server, this would ask pfSense for the DNS record. Because the URL I am asking for is internal only, I have pfSense set up with a host override listing the appropriate internal proxy as the IP address. From there the request goes to that reverse proxy server, which serves up the page based on the proxy_pass in the Nginx configuration file. I have it set to use HTTPS between the backend server and Nginx where possible, and have Nginx accept the self-signed certificates.
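A stripped-down version of such a proxy server block might look like this (the server_name, cert paths, and upstream address are placeholders, not my actual config):

```nginx
# Hedged example only; names, paths, and the upstream address are placeholders.
server {
    listen 443 ssl;
    server_name unifi.domain.com;

    ssl_certificate     /etc/ssl/le/domain.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/le/domain.com/privkey.pem;

    location / {
        proxy_pass https://192.168.10.20:8443;  # UniFi controller VM
        proxy_ssl_verify off;                   # accept its self-signed cert
        proxy_set_header Host $host;
    }
}
```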

For my domains and subdomains that are accessible externally, I do not have a host override set up, so they resolve like any other domain, and I have a dedicated reverse proxy for externally accessible resources that follows a similar process to the internal ones, minus the host override.

If I have only a few webservers running behind the reverse proxy, I was wondering whether, rather than push/pull scripts, it would be easier for the servers to obtain the LE certs directly. Since it’s only a few servers I don’t think I’m going to run into the rate limits that LE imposes. Out of curiosity – are you using HA, or Nginx, as your reverse proxy servers?

@kevdog – From what I understand, in order for LE to issue a cert you either need to do an HTTP challenge on port 80 or a DNS challenge. Also, if you are hosting multiple webservers and have only a single public IP, you need a reverse proxy in front of the webservers to direct traffic to the various servers, as a port can only be forwarded to one internal IP at a time.

I am running all of my servers as VMs so that I don’t need to set up HA at the service level (like redundant Nginx servers). The reason I have multiple reverse proxies is to separate internal from external, and also internal from infrastructure.

So I installed Ubuntu 18.04 Server tonight on my Proxmox host and installed acme.sh as per the directions. I then followed the directions for creating an API key and secret for GoDaddy, and I was able to issue a certificate for each of my domains using wildcards with no problems at all. Now to work out how best to share said certificates with my reverse proxies.
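For anyone following along, the GoDaddy flow boils down to roughly this (the key/secret values and domain are placeholders for my actual ones):

```shell
# Sketch of the acme.sh GoDaddy DNS API flow; credentials are placeholders.
export GD_Key="your-godaddy-api-key"
export GD_Secret="your-godaddy-api-secret"
acme.sh --issue --dns dns_gd -d domain.com -d '*.domain.com'
```

acme.sh creates the `_acme-challenge` TXT records via the API, waits for them to propagate, then completes the DNS-01 challenge and removes the records.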

acme.sh install: Preparation & Installation

GoDaddy API: DNS API

Well, I finally have a solution that works and wanted to share it here not only for reference but also for feedback from the community.

What I ended up doing, as per my previous post, was installing the acme.sh script listed on the LE site and following the directions to issue a wildcard certificate for each of my 5 domains using the GoDaddy DNS API. This automates the certificate renewal process, which was one of the two parts I hated about my previous implementation.

Today I set up the next steps. I installed Samba on the same server the certificates are generated on and created a read-only share of the certificate directory. Then I set up a CIFS mount on another server running Nginx by adding an fstab entry that reads from the share on the first server. The final step was to update the Nginx configuration to use the now-accessible certificates, and after an Nginx restart I was able to browse to the domain I set up without issues; Chrome reported the correct certificate information and showed that it was valid and trusted (the nice lock symbol in the browser window).
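The fstab side of that is a single line; something along these lines (the server name, paths, and credentials file are placeholders for my actual values):

```
# Hypothetical /etc/fstab entry mounting the read-only cert share over CIFS.
//security-server/certs  /etc/ssl/le  cifs  ro,credentials=/root/.smbcredentials,iocharset=utf8  0  0
```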

The only hiccup I can see with this setup is that when the certificates change I will need to have the proxy servers reload Nginx to pick up the new certificates. Maybe I need to set up a cron job to reload every 90 days, or just add it to the list of administrative tasks that I check/perform during monthly maintenance.
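Rather than a blind 90-day reload, a small checksum check run daily from cron could reload only when the mounted cert actually changes. A sketch (paths are placeholders, and the actual reload call is commented out here):

```shell
# Hedged sketch: reload Nginx only when the certificate file has changed.
# Takes the cert path and a scratch file that records the last-seen checksum.
reload_if_changed() {
    cert=$1; stamp=$2
    new=$(sha256sum "$cert" | cut -d' ' -f1)   # checksum of current cert
    old=$(cat "$stamp" 2>/dev/null)            # checksum seen last run
    if [ "$new" != "$old" ]; then
        printf '%s' "$new" > "$stamp"
        # nginx -s reload                      # uncomment on the proxy itself
        echo "reloaded"
    else
        echo "unchanged"
    fi
}
```

Dropped into a daily cron entry on each proxy, this avoids pointless reloads when nothing has changed.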

Any feedback on this setup before I roll it into production would be much appreciated, as I am hoping to set up the proxies in the next few days; that will allow me to start bringing other servers online, like my much-missed email gateway, email server, and also my cloud server.

I don’t have much input since I have the same dilemma as you – as do a lot of people. Basically, I have the certs on one machine – but how do I get them to the other machines, and what happens every 90 days after a renewal?

I tried the NFS share method (you’re using Samba); however, the problem I had with NFS was permissions. It was kind of messy. I’m not sure if you’ll have the same problem with Samba, but it didn’t work so well for me.

I’m working on another solution based on a post I found on the Let’s Encrypt site – https://community.letsencrypt.org/t/automated-deployment-of-key-cert-from-reverse-proxy-to-internal-systems/64491/4.
I’m using certbot as the client (I believe you’re using acme.sh, so something may be a little different), but I’m renewing with the --reuse-key directive. With this directive, the poster said, the private key always stays the same, but fullchain.pem might change between renewals. The problem then is distributing fullchain.pem. I suppose you could use Samba (definitely one method); fullchain.pem doesn’t contain private info, so it can be distributed (it’s the certificate chain, not the private key). On the host with the keys, I set up an Apache webserver and only allowed access to the directory from local IP addresses. fullchain.pem is copied to the accessible directory with a deploy-hook.sh script (as explained in the post). Similar to the post, I set up a cron job on the clients to curl the site to get fullchain.pem, then modified the script to copy it and set permissions on the correct directory.
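The client side of that is essentially one crontab line; a hypothetical version (the URL, schedule, and paths are placeholders for mine):

```
# Hypothetical client crontab entry: fetch the current fullchain.pem nightly
# from the IP-restricted web server, then reload Nginx to pick it up.
15 3 * * * curl -fsS http://security-server.lan/certs/fullchain.pem -o /etc/ssl/le/fullchain.pem && nginx -s reload
```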

I’m not sure if this method is more robust than yours currently.

I’ll have a look at that link and see what they have set up. As for Samba, I have it configured so that there is only one user that can access the share, which is mounted read-only on the client side and is also set up as read-only on the server side so the keys cannot be manipulated. I have also disabled all guest access to Samba on this server, as well as browsing of the share, so that you have to know the full and absolute path to the share as well as have the login credentials in order to access anything.
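For anyone wanting to replicate it, the smb.conf share section I’m describing looks roughly like this (the share name, path, and user are placeholders):

```
# Hypothetical smb.conf share: one permitted user, read-only, no guests,
# hidden from browse lists.
[certs]
   path = /root/.acme.sh
   valid users = certuser
   read only = yes
   guest ok = no
   browseable = no
```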