Automated Lab Backup Systems

Hey guys, I’m trying to improve the automated backup system for my lab. Currently I use a systemd unit to compress my lab server’s entire /home directory into a tarball and push it to an encrypted store on my Google Drive with rclone (I chose Google Drive because, as a GFiber customer, I get 1 TB of free cloud storage for which I have no other use case).

I want to move this work to a dedicated backup machine so the server doesn’t burn CPU cycles on it. I have a mini PC that I no longer use as a desktop, so it’s the prime candidate for the role. The idea is to use RTC wake events to wake it from S5, perform the archival, compression, and encryption to its local disk, push the result to my Google Drive with rclone, and finally have it shut itself down with the rtcwake command (which I’ve already tested).

I just have one problem. My server has many users, which means many home directories, all with different permissions. My thought was to create a system user on the server and give it just enough privileges to read all files in /home and to read and execute all directories in /home, but I cannot think of a way to do this. Am I being unreasonable with my goals?

As for the tool to do this: out of the three pieces of software I’ve been looking at – duplicity, Borg, and Restic – restic seems to be the one I should use.
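For context, the wake/backup/sleep cycle described above might look roughly like this on the mini PC. This is only a sketch: the staging path, the rclone remote name (`gdrive-crypt`), and the 03:00 schedule are my assumptions, not part of the actual setup, and the fetch step is exactly the open question in the post.

```shell
#!/bin/sh
# Sketch of the backup vault's cycle -- paths, remote name, and the
# 03:00 wake time are all assumed for illustration.
set -eu

# ...fetch, archive, compress, and encrypt /home from the server into
# /var/backups/staging here (how the vault gets read access to the
# server's /home is the open question)...

# push the staged archive to the encrypted rclone remote
rclone copy /var/backups/staging gdrive-crypt:lab-backups

# schedule the next S5 wake for 03:00 tomorrow, then power off
rtcwake -m off -t "$(date -d 'tomorrow 03:00' +%s)"
```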

I haven’t personally built a setup like this before, but you’re definitely overcomplicating the permission side of things.

Trying to craft a “least privilege” user for root-level tasks is a deep rabbit hole. Just run restic as root on your server; it’s designed to handle all those permission hurdles for you, and it manages the encryption natively, which keeps everything secure without the architectural headache. Or, since this is a learning lab, go see just how deep that rabbit hole is and report back.
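If you do go the simple route, the restic-over-rclone flow is short. A sketch, assuming an rclone remote named `gdrive`, a repo path `lab-backups`, and a root-owned password file (all three names are mine, not from your setup):

```shell
# restic can use any configured rclone remote via its rclone backend
export RESTIC_PASSWORD_FILE=/root/.restic-pass      # hypothetical password file

restic -r rclone:gdrive:lab-backups init            # one-time repository creation
restic -r rclone:gdrive:lab-backups backup /home    # run as root; preserves ownership and modes

# prune old snapshots on a simple retention policy
restic -r rclone:gdrive:lab-backups forget --keep-daily 7 --keep-weekly 4 --prune
```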


Yeah, I actually found the solution, and it wasn’t too divergent from the path I’d been on since before I posted this. It’s better if I show rather than try to explain.

For my solution, the first step was to grant the system user the privilege to execute an sftp-server as root, in /etc/sudoers:

magnus ALL=(root) NOPASSWD: /usr/libexec/openssh/sftp-server -R
Defaults:magnus !requiretty
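For anyone copying this, it’s worth validating the sudoers entry before relying on it. The drop-in filename here is my assumption; the checks themselves are standard:

```shell
# syntax-check the fragment before it goes live (a bad sudoers can lock you out)
visudo -cf /etc/sudoers.d/magnus-sftp

# confirm exactly what magnus is allowed to run as root
sudo -l -U magnus
```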

What I tried first was, in /etc/ssh/sshd_config, setting ForceCommand sudo /usr/libexec/openssh/sftp-server -R in a `Match` block:

Match User magnus
    AuthorizedKeysFile /etc/ssh/magnus_key
    AllowTcpForwarding no
    X11Forwarding no
    PubkeyAuthentication yes
    PubkeyAuthOptions none
    # ForceCommand sudo /usr/libexec/openssh/sftp-server -R

This was the incorrect solution. The correct solution was to have the following in my /etc/ssh/magnus_key instead:

command="sudo /usr/libexec/openssh/sftp-server -R",no-pty ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMkhjrCICnIRD/PZdeidf2WsXQjnDaxY0PpsrgpGOSrd magnus@magnus-backup-vault.home.arpa

That’s it. I had sudoers correct, and the idea was on track; it was just the implementation that was wrong. I honestly don’t know why ForceCommand didn’t work when the authorized_keys solution did.
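For completeness, one way the vault side could consume that read-only sftp endpoint is an sshfs mount. This is a sketch under my own assumptions (hostname, key path, mount point, and the restic/rclone repo name are all placeholders):

```shell
# mount the server's /home read-only through the restricted sftp session
mkdir -p /mnt/server-home
sshfs -o ro,IdentityFile=/root/backup_key magnus@server:/home /mnt/server-home

# back up from the mount, then detach
restic -r rclone:gdrive:lab-backups backup /mnt/server-home
fusermount -u /mnt/server-home
```

Because the `command=` restriction forces sftp-server regardless of what the client asks for, a quick sanity check is that `ssh -i /root/backup_key magnus@server` refuses to give you a shell while `sftp -i /root/backup_key magnus@server` works.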

Perhaps you should consider redacting or anonymizing operational details of your backup. Not a huge deal, just not something I’d be comfortable sharing openly on a public forum.

It’s a public key, and fundamentally, I don’t believe in security by obscurity.

Also, it shows how the problem was solved, in case others want to see it.
