Bulk point-in-time restore from versioned B2 or S3 (or S3-compatible) object storage

In the context of the TrueCloud Storj backups on TrueNAS SCALE 24.10: in a recent video, I recall Tom asking how one can do a bulk restore of the latest versions of files from versioned S3/B2 object storage. This may be useful, for example, if the visible backup-set files in object storage are deleted or corrupted, but versions are still being retained in accordance with a retention policy.

Whilst I have not seen a method to do this from the GUI (either in TrueNAS or on the web portals of B2 or the popular S3 services - other than the "rewind" feature in MinIO), it may be helpful to note that rclone can achieve this fairly easily (i.e. no need to write a script from scratch). For the B2 and S3-compatible backends, rclone includes a --b2-version-at or --s3-version-at flag that makes rclone see the file versions on the remote as they existed at a specified point in time. Usage example is as follows: rclone copy [s3remote]: ./ --s3-version-at=2025-01-13T11:42:00Z
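For instance, the B2 equivalent, downloading into a local directory, might look like this (the remote, bucket and directory names here are placeholders for illustration):

    rclone copy b2remote:my-bucket ./restore --b2-version-at=2025-01-13T11:42:00Z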

Posting in case that’s helpful to someone.


Thanks, and a friend has been testing this out as a method with the other services:

Thanks - looks interesting (noting that tool is an early-stage proof of concept). NB: the rclone solution will work for any service that is S3-compatible (Storj, Wasabi, B2, etc.) and does not depend on rclone having been used to upload the data in the first place (so you could point-in-time restore a whole restic repo created by TrueCloud Backup, for example). The rclone solution is also well tested (it has been in rclone for several years now).


Hi jkv,

Thanks, this is helpful!

I dug around a little with rclone, and I only saw a way to do single files at a time using per-file timestamps. But I may have totally missed some options.

Do you know if there’s a way to do a bulk restore using rclone --b2-version-at or --s3-version-at (or similar) or is it file by file only?

And do you know if it’s a full copy operation or a rollback using, for example, delete marker removal?

Restores of whole file structures are possible using that flag. The example command I gave above would download whatever is defined at the s3remote to the current working directory (recursively, i.e. all files and folders, subject to any filters you specify). It's exactly the same functionality as using rclone copy without the --s3-version-at flag, except that the flag makes rclone see the S3 remote as it existed at the specified date and time (i.e. the file versions will be the most recent versions existing at that time; files created afterwards will be ignored, and files deleted afterwards will be visible) - provided that the versions are retained in the bucket.
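As a sketch (the remote, bucket and filter pattern are placeholders), limiting a restore to one subtree with an include filter could look like:

    rclone copy s3remote:my-bucket ./restore --s3-version-at=2025-01-13T11:42:00Z --include "repo/**"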

You can use the flag with the rclone ls command to list the files (recursively) as at the specified time, etc.
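For example (placeholder names again), to sanity-check what a restore would contain before downloading anything:

    rclone ls s3remote:my-bucket --s3-version-at=2025-01-13T11:42:00Z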

The advantage is that, with the flag, it does not matter how many subsequent versions have been created, or whether only certain files have been corrupted: it gives a true point-in-time position, provided that the versions are retained in the bucket.

One limitation is that, when using this flag, files in the S3 bucket cannot be modified or deleted. That means you cannot restore in place in one operation - you'd have to copy (download) the file structure to another location and then, if you want to restore to the original S3 bucket, upload the file structure to be restored (using rclone sync). All of this happens recursively, subject to any specified include/exclude filters.
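A minimal sketch of that two-step workflow (all names are placeholders; note that sync makes the destination match the source, so double-check both paths before running it):

    # 1. Download the point-in-time view to a local staging area
    rclone copy s3remote:my-bucket ./staging --s3-version-at=2025-01-13T11:42:00Z

    # 2. Upload the staged files so the bucket matches the staging area
    rclone sync ./staging s3remote:my-bucket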

See that flag in the manual here for more details: Amazon S3

Awesome! Thank you for that detailed rundown. Very helpful. It seems like this is indeed a great, well-tested option for safely pulling down (then optionally later pushing back up) data from a point in time.

I'll probably leave the script out there (with some warnings) for those situations where folks want to "restore" in place, so they can still use the TrueNAS GUI in their workflow. I'll add notes about the rclone method to the readme.

Thanks again!


I've expanded on the recovery methods discussed earlier and added to this repo a lot of documentation about how to set up both rclone and bucket security. The idea is both to help folks prepare their setups and to serve as a quick reference if they ever need to do a recovery.

I welcome input from anyone in the community on the advice given there about setting up the buckets, and on the security limitations of the various methods at the two providers we're focusing on: B2 and Storj.

I want to be sure I’m thinking through this threat model properly and mitigating it as well as possible with each service.
