How to back up an iSCSI LUN?

I have a Windows server running under ESXi. The datastore is running out of space, so I moved one of the drives to a Synology NAS and shared it back to the Windows server over iSCSI. It works great and performance is good too. However, how do I back that share up? Veeam backs up the server at the .vmdk file level and does not “see” the iSCSI connection. I looked at Hyper Backup on the NAS, but it can only back up to itself or another NAS, not the cloud. I connected the iSCSI LUN to the Veeam server and tried to back up the share as a Windows drive. That worked, but it dumps the backup right back onto the same NAS the share comes from. I am kind of going in circles. Suggestions?

If you wanted Veeam to handle the backups, you would need to mount the iSCSI LUN to ESXi and create a new datastore on it. From there you could add a new virtual disk to your Windows VM backed by that new datastore.
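As a rough sketch of that first step (the adapter name vmhba64 and the NAS address 192.168.1.50:3260 are placeholders — yours will differ), attaching the Synology target from the ESXi shell looks something like:

```shell
# Enable the software iSCSI initiator (no-op if it is already enabled).
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the Synology (placeholder address and port).
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.50:3260

# Rescan so the LUN shows up; then create the VMFS datastore on it
# from the vSphere client (Storage -> New Datastore).
esxcli storage core adapter rescan --adapter vmhba64
```

You can do the same thing entirely from the vSphere client UI if you prefer; the CLI is just easier to show in a post.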

Personally I would create a new volume on your NAS and use that for the datastore/virtual disk and then you should be able to migrate the data via your Windows VM since it would have both drives visible to it.
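As a sketch of that migration step (the drive letters are assumptions — Q: for the old iSCSI disk, E: for the new virtual disk), from inside the Windows VM:

```shell
:: Mirror the old drive onto the new one, preserving ACLs and attributes.
:: /R:1 /W:5 keep retry delays short; review the log before cutting over.
robocopy Q:\ E:\ /MIR /COPYALL /R:1 /W:5 /LOG:C:\migrate-robocopy.log
```

After the copy checks out, swap the drive letters so existing shares and apps keep working.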

If you don’t have the storage available for this on the NAS, then you would need to migrate the data off the existing LUN first, and then you could reuse it as a datastore in ESXi.

Why not just set up rclone on the Synology (it will be a command-line operation, if you are comfortable with that) and set it to run periodically from a cron job? Rclone works really well, and it works with object stores like Amazon S3, etc. Pretty easy to do, really. It was one of the first home lab projects I tackled when I built a NAS out of a Raspberry Pi. Here are the notes I wrote for myself when I did this a few years back:

Install rclone to sync the data to AWS or another cloud

Go to https://rclone.org/install/

Get the install script: sudo -v ; curl https://rclone.org/install.sh | sudo bash

SSH into the Pi NASTY (log in as one of the users created earlier; admin won’t work)

Run the install script

Run rclone config

Follow the prompts:

Type “n” for new remote

Assign a name to this new remote (myawsaccount)

Input your access key and secret key when prompted

Use the default region and default endpoint

For storage class I chose Glacier Instant Retrieval, but it can be anything

No need to enter the name of the bucket at this point

Close the SSH window and go back to the OMV web interface

Inside OMV, set up a scheduled task that runs: rclone copy --create-empty-src-dirs /srv/dev-disk-by-uuid-a7f5e82c-5f54-4bae-b4cf-c349f0c45953/RSYNC/ myawsaccount:louiesbackups

where myawsaccount is whatever you named your remote in rclone config and louiesbackups is the name of your target bucket on AWS S3

Set the schedule to 0 3 * * * so the backup to the cloud runs at 3 AM every morning
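For reference, the interactive prompts in the notes above can also be scripted. This is a hedged sketch — the remote name, bucket, credentials, and source path are the placeholders from my notes, and the storage-class value assumes rclone’s S3 backend naming:

```shell
# Create the S3 remote non-interactively (equivalent to the "rclone config" prompts).
# AKIA.../wJal... are dummy credentials -- substitute your own keys.
rclone config create myawsaccount s3 \
    provider AWS \
    access_key_id AKIAXXXXXXXXXXXXXXXX \
    secret_access_key wJalrXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
    region us-east-1 \
    storage_class GLACIER_IR

# The nightly job the cron schedule (0 3 * * *) fires:
rclone copy --create-empty-src-dirs \
    /srv/dev-disk-by-uuid-a7f5e82c-5f54-4bae-b4cf-c349f0c45953/RSYNC/ \
    myawsaccount:louiesbackups
```

The copy is incremental, so after the first run the nightly job only uploads what changed.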

The problem is that when you pay for a product like Veeam, you’re not just paying for the product but also for support. There are countless ways via scripts and such to back up almost anything. But if any of them start to fail for whatever reason, aside from Google and forums, you’re on your own. In this case, you’re not only losing support from Veeam but also from VMware.

The suggestion from @FredFerrell is the correct one. Create an additional datastore in ESXi that points at the NAS and store the VM (or the required individual disks) there. This setup would be supported by both Veeam and VMware (provided you are following guidance from both parties, which typically isn’t hard for this type of thing).

Good idea. I only have a 1G connection to the Synology, though, and with another layer of processing through VMware, I wonder if it would slow things down even more?

Thank you all for the replies. I ended up figuring out how to free up enough space to get by for now. I moved the Q drive share back onto the original VMware datastore, where it originally was. I hope to replace this server soon, so this will work for now.
