Using FreeNAS as XCP-NG iSCSI Storage for VMs & SMB (combinations)

Hello Folks,

I’m a little lost and could use some of your help and ideas about XCP-NG virtualization using storage from FreeNAS, mainly SMB and iSCSI, with extents of type “file” (located on an SMB share or not) or “device” (using space from a dataset or a dedicated zvol).

Setup and Storage needs

XCP-NG with 4 VMs running custom apps (Win10) developed in MATLAB.
Each VM’s storage is a separate iSCSI block device from a FreeNAS box.
The system (all 4 VMs) needs a common shared SMB storage for the custom apps to run. (Like work folders.)


  1. While setting up FreeNAS iSCSI, I came to the conclusion that an iSCSI extent can be created as a “device” or as a “file”. Also, if I locate that file in an SMB-shared location, I can see the file from the SMB share, but in “device” mode I cannot.
    What is the appropriate way of setting up iSCSI? Dedicate a single zvol and use type “device”, carve “device” space out of a dataset, or use type “file” and place the file on a dataset?

  2. In the case of an iSCSI extent of type “file”, where the file is on a dataset shared with SMB, can this be a method of backup? For example, if I periodically copy this file, can I restore it to the same or another FreeNAS box and access the iSCSI “virtual drive” inside it by creating the iSCSI target again and pointing it at that file?
    In the past I used to actually “see” my files and keep backups with GoodSync :smiley: ; with FreeNAS and iSCSI I don’t know what to do.

  3. The 4 VMs running the custom apps use common data, each one running different algorithms. To my understanding this cannot be an iSCSI target; it should be an SMB share. (Work folder.)
    Where should this SMB share live for optimum speed and backup?
    A. It could be a dedicated SMB share directly on FreeNAS, used directly by the 4 VMs. What is the appropriate way of backing this up?
    B. It could be a separate VM used only for storage (Debian or Ubuntu with an SMB share, again on FreeNAS iSCSI storage) but within the same host as the other 4 VMs, so that data is not transferred outside the host while processing. Do you think this would gain anything compared to option A?
    C. Do I have other options for the common work-folder storage?

  4. What method of backup do you suggest for the storage above, in these different cases? Can FreeNAS “replication” do the job? Or maybe Syncthing, or rsync?
    What do you usually use, and for what?

Thanks for your time.

There is a lot to answer here, so I’ll just make it short and sweet.

  1. Device is block-level and file is file-level. Best to use? Device.

  2. Most of what you are asking is on the FreeNAS side of backing up. Xen Orchestra has great tools for doing delta backups and continuous replication to an SR or SMB. If you are asking specifically about backing up FreeNAS iSCSI, then do ZFS replication to another FreeNAS box.

  3. SMB would probably be fine as long as your FreeNAS box and your XCP-NG host have enough bandwidth between them, like a 10Gb or LAGG connection.

  4. This depends on how you want your infrastructure done. You have replication options in both Xen Orchestra and FreeNAS, so it’s up to you. Personally, I would do all of the VM delta backups and continuous replication to another FreeNAS box for disaster recovery via Xen Orchestra, and set up all of the SMB shares with ZFS replication from FreeNAS.

This is just a quick answer from my phone, but I’m sure this will need to be discussed more, because infrastructure takes a lot of time and careful planning.

Yes, choose device for the iSCSI setup. The issue with backing up VMs on FreeNAS/TrueNAS using replication is that you will lose the metadata, as that is stored in XCP-NG, not on the drive. For backing up the VMs, Xen Orchestra is the way to go.
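To make the “device” route concrete: on FreeNAS a device extent is backed by a zvol. A minimal sketch from the shell, assuming a hypothetical pool named `tank` (in practice you would create both the zvol and the extent from the FreeNAS web UI under Storage and Sharing > Block (iSCSI)):

```shell
# Create a sparse (thin-provisioned) 200 GiB zvol to back one VM's disk;
# drop -s if you prefer thick provisioning. Pool and zvol names are made up.
zfs create -s -V 200G tank/vm1-disk

# Confirm it exists; this zvol will then show up as a "Device" choice
# when you create the iSCSI extent in the web UI.
zfs list -t volume -r tank
```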

For any data shares that you have on FreeNAS/TrueNAS directly, backing up via replication is fine.
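For reference, FreeNAS replication tasks are essentially snapshot plus `zfs send`/`zfs recv` under the hood. A hedged sketch of what a manual run looks like, with hypothetical pool, dataset, and host names:

```shell
# Take a point-in-time snapshot of the dataset (or zvol) to replicate.
zfs snapshot tank/work@2020-05-01

# Initial full replication to a second box (hostname "backup-nas" is made up).
zfs send tank/work@2020-05-01 | ssh backup-nas zfs recv -F backup/work

# Subsequent runs only send the delta between two snapshots.
zfs snapshot tank/work@2020-05-02
zfs send -i @2020-05-01 tank/work@2020-05-02 | ssh backup-nas zfs recv backup/work
```

In the GUI this is a Periodic Snapshot Task plus a Replication Task; the commands above are just to show what is happening underneath.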


Thanks a lot @xMAXIMUSx. Your comments are well noted. It seems I have to play a little with iSCSI and ZFS replication to get familiar and comfortable.

It is not possible to get a 10Gb connection in my country. There are no cards or switches; supply of equipment is short in general. I might test link aggregation with 6×1Gb, if FreeNAS supports that.
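FreeNAS does support this: on FreeBSD (which FreeNAS is built on), an LACP lagg looks roughly like the sketch below. The interface names are assumptions, and on FreeNAS you would configure this from the GUI (Network > Link Aggregation) so it persists across reboots. One caveat: LACP balances per flow, so a single SMB or iSCSI connection still tops out at 1Gb; the aggregate mainly helps when several VMs talk to the NAS at once, and your switch must support LACP as well.

```shell
# Hypothetical NICs igb0..igb5 bonded into one LACP lagg.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp \
    laggport igb0 laggport igb1 laggport igb2 \
    laggport igb3 laggport igb4 laggport igb5
ifconfig lagg0 inet 192.168.1.10/24
```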

There is a possibility I will also need an FTP server. What do you suggest for this, considering I can’t yet work with Linux well enough to set up ftpd or another FTP server from the shell? Can I virtualize FreeNAS and use it only for FTP, because of its interface? What’s your opinion?

A little research and practice go a long way. If I were you, I would set up SSH and use SFTP on the FreeNAS box. I would create a dataset to house all of the data so you can also replicate it with ZFS replication. If you need a dedicated server somewhere else for this function and Linux isn’t an option, you could probably set up FileZilla Server on Windows. But for best practice, please use SFTP instead of FTP.
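If it helps, here is a sketch of an `sshd_config` fragment that restricts one account to chrooted SFTP only. The user name and dataset path are made up, and note that sshd requires the chroot directory to be owned by root and not group-writable:

```
Subsystem sftp internal-sftp

Match User loggeruser
    ForceCommand internal-sftp
    ChrootDirectory /mnt/tank/sftp-data
    AllowTcpForwarding no
    X11Forwarding no
```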

Hello @xMAXIMUSx,

I have worked in the past with FileZilla FTP Server, the Windows IIS FTP setup, and Synology’s FTP configuration. I ran several tests and found FileZilla to be the slowest. It has been easier to get high availability on 2 Synology NAS units; I never managed to set up high availability with Server 2016 clustering, but I did manage to configure an FTP server on IIS.

Honestly, for this project we have low-spec dataloggers all over the country with 16-bit ARM processors supporting only the FTP protocol, some of them FTP over SSL/TLS, but not all of them. (Industrial things.)

So the next tests will be:

  • FTP server on the FreeNAS box, or virtualized with XCP-NG, maybe with high availability between 2 XCP-NG hosts. I know this is supported on TrueNAS servers, but I can’t proceed to purchase now because I need a proof of concept first.

  • 2 Synology or XPEnology boxes with high availability between them.

  • Win10 IIS FTP server, virtualized in an XCP-NG VM, with high availability between 2 XCP-NG hosts. I will not go into clustering Windows Server; I found it relatively difficult, time-consuming, and lacking in documentation.

Thanks for your time.