Exposing storage for databases

Hey all

I have a compute server running XCP-ng for VMs and a storage server running TrueNAS. The compute and storage are directly connected to each other over a 10G link. For VM storage I use an NFS share on the storage server. I like to keep my VMs small and store all the data separately, outside of the VM disk itself. My question: what would be the best option to store the databases/Docker data that run on those VMs? Should I mount a separate dataset over NFS inside the VM for this kind of storage? (I have done some reading and most posts I could find discourage keeping database files on NFS; is that assumption correct?) Or should I expose it through iSCSI, which might be more performant for database storage? What I like about the NFS option is that I don't have to define a size for a zvol, which gives flexibility long term, but I'm afraid of the stability issues it may cause. Are there other options I might be overlooking?

Thanks

There is nothing wrong with setting up NFS, provided you put it on a dedicated storage network. I do this by giving my VMs two interfaces: one for the app they are serving and the other for storage.

Nearly all my VMs store the application data on a dataset exported per NFS. No stability issues at all.

Just to show you how crazy I am: the TrueNAS instances exporting the datasets over NFS are themselves VMs residing in the same VLANs as their client VMs.

Just saying that to let you know that you don’t need to fear using NFS as storage inside your VMs.
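For what it's worth, if you do mount a dataset for database data inside a VM, the mount options matter at least as much as the choice of NFS itself. A sketch of an /etc/fstab entry under assumed names (the `truenas.storage` hostname, export path, and mount point are placeholders for your own setup):

```shell
# /etc/fstab -- hypothetical NFSv4 mount for database data inside a VM.
# "truenas.storage" and the paths are placeholders; adjust for your setup.
# hard: block I/O until the server responds instead of returning errors
#       (avoid "soft" for databases, it can corrupt data on timeouts).
# noatime: skip access-time updates, cutting unnecessary metadata writes.
# _netdev/nofail: wait for the network and don't hang boot if it's down.
truenas.storage:/mnt/ssd_pool/pgdata  /var/lib/postgresql  nfs4  hard,noatime,nofail,_netdev  0  0
```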

Maybe I am doing it incorrectly, but I use the Docker NFS driver to store my database data and other data on NFS shares. It seems to work fine for me, but the share is on a virtualized instance of TrueNAS running on the same Proxmox host, so networking isn't much of a limitation. Also, I am only doing this for a couple of low-traffic WordPress-based web sites. I have it set up as follows in my docker compose file:

```yaml
    # volume mounts under the WordPress service
    volumes:
      - type: volume
        source: wp-site
        target: /var/www/html
        volume:
          nocopy: true

    # volume mounts under the database service
    volumes:
      - type: volume
        source: wp-data
        target: /var/lib/mysql
        volume:
          nocopy: true

# top-level volume definitions
volumes:
  wp-site:
    driver_opts:
      type: "nfs"
      o: "vers=4,addr=192.168.1.2,nolock,soft,rw"
      device: ":/mnt/ssd_pool/wordpress/site"
  wp-data:
    driver_opts:
      type: "nfs"
      o: "vers=4,addr=192.168.1.2,nolock,soft,rw"
      device: ":/mnt/ssd_pool/wordpress/db"
```
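The same NFS-backed volumes can also be created ahead of time with the Docker CLI instead of letting compose define them. A sketch, using the same address and export path as the compose example above:

```shell
# Pre-create an NFS-backed named volume (equivalent to the top-level
# "volumes:" entry in the compose file). The "local" driver's nfs type
# is built into Docker; address and path match the example above.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=vers=4,addr=192.168.1.2,nolock,soft,rw \
  --opt device=:/mnt/ssd_pool/wordpress/db \
  wp-data
```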

It is on a dedicated storage network; moreover, the storage and the compute are directly connected via fiber, and there is no routing happening.

Are you running any business applications? For me it will be PostgreSQL data that I want to export via NFS, and reliability is really important, more so than speed I would say.

This is really useful information; I was not aware of the Docker NFS driver. We do have lots of Docker containers, and I was planning to mount an NFS share directly on the Docker host and store the Docker data that way, but the way you do it seems even better :slight_smile:

Thank you so much y’all


I have a handful of Postgres DBs storing their files on NFS. No stability issues, although probably not the best performance.

What are the specs of your TrueNAS server and the server running PostgreSQL?
We have a pretty beefy TrueNAS server (256 GB RAM, 4 x 10 Gbps uplinks, AMD Epyc…). Our compute server has similar specs.

Over the next few days I will do some benchmarking to see how well it performs.
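If it helps, a common way to benchmark the NFS-backed path from inside a VM is fio with a pattern that roughly resembles small-transaction database I/O. A sketch, assuming fio is installed; the test directory, size, and runtime are placeholders:

```shell
# Hypothetical fio run against a directory on the NFS mount.
# 8k random read/write with fsync after each write roughly mimics
# OLTP-style database I/O; adjust size/runtime to your environment.
fio --name=db-sim --directory=/mnt/nfs-test \
    --rw=randrw --rwmixread=70 --bs=8k --size=2g \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --fsync=1 --runtime=60 --time_based --group_reporting
```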

Compared to your setup I am running a toy example; this is just a homelab. I am curious what you will find, but I am pretty sure performance will not be killer…