New VM storage area on a new TrueNAS

Hey gurus

I’d like to start a discussion on a project I’m facing to try and get some directional advice before I dive straight in.

Some things you should know before we start.

  • I have zero experience with TrueNAS
  • The hardware I have is fixed (because it’s free)
  • I’m not averse to trying anything
  • The end result will be used in production

Scenario
I have a Windows Server 2016 Datacenter Hyper-V host running on a DL380 G8 with 96GB of memory. It has 5 VMs running on local storage, which is now almost full. The company needs to introduce more VMs, and the file server contents keep expanding.

New hardware
Enter my newly acquired DL380 G10 12-bay 3.5" server: an 8-core Intel Xeon Silver with 16GB of RAM. It has 2 bays at the rear holding 2x 300GB SAS drives, and 12 bays at the front holding 9x 8TB DS SAS drives.

What I’ve played with so far
I’ve run the TrueNAS installation quite a few times over the past day or so, and the first problem I’ve run into is the hardware controller. I don’t appear to be able to present the disks to TrueNAS without creating a RAID0 logical drive on each disk; once that’s done they appear in TrueNAS as usable. I’m pretty sure this is not right, but it seems to work. The controller is a P420i, and try as I might I can’t seem to get it into HBA mode, which is what most forum posts suggest.

Past this I’ve played with creating and destroying zpools, NFS shares and iSCSI targets, but nothing major.
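For context on what that tinkering looks like: pools on TrueNAS are normally built through the web UI so the middleware knows about them, but the raw ZFS equivalent of the sort of layout discussed later in this thread is roughly the following (the pool name and da-device numbers are made up for illustration):

    # 7-wide raidz2 data vdev - device names are hypothetical, check with: camcontrol devlist
    zpool create tank raidz2 da2 da3 da4 da5 da6 da7 da8
    # confirm the layout
    zpool status tank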

I’ve searched around on the internet and an iSCSI target seems to be the de facto go-to for VM storage, but I’d also like some suggestions from everyone on how best to set up for …

A) VM Storage
B) SMB shares

I’d love some thoughts on this project. In the end (once I’m happy) it will be placed into a live environment, and by happy I mean once I’ve learned it inside and out so I can safely support it.

So I guess that’s it. Happy to answer any questions and have a good discussion on do’s and don’ts.

Using TrueNAS for SMB shares might be an easy way to go. Connect it to their AD and offer the shares up to the users.

Thanks for your input on this @LTS_Tom

So far today I’ve been able to get direct (software-defined) access to the disks through the hardware controller, so TrueNAS now has control over all the disks as it should. I have my pool set up with a data vdev of 32.85TiB and a log vdev of 7.28TiB.
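To double-check a layout like that from the shell, zpool list with the verbose flag breaks the sizes out per vdev (substitute your own pool name - zpool01 is what I ended up calling mine):

    zpool list -v zpool01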

I have the iSCSI, NFS, and SMB services set up, and now I need to get the unit on site to finish the config and start getting my head around targets and SMB with an AD connection.

I’m still open to people chiming in on the iSCSI vs NFS for VM storage front. What’s best for connecting to the existing hypervisor? There are Fibre Channel cards in these servers, but I’ve never used those either …

IT’S ALIVE!

Sharing files from the server and using iSCSI as an extent presented to the VM both work; the iSCSI route is more complex to set up and may have more performance bottlenecks.

I had a difficult time getting my P420 controller into IT mode; it needed some software to configure it. There may be a “Service Pack for ProLiant” that you need to “find”, and I think that has the tool I needed (I need to dig out my documents). You may also need the latest firmware for the controller, and you might as well update the BIOS and BMC while you are doing it all.
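If memory serves, the tool in question is HPE’s ssacli, which ships with the Service Pack for ProLiant. Very roughly, and assuming the embedded controller sits in slot 0 and the firmware is new enough to offer HBA mode at all, the dance looks like this:

    # show the controller and any leftover logical drives
    ssacli ctrl all show config
    # remove the per-disk RAID0 logical drives first (this destroys any data on them)
    ssacli ctrl slot=0 logicaldrive all delete forced
    # flip the controller into HBA (pass-through) mode, then reboot
    ssacli ctrl slot=0 modify hbamode=on forced

Treat that as a sketch rather than gospel - on some P420/P420i firmware the hbamode option simply isn’t there, which is where the LSI-card suggestion below comes from.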

That said, you might be better off finding a used LSI card and using that. I am not fond of the hoops I had to jump through to get my P420 controller working, but it was super cheap and I thought it would be better because it would integrate with the rest of the HP system. In the end I have it working, but it is also only a lab system, not a production system. If you have HP support, you should contact them; Gen10 is new enough that you might be able to get them to help.

Well, day 2, and @LTS_Tom you certainly were not joking about performance on iSCSI.

I got the system set up with iSCSI and initiated a transfer to the fresh connection using xcopy of a 500GB VM file. This is direct over one Gbit NIC to a UniFi switch and back, in the same rack, and it’s been going for nearly two hours. It’s consistently running at 90-105MiB/s.

I’m unsure how to use SMB to test the speed difference. Is that literally just a network share that I point the hypervisor at, mapped-drive fashion?

I think we need to pin down the sizes and numbers involved to see whether what you’re getting is expected.

Is the image 500GB?

The transfer rate - is that 90-105Mbps or 90-105MB/s? (The latter is what to expect on a 1Gbps link.)
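As a rough sanity check, assuming the figure really is 90-105 MiB/s, a single gigabit link is the bottleneck rather than the pool:

    1 Gbit/s ÷ 8 = 125 MB/s theoretical maximum
    minus Ethernet/TCP/iSCSI overhead ≈ 110-117 MB/s realistic
    90-105 MiB/s ≈ 94-110 MB/s, i.e. close to line rate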

Also - going from your screenshot, I would guess the pool has been set up as raidz1? That gives you a lot of capacity, but not much in terms of IOPS.
The SLOG - are you using two 8TB spinners for that? That’s way too much space, and not exactly fast - better to put in some small SSDs (with PLP and high write endurance - the TrueNAS forums have discussions about what makes a good SLOG).

If you want to see how busy your disks are, you can use “gstat” on the command line - and I also find “zpool iostat -v” a nice tool.
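For example (substitute your own pool name; the trailing number is just a refresh interval in seconds):

    gstat -p                      # per-disk %busy, physical disks only
    zpool iostat -v zpool01 5     # per-vdev IOPS and bandwidth, refreshed every 5 seconds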

zpool01 is a raidz2 pool with 7x 8TB DS SAS drives; the SLOG is a mirror of 2x 8TB DS SAS drives.

Guess I won’t be moving stuff to this server just yet if it’s configured wrong.

For serving files I would have no issue using raidz2 (I use it at home for photo editing over 10Gbps - though I have added both a SLOG and L2ARC), but as a backend for running VMs in anything other than a lab, where performance doesn’t have to be good … not so much :slight_smile:

With VMs you normally want high IOPS and sync writes (which in turn calls for a fast SLOG). It’s not fun to write at huge speeds and then, after a power failure, find that all your VMs are corrupted due to the lack of sync writes. If I remember correctly @LTS_Tom has made some videos about this.
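If you do end up pulling the 8TB spinner mirror out of the log role and checking how sync is set, the shell side is roughly this - the mirror-1 label and the zvol name are assumptions, so take the real ones from zpool status:

    # log vdevs can be removed from a live pool; use the label zpool status shows
    zpool remove zpool01 mirror-1
    # check whether sync writes are enforced on the dataset/zvol backing the VMs
    zfs get sync zpool01/vm-zvol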

So, following up on this thread: this has been a bit of a nightmare, to be honest. Clearly this was beyond my understanding for something I was putting into production. You live and learn.

It seems like there is a weird idiosyncrasy with having domain controller VMs stored anywhere but on local storage. This really broke the network when I moved the DCs off to their new home, and I ended up losing the BDC entirely due to lack of sync. The PDC thankfully restored from a backup back onto local storage without issue.

This is such a shame, as I have a tonne of storage in this TrueNAS box and virtually none on the local Hyper-V host.

I’m going to look into properly shrinking the drives in the VHDX files, I think. The 500GB images I have for the SQL Server and CRM servers are not that big - hell, the database in them is only 2GB. Clearly the person who set up the disks sized them wrongly; I might just have to rebuild them or shrink them to a smaller size. All the OS drives are 256GB and the secondary images are all 500GB. Massively overkill for this tiny office.