TrueNAS 13 and VMware ESXi issue

Installing TrueNAS 13 on an HPE ProLiant DL380p Gen8 for a home lab. The storage system has (18) 1.2 TB 10K SAS drives and (4) 128 GB SATA SSDs. The application is multipath iSCSI with multiple VMware ESXi 7.0.3 hosts as the initiators. The original design is a single pool with the 18 disks in a RAIDZ1 data vdev and the 4 SSDs in a cache vdev. The problem is that ESXi won't create any datastores on the target. The error message in the vmkernel.log on the ESXi host complains about a missing file system, so I suspect ESXi is having trouble formatting VMFS on the volume. Any ideas?
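For reference, the exact error is easiest to read straight from the host shell while retrying the datastore creation. This is just a generic diagnostic sketch; the naa.* identifier is a placeholder for the actual LUN:

    # on the ESXi host: watch the kernel log while retrying the datastore creation
    tail -f /var/log/vmkernel.log | grep -iE 'vmfs|iscsi|naa'

    # list the iSCSI LUN as ESXi sees it (size, sector sizes, status)
    esxcli storage core device list
    esxcli storage core device capacity list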

A RAIDZ1 with 18 disks is not recommended: resilvering a vdev that wide takes a long time and stresses all 17 remaining drives, so the risk of a second disk failing (and taking the pool with it) before the resilver completes is way too high.

Do you have an HBA in IT mode (not a RAID card), and did you pass the entire HBA through to TrueNAS?

Chris

I put the RAID card in HBA mode. When TrueNAS comes up, it sees each drive and returns the serial number and model number, so based on that I assume hardware RAID is turned off, correct? It's also worth noting that a Smart Array P420 won't boot from any drive connected to it in HBA mode. And because the drives are connected to a hot-plug backplane, I can't cable any of them to a stand-up HBA card and boot from there either; I have to boot from an SSD connected to a USB/SATA adapter.
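For what it's worth, here's a quick way to double-check from the TrueNAS CORE (FreeBSD) shell that the disks really are exposed directly; da0 below is a placeholder for one of the data disks:

    # list the disks as the OS sees them; directly attached disks show their real model strings
    camcontrol devlist

    # if SMART data and the real serial number come back, the controller is not hiding the disk
    smartctl -i /dev/da0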

As to the drive layout, would you use RAIDZ3 or RAIDZ2?

In that case you have a bit of a problem with the suitability of your system.

The general requirement of ZFS (and therefore TrueNAS) is direct access to the disks, without any RAID card in between. Switching a RAID card to HBA mode is not sufficient for reliable operation, as many have found out the hard way. In other words: it works, until it doesn't. Please have a look at "What's all the noise about HBAs, and why can't I use a RAID controller?" on the TrueNAS Community forum if you want more details.

For running TrueNAS under ESXi you need to pass through the entire HBA to the TrueNAS VM, not the individual drives.

As to the pool configuration, that depends entirely on your use case, so you will need to specify it in some detail. Some rough capacity numbers below.
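For a ballpark, with 18 × 1.2 TB disks (raw figures, before ZFS metadata overhead and the generous free-space headroom that block storage wants):

    RAIDZ2, one 18-wide vdev:  (18 - 2) × 1.2 TB ≈ 19.2 TB
    RAIDZ3, one 18-wide vdev:  (18 - 3) × 1.2 TB ≈ 18.0 TB
    2 × 9-wide RAIDZ2 vdevs:   2 × (9 - 2) × 1.2 TB ≈ 16.8 TB
    9 × 2-way mirrors:         9 × 1.2 TB ≈ 10.8 TB

Keep in mind that for iSCSI zvols the usual advice is mirrors (or at least several narrower vdevs) rather than one wide RAIDZ vdev, since block workloads are IOPS-bound and a RAIDZ vdev delivers roughly the random IOPS of a single disk.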

Update: I was able to create some test volumes of 3-5 TB and successfully create VMware datastores on them. As soon as I configured any volume (zvol) above 6 TB, the ESXi host complained about not being able to create a VMFS 6 file system. The weird thing is that if I configure a large NFS share and mount it on ESXi, I can create a datastore perfectly. Not sure why NFS works but iSCSI doesn't.
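To narrow down where the over-6 TB creation fails, the same steps can be run by hand from the ESXi shell; this is a generic sketch, with naa.xxx standing in for the real device identifier:

    # check how the LUN reports its size and logical/physical sector sizes
    esxcli storage core device capacity list

    # show the partition table ESXi sees on the device
    partedUtil getptbl /vmfs/devices/disks/naa.xxx

    # retry the VMFS-6 format manually to capture the full error
    # (assumes partition 1 was left behind by an earlier datastore attempt)
    vmkfstools -C vmfs6 -S testds /vmfs/devices/disks/naa.xxx:1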

A couple of challenges with this server.

First, the Gen8s are notorious for being picky about what cards you can install in the PCIe slots. If you put in a card the server does not like, the card works, but the six fans spin at 100%. I've even had trouble with HP NICs. I have two DL380 Gen8s and they both show the same behavior. Quite annoying.

Second, all the drives are connected to a hot-plug backplane. The backplane has two mini-SAS cables that connect to the controller. So the problem is finding an HBA that supports up to 12 drives and doesn't spin the fans at 100%.