It’s on a local sales group, so even if it’s implied I think it would be hard to act upon.
I’ll just bite the bullet on the new one, I would think it’s less likely to have any issues.
New old stock all day long when the price difference is not significant; granted, $200 is at least one good enterprise drive, and probably two of them.
I got the unit ordered last night, should be in this week.
I already had 5 x 12TB Seagate IronWolf drives. These should work great. I’ve also got a ton of Samsung 1TB SSDs which will work fine to populate the 2.5" drive bays and serve as cache drives.
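For reference, here is a minimal sketch of how a couple of those SSDs could be attached as read cache (L2ARC) from the shell. The pool name `tank` and the device IDs are placeholders, and on TrueNAS you would normally do this through the UI instead:

```
# Add a pair of SSDs as L2ARC (read cache) to an existing pool.
# "tank" and the /dev/disk/by-id paths are placeholders; check your
# own pool name and device IDs first (ls -l /dev/disk/by-id).
zpool add tank cache \
  /dev/disk/by-id/ata-Samsung_SSD_1TB_EXAMPLE_A \
  /dev/disk/by-id/ata-Samsung_SSD_1TB_EXAMPLE_B

# Verify the cache vdevs show up under the pool.
zpool status tank
```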
I will have the unit tomorrow, pretty excited to dig in. I have an HPE Aruba 2930 with two 10Gb SFP+ add-on cards. Each add-on card has 4 SFP+ ports, with all of the other ports on the switch being 2.5Gb.
I have some testing lined up to use this as an iSCSI target for my ESXi and XCP-ng clusters. They mainly use local SAS SSDs for the datastores, so this will just be testing for a future project. If this is a unit I fall in love with, I want to buy the new mini rack server whenever it comes out for a full-scale iSCSI target. I have 10 x 3.8TB Samsung SAS drives ready.
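For anyone following along, attaching the target as a shared SR from the XCP-ng CLI looks roughly like this. The IP, IQN, and SCSIid below are made-up placeholders (probe first and use the values your target actually reports), and Xen Orchestra can do the same thing from the web UI:

```
# 1. Probe the target to list available IQNs (this intentionally
#    returns an error that contains the IQNs it found).
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.50

# 2. Probe again with the IQN to get the SCSIid of the LUN.
xe sr-probe type=lvmoiscsi \
  device-config:target=192.168.10.50 \
  device-config:targetIQN=iqn.2011-03.example.truenas:lab

# 3. Create the shared SR using the SCSIid from step 2.
xe sr-create name-label="truenas-iscsi" type=lvmoiscsi shared=true \
  device-config:target=192.168.10.50 \
  device-config:targetIQN=iqn.2011-03.example.truenas:lab \
  device-config:SCSIid=360014051234567890abcdef
```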
I guess you already know this, but iSCSI does not allow for thin provisioning, while NFS does, and NFS does not perform much worse.
I am well aware.
I have looked at a few reviews of this kind of usage and they got solid performance with spinning rust.
Spinning drives have their place, and a big part of that is cost. Spinning drives cost less per terabyte in the large sizes than solid state, especially when you compare single-level against quad-level devices. An 8TB enterprise SSD is $1200+ USD, while an 8TB HGST spinning drive is $200 USD, both SATA only; that works out to roughly $150/TB versus $25/TB.
The old saying still holds: speed costs money, how fast do you want to go? Cars, drives, it still applies.
As far as iSCSI vs NFS vs SMB, I’ve done some tests and need to do more with iSCSI. Depending on the “block” size of the data, NFS can really stomp SMB, but at the smaller “block” sizes SMB is faster by about double. This applies (to a limited extent) to both ESXi and XCP-NG. It also didn’t matter if I was sending to a spinning drive array or a single NVMe drive; both were TrueNAS Electric Eel, and the NFS speeds from both hypervisors were approximately the same. ESXi doesn’t support SMB shares for the VMs (as far as I know). All testing was done with thin provisioned storage; there may be some small gains with NFS as thick. Eventually I’ll get to trying iSCSI to see if there is a big enough difference to move to thick storage.
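For anyone wanting to reproduce this kind of comparison, a block-size sweep with fio against each mounted share is one way to do it (your exact numbers will vary). The mount point below is a placeholder, and --direct=1 may need to be dropped on some SMB mounts:

```
# Sweep a few block sizes against one share, then repeat with the
# --directory pointed at the other share and compare the results.
for bs in 4k 64k 1m; do
  fio --name="write-${bs}" \
      --directory=/mnt/nfs-test \
      --rw=write --bs="${bs}" \
      --size=4G --direct=1 \
      --iodepth=16 --numjobs=1 \
      --runtime=60 --time_based \
      --group_reporting
done
```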
Why do I care about the speed? Migrating a VM from one storage to another is a process, and I want that process to go faster. Windows updates are a small process; I’d like that to go a bit faster too. Those are the only two cases where I see my storage not being as fast as I’d like, and so far they are the two cases where throwing faster storage at the problem isn’t doing anything. 4k writes are double the speed over SMB compared to NFS, so I may move all my production over to SMB shares (XCP-NG). VMware is in my lab only; I can’t afford it for production here. Migrating a VM (live or shut down) from storage1 to storage2 over SMB takes about half the time of NFS to NFS, and all of this is with sync off because the speed is really slow with sync on. I think the only way to speed up enough with sync on is a large and very fast write cache (SLOG) device, as more RAM doesn’t seem to help much. The production server has only 32GB of RAM for cache, the lab has 96GB of RAM, and the write speeds are approximately the same.
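Roughly, the ZFS settings in question look like this; `tank/vmstore` and the NVMe device path are placeholders for whatever your own pool layout is:

```
# sync=disabled acknowledges writes before they reach stable storage
# (fast, but in-flight writes can be lost on power failure).
zfs set sync=disabled tank/vmstore

# sync=standard is the default; NFS and iSCSI sync writes then wait on
# the ZIL, which is where a fast dedicated SLOG device helps.
zfs set sync=standard tank/vmstore

# Adding a dedicated SLOG (log vdev) to the pool.
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE_SSD_SERIAL
```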
Kind of a big long post that’s slightly off topic, but it’s a case where 8 spinning drives are doing the same as a single NVMe that is 20 times faster. The local speeds on that NVMe are over 20Gbps and my network is 10Gbps; I expected much faster small “block” sizes and that didn’t happen when running TrueNAS with the NVMe drive.
To me none of that really matters in my homelab. I want something functional and reliable. SAS and NAS-specific spinning rust drives are what I use and what I will probably keep using for any bulk storage.
I do have a bunch of Samsung 3.8TB SAS drives I use for the datastores on my clusters. They are local storage arrays, each with 4 drives in a RAID 5 volume. I haven’t really had any issues, but these only have about 8k hours of powered-on time.
Some of my WD SAS drives are upwards of 20k powered-on hours, and my 8TB Exos 7E8 drives have been solid. Plenty of spares for these ready for when they are needed.
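If anyone wants to check powered-on hours on their own drives, smartctl reports it; `/dev/sdX` is a placeholder, and SAS drives report the value in a different line than SATA drives do:

```
# SATA drives expose it as the Power_On_Hours SMART attribute.
smartctl -A /dev/sdX | grep -i power_on

# SAS drives report it in the "Accumulated power on time" line instead.
smartctl -a /dev/sdX | grep -i "power on time"
```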
I do have 10G for my hosts and the storage that interfaces with them. Everything else is 2.5G, with 1G for my CCTV system and phones.
So I got my Mini X+ today and found it interesting that the boot drive was a WD Blue NVMe drive, and that it had 64GB of RAM rather than 32GB.
This was a sealed box, never opened.
I thought they had SATADOM boot drives?
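In case it helps anyone else checking what their unit shipped with, a few shell commands confirm it (this assumes a Linux-based TrueNAS SCALE install; `boot-pool` is the default boot pool name, and the equivalents differ on CORE):

```
lsblk -o NAME,MODEL,SIZE,TRAN   # lists drives, so an NVMe boot device shows up by model
free -h                         # confirms the installed RAM
zpool status boot-pool          # shows which device actually holds the boot pool
```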
Since we were corresponding on this for a while, it feels like I should respond, but I do not have a Mini X+ and don’t know what it should ship with. It certainly makes sense to have the system on a SATADOM to leave more slots free for NVMe cache drives.