I want to set up a large HPC cluster with a 40Gb/s QSFP InfiniBand interconnect and a large (~380TB) storage pool.
I have the following hardware.
1. Supermicro 2U, 4-node rack server. Each node has 2x Xeon E5-2670, 128GB RAM, and 2x QSFP ports.
2. Intel TrueScale 12200, 18-port, 40Gb/s QDR InfiniBand switch.
3. 12x QSFP 40Gb DAC cables (1 meter).
4. Another Intel 1U, 2-node server. Each node has 2x Xeon E5-2620 and 128GB RAM, and is attached to 2x NetApp DS4246 disk shelves. Each shelf has 24 drive bays loaded with 24x 8TB WD drives, with IOM6 modules connecting back to an LSI 9300-8e 12Gb/s 8-port SAS SGL PCIe host bus adapter in the Intel server. (This will act as the storage server.)
5. Dell PowerVault 124T LTO-5 tape library.
6. Netgear ProSafe 24-port Gigabit switch.
7. I want the entire VM lab to operate over the IB network, using about a dozen VLANs (or the InfiniBand equivalent, partitions) to split all the segments up. The Gigabit network is only there to get traffic from the lab out to the web and back in.
8. I want the storage subsystem to operate in iSCSI mode, with active-active failover across the two IB links.
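One caveat on item 7: InfiniBand has no 802.1Q VLANs, so the usual way to get equivalent segmentation is partitions (PKeys) defined in the subnet manager (OpenSM), plus one IPoIB child interface per partition on each node. A rough sketch of what I mean, assuming OpenSM runs on one of the nodes; the partition names and PKey values here are just examples, not a worked-out plan:

```
# /etc/opensm/partitions.conf -- one partition per "VLAN"
Default=0x7fff, ipoib, defmember=full : ALL;
lab10=0x8010, ipoib : ALL;    # e.g. management segment
lab20=0x8020, ipoib : ALL;    # e.g. VM traffic segment

# On each node, create an IPoIB child interface per PKey:
#   echo 0x8010 > /sys/class/net/ib0/create_child
# which creates an ib0.8010 interface that can be addressed like a VLAN subinterface.
```

Each child interface then gets its own subnet, so the segments stay isolated at the fabric level rather than at layer 2.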
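For item 8, "active-active" on the initiator side is typically done with dm-multipath over two iSCSI sessions, one per IPoIB interface, rather than anything the target negotiates by itself. A minimal sketch of the idea, assuming a Linux initiator; the WWID and alias are placeholders, not values from my setup:

```
# /etc/multipath.conf -- sketch only
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  <LUN WWID goes here>
        alias vmstore
        # "multibus" keeps both IB paths active simultaneously,
        # so I/O is spread across the links and either can fail.
        path_grouping_policy multibus
    }
}
```

The initiator would log in to the same target once through each IB link, and multipathd would then present the two sessions as a single active-active block device.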
I am not sure XCP-ng and FreeNAS will be able to handle all this.
I am thinking of installing Qlustar on all the nodes and trying it out.
Tom, please comment.