Currently we have 7 physical servers, all Windows:
2x web servers, 2x SQL servers (clustered), 2x DCs, and 1x SQL Reporting Server.
The SQL servers are connected via SAS links to a SAN for shared storage of the SQL data.
All are running Server 2012 R2 on old hardware, so I'm looking to bring it into the 21st century and virtualise it all before it goes EOL next year.
I've not done a physical-to-virtual conversion before, so I'm looking for suggestions and ideas around the design of it.
Everything is currently connected via 1Gb switches.
I'm thinking 2 host servers (Hyper-V) with the 7 servers spread across the two, and the Hyper-V hosts probably clustered.
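For reference, the two-host cluster idea boils down to something like the following PowerShell (a rough sketch only; the host names `HV01`/`HV02`, cluster name, and IP are placeholders, and this assumes the Failover Clustering feature is installed on both hosts):

```powershell
# Validate the two hosts before clustering (names are placeholders)
Test-Cluster -Node HV01, HV02

# Create the cluster with a spare IP on the management network
New-Cluster -Name HVCL01 -Node HV01, HV02 -StaticAddress 192.168.1.50

# Add the shared storage and convert it to a Cluster Shared Volume
# so VMs on either host can use it
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

The main design consequence of clustering is that the VM files need to sit on storage both hosts can see, which feeds into the storage questions below.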
The SQL servers get hit quite hard, with fairly heavy I/O.
Do I really need a SAN for the SQL data, or would a NAS presenting iSCSI LUNs over 10Gb links back to the Hyper-V hosts suffice performance-wise versus a SAN?
The idea in my head is one 1Gb network for general traffic, a 10Gb network for iSCSI between the hosts and the storage, and a 10Gb network for Hyper-V live migration.
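That three-network split would look roughly like this on each host (a sketch under assumptions: the adapter names and the migration subnet are made up, and this uses Kerberos authentication, which needs constrained delegation set up in AD):

```powershell
# vSwitch on the 1Gb NIC for VM and management traffic
# (adapter name "NIC-1Gb" is a placeholder)
New-VMSwitch -Name "vSwitch-LAN" -NetAdapterName "NIC-1Gb" -AllowManagementOS $true

# Enable live migration and pin it to the dedicated 10Gb subnet
# so it never competes with iSCSI or VM traffic
Enable-VMMigration
Add-VMMigrationNetwork 10.0.20.0/24
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
```

The 10Gb iSCSI NICs would stay outside any vSwitch, dedicated to storage traffic.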
Would I use local storage on the hosts for the VMs, or would they live on shared storage too? A separate NAS from the SQL data NAS, with 10Gb links?
Do I need 10Gb switches, or 1Gb switches with 10Gb SFP+ uplink ports? Or will direct 10Gb links from the hosts to the NAS work?
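Whichever physical topology it ends up being, the host side of the iSCSI connection is the same handful of commands (sketch only; the portal address `10.0.10.10` is a placeholder for the NAS, and multipath/MPIO setup is omitted):

```powershell
# Make sure the iSCSI initiator service runs at boot
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the NAS as a target portal (IP is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.10

# Connect persistently so the LUNs reconnect after a reboot
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

With direct host-to-NAS links you'd just run this per dedicated NIC; with switches you'd normally add MPIO across two paths for redundancy.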
Suggestions and thoughts most welcome.