Fast plus backup to big might be OK depending on how often you sync, but how many drives are you talking about and what sizes are you looking at? It seems that with a combination of fast and big you might be able to just settle on medium-sized drives and have the redundancy all in one pool, then do cloud backups for off-site storage. If you have 8 drives, you could run two 4-drive vdevs and combine them in a “stripe”. Then you could lose 1 drive per vdev without losing data and still get the speed from the simple stripe. If you are going to use all SSD, the speed should be decent enough. More drives = better performance but higher cost.
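To put rough numbers on that layout, here's a quick back-of-the-envelope sketch (the drive size and per-drive speed are just placeholder assumptions, not your actual hardware):

```python
# Rough math for an 8-drive pool built as two single-parity vdevs striped together
# (RAIDZ1-style). All sizes and speeds below are assumptions for illustration only.

drives_per_vdev = 4
num_vdevs = 2
drive_size_tb = 2.0          # assumed drive size
drive_speed_mb_s = 450       # assumed per-drive SATA SSD sequential speed

# One drive per vdev goes to parity, so usable space is (n-1)/n of raw capacity.
raw_tb = drives_per_vdev * num_vdevs * drive_size_tb
usable_tb = (drives_per_vdev - 1) * num_vdevs * drive_size_tb

# Each vdev tolerates one failed drive; reads/writes stripe across both vdevs,
# so sequential throughput scales (very roughly) with the number of data drives.
approx_throughput_mb_s = (drives_per_vdev - 1) * num_vdevs * drive_speed_mb_s

print(f"raw {raw_tb} TB, usable ~{usable_tb} TB")
print("survives 1 failed drive per vdev")
print(f"optimistic sequential throughput ~{approx_throughput_mb_s} MB/s")
```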
And if you are not making money from this, do you really need to saturate a 10Gb connection? Time is money, but when there is no money coming in, shaving off time comes at a fair bit of expense. I’d like to saturate a 10Gb connection in my test lab, but the reality is that there is no need other than curiosity. I have not purchased what I would need to do this, even though I have 10Gb optical cards for each VM host and an extra for the storage server. I’d need to upgrade drives in a very big way to get there, and that starts eclipsing the learning value. I’d also need to upgrade the storage hardware, since the older server does not have fast enough IO for that type of drive. It would be easy to spend $3,000 to $10,000 USD just to make the storage happen, and that’s a ton of money for something that doesn’t bring value beyond curiosity.
Summary is that while a 10gb network might be a good way to go, because getting 5gb of saturation might be somewhat economical, I don’t think I would build out the storage with the aim of trying to hit that full bandwidth. Or maybe my pricing is wrong and there are cheaper solutions that I’m overlooking. If there are cheaper solutions, I’d certainly like to know because I might build out my lab for SSD instead of the little spinning drives, currently sporting some massive 250gb drives at SATA 3gbs max split into 2 vdev (not striped) with 1 vdev for VM and the other for HA heartbeat and ISO storage. Certainly not the fastest beast I could create, but lets me learn without costing too much money. If I could replace all 8 drives with decent size and speed SSD for not very much money, I’d do it because I could use the storage for other things where saturating a 1gb connection would be useful (like video editing). Though that said, you can get faster with USB connected drives so maybe not.
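For a sense of scale on what saturating a given link actually demands from the drives, here's a ballpark sketch (per-drive speeds and the efficiency factor are assumptions, not measurements):

```python
# Ballpark: how many data drives does it take to keep a link busy?
# Per-drive speeds below are assumed typical sequential rates, not measurements.

def drives_needed(link_gbps, per_drive_mb_s, efficiency=0.9):
    """Very rough count of data drives needed to saturate a link."""
    link_mb_s = link_gbps * 1000 / 8 * efficiency   # bits -> bytes, minus protocol overhead
    return link_mb_s / per_drive_mb_s

print(drives_needed(1, 150))    # 1Gb link, spinning drives: under 1 drive
print(drives_needed(10, 150))   # 10Gb link, spinning drives: ~7-8 drives striped
print(drives_needed(10, 450))   # 10Gb link, SATA SSDs: ~2-3 drives striped
```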
This is getting long, but how big are these animation still frames? With lots of small files, the process of transferring the data can really slow things down. There are a lot of things that do not (in my network) saturate even a 1Gb connection. For example, when you extract the installation files for Avid Media Composer and copy them to a client workstation, there are a ton of little icon images, XML files, HTML files, etc. Each of these little files requires the receipt to be acknowledged before the transfer proceeds, and the file system has to place each one in storage, so the overall transfer speed ends up much lower than, say, transferring a very large compressed ZIP file or video file. Moving video files, I commonly see 100MBps to 112MBps across my network; each client is on a 1Gb connection, so that is essentially saturated as far as I’m concerned, and probably a limitation of the client SSD. But I’m not sure how big a 4K (or larger) TIFF or EXR image will be, so you may want to look at where the bottlenecks might be happening. Yes, a 10Gb network will make them a little faster, but it might not be what you are expecting. A sequence of HD PNG images would certainly be slower than a big ZIP file.
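As a rough illustration of why frame size matters, here's the math on uncompressed frames (the resolutions and bit depths are assumptions, and real TIFF/EXR/PNG files are usually compressed, so treat these as upper bounds):

```python
# Rough uncompressed frame-size estimates. Actual TIFF/EXR/PNG files are usually
# compressed, so these are upper bounds, not measurements.

def frame_mb(width, height, channels, bytes_per_channel):
    return width * height * channels * bytes_per_channel / 1e6

exr_4k = frame_mb(4096, 2160, 3, 2)    # half-float RGB EXR: ~53 MB
tiff_4k = frame_mb(4096, 2160, 3, 2)   # 16-bit RGB TIFF: also ~53 MB
png_hd = frame_mb(1920, 1080, 3, 1)    # 8-bit RGB HD frame: ~6 MB before PNG compression

link_mb_s = 112                        # roughly a saturated 1Gb connection
print(f"4K EXR ~{exr_4k:.0f} MB -> ~{link_mb_s / exr_4k:.1f} frames/s over 1Gb")
print(f"HD PNG ~{png_hd:.0f} MB -> ~{link_mb_s / png_hd:.0f} frames/s over 1Gb")
```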
Finally, this is not to discourage you from pursuing your desires, just some notes on what I see on my network at work. We teach (among other things) video editing, and for many years we used shared storage to accomplish this. We are currently on user-owned USB drives, but that flops back and forth with the shared storage every few years. My 8 drives and dual 10Gb connections seem to be fine for the compression we use on the files; I’ve tested past 40 streams at 35Mbps and the server is barely working (4 streams on each of 10 clients). Typically each client is only pulling 2 streams for short durations, with 1 stream being the normal working load. Specifically we are using the XDCam EX codec because that’s what our current cameras shoot, and our playback server is good with XDCam so it is an easy export. I need to test DNxHD at 145Mbps one of these days and see where things go bad. I should be able to get 2 streams on each client for a decent number of clients, but I won’t speculate on that until I try it. I’m just using SMB shares with Windows clients for this.
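For the stream math, a quick sanity check (these are nominal codec bitrates; SMB and TCP overhead will eat into the headroom a bit):

```python
# Aggregate-bandwidth sanity check for playback streams over SMB.
# Bitrates are nominal codec rates; protocol overhead is ignored here.

def total_gbps(streams, mbps_per_stream):
    return streams * mbps_per_stream / 1000

print(total_gbps(40, 35))    # 40 XDCam EX streams at 35Mbps -> 1.4Gb/s total at the server
print(total_gbps(4, 35))     # 4 streams per client -> 0.14Gb/s, easy on a 1Gb client link
print(total_gbps(2, 145))    # 2 DNxHD 145 streams per client -> 0.29Gb/s, still fits 1Gb
print(total_gbps(20, 145))   # 10 clients x 2 DNxHD streams -> 2.9Gb/s at the server
```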