Network Speed Issue with 10GbE and UnRAID

I humbly ask this community for some assistance. This issue has been driving me crazy and I cannot figure out where I have gone wrong.
I recently built an UnRAID server out of some spare parts I had lying around; the gear is not super old. I also rebuilt my workstation around the same time. Some preliminary tests transferring files between the workstation and the UnRAID box showed I was saturating my 1 Gbps network. I saw that the Mikrotik CRS305-1G-4S+IN was less than $150, so I went all in on moving these computers to 10 Gbps. I bought two Asus XG-C100C 10GbE network cards, installed everything, and… disappointment.
The transfer rates were much lower than I was expecting. From the workstation TO UnRAID, I am seeing between 3.8 and 4 Gbps. But from UnRAID TO the workstation, I am seeing only 1.6 to 1.8 Gbps.
I have tried all manner of configurations. I have even hooked the machines together directly, without the Mikrotik, to narrow down the possibilities. At no time does the CPU of either machine get above about 9%. Memory usage is at about 5%, and disk usage is also very low.

Have not tried:
Changing the MTU to 9000 or similar; it is still at 1500. (Will changing that interfere with accessing the rest of the network at 1 Gbps?)
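For what it's worth, enabling jumbo frames would look roughly like this. The interface names here (`eth0`, "Ethernet") are placeholders for the actual adapters, and the switch must also be set to pass jumbo frames:

```shell
# UnRAID / Linux side: raise the MTU (interface name is a placeholder)
ip link set dev eth0 mtu 9000
ip link show dev eth0            # verify the new MTU is reported

# Windows side (run in an elevated prompt; adapter name is a placeholder):
# netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent
```

As I understand it, TCP traffic to 1500-MTU hosts on the same network should still work, because each end advertises its own maximum segment size during the handshake; but mixing MTUs on one subnet can cause trouble for non-TCP traffic, so it's worth testing.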

Have tried:
Downloaded and updated drivers on the workstation. (There was a newer driver on the Asus website.)
Swapped out the Asus 10GbE card for a Mellanox card in the UnRAID server.
Changed cables (a few times). All cables are CAT6, in runs shorter than 25 feet. (At one point I used a DAC to connect the Mellanox card to the Mikrotik and CAT6 with an SFP+ copper module to the Asus card… same results.)
Hooked the computers directly together (no switch). Same results as with the Mikrotik.
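One quick sanity check when results are asymmetric like this is to confirm that both NICs actually negotiated a 10 Gbit link; a sketch (the interface name is an assumption):

```shell
# On the UnRAID/Linux box: check the negotiated link speed
# (a healthy 10GbE link reports "Speed: 10000Mb/s")
ethtool eth0 | grep -i speed

# On the Windows box, the equivalent check in PowerShell would be:
# Get-NetAdapter | Format-Table Name, LinkSpeed
```

If either side shows 1000Mb/s or 2500Mb/s, the cable or module is the problem regardless of anything else.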

Thanks for any input.

Specs as follows:

Win10 Pro 64-bit, Ryzen 9 3900X, FireCuda 520 NVMe SSD, 64 GB RAM, Asus XG-C100C 10GbE NIC

UnRAID 6.8.3 Pro, AMD FX-8350 8-core, 16 GB RAM, 256 GB NVMe SSD cache drive, 5 × 8 TB Seagate NAS drives for storage, Mellanox MCX311A-XCAT (ConnectX-3 CX311A) PCIe 3.0 x4 single-port 10GbE SFP+ NIC, 10Gtek SFP+ copper module.

I believe changing to jumbo frames is something you should try. I don't personally have 10GbE to my unRAID server in my home network, but from what I've read, it sounds worth a shot; some people say it made an improvement for them.

I don't think it will mess up your 1 Gb clients or traffic, but maybe someone with more experience changing MTUs can chime in.

I enabled jumbo frames and saw no noticeable change in speed. Thanks.

Swap to CAT6a or even CAT7/8 cables…

It is a pretty short run. I am willing to grab a CAT6a cable and try, but I really don’t think that will help.

I have a similar setup and I am not using a CAT6a cable. Are you sure you are transferring data from the cache drive and not from the HDDs?

These speed numbers are from iperf3. I was under the impression that iperf3 does not use the drives on either machine for its speed tests.

I don't know how iperf3 works in detail. I tested with a "normal" Windows copy from the cache drive to the NVMe drive.

My understanding is that it takes the drives out of the equation and tests just the raw network bandwidth between the two endpoints.
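That's my understanding too: iperf3 sends data from memory to memory, so disks are not involved. For anyone following along, both directions can be measured from one side like this (the server IP is a placeholder):

```shell
# On one machine (e.g. the UnRAID box), start the server:
iperf3 -s

# On the other machine, run the client; by default the client sends:
iperf3 -c 192.168.1.10 -t 30

# -R reverses direction (the server sends), so both paths can be
# measured without swapping roles -- handy for asymmetric results:
iperf3 -c 192.168.1.10 -t 30 -R

# -P runs parallel streams, which helps rule out a single-stream limit:
iperf3 -c 192.168.1.10 -t 30 -P 4
```

If a single stream is slow in one direction but four parallel streams fill the pipe, that points at per-connection tuning (window sizes, offloads) rather than cabling.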