TCP windowing and latency


I’ve seen a couple of iPerf threads where members report different throughput results between single-stream and parallel speed tests across WAN links. I think it would be helpful if you did a video explaining why TCP windowing and latency matter when running these tests across the internet.
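The effect being asked about comes down to the bandwidth-delay product: a single TCP stream can never have more than one window of unacknowledged data in flight, so its throughput ceiling is roughly window size divided by round-trip time. A minimal sketch of that arithmetic (the window and RTT values are just example numbers, not anyone’s measured results):

```python
# Rough sketch of why latency caps single-stream TCP throughput.
# Ceiling for one TCP stream ~= window_size / round_trip_time.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Theoretical ceiling for a single TCP stream, in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

# Example: a 64 KB window at various round-trip times.
window = 64 * 1024  # bytes
for rtt in (1, 20, 80):  # ms: LAN, regional WAN, long-haul WAN
    print(f"RTT {rtt:3d} ms -> {max_throughput_mbps(window, rtt):8.1f} Mbit/s")
```

The same window that allows hundreds of Mbit/s on a 1 ms LAN path caps out under 10 Mbit/s at 80 ms, which is why the same test can look very different across the internet. (Modern stacks scale the window automatically, so real numbers vary.)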



I have thought about doing a video on this topic, along with why iPerf does not really represent real-world use. I need to set up an IMIX traffic generator to show the difference. I have not used it, but there is this one that is supposed to work well.


If this is what I think it is, TCP window sizes used to be a fun way to manipulate online gaming performance, giving advantages that would otherwise be unfair or outright cheating.
I haven’t seen any real-world advantage to messing with it anymore, particularly since the OS now manages this size itself.

Would love to know more about this.

That was really a myth, since gaming depends more on low latency than on data transfer rates. Most online multiplayer games use very little data, but that data needs to be delivered responsively.

You are correct that systems seem to manage it fine, but in the case of iPerf, it only uses a single stream by default. Many applications are able to run multiple streams when transferring data, which is why parallel tests can show higher totals on high-latency links.
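The single-stream point above can be sketched numerically: if each stream is window-limited to one window per round trip, then N parallel streams each carry their own window in flight, so the idealized aggregate scales with N (real links add congestion control and fairness effects this ignores; the 64 KB window and 80 ms RTT are just illustrative numbers):

```python
# Sketch: why a parallel iPerf test (e.g. iperf3 -P 8) can report a
# higher total than a single-stream run on a high-latency path, when
# each stream is limited by its window rather than by link capacity.

def stream_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Per-stream TCP ceiling in Mbit/s: one window per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

def aggregate_ceiling_mbps(streams: int, window_bytes: int, rtt_ms: float) -> float:
    """Idealized total across parallel streams (ignores congestion)."""
    return streams * stream_ceiling_mbps(window_bytes, rtt_ms)

window, rtt = 64 * 1024, 80  # example: 64 KB window on an 80 ms WAN path
print(f"1 stream : {aggregate_ceiling_mbps(1, window, rtt):6.1f} Mbit/s")
print(f"8 streams: {aggregate_ceiling_mbps(8, window, rtt):6.1f} Mbit/s")
```

In this idealized model, eight streams give roughly eight times the single-stream ceiling, which matches the pattern people see when comparing single and parallel iPerf runs across WAN links.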