Two Network Interfaces on a Raspberry Pi Lower Network Speed?

Hello,

So, I added a USB 3.0-to-Gigabit Ethernet dongle to my Raspberry Pi 4B to brute force solve an issue with two Docker containers having a port conflict on eth0. (tl;dr Pi-Hole is annoying when it comes to behaving well with other servers on your network.)

My previous speed tests on this machine were extremely consistent, and matched up with the rest of my internal network. I’m now seeing a consistent 20-25 percent reduction. I expect 220Mbps down/13 up on this machine, and now routinely get 175-180Mbps.

I just plugged the USB interface in and went on my way, so what I think is happening is that the OS is just giving network traffic (packets) to whichever interface is available at that moment. There’s got to be some overhead/latency to doing that, which causes just enough of a delay to be noticeable and skew the speedtest results.

I’ve got a ridiculously fast connection for my use case, so I’m not immediately concerned about this causing problems, but if it’s a symptom of something being misconfigured, I’d like to fix that. If it’s just that the traditional speed tests don’t account well for multiple NICs, I’ll stop worrying about it.
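
For reference, this is roughly how I’ve been checking which interface outbound traffic actually goes through (eth0 is the built-in port here; eth1 is only a guess at the name the USB dongle shows up as on your system):

```bash
# Rough sketch of the checks; eth1 is only a guess at the name
# the USB dongle gets on your system.

ip -br addr                 # list interfaces and their addresses
ip route show default       # which default route(s) exist, and their metrics
ip route get 8.8.8.8        # which interface/source the kernel would pick
                            # for an outbound connection right now
```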

When you say you just plugged it in and went on your way…

You have an RPi with two network interfaces, both connected to the same subnet? Both connected to the same container? I’m far from a Docker expert, but unless you configured them with some sort of bond/LAG I would not expect that to have helped any.

Are the other servers on your network also containers on the Pi, or are they on something else?

Thanks for your reply. :slight_smile: My goal here was to end a port conflict between containers (or rather, between pi-hole and every other container).

I can tell each container to use a specific network interface, so I’m letting Pi-hole stay on its own interface (eth0). It apparently has to have direct access to :443 traffic, since ads served over SSL are (apparently?) an AdSense thing. Thanks, Google.

The USB interface is getting every other container. These include public-facing micro-services. This was the easiest way to solve the port conflict on :443, and has the additional benefit of just letting me pull a single cord to kill all public-facing incoming network connections, without taking down my internal DNS.

Pi-hole does not support SSL connections without some sort of deep magic I don’t understand, so even though I can reverse proxy it, figuring out how to actually do that will take more time than I have. (If/when I reverse proxy Pi-hole, I could theoretically get back down to a single ethernet interface without port conflicts. Apparently, the Pi-hole team just assumes everyone will use a VPN to manage it from outside the network, which is … annoying.)

You might just be maxing out the Pi; there’s quite a bit going on there by the sounds of it.

I have to confess I don’t know how Pi-hole works, but if you have one interface for Pi-hole and the other for the other servers, then there shouldn’t be any “which interface is free” logic going on: each container can only use the one it’s been allocated.

Does Pi-hole do some clever DNS/ARP stuff, or does the one interface have two VLANs, one for WAN and another for LAN? iperf might be an idea, to find out whether the Pi hardware is coping with the load.
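
Something along these lines, roughly (iperf3 on the Pi as the server, any other box on the LAN as the client; the address is just a placeholder):

```bash
# On the Pi: start an iperf3 server
iperf3 -s

# On another machine on the LAN (192.168.1.50 is a placeholder for the Pi):
iperf3 -c 192.168.1.50        # client -> Pi throughput
iperf3 -c 192.168.1.50 -R     # reverse: Pi -> client throughput
```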


Sorry. I realize on re-reading my last message I was a bit loose with my language.

Let me try to clarify a bit:

  1. Pi-hole’s configuration does not allow any port customization or designation of which network interface to use; but:
  2. For ease of install and maintenance (and, in all honesty, because I’m still new to Linux and want to be able to recover quickly if I blow something up), Pi-hole runs inside a Docker container, so it’s isolated from the rest of the system.
  3. When I bring up the Pi-hole container, I can tell Docker to only allow that container to talk to the rest of the world on a specific network interface (rough sketch below).
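
It’s roughly this kind of thing, though the addresses and image names here are placeholders rather than my exact setup:

```bash
# Sketch only; 192.168.1.50 (built-in eth0), 192.168.1.51 (USB NIC),
# and the non-Pi-hole image name are placeholders.

# Pi-hole only publishes its ports on the built-in interface's address
docker run -d --name pihole \
  -p 192.168.1.50:53:53/tcp -p 192.168.1.50:53:53/udp \
  -p 192.168.1.50:80:80 -p 192.168.1.50:443:443 \
  pihole/pihole

# Everything public-facing publishes :443 only on the USB NIC's address,
# so both can "have" port 443 without conflicting
docker run -d --name some-public-service \
  -p 192.168.1.51:443:443 \
  some/public-image
```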

Since I’m seeing lower speed on a command-line speed test, what I think might be happening is that the speed test is defaulting to multi-stream downloads, and something about running multi-stream over two NICs is causing a slowdown.

I haven’t managed to figure out how to tell the speedtest command-line tool to use a single stream, which should stick to one NIC (I think?).

I can’t log into the desktop environment and run a speedtest there. My Pi hates running XFCE; trying to run a speed test with one Firefox or Chrome tab open throttles the CPU so hard that it only reports half my total speed.

¯\_(ツ)_/¯

First bit there makes sense, so the Pi-Hole can only see one NIC.

Which OS are you running the speed test from? The host OS on the Pi, another container, or a different device?

Still not entirely sure why anything would be trying to use two NICs. Do you have both NICs set up and available from the “host” OS on the Pi? If you run it from another box on the same network, do you consistently get a better speed?

Still just sounds like you are hitting the limits of the Pi honestly.

I’m running a CLI client for speedtest.net on Manjaro Linux, the Pi’s host OS.

I managed to figure out the issue after my last post. I’m now getting the expected results from the Pi on eth0 (internal NIC).

The problem does seem to be a hardware limitation, but not the one we suspected. It finally occurred to me to check and see if the speedtest CLI app had a help menu. It does. Oops.

First I tried a single-stream download (the default is multi-stream). That made it slower. So, no.

There was also an option to bind the speedtest downloads/uploads to a single IP. I put in the IP of the Pi’s built-in link and got the expected result I mentioned above.
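
For anyone reading along later, the two options look roughly like this if the client is the Python speedtest-cli (the official Ookla client spells its options differently, and the address below is just a placeholder for eth0’s IP):

```bash
# Sketch assuming the Python speedtest-cli package; flag names differ on
# the official Ookla client. 192.168.1.50 stands in for eth0's IP.

speedtest-cli --help                  # where the options turned up
speedtest-cli --single                # one connection instead of several
speedtest-cli --source 192.168.1.50   # bind the test to the built-in NIC
```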

So, something about the way the program/web app works doesn’t deal well with having two available NICs. It’s multi-stream, so I’m guessing it tries to divide the streams between the available NICs, gets confused, and falls apart.

Testing the USB NIC, I see that it’s always just a little bit slower. On the first test it was a good 20 megabits slower; performance anecdotally seemed to improve after repeated runs, then ended up ~20 megabits slower again. My guess is it’s some combination of 1) USB overhead causing the constant slight slowdown, and/or 2) some sort of low-power mode that considerably slows down the initial run.
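
If anyone wants to check the low-power theory on their own adapter, USB autosuspend is visible in sysfs; something along these lines, with the device path being whatever your system actually uses rather than what I’ve written:

```bash
# Sketch for checking USB autosuspend; the device path under /sys varies,
# so find your adapter with lsusb first.

lsusb                                          # identify the USB NIC
grep . /sys/bus/usb/devices/*/power/control    # "auto" = may autosuspend,
                                               # "on"   = kept awake

# Keep one device awake while testing (2-1 is a placeholder path):
# echo on | sudo tee /sys/bus/usb/devices/2-1/power/control
```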


I should note for completeness’ sake that it IS possible to bind NICs to specific apps at the OS level using something called a “network namespace” (rough sketch below, for the curious). While this sounded sensible and straightforward at a high level, trying to read a tutorial made me see black spots, so I chose to take the minor performance hit (10-20 percent?) on that NIC and cause myself headaches with Docker, as normal. :slight_smile:
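
The version I bounced off looked roughly like this; every name and address below is a placeholder, and I haven’t actually run it end to end:

```bash
# Rough sketch of the network-namespace approach I skipped; pubns, eth1,
# the addresses, and the service name are all placeholders.

sudo ip netns add pubns                  # create a namespace
sudo ip link set eth1 netns pubns        # move the USB NIC into it

# Configure the NIC inside the namespace
sudo ip netns exec pubns ip addr add 192.168.1.51/24 dev eth1
sudo ip netns exec pubns ip link set eth1 up
sudo ip netns exec pubns ip route add default via 192.168.1.1

# Anything launched inside the namespace can only ever use that NIC
sudo ip netns exec pubns some-public-service --port 443
```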

Thanks for your help trying to figure this out.

No problem, glad you have got it sorted.
