Hi,
I have the following setup:
- pfSense with 2 static WAN IPs (gateway groups/failover)
- DDNS via Cloudflare: A records are updated on failover (not proxied, no tunnel)
- HAProxy (and ACME) installed on pfSense
- 3 Synology DiskStations, 2 of them running Synology Drive Server
- DNS for all DiskStations is pointing to HAProxy (on pfSense) on WAN and LAN
Everything is running fine so far, and I can access Synology DSM through HAProxy. The only problem I have is the Synology Drive Client for Windows and macOS. Synology Drive Client uses TCP port 6690, which cannot be changed. I set up a TCP frontend (and backend) for that port, which works fine. I don't need SSL offloading on that port since the DiskStations get their certs via Let's Encrypt (for encrypted TCP connections to Synology Drive Server).
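For reference, the working single-backend setup looks roughly like this (trimmed to the relevant lines):

frontend SynologyDrive
    bind *:6690
    mode tcp
    default_backend XXX-DS01-Drive_ipvANY

backend XXX-DS01-Drive_ipvANY
    mode tcp
    server xxx-ds01 172.16.80.11:6690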
Unfortunately, on TCP frontends I can't use the same ACLs as on HTTP frontends, since non-HTTP TCP connections don't carry HTTP headers (which makes sense).
I need a custom ACL to reach the Synology Drive Server on each of the DiskStations. Right now I can only use the Drive Server on one DiskStation, because that one's backend is set as the default backend (without an ACL).
I found the payload option, but I can't get it to work. Can anyone help me with the implementation of that ACL?
Thanks!
Paul
You have 2 options.
- SNI-based routing:
frontend ft_ssl
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_server1 if { req_ssl_sni -i server1.example.com }
    use_backend bk_server2 if { req_ssl_sni -i server2.example.com }
    use_backend bk_server3 if { req_ssl_sni -i server3.example.com }

backend bk_server1
    mode tcp
    server s1 192.168.1.10:443

backend bk_server2
    mode tcp
    server s2 192.168.1.11:443

backend bk_server3
    mode tcp
    server s3 192.168.1.12:443
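If more hosts get added later, a map file keeps the SNI routing in one place. A sketch, assuming a hypothetical map file /etc/haproxy/drive_hosts.map and a fallback backend named bk_default; the frontend then becomes:

frontend ft_ssl
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # look up the SNI value in the map file; route to bk_default when unknown
    use_backend %[req.ssl_sni,lower,map(/etc/haproxy/drive_hosts.map,bk_default)]

# /etc/haproxy/drive_hosts.map
server1.example.com bk_server1
server2.example.com bk_server2
server3.example.com bk_server3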
- Destination-port based routing:
frontend ft_tcp
    bind *:9001
    bind *:9002
    bind *:9003
    mode tcp
    use_backend bk_server1 if { dst_port 9001 }
    use_backend bk_server2 if { dst_port 9002 }
    use_backend bk_server3 if { dst_port 9003 }
Thanks for your answer!
I just gave it a try, but it didn't work. My frontend now looks like this:
frontend SynologyDrive
    bind 172.16.80.1:6690 name 172.16.80.1:6690
    bind WAN1-IP:6690 name WAN1-IP:6690
    bind WAN2-IP:6690 name WAN2-IP:6690
    mode tcp
    log global
    timeout client 30000
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend XXX-DS01-Drive_ipvANY if { req_ssl_sni -i xxx-ds01.domain.de }
I also tried the following, which is pretty much the same thing, but that didn’t work either:
frontend SynologyDrive
    bind 172.16.80.1:6690 name 172.16.80.1:6690
    bind WAN1-IP:6690 name WAN1-IP:6690
    bind WAN2-IP:6690 name WAN2-IP:6690
    mode tcp
    log global
    timeout client 30000
    acl xxx-ds01 req.ssl_sni -i xxx-ds01.domain.de
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend XXX-DS01-Drive_ipvANY if xxx-ds01
My backend:
backend XXX-DS01-Drive_ipvANY
    mode tcp
    id 104
    log global
    timeout connect 30000
    timeout server 30000
    retries 3
    load-server-state-from-file global
    server xxx-ds01 172.16.80.11:6690 id 106
Am I missing anything obvious here?
I can’t use the port based method, since Synology Drive Client can only use port 6690.
You can set the frontend port to be whatever and then map the backend port properly. The frontend and backend don’t have to match.
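For example (IPs assumed), the frontend can listen on arbitrary ports while every backend still targets 6690 on its DiskStation:

frontend ft_drive
    bind *:9001
    bind *:9002
    mode tcp
    use_backend bk_ds01 if { dst_port 9001 }
    use_backend bk_ds02 if { dst_port 9002 }

backend bk_ds01
    mode tcp
    # the backend port doesn't have to match the frontend port
    server ds01 192.168.1.10:6690

backend bk_ds02
    mode tcp
    server ds02 192.168.1.11:6690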
I know, but since I cannot change the port on the client, that doesn't help me in my situation.
Synology Drive Client (Win/macOS) TCP port 6690 > frontend TCP port 6690 > backend TCP port 6690.
Installed pfSense 2.8.0 with HAProxy 2.9.14-7c591d5 last night, but no change.
What I found out in the meantime:
Whenever I use one DiskStation as the default backend, which works fine, I can see 3 TCP connections being established in the logs, and then the client is connected.
When I try to use the req.ssl_sni option, only 2 TCP connections are established and the connection fails.
I don’t know what Synology is doing with their desktop clients, but it’s really annoying!
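Next thing I'll try is capturing the SNI into the TCP log, so I can see what the client actually sends, and keeping the working backend as a catch-all for connections that arrive without SNI. Roughly like this (the log format is just an example):

frontend SynologyDrive
    bind 172.16.80.1:6690 name 172.16.80.1:6690
    mode tcp
    log global
    timeout client 30000
    tcp-request inspect-delay 5s
    # store the SNI (if any) so it shows up in the log line
    tcp-request content capture req.ssl_sni len 64
    tcp-request content accept if { req_ssl_hello_type 1 }
    log-format "%ci:%cp %ft %b sni:%[capture.req.hdr(0)]"
    use_backend XXX-DS01-Drive_ipvANY if { req_ssl_sni -i xxx-ds01.domain.de }
    # catch-all for the connection(s) that apparently come in without SNI
    default_backend XXX-DS01-Drive_ipvANY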
Maybe I am confused, but I don't proxy my Synology at all. I gave it a Let's Encrypt cert with a DNS challenge, it has an FQDN on my internal network, and when I am home, Synology Drive and Photos just hit the unit directly. On the road, I do one of two things: I hit the FQDN via my Tailscale network, or I use Synology QuickConnect. QuickConnect works great with double NAT, CGNAT, etc., and doesn't require a public IP address. Why bother exposing the Synology via a public IP address?