Convert pfSense into L7 FW - Adam Networks

No. The whitelisting approach refers to the packet filter. In other words, all outgoing connections are dead until a successful DNS query is made. This prevents you from connecting directly to an IP, but it doesn’t stop you from reaching server.yourdomain.tld as long as that domain name is not on the DNS blacklist. Of course, you can also take a whitelisting approach with the DNS blocking itself, but that is not really feasible unless you want to explicitly whitelist every domain name your users should be allowed to visit.

  • SSH and VPN - Yes - as long as they are not running over 80/443 and no successful DNS request was made.
  • DoH - Not sure. Requests over 443 to the DoH server - probably not. But since an external DNS server would answer the query, it wouldn’t create the appropriate firewall rule in pfSense to allow the follow-up connection, so probably - Yes.
  • Proxy: Web services over 80/443 are generally accessible if the corresponding DNS request was successful, so - it depends, I guess.

As far as I remember, ports don’t matter. It’s been a while since I watched this video and thought about it. The way I remember it, the FW only creates temporary conditional allow rules based on the IP addresses in the DNS whitelist (or do they use a blacklist?). But maybe I am not remembering this correctly?

Sounds like a nightmare. I mean how would they maintain something like that? But yes, if they really maintain a DNS whitelist, then you wouldn’t be able to access anything that isn’t on the whitelist. But you’d probably also get a triple-digit number of tickets every day, because people aren’t getting to the sites they need to access. :wink:

pfBlockerNG does this very function.

In a general sense, I see what you are saying, but technically pfBlockerNG uses a blacklist of known DNS/DoH IP addresses and does not interact with the DNS server directly - AFAIK. However, in a general sense it kind of does the same thing, so your point stands.

If they don’t use a whitelist then they are just shifting the shell game around, and they don’t have anything really new here. Blacklist via an IP FW list or via a DNS list that feeds the FW rule list, they both basically do the same thing.
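To make “a list that feeds the FW rule list” concrete, here’s a rough sketch of the pattern (not how pfBlockerNG actually implements it - the file name, table name, and rule in the comment are made up): pull a list of known public DNS/DoH server IPs into a pf table that a block rule references.

```python
#!/usr/bin/env python3
"""Sketch: feed a list of known public DNS/DoH server IPs into a pf table.

Assumes pf.conf already has something along these lines (names are made up):
    table <doh_blocklist> persist
    block out quick proto { tcp, udp } to <doh_blocklist>
"""
import subprocess

BLOCKLIST_FILE = "doh_server_ips.txt"   # hypothetical: one IP or CIDR per line
PF_TABLE = "doh_blocklist"              # hypothetical pf table name


def load_ips(path: str) -> list[str]:
    with open(path) as fh:
        return [line.strip() for line in fh
                if line.strip() and not line.startswith("#")]


def replace_table(table: str, ips: list[str]) -> None:
    # "pfctl -t <table> -T replace" swaps the table contents in one shot
    subprocess.run(["pfctl", "-t", table, "-T", "replace", *ips], check=True)


if __name__ == "__main__":
    replace_table(PF_TABLE, load_ips(BLOCKLIST_FILE))
```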

The fundamental problem is blacklists suck. They need to have a deep enough whitelist to make it useful, then default-block any outbound traffic unless the destination IP maps back to a domain (or IP) on that whitelist.

If they have that, I bet they’d stop you and me from getting out. If not, then they just have what you already have - and you and I both could get out from your network (and mine) without any trouble. Default allow is a big problem. I think solving this on the DNS side is easier than the IP side. But that does mean reddit would be either allowed or blocked for said individual.

I really think that whichever direction you go, whitelist or blacklist, it will always be a cat and mouse game - never ending.

DNS entries are ever changing, so administering new entries for a site like Amazon or Facebook or Mastodon would be astronomical. Not to mention, if you are in it for privacy, you have to trust companies like this to use AI to decide what gets allowed, with the potential for false positives and allowed threats. The same can be said about the blacklists.

The argument is about replacing layer 7, and I don’t think it can be done with the methods you are mentioning. Security is always a hard topic for most because everyone has their own tolerance on the subject. I think having both implemented is better than having one over the other. But in no fashion is the DNS “special sauce” ever going to be better than layer 7 inspection, which actually examines the packets for malicious activity.

Adam Networks does in fact maintain a lot of DNS whitelists for clients to subscribe to and use. It’s far from 100%, but it does cover a fair bit. We use the whitelist policies mostly for servers (they can be restricted the most) and finance/payroll employees. It’s tricky and does take regular upkeep, but it’s doable.

They also demoed an AI driven auto whitelisting engine to me a year ago. Using a restrictive whitelist policy, users could go to a site not on the list and it would check the site and allow it in realtime if it was ok. This was also before LLMs / ChatGPT became publicly available. I can only imagine how good the engine is now.

FYI, it’s always seemed like a bad idea to me to break HTTPS encryption and MITM communication between clients and servers that’s supposed to be end-to-end encrypted. Besides, does it even work for services like Apple/MS/Amazon etc?

We used to use Appliasys Cacheboxes (fancy Squid box). They of course had to MITM traffic as well to cache HTTPS traffic, but that straight up did not work at all with any Apple or MS 365 services, among others. I don’t have any real experience with any higher-end L7 firewall vendors though, so perhaps it’s different with them.

Regardless, it still seems better to me, to not have to break TLS encryption in order to secure traffic. And if you implement Adam:ONE + their DTTS property with whitelist policies, then how useful would L7 inspection really be?

Far less astronomical than URL filtering! My DNS filter for Amazon would be an astronomical two entries (amazon.com, *.amazon.com). There would be some script work needed to grab all the domains pulled in from JavaScript, but that is not insurmountable - and those domains quickly become redundant given they are mostly CDNs and ad-tracking domains, which can be conveniently blocked as needed. Of course, there could be a few more domains I’d need to search out for Amazon given they are a huge company, but not an astronomical amount.
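The script work could be as trivial as something like this rough sketch (the parsing is naive and the example URL is just illustrative - a real pass would also need to catch resources loaded dynamically by JavaScript):

```python
#!/usr/bin/env python3
"""Sketch: list the hostnames a page references, as whitelist candidates."""
import re
import sys
import urllib.request
from urllib.parse import urlparse


def referenced_hosts(url: str) -> set[str]:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    hosts = set()
    # Grab anything that looks like an absolute URL (href/src attributes, inline JS)
    for match in re.findall(r"""https?://[^\s"'<>]+""", html):
        host = urlparse(match).hostname
        if host:
            hosts.add(host.lower())
    return hosts


if __name__ == "__main__":
    # Big sites may refuse requests without a browser-like User-Agent; this is a sketch
    target = sys.argv[1] if len(sys.argv) > 1 else "https://www.example.com/"
    for host in sorted(referenced_hosts(target)):
        print(host)
```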

However, you are absolutely right, this is a never ending treadmill to be on - for both list types.

Maybe so. But from what I have seen L7 is just URL/domain filtering inside encrypted traffic. They do all this heavy lifting just to do the same thing. Maybe they do some quick virus scanning too.

A setup like this has the potential to shift that list-based filter back down to layer 4, where a beast of a box is not necessary. And privacy is not intruded upon. And it will not be threatened by advances in TLS (1.3, I believe?).

But there are always trade offs, and this one is friction cost. How much friction do you incur using a big whitelist? As the OP mentioned, this largely depends on the user group.

What I am saying is that it’s not as simple as blocking a site and keeping privacy. It is about inspecting the traffic for malicious activity, and that is the key reason why a DNS filter and an IP whitelist can’t compare. You can have your DNS solution allow a site, but if that site is compromised in any way or runs some nasty JavaScript, then your privacy and other stuff goes out the window. It is also about rules like the Snort community rules, Emerging Threats rules, and Talos rules that monitor patterns of activity, not strictly keeping a list of URLs to stay away from.

When it comes to TLS 1.3, I believe Palo Alto (and others) have found a way to still proxy the traffic to do their inspection. I don’t think there is much risk in having the traffic decrypted and then re-encrypted with a CA that the organization trusts in its environment.

Hi. I was alerted to this post and thought I’d weigh in. I’m David Redekop, the founder of adam:ONE and grateful to Steve Gibson that he found it worthy to be shared.

Based on the discussion above, there’s a video and demo I created to help shed light on the DNS integration into pf via tables so extremely rapid firewall changes can take place in user space. In essence, firewall rules are created and destroyed rapidly. Here’s the full explanation:

https://adamnet.io/dtts

I hope that helps and we welcome further dialogue on it.

So… All this does is check to see if there is a valid DNS entry and then allow the IP address? Then, when the TTL expires, it adds it back to the table if the name is queried again?
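Something roughly like this sketch, I’m guessing (the table name, pf rules, and use of dnspython are all just my assumptions about the mechanics)?

```python
#!/usr/bin/env python3
"""Sketch of DNS-gated egress: resolve, allow the answered IPs, expire on the TTL.

Assumed pf rules (names made up), with everything else blocked by default:
    table <dns_allowed> persist
    block out quick on $wan                    # strangers are never reachable
    pass  out quick on $wan to <dns_allowed>   # only DNS-answered IPs get out
Requires dnspython and pfctl (i.e. a pf box and root).
"""
import subprocess
import threading

import dns.resolver

PF_TABLE = "dns_allowed"  # hypothetical table referenced by the pass rule above


def allow_after_lookup(fqdn: str) -> list[str]:
    answer = dns.resolver.resolve(fqdn, "A")
    ips = [rr.address for rr in answer]
    # Add the answered IPs to the table; the pass rule now matches them
    subprocess.run(["pfctl", "-t", PF_TABLE, "-T", "add", *ips], check=True)
    # Drop them again when the DNS TTL runs out; a fresh query re-adds them
    threading.Timer(answer.rrset.ttl, expire, args=(ips,)).start()
    return ips


def expire(ips: list[str]) -> None:
    subprocess.run(["pfctl", "-t", PF_TABLE, "-T", "delete", *ips], check=False)


if __name__ == "__main__":
    print(allow_after_lookup("www.example.com"))
```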

Then you claim that it stops all these attacks and threats? I’d like to know how you got your data for this.

No. Take a look at the show notes at the above URL; it includes a UML diagram.
The data is self-producible, as the video shows.

Thanks for posting here and being open for questions.

Does your DNS filter work off a default block list or an allow list? Who manages those lists, and how?

I like your approach of forcing all traffic to use internal DNS. This is sort of possible with various IP block lists, but your default-deny rule is a much tighter approach. However, one could argue that if the typical IP block list is good enough, then all of this just comes down to whose DNS filter is the best.

If your DNS is a default allow, then you basically have the same cat & mouse game as the next DNS provider - with the understanding that circumventing the local DNS is much harder (arguably impossible), which in its own right is worth a lot.

If you guys have a default-block DNS list that isn’t too aggressive (i.e. too many false positives), then you have something really special. Default deny rules for IP and DNS. Yikes, that would be hard to get out of. Not sure I could do it… I would try to spin up a VM on some public provider and try to use some subdomain of theirs to access my VM. Assuming your filter lets me resolve that top-level domain (and presumably the subdomain).

@liquidjoe the design is purposefully flexible to accommodate different “characters”. In some use cases, blocking known bad is the only practical approach, but if bundled with good awareness that such a policy provides functionality but not protection, then it may be a fit for a researcher, for example. On the other hand, when protecting a hyper-focused CFO, you’d want the filter to work by default on an allow-list, so all unknown threats are automatically blocked. Both methods are possible: start with a block-known-bad, or start with allow-known-good and build from there.

Forcing all traffic to use internal DNS requires several ingredients. Blocklisting known public DNS servers is a whack-a-mole you can never win, so when you force-redirect, i.e. hijack TCP/UDP port 53 to your server of choice, you ensure that queries aimed at legacy, custom-configured DNS servers get answered by policy instead. Double win, because endpoints get answers, but they are forcefully answered by policy.
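As a simplified illustration of the force-redirect step (placeholder interface, resolver address, and anchor name - not the exact production configuration), the classic pf pattern looks like this:

```python
#!/usr/bin/env python3
"""Sketch: hijack outbound TCP/UDP 53 so custom-configured DNS clients are
answered by the local policy resolver. Interface, resolver address, and anchor
name are placeholders; assumes an  rdr-anchor "dns_hijack"  line in pf.conf."""
import subprocess

LAN_IF = "igb1"            # hypothetical LAN interface
RESOLVER = "192.168.1.1"   # hypothetical policy resolver (the firewall itself)

# Classic pf redirect: anything aimed at an outside DNS server on port 53
# gets rewritten to the local resolver, which then answers by policy.
RDR_RULE = (
    f"rdr on {LAN_IF} proto {{ tcp, udp }} "
    f"from any to !{RESOLVER} port 53 -> {RESOLVER} port 53\n"
)

if __name__ == "__main__":
    # Load the rule into its own anchor so it can be updated independently
    subprocess.run(["pfctl", "-a", "dns_hijack", "-f", "-"],
                   input=RDR_RULE.encode(), check=True)
```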

DTTS (Don’t Talk To Strangers) has the byproduct that even if there’s some obscure hole through which DNS can be queried, those destinations aren’t reachable if they weren’t queried through the required channels.

As for whose DNS filter is the best, you don’t need to have that argument anymore. With DNSharmony you combine your various DNS threat intel sources into a real-time aggregation. If any of the DNS resolvers you choose deem fqdn.example.com as a threat, it’s blocked.

If Cloudflare’s 1.1.1.3 says it’s bad, so be it.
If Quad9’s 9.9.9.9 says it’s bad, so be it.
If CleanBrowsing says it’s bad, fine.

And, if you need to override something because they all dislike example.com that was hijacked yesterday but it’s your own, then you override them all. We love Dr. Paul Vixie’s maxim: “My network, my rules”.
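As a simplified illustration of that “any block vote wins” aggregation (not the actual DNSharmony implementation - the resolver addresses are examples and the block-signalling details vary per provider):

```python
#!/usr/bin/env python3
"""Sketch: aggregate several filtering resolvers; any single block vote blocks.
Block signalling (NXDOMAIN vs a 0.0.0.0-style sinkhole) varies per provider,
so those details are assumptions. Requires dnspython."""
import dns.resolver

FILTERING_RESOLVERS = {
    "Cloudflare (family)": "1.1.1.3",
    "Quad9": "9.9.9.9",
    "CleanBrowsing": "185.228.168.168",  # one of CleanBrowsing's filtering addresses
}


def blocked_by(fqdn: str, server: str) -> bool:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(fqdn, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return True  # a filtering resolver saying "no such name" counts as a block vote
    # Some filters answer with a sinkhole address instead of NXDOMAIN
    return any(rr.address in ("0.0.0.0", "127.0.0.1") for rr in answer)


def allowed(fqdn: str) -> bool:
    # "My network, my rules": one block vote from any source blocks the name
    return not any(blocked_by(fqdn, ip) for ip in FILTERING_RESOLVERS.values())


if __name__ == "__main__":
    for name in ("example.com", "fqdn.example.com"):
        print(name, "allowed" if allowed(name) else "blocked")
```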

As for having a blocklist that isn’t too aggressive, that’s not up to us to tell. The use cases are so broad that it only makes sense for you to make a policy specific to each device role type. Most of our managed clients end up with about 6-8 policies that address the org’s needs. Here’s a sampling of what we consider “protection levels”:

Protection Level 0: unfiltered – no DTTS (Don’t Talk To Strangers) is enforced, no DNS is filtered, queries are upstreamed to the likes of 8.8.8.8/8.8.4.4

Protection Level 1: DNSharmony with DTTS bypass – here you combine threat intelligence from the likes of CloudFlare’s 1.1.1.3/1.0.0.3 along with Quad9, CleanBrowsing, AdGuard, etc, but with the DTTS bypass you keep full functionality of legacy IP-based applications that could break with DTTS

Protection Level 2: DNSharmony with DTTS enforcement – simply adding DTTS now offers massive protection but in a business/enterprise environment, if you have any applications on endpoints that are referencing static IPs rather than FQDNs, they’ll need to be provisioned via Enablers (think of Enablers as exemptions to DTTS)

Protection Level 3: Reflex AI with DTTS – now you’re taking advantage of per-FQDN tags, of which there are 74 fairly industry-standard ones, from known-bad (pornography, phishing, malware, etc) to known-ok (education, government, business, economy, etc), and you create as many of these policies as you like in order to fit a given role in a business, enterprise, school, church, etc… The key with this protection level is that the typical innocent user doesn’t even know they’re behind a filter of any kind until and unless they stumble on something in a blocked tag. The most important tag here is “unknown”: if a given FQDN has no reputation and is a low-popularity domain, it’s simply blocked. This is why most DGA domains simply don’t work to ensnare an intended victim at this protection level and above.

Protection Level 4: Adaptive AI with DTTS – adding strength to the previous one, the existing tags are ignored until the user actually says “unblock request” and then it’s a fresh sandboxed AI-driven review of the destination to make sure it is still safe at time of inspection. A vulnerable CFO or wire transfer agent is a perfect character for this type of policy.

Protection Level 5: Static Allowlist – this is just as it suggests… maybe it only needs to update its Windows/Linux OS, offer the MSP remote access, make some internet-bound API calls, and that’s it. Static, safe, everything works. When a new application is installed, it goes through a workflow to confirm any new requirements are added. This is how we protect Active Directory controllers, for example. In a traditional environment where endpoints all ask AD for DNS, it means AD is necessarily the most relaxed policy. Not here. It’s the strictest policy. We describe this in more detail at adamnet.io/dttsad

Protection Level 6: Holding Tank Quarantine – this one is neat, because if you make it the default, the OS “thinks” it is online because the likes of captive.apple.com and dns.msftncsi.com and other “connectivity checks” pass, and that’s it. Nothing else. When you make this the default, you get a similar effect to what the industry intended years ago when it designed NAC, but without the complexities. Just apply a default policy to yet-unknown devices so they can’t do damage (including, for example, having access to AD) - think how much fun the Blue teams would have when Red teams try to implant a Pi in a boardroom ethernet port :slight_smile:
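A simplified illustration of the holding-tank idea (not the actual implementation - the allowlist below is just the connectivity-check names mentioned here plus one other common one):

```python
#!/usr/bin/env python3
"""Sketch: a holding-tank DNS policy - only OS connectivity checks resolve."""

# Connectivity-check names mentioned above plus one other common one;
# a real policy would need the full set for each OS (this list is illustrative).
HOLDING_TANK_ALLOWLIST = {
    "captive.apple.com",
    "dns.msftncsi.com",
    "www.msftconnecttest.com",
}


def should_answer(qname: str) -> bool:
    """Answer only the connectivity-check lookups; refuse everything else."""
    return qname.rstrip(".").lower() in HOLDING_TANK_ALLOWLIST


if __name__ == "__main__":
    for name in ("captive.apple.com", "www.example.com"):
        print(name, "resolves" if should_answer(name) else "blocked (holding tank)")
```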

Protection Level 7: No internet (no boundary crossing) at all – this will cause mobile endpoints to throw up a typical captive portal page, but for devices requiring a unidirectional firewall, it’s perfect.

You get the picture. These are just examples, but when you take things from basic building blocks of inverting the old “block known bad” to “allow known good”, you get all kinds of possibilities.

Damn, if everything works as advertised you have one impressive setup. Drop any one of us behind one of the more stringent lock down scenarios and I bet nobody would be getting out, even with hands on the keyboard. All with basic blocking and tackling from L4 stuff. How cool.

Hopefully, in a world that sells on reputation, size, and prestige, the better technology will win out. Which, on the face of it, sounds like you guys have the edge. If I were in my old role I would be calling you; for now I’ll keep your info in my back pocket.
