I am practicing setting up a large network in Cisco’s Packet Tracer.
In the real world, can a single subnet handle more than 500 hosts? What about 1,000 hosts?
The reason I ask is that I'm not sure whether a subnet can handle up to 1,000 hosts (computers, laptops, smartphones, etc.), wired or wireless. Would a /21 be too much? What about a /20? And what about a single dedicated DHCP server? If I were to create a scope with a start address of 10.10.0.0/21 or 10.10.0.0/20, would such a large scope use up a lot of memory on the DHCP server?
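For the raw numbers, Python's standard `ipaddress` module can show how many usable host addresses each prefix provides. A quick sketch, using the 10.10.0.0 scope from the question:

```python
import ipaddress

# Usable hosts = total addresses minus the network and broadcast addresses.
for prefix in ("10.10.0.0/21", "10.10.0.0/20"):
    net = ipaddress.ip_network(prefix)
    usable = net.num_addresses - 2
    print(f"{prefix}: {usable} usable hosts")
# 10.10.0.0/21: 2046 usable hosts
# 10.10.0.0/20: 4094 usable hosts
```

So either prefix covers 1,000 hosts with plenty of headroom; the question is really whether the broadcast domain and the gear can cope, not the address math.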
Our Wi-Fi subnet at work is a /20. That said, client isolation is enabled, so the clients basically have access to the internet and that's it. We run all Cisco gear, so client traffic is all tunneled back to the WLC over a CAPWAP tunnel.
Most of our workstation networks are a /23 though.
Where are you running your DHCP server? DHCP is a pretty low-overhead protocol; however, if you're running it on a switch you're working with limited resources, unlike running it in a VM.
The questions you asked would have made sense when CIDR and DHCP were new, but even the most basic home router now has more CPU and RAM than the most powerful Cisco ISP or datacenter routers had back then.
The details related to a DHCP lease are tiny: a 32-bit IP, a 48-bit MAC, and a 32-bit timestamp, so 14 bytes. The storage format in RAM or on disk might not be perfectly efficient, so let's round up to 32 bytes. 1,000 leases then take up 32 KB. It's been a really long time since computer (or router) memory was less than 1 MB. Even the cheap, slow CPUs used in entry-level home routers have more CPU cache than that (actually, that's an assumption; I don't know how much cache some of the decade-old MIPS CPUs that still get used have, but their CPU cache is still going to be massive compared to what used to be total system memory).
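That back-of-envelope math is easy to sanity-check. A small sketch, where the 32-byte padded size is the same generous guess as above, not any real DHCP server's storage format:

```python
import struct

# Raw lease fields: 4-byte IPv4, 6-byte MAC, 4-byte timestamp
# ("!" = network byte order, no alignment padding).
RAW_LEASE = struct.calcsize("!4s6sI")   # 14 bytes
PADDED_LEASE = 32                       # assumed round-up for a real storage format
leases = 1000
total = leases * PADDED_LEASE
print(RAW_LEASE, total)                 # 14 bytes raw, 32000 bytes (~32 KB) for 1,000 leases
```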
The only modern hardware where you see limits on the number of entries in a table is Layer 3 switches. They will often have limits like "max 10,000 routes in the routing table", "max 8,192 MACs in the FDB", "max 8,192 hosts in the ARP table". That's because they aren't storing things in general-purpose memory; these are fixed-size allocations in the switch fabric chip, which needs to have a lot of things hardcoded so it can run at line speed.
Thank you everyone. In Packet Tracer, I’m going to use a dedicated DHCP server in order to simulate real-world scenarios. One example is something like this:
(I limit the aspect ratio to 4x3 to save on file size.)
That is a network I made a few days ago; I am building a small internet consisting of 7 cities in Packet Tracer.
At least I know that I can have as large a subnet as I want.
Even the cheapo $35 router I bought is having no trouble with a /16 network. Granted, I don't have thousands of clients connected (and never will), but it should be robust enough for the 250-plus things that will need to be attached, and any current server hardware would be far more than robust enough to handle this. I'm not sure if Packet Tracer can handle it; I guess that's up to the designers of the software and whether they planned for it to handle CCIE classes, where the networks can get pretty large.
CCIE! Well, I'm going for CCNP next before going for that!
Oh, yeah. I did use BGP in Packet Tracer even though I'm only CCNA certified as of October 18, 2021. I still haven't gotten into CCNP, but I'm hoping to get accepted into Cisco's CX Apprenticeship Program soon.
None of the CCIE materials I ever encountered in my 26+ years as an active CCIE (#1937, Emeritus) dealt specifically with scale, since most concepts in networking can be mocked up with relatively small numbers of devices (my CCIE lab only had like 3 switches and at most 8 routers). You're more likely to hit an arbitrary software limit in Packet Tracer than experience any real-world issues with large subnets. Devices that do have limited resources for things like routes, ARP entries, and MAC forwarding entries also tend to time them out on a periodic basis anyway, and will remove the oldest entries to make room for new ones should that need ever arise.

The biggest issue with large broadcast domains was in the "olden days" before switches, when each LAN segment was really only half-duplex, which meant only a single device could transmit at any one time. The conventional wisdom was to limit each Ethernet LAN segment to about 30 devices, but a study by some university showed that the limit was larger (like 50 to 60, if I'm remembering correctly).

Modern switched networks don't have nearly as many such limitations, but the fact that broadcast (and often multicast) traffic is replicated to all hosts means that very large network segments can suffer just because of the load of broadcast traffic. Many managed switches include a feature to throttle broadcast traffic to some safe level, and there are other features such as port isolation that can make very large segments a reasonable thing to do, but all the network engineers I've ever worked with would be extremely wary of a design that included segments larger than a /22.
Sorry for the long, rambling post, but I thought some of the folks here might appreciate the perspective of a network Old Timer like me.
Any ramble from a long-time network engineer is welcome.
Thanks for the comment.
I'm not a super-expert, but I don't think the CIDR size of the subnet makes any difference whatsoever; it's only the number of devices actually operating that affects your broadcast traffic.
I like to use /20 subnets in the 10.x.x.x range because it gives me lots of flexibility for assigning static addresses to devices that need them so they are easier to remember while still having plenty of room in the DHCP pool.
I also like the practice of using the 2nd number in the subnet range to match the VLAN number.
For instance, I have 4 VLANs (10, 20, 30, and 100), and I assign them 10.10.0.0/20, 10.20.0.0/20, 10.30.0.0/20, and 10.100.0.0/20 respectively.
I guess this matters to me because I have a terrible memory!
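That second-octet-equals-VLAN convention can be sketched in a few lines. The `vlan_subnet` helper name is mine, and I'm assuming the /20 size mentioned earlier:

```python
import ipaddress

def vlan_subnet(vlan_id: int) -> ipaddress.IPv4Network:
    # Second octet mirrors the VLAN ID, per the memorability trick above.
    return ipaddress.ip_network(f"10.{vlan_id}.0.0/20")

for vlan in (10, 20, 30, 100):
    print(f"VLAN {vlan}: {vlan_subnet(vlan)}")
# VLAN 10: 10.10.0.0/20
# VLAN 20: 10.20.0.0/20
# VLAN 30: 10.30.0.0/20
# VLAN 100: 10.100.0.0/20
```

One thing to keep in mind: with a /20, the third octet must start on a multiple of 16, so keeping it at 0 (as here) avoids any invalid network addresses.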
If I remember correctly, the recommendation I saw was 500 hosts in a given broadcast domain.