If you have read any wireless article in the last three to five years, you have heard about the explosion of wireless devices on the network. I recently attended a partner briefing where some are now proposing that by the year 2020 (a mere 3 years from now) there could be as many as 50 billion IoT wireless devices in use. More amazing still, that 50 billion doesn’t include smartphones, tablets, and computers, which could increase the number dramatically. So with all these devices coming onto the network, how can you increase your network performance to a point where it can handle such a large influx? Even those who started building high density (or very/ultra high density, as some say) networks are now facing issues properly handling that level of network capacity.
So let’s talk about the things that play a factor in keeping your network operating at its peak capacity.
- Allow for overhead in your network design
Commonly when I evaluate networks, one of the key things I try to understand is what level of overhead was accounted for in the WLAN design. For the most part, everyone understands the TCP/IP overhead involved in wired networks, but they sometimes forget about it when it comes to wireless. In fact, the overhead in a wireless network can be 10-15% more than what exists in a wired network due to the amount of encapsulation the wireless data is subjected to. If you are unfamiliar with everything that gets added, check out this CWNP article that describes it. While I won’t talk about ACI/CCI here (although you can read more about that here), I want to focus on a second type of overhead in wireless that involves the number of association slots on each AP. This overhead is evaluated by determining a) the designed carrying capacity of each AP in the system, and b) how many devices above that value the network can safely carry.
When designing HD networks, I typically recommend allowing for as much as 40% overhead in client capacity to account for the movement of users through the network. If you spend time people watching, you will find that during the busy times in HD areas there are always a number of stationary users accompanied by a large number of transient users. While most of the transient users consume little bandwidth, the key thing we want to ensure is that they aren’t disconnected during movement. Obviously, each time a station must reauth/reassoc to the network, we take valuable airtime away from the total system. In HD environments it is critical that we be good stewards of the RF spectrum we have, and squeeze every bit of performance possible from it.
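The 40% rule of thumb above can be sketched as a small calculation. This is an illustrative sketch, not vendor guidance; the function name and the example association limit are my own assumptions, and only the 40% figure comes from the recommendation in the text.

```python
# Hypothetical sizing sketch: reserve a fraction of each AP's
# association slots for transient/roaming clients.

def designed_capacity(max_associations: int, overhead_pct: float = 0.40) -> int:
    """Return the stationary-client count an AP should be planned
    around, so that `overhead_pct` of its association slots stay
    free for transient users moving through the cell."""
    return round(max_associations * (1 - overhead_pct))

# An AP you are comfortable loading to 100 associations should be
# planned around roughly 60 stationary clients.
print(designed_capacity(100))  # -> 60
```

In other words, the carrying capacity you design to is deliberately lower than the association count the AP could technically sustain.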
- Reduce/Eliminate unnecessary traffic
As we just stated, we want to gather every bit of performance possible from the network, and another way we do this is by reducing or eliminating unnecessary traffic. Within 802.11, a key point to remember is that broadcast and multicast traffic is sent at the lowest basic (required) data rate, because these frame types must be decodable by ALL clients. Obviously in some networks multicast/broadcast might be required, and in those situations careful network planning and design can help provide an improved user experience. In some instances these areas might benefit from carrying this traffic only on dedicated SSIDs. This keeps these traffic types from reducing system capacity in other parts of the network where that SSID isn’t needed.
In some vendor solutions, features such as broadcast & multicast optimization and suppression exist to help reduce these traffic types. With the former, the broadcast/multicast packet is translated to a unicast packet when possible to improve network performance. The latter is self-explanatory: we simply “suppress” the traffic. These features help accomplish the next point of creating a flat & fast network.
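A quick back-of-envelope calculation shows why frames sent at the lowest basic rate hurt so much. This sketch ignores PHY preamble, interframe spacing, and MAC overhead, and the two rates are illustrative assumptions (6 Mbps as the lowest 802.11a/g basic rate, 300 Mbps as a typical 802.11n unicast rate).

```python
# Back-of-envelope airtime comparison for the same 1500-byte payload
# sent as broadcast (lowest basic rate) vs. unicast (a high MCS rate).
# Assumption: payload bits only; preamble/IFS/MAC overhead ignored.

def airtime_us(frame_bytes: int, rate_mbps: float) -> float:
    """Microseconds of airtime to transmit the payload alone."""
    return frame_bytes * 8 / rate_mbps

broadcast = airtime_us(1500, 6.0)    # lowest 802.11a/g basic rate
unicast   = airtime_us(1500, 300.0)  # typical 802.11n unicast rate

print(f"broadcast: {broadcast:.0f} us, unicast: {unicast:.0f} us")
# -> broadcast: 2000 us, unicast: 40 us
```

Under these assumptions, one broadcast frame consumes roughly 50x the airtime of the same payload delivered as unicast, which is exactly why broadcast-to-unicast translation can improve performance.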
- Improving network performance at Layer 2
In building HD networks, we like to use large subnets that in previous networks were unheard of. These days I commonly build networks of this type using /16 or /18 subnets. This is where the “flat & fast” mantra comes from. 65,000 users on the same layer 2 broadcast domain??? This will immediately raise red flags with some, but the performance gains from having all users on the same L2 network are immense. We eliminate the need to overcome layer 3 boundaries during roaming, because they simply don’t exist. Before moving to this type of architecture, though, be sure your vendor solution provides the optimization and suppression features mentioned above. The most important point here: don’t be scared of using large subnets out of fear that broadcast traffic will create the kind of nightmare it did in wired networks of the past.
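If you want to check the host counts behind those prefix lengths, the Python standard library makes it a one-liner. The 10.0.0.0 network address is just a placeholder; only the /16 and /18 prefix lengths come from the text.

```python
import ipaddress

# Usable host counts for the large L2 subnets discussed above
# (subtracting the network and broadcast addresses).
for prefix in ("10.0.0.0/16", "10.0.0.0/18"):
    net = ipaddress.ip_network(prefix)
    print(prefix, "->", net.num_addresses - 2, "usable hosts")
# -> 10.0.0.0/16 -> 65534 usable hosts
# -> 10.0.0.0/18 -> 16382 usable hosts
```

So a /16 gives you the roughly 65,000 addresses mentioned above, while a /18 still covers sizable venues.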
- Properly size ALL pieces of the network infrastructure
The final piece of the HD puzzle involves the backend systems that are often forgotten in the rush to deploy new generations of APs. Systems that provide IP services such as DHCP & DNS are absolutely critical to the WLAN system, as without them layer 3 connectivity doesn’t exist. When designing HD networks, you should account for as many as 1 DNS request per device, per second. DHCP demand can be calculated separately from the anticipated device count, with a bit of padding for the unexpected.
Example: In a network with 5,000 devices connected, you would be looking at 5,000 DNS requests per second. With 40,000 devices, that number jumps to 40,000 requests per second.
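The sizing math above can be expressed as a tiny sketch. The 1-request-per-device-per-second rule comes from the text; the 20% DHCP padding factor is my own illustrative assumption, since the article only says to add "a bit of padding."

```python
# Sizing sketch for IP services in an HD network.

def dns_qps(devices: int) -> int:
    """Peak DNS queries/second: ~1 per device per second (rule of thumb)."""
    return devices

def dhcp_pool_size(devices: int, padding: float = 0.20) -> int:
    """DHCP leases to provision; padding factor is an assumption."""
    return round(devices * (1 + padding))

print(dns_qps(5000))         # -> 5000 queries/second
print(dhcp_pool_size(5000))  # -> 6000 leases
```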
While these requests are common in every network, when the scale increases dramatically due to HD station counts, the DNS server must be capable of keeping pace. I would HIGHLY encourage you to look into an IPAM system (such as Infoblox or BlueCat, for example) to provide IP services for your network. IPAM systems are designed with this type of performance in mind, performance that typically can’t be matched by the common Windows Server environment many have in place for their network today. Even highly visible networks, such as those in professional sports stadiums, are susceptible to this. While I will allow it to remain unnamed, one such stadium’s network recently crashed on opening day because the Windows DHCP infrastructure couldn’t keep up with the number of requests. You can see how important this evaluation is. Don’t be the one that falls victim to it!
Thoughts? Comments? Bring them on 🙂
For more information on device counts in the network, check out this article on the IEEE site.