Recently I was reading an article from one of the many controller-less WLAN vendors that brought up the topic of WLCs being a thing of the past. The article went so far as to say that the “aging architecture of wireless controllers can no longer meet the pace of technology innovation.” What do you think? Are WLCs a thing of the past, with no real value to the end user? I don’t think so. I don’t think they’re going away; they’re merely adapting to the network of today. How so, you ask?
When WLCs were first created, no one dreamed wireless networks would become the primary method of network access they are today. With the push towards faster office buildouts and more flexible spaces, many businesses are now beginning to truly embrace wireless access. Even verticals like finance that had long been holdouts due to the “insecure nature of wireless” are accepting the fact that a) wireless isn’t as insecure as once thought, and in some cases may actually be more secure than a wired port, and b) business (and life, for that matter) without wireless access is a thing of the past. With soaring numbers of access points in operation today, the discussion of centralized (WLC) vs. distributed (controller-less) environments continues to appear. Growing pains have long been evident in the controller-based world. If you aren’t constantly thinking about your network, the resources of your WLC can be quickly consumed. Once that happens, your options are pretty slim: one, rip and replace the controller with a bigger box, or two, add another box, which creates additional burden on the network administrator and architect, who now have to ensure the two boxes work with, and not against, each other.
So, to the initial argument: are WLCs going the way of the dinosaurs? While controllers have grown much like other network hardware over the years, adding capacity and features, the distributed nature of controller-less systems will always win the scalability battle from a network management standpoint. A cloud-based NMS (such as the Aerohive HiveManager, Aruba Central, Meraki, Ruckus, Mojo, or Mist dashboards) allows the wireless network to grow by simply plugging in additional APs and adding them to the NMS. A cloud-based NMS also presents some unique benefits in this age where network analytics have become extremely valuable. Since cloud connectivity for management of the distributed network is already in place, the conduit to funnel network information and statistics to big-data-crunching machines exists as well. But wait, you say, aren’t you agreeing that WLCs aren’t important anymore? Not exactly. The problem I see with cloud-based systems is this: how do you get all of that data out of your network and into the cloud without bogging down your ISP link? As bandwidth needs keep increasing for end users and businesses alike, the bottleneck in most networks is no longer the LAN/WLAN itself. The problem is that an internet pipe large enough to both serve the bandwidth needs of the end users/business AND handle the upload of all the data consumed by cloud-based NMS and AI systems is prohibitively expensive in some parts of the world. While one can typically find large pipes of 1/10 Gbps and higher at not-cheap-but-still-affordable prices in major metropolitan areas, that’s not always the case elsewhere. NEWS FLASH: not every organization with large bandwidth needs lives in one of those major metropolitan areas! So what does one do?
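To put rough numbers on that uplink bottleneck, here’s a quick back-of-envelope sketch. The per-AP telemetry rate and AP count are illustrative assumptions on my part, not figures from any vendor:

```python
# Back-of-envelope: upstream bandwidth consumed by full telemetry export.
# All figures below are illustrative assumptions, not vendor numbers.
ap_count = 500                 # access points on site (assumed)
telemetry_kbps_per_ap = 50     # assumed average export rate per AP

total_mbps = ap_count * telemetry_kbps_per_ap / 1000
print(f"Telemetry upload: {total_mbps:.1f} Mbps sustained")

# Against a 100 Mbps business uplink, that's a quarter of the pipe
# gone before a single user packet heads out the door.
uplink_mbps = 100
print(f"Share of uplink: {total_mbps / uplink_mbps:.0%}")
```

Swap in your own AP count and uplink size; the point is that sustained telemetry upload scales linearly with AP count, while the ISP pipe usually doesn’t.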
Welcome to the world of network Artificial Intelligence. That’s right, AI. The ever-evolving landscape of technology is doing what it does best: driving innovation around the network in ways that previously weren’t seen as valuable to organizations. Not only are we seeing a push towards AI for troubleshooting issues, but we are starting to see AI used in architecting the network infrastructure. No longer do you have to be reactive to a situation; with tools such as Aruba’s NetInsight (formerly known as Rasa Networks), administrators now have the ability to proactively deploy network resources in locations that previously had no coverage or subpar coverage/capacity. The ability to shorten TTR (time to resolution) of issues, as well as to proactively enhance the network before an issue occurs, is just the tip of the iceberg, in my opinion, of what we can do with AI in the network. Yet all of this can’t rely on the cloud alone.
There are companies on the market today, such as Nyansa, that operate solely to gather that data from the network and identify problems in it, making the network administrator’s job easier. The problem with such companies is that you can’t simply point Vendor X’s hardware at them for statistics gathering. At some point there will always need to be a collector in place to package up all that data and ship it off (securely) to the cloud for analysis. This is the role I see the WLC evolving into. Its logical evolution turns it into an AI engine, with the ability to process data locally and get answers faster. No, it won’t have the full resources of a multi-tiered compute platform such as a Hadoop cluster, but we don’t need it to process EVERYTHING. Process small things locally, like DNS/DHCP/RADIUS issues, and hand off packaged data to the cloud for deeper analytics processing. We continue to see wireless companies push to expand the abilities of existing hardware to do more for customers. Why should I have to keep installing multiple boxes in my network that consume energy, cooling, and administrative time when I already have a centralized controller that sees every piece of information crossing my wireless network? This is why I don’t think controllers will ever go away, at least not as physical hardware. While we may continue to see the push towards a distributed management/control plane within the WLAN, there will always be a need for compute to exist on premises. The term “controller” may change, but there’s nothing wrong with a little change in technology, right?
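The split I’m describing, handle the simple stuff (DNS/DHCP/RADIUS failures) on premises, ship a compact rollup to the cloud for the heavy analytics, can be sketched as a tiny triage function. The event shapes, type names, and thresholds here are all my own hypothetical placeholders, not any vendor’s actual telemetry format:

```python
# Hypothetical sketch of the "WLC as local collector/AI engine" split:
# act locally on simple, well-understood failures, and roll everything
# up into a compact summary for cloud-side analytics.
from collections import Counter

# Event types the on-prem box can act on without cloud help (assumed names)
LOCAL_TYPES = {"dns_timeout", "dhcp_nak", "radius_reject"}

def triage(events):
    """Split raw events into locally actionable alerts and a cloud summary."""
    local_alerts = []
    summary = Counter()
    for ev in events:
        summary[ev["type"]] += 1        # compact rollup shipped to the cloud
        if ev["type"] in LOCAL_TYPES:
            local_alerts.append(ev)     # handled on premises, right now
    return local_alerts, dict(summary)

# Example: three raw events, only two of which need local action
events = [
    {"type": "dhcp_nak", "client": "aa:bb:cc:dd:ee:01"},
    {"type": "assoc", "client": "aa:bb:cc:dd:ee:02"},
    {"type": "dns_timeout", "client": "aa:bb:cc:dd:ee:01"},
]
alerts, rollup = triage(events)
print(len(alerts), rollup)
```

The upload step is deliberately omitted; the point is simply that what leaves the building is the small `rollup`, not the raw event stream.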
Let’s face it: as resilient as the networks we design and build are, there is always the chance of an upstream ISP failure, and with it the need for on-site compute and storage for log retention. After all, if we don’t evolve, the way of the dinosaur might be the next stop.
Comments always welcome.