I have long been on the fence about which type of infrastructure gear I prefer in a production environment. For a number of years this battle was easy to avoid, as the amount of money required to build a virtual environment you could entrust with your production needs was quite high. Thanks to Moore's Law, however, that price tag has continued to drop.
I remember walking into a datacenter in Irvine, California circa 2007 while working as a System Administrator for a small company based in Oxford, MS. While walking around the DC, I found a cage with plastic trays full of memory. Come to find out, Yahoo owned half of the DC floor, and the memory was for servers that assisted with the search engine functions. All I could think of was how expensive that tray was, and how many virtual machines it would be capable of running. At the time my employer was just starting to dabble in virtualization, mainly to determine its viability in reducing capital expenses in our software development and production environments. I remember discussions with other technical people where the mere mention of the term "virtualized" drew almost evil looks.
Now fast forward to 2017, where anything and everything is capable of running in a virtual environment. The advances in storage technology and compute power have provided great stability in a sector where, for many of its early years, there was none. With these improvements, I have had to constantly re-evaluate my position on running key infrastructure pieces in a virtual environment. In 2017, however, that evaluation has become even harder as infrastructure gear requires larger and larger amounts of memory and compute resources. But wait a minute, wouldn't larger resource requirements provide more justification for dedicated physical boxes? They would, but with many organizations already owning large, highly redundant virtualized environments, the need for separate boxes is almost nullified by the costs associated with them.
For years I have been on the fence, with a lean toward the physical side. While this still holds true for the most part today, I am becoming more open to the virtual side for certain pieces. While I still don't think a WLAN controller with data tunneled from all APs, or an ISP-facing firewall with gigabits of traffic traversing it, is ready for a virtual home in most networks, I'm sure one day they will be. Even today we are seeing this occur with things like Amazon AWS and Microsoft's Azure cloud platforms. WLAN companies like Aruba and Ruckus have already started adapting many pieces of their platforms to run in the virtual world. They have determined not only that it makes more sense (and cents) to utilize resources already available at most customers, but also that the need for complex calculations for things like controller clustering and radio resource management (RRM) is ever increasing. (Coming soon to this blog will be a dive into the new Aruba AOS8 platform and the features it includes.)
This topic presented itself recently due to my own need for a larger virtual environment for my home lab. For the past two years, my virtual environment has consisted of an Intel NUC with a dual-core 1.6GHz processor, 16GB of RAM, and a 250GB SSD. This box has been outstanding in its performance and stability, and for the price it is almost unbeatable. The downside to this platform was the limitation of a single 1Gb NIC and a single processor that, even with hyper-threading, only allowed for 4 virtual CPUs. With many VMs these days requiring a minimum of 4 virtual CPUs and 8GB of RAM, the current platform is simply undersized for my needs. Even attempting to run with lesser specs than the listed requirements no longer works consistently, as developers are now implementing checks to prevent this.
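To put rough numbers on it, here's a quick back-of-the-envelope sizing sketch in Python. The figures are just the ones mentioned above, and real hypervisors allow CPU overcommit, so treat this as a floor rather than a hard limit:

```python
# Back-of-the-envelope lab sizing -- numbers are assumptions taken from the
# NUC described above; hypervisors can overcommit CPU, so this is pessimistic.
host_cores = 2                  # dual-core NUC
threads_per_core = 2            # hyper-threading
host_vcpus = host_cores * threads_per_core   # 4 logical CPUs
host_ram_gb = 16

vm_vcpus = 4                    # common minimum spec for modern appliance VMs
vm_ram_gb = 8

fit_by_cpu = host_vcpus // vm_vcpus   # -> 1 (with no overcommit)
fit_by_ram = host_ram_gb // vm_ram_gb # -> 2 (ignoring hypervisor overhead)

print(f"VMs that fit: {min(fit_by_cpu, fit_by_ram)}")  # -> 1
```

One VM that meets its published minimums, before the hypervisor even takes its cut: that's the whole problem in a single line of output.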
So this past weekend I spent my time trying to locate a much-improved platform for my home lab, in the $1,000 price range. Now obviously any tech geek is going to want top-of-the-line, brand-spanking-new gear. That is impractical for the budget I have set and, to be honest, pretty unnecessary for a lab. What I settled on will hopefully provide enough resources for the next couple of years anyway.
Here are the specs on my “new to me” lab box that hopefully will arrive soon!
HP Z820 Workstation
- Dual Xeon E5-2670 8-core processors (hyper-threading capable, for 32 possible virtual CPUs)
- 64GB DDR3 RAM (expandable up to 512GB)
- 1TB HDD included
- Upgrades: 2 SSDs (128GB boot drive & 480GB VM storage)
- Future upgrades: LSI MegaRAID controller, additional SSDs
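Running the same rough math against the new box shows why it should have headroom for a while. Again, this is just a sketch using the spec-sheet numbers above; the 2:1 vCPU overcommit ratio is my own assumption for a lab, not a vendor recommendation:

```python
# Same sizing sketch for the Z820 -- spec-sheet numbers from the list above,
# with an assumed (and fairly conservative for a lab) 2:1 vCPU overcommit.
host_vcpus = 2 * 8 * 2          # 2 sockets x 8 cores x hyper-threading = 32
host_ram_gb = 64
overcommit = 2                  # schedule 2 vCPUs per logical CPU

vm_vcpus, vm_ram_gb = 4, 8      # same minimum VM profile as before

fit_by_cpu = (host_vcpus * overcommit) // vm_vcpus  # -> 16
fit_by_ram = host_ram_gb // vm_ram_gb               # -> 8, RAM is the ceiling

print(f"VMs that fit: {min(fit_by_cpu, fit_by_ram)}")  # -> 8
```

Notice that RAM, not CPU, becomes the limiting factor on this box, which is exactly why the 512GB expansion ceiling matters.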
In the near future I'll probably work on the storage side, beefing it up with a dedicated RAID controller and additional SSDs for speed. I settled on this particular box because it has a pretty solid motherboard that provides a lot of room for growth.
I have to say I'm a bit rusty on the server side of things, as my days in that world are starting to become a distant memory. However, with the demands of software even in the wireless sector becoming greater and greater, I might just have to refresh my internal database on servers again 🙂
Have a VM environment in your home lab? I'd love to hear what you're running and whether you prioritized cost or performance!
-Scott