Network Functions Virtualization (NFV) has raced onto the communications industry scene like an overnight sensation. It is a logical approach given that today's networks contain a vast number of appliance-based solutions that require proprietary hardware. NFV promises a host of benefits, some of which are noble and achievable, and others that require further examination.
NFV Drivers and Goals
Three primary motives and goals have driven NFV's evolution thus far. First, network operators – i.e., service providers and enterprises – want to be able to use off-the-shelf or commodity hardware for all IT and network functions. Doing so reduces both CAPEX and OPEX, so NFV is intended to provide another iterative path to cost optimization. I would argue that most modern network applications are already designed to run on commodity hardware. Operators stuck with a large number of applications running on high-cost, proprietary hardware are suffering from legacy architectures and a lack of modernization within their infrastructure. Achieving this goal is as much about modernizing their application stack as it is about NFV.
Consistent and simplified administration of network functions is another key driver behind NFV. If network operators can reduce the effort required to configure and manage networks, minimize configuration and provisioning errors, and administer growing networks with fewer systems and people, they can gain speed, improve user experience, shorten time to market, and lower costs. In my opinion, this goal is the most tangible and valuable result of moving to virtualization.
The third goal has probably been the most high-profile and comes with the highest expectations: more efficient utilization of hardware resources. In other words, NFV should enable network operators not only to use commodity hardware, but also to use less of it to support growing traffic volumes and increasingly extreme deltas between peak and non-peak traffic loads. The question I pose is: can this expectation be fulfilled, or is it misleading?
For any application that efficiently utilizes hardware capacity and requires less processing power than a commodity server provides, sharing physical hardware will yield efficiency gains. On the other hand, if an application uses hardware inefficiently – and I would argue many real-time applications today fall into this category – virtualization will not deliver the efficiency gains needed to keep up with traffic growth. There are inherent architectural limitations to real-time 1.0 technology, and virtualizing these systems provides only minor relief from their scalability and predictability issues. Virtualizing these functions is a temporary Band-Aid for a problem whose roots lie in the way real-time applications have been architected up until now.
More Efficient Software Architecture
Software that is architected to efficiently utilize all available resources will run efficiently in both physical and virtual environments. Our solution has been proven to utilize hardware 100 to 400 times more efficiently than its peers. From the beginning, we took an approach to engineering that addressed some of the same issues as NFV promises to solve – rising CAPEX and OPEX in the face of growing data volumes due to inefficient hardware utilization. As a result, our solution delivers many of the same benefits in terms of hardware costs and utilization whether running on physical hardware or in a virtualized environment.
At MATRIXX, when we work with customers looking at virtualization, they find that our platform's design lets them use off-the-shelf hardware and utilize it with extreme efficiency. As a result, our solution inherently delivers the efficiencies they seek, while virtualizing it adds the further benefits of streamlined administration and easier assimilation into their operational environment.