Roughly a decade ago, VMware released its ESX virtual hosting platform and changed the datacenter as we know it. It came at a time when datacenters were inundated with server sprawl, as every new application request resulted in the purchase, racking and configuration of a new server. The arduous hardware purchasing process required substantial lead time prior to deployment. It was also highly inefficient, as enterprise managers found themselves buying for “tomorrow” rather than today. As a result, hardware was vastly underutilized: average workloads rarely exceeded twenty-five percent utilization. This combination of overprovisioning and underutilization was, and remains, unsustainable for any business long term.
Server virtualization changed all of that. Servers can now be deployed in minutes instead of weeks. Multiple VMs can now reside on a single host, maximizing resource utilization. The concept of pooling virtual hosts together into clusters provided automated failover mechanisms that improved dependability and uptime. Intelligence was then integrated into the management software, which constantly monitored workloads and resource utilization, making the necessary adjustments in order to maximize performance.
While server virtualization revolutionized the datacenter, it also exposed a great weakness: the network infrastructure that the servers depend on. While servers are spun up in automated fashion, the switches, routers and firewalls that make up the enterprise network still depend on manual configuration and deployment by IT personnel. In addition, all of this hardware has to be supported and maintained. Multiple studies point out that routine maintenance currently consumes as much as 80% of IT budgets. This means that IT managers and their staff spend their time on routine maintenance of existing infrastructure rather than on strategic, value-added projects that contribute directly to the success of the organization. Najam Ahmad, Director of Network Engineering for Facebook, put it into perspective when he spoke at Interop in 2013: “The days of managing networks through protocols and command-line interfaces are long gone,” he said. “We feel it’s the way to build networks. … CLI is dead, it’s over.”
Which brings us to one of the most highly touted technologies today: Software Defined Networking. In the coming years, SDN will revolutionize the enterprise to the same extent that software-defined computing did. In an article dated August 2, 2012, Forbes stated: “SDN holds the promise to impact the global economy to a greater degree than any other development including the browser.”
SDN is about simplifying the enterprise network infrastructure by centralizing control of its many devices, such as switches and routers, in the software layer, making it application-centric rather than hardware-centric. Its goal is to deliver self-service network configuration, allowing applications to dynamically route network traffic, reconfigure resources, and even create additional network resources based on user-initiated demand. SDN sets out to make the switch and router infrastructure as agile and flexible as the virtual server and its corresponding data storage infrastructure are in today’s datacenters.
In technical terms, SDN separates the data plane from the control plane. A controller or orchestrator manages all of the devices in the data plane, and those devices communicate with the controller through what is called the southbound interface. The controller exposes an API, the northbound interface, to the orchestrating application, which issues provisioning, configuration and traffic-management commands destined for the data plane devices. Because the northbound interface resides at the software level, development time is much faster and investment costs are considerably lower than for the southbound API.
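The separation described above can be sketched in a few lines of code. This is a toy model, not any real controller's API: the class and method names (`Switch`, `Controller`, `create_path`, and the addresses and port names) are all illustrative assumptions, chosen only to show how forwarding logic moves out of the devices and into a central controller.

```python
# Minimal sketch of SDN control/data-plane separation.
# All names here are illustrative, not a real controller's API.

class Switch:
    """A data-plane device: it only forwards per installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port

    def install_flow(self, match, action):
        # Invoked by the controller over the "southbound" interface.
        self.flow_table[match] = action

    def forward(self, packet_dst):
        # Forward using only local flow rules; no routing logic lives here.
        return self.flow_table.get(packet_dst, "drop")


class Controller:
    """Centralized control plane: holds topology and policy."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    # "Northbound" API: an application expresses intent in one call,
    # and the controller translates it into per-device flow rules.
    def create_path(self, dst, hops):
        for switch_name, out_port in hops:
            self.switches[switch_name].install_flow(dst, out_port)


controller = Controller()
for name in ("sw1", "sw2"):
    controller.register(Switch(name))

# An orchestration app provisions a path for 10.0.0.5 in one call.
controller.create_path("10.0.0.5", [("sw1", "port2"), ("sw2", "port1")])

print(controller.switches["sw1"].forward("10.0.0.5"))  # port2
print(controller.switches["sw2"].forward("10.0.0.9"))  # drop
```

The point of the sketch is the division of labor: the switches hold no routing intelligence at all, while the application never touches a device directly, which is exactly what makes automated, instant provisioning possible.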
What this technical jargon translates to is an enterprise in which network infrastructure is deployed as quickly as the virtual servers it supports. VLANs, routing tables, protocols and security profiles can be delivered instantly, as a natural part of deploying new network resources.
Software-defined computing and SDN clearly show that defining a process in software provides far more advantages, flexibility and value than hardware alone. We are witnessing the same progression in network load balancing and failover. Until recently, network load balancing was driven by a hardware appliance hosted within the physical datacenter, which dispersed traffic among multiple appliances or servers to achieve even distribution. Though this hardware-driven process was adequate for traditional IT loads that resided entirely on premise, its shortcomings were exposed once companies incorporated hybrid clouds into their enterprise. Today’s enterprises can instead use Cloud Load Balancing, which is designed to accommodate multiple servers, datacenters and geographic regions. Traffic can be distributed according to a variety of variables, such as geographic proximity and application content. Whether it be servers, network infrastructure or load balancing, the era of hardware is ending.
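Geographic-proximity routing, one of the variables mentioned above, can be illustrated with a short sketch. This is a simplified assumption of how such a policy might work, not any vendor's actual algorithm; the datacenter names and coordinates are made up for the example, and real services typically resolve a client's location from its IP address rather than being handed coordinates.

```python
# Hedged sketch of geo-proximity routing with health-check failover.
# Datacenter names and coordinates below are illustrative only.
import math

DATACENTERS = {
    "us-east": (38.9, -77.0),   # roughly Washington, DC
    "eu-west": (53.3, -6.2),    # roughly Dublin
    "ap-south": (1.35, 103.8),  # roughly Singapore
}

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_datacenter(client_latlon, healthy):
    # Route only among datacenters that currently pass health checks.
    candidates = {dc: loc for dc, loc in DATACENTERS.items() if dc in healthy}
    return min(candidates,
               key=lambda dc: haversine_km(client_latlon, candidates[dc]))

# A client near London is routed to eu-west; if eu-west fails its
# health check, traffic fails over to the next-nearest region.
london = (51.5, -0.1)
print(nearest_datacenter(london, {"us-east", "eu-west", "ap-south"}))  # eu-west
print(nearest_datacenter(london, {"us-east", "ap-south"}))             # us-east
```

The same selection function could rank on measured latency or current load instead of raw distance; the design point is that the routing decision lives in software, in the middle of the Internet, rather than in an appliance bolted into one datacenter.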