There’s a big migration going on – people are moving away from hardware. Oh, hardware has its place. Some companies still like something they can touch, installing, upgrading, monitoring, and maintaining it in their local data center. But for many managers, moving their IT workloads to a virtualized cloud environment seems to be all the rage. And that includes their application delivery controller functions.
Agility. Scalability. Elasticity. Those words are polar opposites of the attributes of firmware-controlled hardware appliances. That’s not to take away from the incredible improvements developed over decades in the way hardware devices process data. ASIC stands for application-specific integrated circuit. ASICs are customized microchips that are designed and built for specialized applications, such as application delivery. But no matter what great advances are made in hardware, you could make the case that virtualization is going to be better.
Firmware, according to TechTerms, is a software program or set of instructions programmed on a hardware device. Firmware is software that is stored in non-volatile memory (ROM, EPROM, or flash memory). It’s not controlled by the operating system — it’s separate. It may be associated with a CPU, or with an ASIC (which provides helper logic to the CPU), but firmware functions differently from what we normally think of as software.
Components of dedicated hardware devices, such as the CPU, NVRAM, ASICs, and firmware, work together to fulfill the specialized functions of the equipment. But firmware is fairly static. It comes with the hardware device, and firmware updates are generally few and far between. It’s not at all dynamic — unlike, say, the virtualized cloud.
In contrast to inflexible firmware, advanced application technologies in the cloud are all about agility, scalability, and elasticity. Perhaps the pinnacle of this advancement at the time of this writing is the container: a lightweight, highly portable unit of software that moves easily across IT environments. According to Docker, “A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.” And microservices are a way to segment large software programs into small, manageable chunks.
What does all this have to do with application delivery controllers? According to NetworkWorld, “The application delivery controller (ADC) market is ripe for disruption.” The summary of the article explains it this way: “SDN and virtualization are driving a shift from hardware-based application delivery controllers toward a microservices model that enables more flexible ADC licensing options.”
So it’s all about software-defined networking. Application delivery does much more than simple load balancing, as we discussed in a previous article. When it comes to the full suite of ADC features, such as Layer 7 routing, acceleration, SSL offload, caching, data compression, and DNS firewall, hardware ADCs can’t compare to the new virtualized cloud approach.
And there’s another important point to make in the debate between local hardware ADCs and application delivery in the cloud. Where an ADC is located can be just as important as the nature of its technology. The reason is simple. Distance causes delay, or “latency,” as we like to call it.
Transmission delay always increases with distance. Nothing is instantaneous. Even a Star Trek transporter beam of particles from one location to another has to take some time, even if it’s microseconds.
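You can put a rough number on that delay with nothing more than the speed of light. Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km per second, so even a best-case straight fiber run imposes a floor on latency. This little sketch (the New York–London distance is an approximate figure for illustration) shows the math:

```python
# Light in optical fiber travels at roughly 2/3 the speed of light
# in a vacuum -- about 200,000 km per second.
FIBER_SPEED_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """Best-case one-way propagation delay in milliseconds over straight fiber."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

# New York to London is roughly 5,600 km as the crow flies.
one_way = propagation_delay_ms(5600)   # ~28 ms
round_trip = 2 * one_way               # ~56 ms
print(f"one-way: {one_way:.0f} ms, round trip: {round_trip:.0f} ms")
```

Real-world latency is higher still, since traffic rarely follows a straight line and every router hop adds queuing and processing time. That floor is exactly why the physical location of your ADC matters.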
The same is true for data transmission and application delivery. If you are performing all your ADC functions at the local data center, your processes won’t be nearly as quick or efficient as a geographically dispersed application delivery system. With redundant points of presence scattered across the globe, an ADC infrastructure can ensure that interactive application processes are handled as close to the user as possible. This cuts down on the lag time that would be involved in routing all traffic through your local ADC for application processing.
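The core idea of “handle traffic as close to the user as possible” can be sketched in a few lines. Real platforms steer users via DNS or anycast routing rather than explicit coordinates, and the PoP cities and coordinates below are purely hypothetical examples, but the distance logic is the same:

```python
import math

# Hypothetical PoP locations (name -> latitude, longitude).
# These cities and coordinates are illustrative only.
POPS = {
    "new_york": (40.71, -74.01),
    "london": (51.51, -0.13),
    "singapore": (1.35, 103.82),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometers."""
    r = 6371  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_pop(user_lat, user_lon):
    """Pick the PoP with the shortest great-circle distance to the user."""
    return min(POPS, key=lambda name: haversine_km(user_lat, user_lon, *POPS[name]))

# A user in Paris is far closer to London than to New York or Singapore.
print(nearest_pop(48.86, 2.35))  # london
```

Geographic distance is only a first approximation; production systems typically also weigh measured latency, PoP health, and load. But the principle holds: the shorter the path to the user, the less latency the application accumulates.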
Simple load balancing between servers might still be done locally. But to take advantage of the full array of capabilities for efficient use of your applications across the world, nothing compares to what you can do in the cloud. According to Zeus Kerravala of ZK Research, “Today, we’re seeing more workloads in more places, which is not the same as it used to be with big physical applications that had everything they needed to run…. What’s important is that we need more ADC functions in more places….” A geographically distributed, cloud-based application delivery solution like Total Uptime is the perfect approach to meet today’s application workload requirements.