Does Convergence Impact Uptime?
One of the biggest trends in data center infrastructure is convergence, and it has been underway for some time. Equipment footprints have been shrinking for years: functions once handled by huge dedicated machines are now accomplished by modular cards, and specialized servers, switches, routers, and other network devices have been combined into multi-service boxes. With the advent of virtualization and the cloud, the footprint is getting smaller still. But higher convergence brings a significant increase in complexity, and when complexity increases, availability usually suffers.
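Why does complexity tend to hurt availability? One simple way to see it, as a back-of-the-envelope sketch (not a claim from the article itself), is that when several components must all be up for a service to work, the combined availability is the product of the individual figures, so each added layer pulls the total down:

```python
# Illustrative sketch: availability of components arranged in series.
# Every component must be up for the service to be up, so the
# combined availability is the product of the individual values.

def serial_availability(availabilities):
    """Combined availability of components that must all be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Three components, each at 99.9% ("three nines"):
combined = serial_availability([0.999, 0.999, 0.999])
print(round(combined, 6))  # 0.997003 -- below any single component
```

The numbers here are hypothetical, but the direction of the effect is general: unless convergence is paired with redundancy and good failover, more interdependent parts mean a lower combined figure.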
Less Equipment, More Functions
The ENIAC, the famous computer of the 1940s, was enormous. According to the Encyclopedia Britannica, it had 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays. It occupied 1,500 square feet.
That may have seemed very complex at the time, but things have changed. Today we carry an enormous amount of computing power in our smartphones. Computing devices have not only been miniaturized, they can do far more, yet we all know too well that they aren't always reliable.
The traditional data center of recent decades has been a warehouse for a wide variety of network, compute, and storage devices. Over time, this equipment became smaller, and more functions were combined into single pieces of equipment.
The Nortel Passport family of switches is a prime example. It was called the Multiservice Switch (MSS). Models included the 15000, 20000, 8600, 7500, and 6400. Nortel found that it could combine many different switching, routing, and other functions into a single MSS.
From Siloed to Converged
Such developments were part of the move from silos in IT infrastructure toward a more converged architecture. Those who work in the corporate world know what silos are in the organizational structure. Different departments are divided and communication must travel up or down the chain of command. The same kind of segmented reality existed in the data center.
The first silo was the computing function, sometimes just called compute. Servers were separate devices, and each had its own individualized purpose. Specialists called systems administrators (sys admins) supported Solaris, Linux, or Windows servers. Sys admins were a separate breed of technical support personnel.
The same was true for network engineers and technicians. The second silo was the network. While there have always been more specialized skills within it, the support of switches, routers, firewalls, and load balancers was generally viewed as part of the network arena.
Storage was the third silo. Maintaining the integrity of data is important, and it requires special knowledge and skills. Storage was always seen as separate from compute and the network.
Virtualization and management could be considered two more silos in the traditional data center. As virtual machines became more popular, they found their own space in the industry. And the tools for monitoring and managing all these devices might be considered part of the management silo.
As devices took on combined functions, these silos started to break down. Engineers who specialized in Layer 2 were forced to learn something about routing, for example, and everyone had to keep up with advances in technology. While specialists still exist, generalists abound, which can lengthen the time it takes to resolve highly specific outages and significantly impact availability.
From Convergence to Superconvergence
Convergence has its benefits, particularly in the cloud. The silos disappear, and in theory management becomes easier. All of the latest technologies – virtualization, cloud computing, analytics – can be managed through what is called a “single pane of glass”.
The integration of functions through convergence continues to progress. Successive levels have been identified: converged, hyperconverged, and now superconverged infrastructure. Early efforts combined compute and storage into hyperconverged appliances; more recently, manufacturers have been able to include compute, network, and storage all in a single device.
Think of how far we’ve come. It no longer takes a large amount of floor space to run a full-fledged IT environment. With convergence and virtualization, sometimes it takes just one device. Now that’s progress!
The Cloud and the New Data Center
There are plenty of advantages to cloud services. Many of them have to do with the transparency users experience when logging onto their favorite applications: cloud apps can be hosted anywhere in the world, and companies can use public, private, or hybrid cloud deployments to support their IT environment.
As technologies converge, the cloud will adapt, and data center management should become easier. Users who access services from anywhere will not be concerned about the underlying infrastructure; service providers will take on that burden entirely.
One of the core concepts of cloud technology is elastic computing. Resources such as processing power, storage, memory, and network capacity can be accessed on demand, with physical devices automatically allocating resources to meet the needs of users. A converged architecture can reduce the overhead these dynamic changes require.
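The elastic idea can be sketched in a few lines. This is a hypothetical illustration, not a real cloud API: the function name `scale_to_demand` and the `headroom` parameter are invented for the example. It simply picks enough resource units to cover current demand plus a safety buffer, which is the basic decision an autoscaler makes:

```python
import math

# Hypothetical sketch of elastic allocation (names are illustrative,
# not drawn from any real cloud provider's API): choose enough resource
# units to cover demand plus a headroom buffer, never dropping below one.
def scale_to_demand(demand_units: float, headroom: float = 0.2) -> int:
    target = math.ceil(demand_units * (1 + headroom))
    return max(target, 1)

print(scale_to_demand(10))  # 12 units for a demand of 10
print(scale_to_demand(0))   # 1  -- keep a minimum footprint
```

In a converged system, one controller can make this decision for compute, storage, and network together instead of three separate management stacks doing it independently, which is where the reduced overhead comes from.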
Superconvergence holds great promise for the cloud. Intelligent devices comprise intelligent networks. The latest technologies no longer need to be distributed among a large number of machines. Individual devices will continue to become more compact while they become more capable.
The Impact on Uptime
It's a new world we live in. Technology is taking us further than we might have dreamed, and converged infrastructure is part of that story: it is making the computing, networking, storage, and other functions of our IT environment more efficient and accessible. But as end users provision and consume more and more cloud services with the push of a virtual button, service providers have an increasing obligation to ensure unprecedented levels of availability. Superconverged infrastructure also means that when something goes wrong, much more is at stake, because an outage is no longer siloed. With so many moving parts and interconnected components behind the scenes making up what is seen as a single deliverable, ensuring availability is a greater challenge than ever before. But with the right automation and user-facing network availability tools, like those from Total Uptime, availability doesn't need to suffer.