One of the biggest trends in data center infrastructure is convergence. Actually it has been happening for some time. Equipment footprint has been getting smaller for years. Functions that used to be handled by huge dedicated machines are now accomplished by modular cards. Specialized servers, switches, routers, and other network devices have been combined into multi-service boxes. Now with the advent of virtualization and the cloud, the footprint is getting even smaller. But with higher convergence comes a significant increase in complexity. And when complexity increases, availability usually suffers.
The ENIAC, the famous computer of the 1940s, was enormous. According to the Encyclopedia Britannica, it had 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays. It occupied 1,500 square feet.
That may have seemed very complex at the time, but things have changed. Today we carry an enormous amount of computing power in our smartphones. Computing devices have not only been miniaturized; they can also do far more. But we all know too well that they aren't always reliable.
The traditional data center of recent decades has been a warehouse for a wide variety of network, compute, and storage devices. Over time, this equipment became smaller, and more functions were combined into single pieces of equipment.
The Nortel Passport family of switches is a prime example. It was called the Multiservice Switch (MSS). Models included the 15000, 20000, 8600, 7500, and 6400. Nortel found that it could combine many different switching, routing, and other functions into a single MSS.
Such developments were part of the move from silos in IT infrastructure toward a more converged architecture. Those who work in the corporate world know what silos are in the organizational structure. Different departments are divided and communication must travel up or down the chain of command. The same kind of segmented reality existed in the data center.
The first silo was the computing function, sometimes just called compute. Servers were separate devices, and each had its own individualized purpose. Specialists called systems administrators (sys admins) supported Solaris, Linux, or Windows servers. Sys admins were a separate breed of technical support personnel.
The same was true for network engineers and technicians. The second silo dealt with the network. While there have always been more specialized skills, the support of switches, routers, firewalls, and load balancers were always generally viewed as part of the network arena.
Storage is the third silo. Maintaining the integrity of data is important, and it requires special knowledge and skills. Storage was always seen as separate from compute and the network.
Virtualization and management could be considered two more silos in the traditional data center. As virtual machines became more popular, they found their own space in the industry. And the tools for monitoring and managing all these devices might be considered part of the management silo.
As devices took on combined functions, these silos started to break down. Engineers who specialized in Layer 2 switching were forced to learn something about routing, for example. Everyone had to keep up with the advances in technology. And while specialists still exist, generalists now abound, which can increase the time it takes to resolve highly specific outages and significantly impact availability.
Convergence has its benefits, particularly in the cloud. The silos disappear, and in theory management becomes easier. All of the latest technologies – virtualization, cloud computing, analytics – can be managed through what is called a “single pane of glass”.
The integration of functions through convergence continues to progress. Levels of convergence have been identified, including convergence, hyperconvergence, and superconvergence. Efforts were made to combine compute and storage into a hyperconverged infrastructure. The latest news is that manufacturers have been able to include compute, network, and storage all in a single device.
Think of how far we’ve come. It no longer takes a large amount of floor space to run a full-fledged IT environment. With convergence and virtualization, sometimes it takes just one device. Now that’s progress!
There are plenty of advantages to cloud services. Much of it has to do with the transparency that users experience when logging onto their favorite applications. Cloud apps can be hosted anywhere in the world. Companies can use public, private, or hybrid cloud deployments to support their IT environment.
As technologies converge, the cloud will adapt, and data center management should become easier. Users who access services from anywhere will not need to be concerned about the underlying infrastructure, because service providers will take on that burden entirely.
One of the core concepts of cloud technology is elastic computing. Resources such as processing power, storage, memory, and network capacity can be accessed on demand. Physical devices automatically allocate resources to meet the needs of users, and a converged architecture can reduce the processing required for these dynamic changes.
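To make the idea of on-demand allocation concrete, here is a minimal sketch of a threshold-based elastic scaling decision. The function name, parameters, and metric values are all hypothetical illustrations, not any specific vendor's API; real cloud platforms expose comparable policies through their own autoscaling services.

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.60,
                      min_n: int = 1, max_n: int = 10) -> int:
    """Propose an instance count that moves average CPU toward the target.

    Proportional rule: keep (load per instance) near the target utilization,
    clamped to the allowed fleet size.
    """
    if cpu_utilization <= 0:
        return min_n
    proposed = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, proposed))

# High load: 4 instances at 90% CPU -> scale out to 6
print(desired_instances(4, 0.90))  # 6
# Low load: 4 instances at 20% CPU -> scale in to 2
print(desired_instances(4, 0.20))  # 2
```

The design choice here is a simple proportional controller: it converges in one step under steady load, and the min/max clamps prevent runaway scale-out. Production autoscalers add cooldown periods and smoothing to avoid thrashing on noisy metrics.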
Superconvergence holds great promise for the cloud. Intelligent devices comprise intelligent networks. The latest technologies no longer need to be distributed among a large number of machines. Individual devices will continue to become more compact while they become more capable.
It’s a new world we live in. Technology is taking us further than we might have dreamed, and converged infrastructure is part of that technology. It is one of the things making the computing, networking, storage, and other functions of our IT environment more efficient and accessible. But as we continue to adopt cloud technologies that end users can provision and consume with the push of a virtual button, service providers have an increasing obligation to ensure unprecedented levels of availability. Superconverged infrastructure also means that when something goes wrong, much more is at stake, because an outage is no longer siloed. With so many moving parts and interconnected components behind the scenes making up what is seen as a single deliverable, ensuring availability is a greater challenge than ever before. But with the right automation and user-facing network availability tools like those from Total Uptime, availability doesn’t need to suffer.