As we look forward to even greater advances in technology, sometimes it helps to take a look back. Many of us take for granted the connectivity that we enjoy across a wide variety of applications. Sometimes it is seamless, and other times – well, we know that improvements are on their way. But the applications we use every day are undergirded by a whole array of technologies. And when these lower levels are out of service, then nothing works.
Have you ever tried to use your computer when the power is off – and you have no battery backup? You may as well light a candle and forget it. None of the amazing technologies that mankind has developed will be available to you then. When your connection is physically down, everything is down.
The intelligent folks who worked on ARPANET, the forerunner of the internet, discovered the benefits of layered protocols. That layered approach was later formalized into a widely adopted framework for understanding connectivity: the Open Systems Interconnection (OSI) model. Like the storeys of a building resting on a foundation, each layer depends on those beneath it. When a lower-layer protocol loses its connectivity, so do the layers above it.
There are plenty of tutorials on the subject, so we won’t give a full explanation here. Suffice it to list the layers of the OSI model from top to bottom: Application, Presentation, Session, Transport, Network, Data Link, and Physical.
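To keep that stack in view as we go, here is a tiny sketch of it as a lookup table. The example protocols attached to each layer are common illustrative associations, not an official or exhaustive assignment.

```python
# Illustrative map of the OSI layers (top to bottom) with example protocols.
# The examples are common associations chosen for illustration only.
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "SMTP", "IMAP", "POP3"]),
    6: ("Presentation", ["MIME", "JPEG"]),
    5: ("Session",      ["NetBIOS", "RPC"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "ICMP"]),
    2: ("Data Link",    ["Ethernet", "Wi-Fi (802.11)"]),
    1: ("Physical",     ["SONET/SDH", "DSL", "copper", "fiber"]),
}

for number in sorted(OSI_LAYERS, reverse=True):
    name, examples = OSI_LAYERS[number]
    print(f"Layer {number}: {name:<12} e.g. {', '.join(examples)}")
```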
So how does this apply to uptime? When your physical devices – desktop, laptop, mobile phone – aren’t functioning properly, your connectivity suffers. The data links to your applications may run across cables, Wi-Fi, or wireless telephony, and your connection depends on the lower-layer protocols carrying those signals. Core networks depend on physical equipment, such as routers and switches, and the links that connect them. Many of the newer I.T. services focus on layers 3-7. But without the first two layers, these services are dead in the water.
Inherent in the OSI model is the concept of multiplexing. This is performed in a broad variety of ways. The idea is that multiple signals are combined into a single signal. In packet-switched networks, that means that data is broken down, individually addressed, placed into a larger stream of data, and then reassembled when it reaches its destination.
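Here is a toy sketch of that idea: two conversations are broken into addressed, numbered packets, interleaved onto one shared “wire”, and reassembled at the far end. The message contents, destinations, and four-character packet size are invented purely for illustration.

```python
# A toy illustration of packet multiplexing: messages from two conversations
# are split into addressed, numbered packets, interleaved onto one "wire",
# and reassembled at the far end by destination and sequence number.
from itertools import zip_longest

def packetize(message, dest, size=4):
    """Split a message into small packets, each carrying its own address."""
    return [
        {"dest": dest, "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def multiplex(*streams):
    """Interleave packets from several streams into a single stream."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(p for p in group if p is not None)
    return wire

def reassemble(wire, dest):
    """Collect packets for one destination and restore the original order."""
    mine = sorted((p for p in wire if p["dest"] == dest), key=lambda p: p["seq"])
    return "".join(p["data"] for p in mine)

wire = multiplex(packetize("Hello, how are you?", "alice"),
                 packetize("Meeting at noon.", "bob"))
print(reassemble(wire, "alice"))   # Hello, how are you?
print(reassemble(wire, "bob"))     # Meeting at noon.
```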
A protocol is a conversation. When you and a friend speak on the phone to each other, you might say, “Hello!” This initiates the conversation. When your friend answers, “Hello!”, that is a response. When that same conversation happens in email, the greeting, response, and the banter to follow are carried as payload in data packets across a variety of protocols.
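As a rough sketch of that exchange, the snippet below plays out the greeting and response over a connected pair of sockets. The wording of the messages is invented; a real protocol spells out its greetings and responses precisely.

```python
# A minimal sketch of a protocol as a conversation: one side sends a greeting,
# the other answers, and only then does the real exchange begin.
import socket

caller, friend = socket.socketpair()   # a connected pair, like a phone line

caller.sendall(b"Hello!")              # initiate the conversation
greeting = friend.recv(1024)           # the friend hears the greeting...
if greeting == b"Hello!":
    friend.sendall(b"Hello! Good to hear from you.")   # ...and responds

print(caller.recv(1024).decode())      # the banter can now continue

caller.close()
friend.close()
```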
Let’s track an email conversation as it moves across the network. Your email client may use the POP3 or IMAP protocols to receive mail and SMTP to send it. As data flows down through the OSI architecture, each layer or protocol adds its own header. The internet runs on TCP/IP: TCP operates at layer 4 and IP at layer 3. Layer 2 protocols encapsulate the protocols above them, so the packets grow larger at each step. SDH or SONET frames working at the physical layer are then multiplexed into larger and larger transmission channels. The core network routes and switches this payload of data without any concern for its content. As the “Hello!” message reaches its destination, the process reverses: streams of communication are de-multiplexed and packets are de-encapsulated.
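The sketch below walks through that encapsulation and de-encapsulation in miniature. The header contents are simplified stand-ins, not the real SMTP, TCP, IP, or Ethernet formats.

```python
# A rough sketch of encapsulation: each layer prepends its own header on the
# way down the stack and strips it on the way up. Header contents here are
# simplified stand-ins, not real protocol header formats.
LAYERS_DOWN = [
    ("SMTP",     "From: you  To: friend"),
    ("TCP",      "src port 52100, dst port 25, seq 1"),
    ("IP",       "src 192.0.2.10, dst 198.51.100.20"),
    ("Ethernet", "src aa:bb:cc, dst dd:ee:ff"),
]

def encapsulate(payload):
    """Wrap the payload in one header per layer; the lowest layer ends up outermost."""
    frame = payload
    for name, header in LAYERS_DOWN:
        frame = {"layer": name, "header": header, "payload": frame}
    return frame

def decapsulate(frame):
    """Reverse the process at the destination: peel headers until only the payload remains."""
    while isinstance(frame, dict):
        print(f"removing {frame['layer']} header ({frame['header']})")
        frame = frame["payload"]
    return frame

frame = encapsulate("Hello!")
print(decapsulate(frame))   # Hello!
```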
Network failure can occur anywhere during this process. To protect against data loss, designers attempt to create an environment where unforeseen failures are handled by redundant devices and connections. Failures do occur, but with a well-designed network, the user won’t even notice. All this data processing continues behind the scenes and connectivity should remain transparent to the user.
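A simplified sketch of that failover behaviour might look like the following; the path names and the failure condition are invented for illustration.

```python
# A simplified sketch of failover: traffic is sent over a primary path and,
# if that path fails, a redundant path carries it without the user noticing.
class Path:
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def send(self, data):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{data!r} delivered via {self.name}"

def send_with_failover(data, paths):
    """Try each redundant path in turn; the first healthy one carries the data."""
    for path in paths:
        try:
            return path.send(data)
        except ConnectionError:
            continue            # behind the scenes: fail over to the next path
    raise ConnectionError("all paths are down")

paths = [Path("primary fiber", up=False),       # imagine a cable cut
         Path("backup microwave link")]
print(send_with_failover("Hello!", paths))      # delivered via the backup link
```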
As network technologies have matured, connectivity has become more of a commodity. Users expect to connect to the applications and services of their choosing without hassle or time-consuming configuration. The focus has moved from network reliability to application availability. New service providers have arisen to cater to clients interested in more sophisticated functionality. With multi-layer load balancing and failover capability, layers 3-7 seem to float above layers 1-2 with hardly a thought for what lies beneath.
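For a feel of what that load balancing does at the upper layers, here is a small round-robin sketch; the server names and health flags are hypothetical.

```python
# A small sketch of round-robin load balancing at the application layers:
# requests are spread across a pool of equivalent servers, and an unhealthy
# server is simply skipped. Server names here are hypothetical.
from itertools import cycle

servers = [
    {"name": "app-1", "healthy": True},
    {"name": "app-2", "healthy": False},   # taken out of rotation
    {"name": "app-3", "healthy": True},
]
rotation = cycle(servers)

def route(request):
    """Hand the request to the next healthy server in the rotation."""
    for _ in range(len(servers)):
        server = next(rotation)
        if server["healthy"]:
            return f"{request} -> {server['name']}"
    raise RuntimeError("no healthy servers available")

for r in ["GET /inbox", "GET /inbox", "GET /inbox", "GET /inbox"]:
    print(route(r))
```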
Perhaps it is time for a new model. Of course, a myriad of protocol stacks have been used to designate the hierarchies and interactions among network components. Beyond the OSI model, the TCP/IP protocol stack is perhaps the most famous. But technologies such as ATM, Bluetooth, Ethernet, and LTE all have their own protocol stacks. Each of these visual representations is helpful in its own way.
But think of the new technologies that have arisen in the past few years: virtualization, cloud computing, convergence, software-defined networking, network functions virtualization, and so on. Has the OSI model lost its effectiveness in the new data center?
Stephen Saunders, of the online publication Light Reading, offers a new framework. He calls it “A Network Model for the 21st Century”. Here is his model:
And the growing technology called software-defined networking has reduced it to three levels: the application layer, the control layer, and the infrastructure layer.
No matter how you look at it, today’s I.T. professionals must not overlook the importance of a sturdy and reliable network infrastructure at the lower levels. Reliable equipment, robust switching and networking, and adequate redundancy and failover procedures are still necessary to make any network function. We owe a lot to the network pioneers who have brought us to this point, and the reliable infrastructure that they have created. Uptime depends on these strong foundations. We should never take any of it for granted.