Server hardening is a necessary process. And it’s a never-ending one. From the moment you pull the machine out of the box (or create it in the virtual environment), it pays to be thinking about security. But server hardening can do more than keep your machine safe. It will help with performance, and it can even play a part in keeping your machine online and available.
Without getting lost in the weeds, in this article we’ll survey the principles and practices of hardening servers. There are plenty of resources online that provide how-to documentation specific to different server hardware, operating systems, and software. We couldn’t possibly detail all those here. There may be slight variations in Linux solutions, for instance, or command syntax that has changed over time. Methods will vary depending on the platform.
First, let’s consider what server hardening is. We’ll take our first quote from Wikipedia: “…hardening is usually the process of securing a system by reducing its surface of vulnerability…” The focus here is on security.
The folks at the Tech Terms site offer some more insight: “The purpose of system hardening is to eliminate as many security risks as possible. This is typically done by removing all non-essential software programs and utilities from the computer.” Well, there’s more to it than that. In fact, server hardening should be a constant process and it may include a wide variety of tasks. A narrow definition may not do the term justice.
So what does server hardening have to do with uptime or availability? Think about this. By removing or disabling unnecessary services or processes, you reduce the attack surface (effectively closing open doors and windows, for lack of a better analogy). When the attack surface is very small, the list of vulnerabilities the device may be susceptible to shrinks significantly. That effectively eliminates a large number of tactics an attacker could use to take it offline.
The other point to make here is that server hardening can streamline your server to help prevent crashes due to overloads when resources are wasted or consumed by unnecessary tasks. Every seasoned IT professional can relate to receiving an alarm for an overtaxed system, and subsequently logging in only to discover the issue was being caused by an unnecessary or undesired process consuming the resources. Newer systems are much more powerful, but it is still true that doing too much on one device could easily lead to resource overload and a failed system.
Even the best preparation by manufacturers is not enough. Their focus is more on functionality. But if you want to be safe and smart, the best practice is to do some preparation of your own on your new server before even thinking about placing it onto a live network.
Take a look at some of the words used to describe IT security products and you get a picture of what is going on. Windows Defender. Intrusion Detection System. Firewalls. Malware protection. Disaster recovery. IT forensics. There is a whole section of the IT industry dedicated to protecting your infrastructure. And with good reason.
Intruders are constantly looking for ways to access your information. That could be for a lot of reasons. Curious and daring kids may be trying to see how far they can go. Greedy robbers are on the hunt for systems carelessly left unprotected. Competitors are looking for a leg up. Bad characters across the world are ready for mischief.
That means you have to be ready. And waiting until a security breach occurs is too late. Your server should be secure from the beginning. And it should be configured lean. It’s always best to keep it simple.
If you’re not protective of your data from the start, you might regret it. Better to be proactive. Build your server fortress safe and strong to improve your chances of survival and success.
So what does it take to harden your defenses against persistent security and performance risks? It takes a plan. And many systems administrators put their server hardening plan into the form of a checklist. You may want to rush through and write up as many actions as you can think of that would harden your server. But first, it may help to contemplate basic principles for your action list.
Sandra Kay Miller, writing for Computer Weekly, notes: “New machines should be installed on an isolated network, well protected from possible hostile traffic until the operating system is hardened.” Your server becomes vulnerable the second it’s placed onto an untrusted network.
When creating your checklist, you may want to consider what actions should be taken before ever going online. Would you dare to put a server onto the internet without first setting up a firewall or installing an antivirus program? Would it be safe to go online with telnet enabled? Sure, you may not have important data on your server yet. But that’s no guarantee that hackers won’t be lurking around seeking access.
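As a sketch of what a couple of those pre-network steps might look like on a typical systemd-based Linux distribution (service and package names vary by distro, so treat these commands as illustrative rather than universal):

```shell
# Disable and stop legacy remote-access services such as telnet
# before the server ever touches an untrusted network.
systemctl disable --now telnet.socket

# Bring up a default-deny firewall with ufw, allowing only SSH for now.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw enable
```

The exact commands matter less than the ordering: the firewall and service cleanup happen while the machine is still isolated.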
Computers are multipurpose machines. And they’re good at multi-tasking. No longer do you need dedicated devices for every little function in your IT infrastructure. In fact, you may be able to install a hypervisor so your applications and services can run off a single server in separate environments.
But just because you have a powerful system with the latest technology doesn’t mean that you should see how much you can get out of it. That might be fine for your home computer, but the functions of a server should be kept as lean as possible.
There are two reasons for this. First, the more applications and services running on your server, the greater the chance that a clever hacker can find a backdoor into your system. You may have great trust in the vendors of the programs on your server. But there is no guarantee that some weakness won’t present itself at some point in your system software. The fewer programs, the better. The more you can isolate, by using VMs, for example, the better.
The other reason to keep a lean system is that it will be less taxing on the machine’s resources. Whether it’s CPU, memory, or some other system resource, the fewer processes running on your server, the greater the chances that you can keep it healthy.
The principle of least privilege is the idea that you should give users only the rights that they need to do their jobs. It would be absurd to give everyone in the department access as administrators. It’s like classified intelligence material where access is granted on a “need to know” basis. Users should have a legitimate need before permissions are granted to server files or services.
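On Linux, least privilege is often implemented with groups and file permissions rather than extra administrator accounts. A minimal sketch (the user, group, and path names here are hypothetical):

```shell
# Give the "reports" team read-only access to one directory
# instead of handing out administrator rights.
groupadd reports
usermod -aG reports alice          # alice has a legitimate need for this data
chgrp -R reports /srv/report-data  # assign the directory to the group
chmod -R 750 /srv/report-data      # owner rwx, group r-x, everyone else: nothing
```

Users outside the group simply cannot read the files, and nobody gets broader rights than the job requires.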
One way to do that is to develop a sound password security policy. It would help to have a written company policy outlining who should be granted passwords to specific programs or devices and how those passwords should be created, where they should be stored, how complex they should be, how often they should be changed and so on.
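Parts of such a written policy can also be enforced technically. On many Linux systems, for example, pam_pwquality reads complexity rules from /etc/security/pwquality.conf; the values below are purely illustrative, not recommendations:

```
# /etc/security/pwquality.conf — example complexity rules
minlen  = 14    # minimum password length
dcredit = -1    # require at least one digit
ucredit = -1    # require at least one uppercase letter
lcredit = -1    # require at least one lowercase letter
ocredit = -1    # require at least one special character
```

A config fragment like this turns the written policy into something the system itself enforces at password-change time.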
Now we’re down to the nitty-gritty. Except we couldn’t possibly list all the potential items for a checklist on your individual system. Instead, let’s point out some lists that are already out there on the web and make some comments.
Gus Khawaja, writing for Computer World, offers us “Linux hardening: A 15-step checklist for a secure Linux server”. “Most people assume Linux is secure,” he writes, “and that’s a false assumption.” His focus is on the Kali Linux distribution, but his suggestions will be helpful to any Linux server administrator.
The items on Khawaja’s list could apply to any server – not just Linux. After noting information about your system (computer name, IP address, MAC address, etc.), start with protecting the BIOS. You should lock down the BIOS before you ever do anything with the operating system. Go in and change the BIOS password. Then disable the capability to boot from external devices.
Encryption works wonders for system security. Hard disk encryption is something you can do when setting up your partitions, and free solutions like VeraCrypt and CipherShed let you add encryption to a working system.
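On Linux, one common approach is LUKS via cryptsetup for encrypting a data partition. A minimal sketch, where the device name /dev/sdb1 and mount point are placeholders for your own layout:

```shell
# WARNING: luksFormat destroys any existing data on the partition.
cryptsetup luksFormat /dev/sdb1          # initialize LUKS encryption (prompts for a passphrase)
cryptsetup open /dev/sdb1 securedata     # unlock it as /dev/mapper/securedata
mkfs.ext4 /dev/mapper/securedata         # create a filesystem on the encrypted device
mount /dev/mapper/securedata /srv/data   # use it like any other volume
```

Once mounted, applications see an ordinary filesystem; the encryption is transparent to them.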
Khawaja also recommends disabling USB usage on your server. It makes sense. While your server may be physically secure, there’s always a chance that someone with physical access to your machine might try to access it with USB. Even if booting by USB is not possible, who knows what can be done by someone inserting a USB device. Better to eliminate the risk altogether by disabling USB.
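One common way to do this on Linux is to prevent the usb-storage kernel module from loading at all. The file path below is a typical convention, not a universal requirement:

```shell
# Tell modprobe to refuse to load the USB mass-storage driver.
echo "install usb-storage /bin/false" > /etc/modprobe.d/disable-usb-storage.conf

# Unload the module now if it is already loaded.
modprobe -r usb-storage
```

USB keyboards and mice keep working; only mass-storage devices are blocked, which is usually the risk you care about on a server.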
Microsoft’s TechNet gives helpful guidance on server hardening. Microsoft outlines the basic requirements. The base install should come from a trusted source. Servers should be on a trusted network during install and hardening. The base install should include all current service packs and practically all pertinent updates. Servers should be updated after install.
Some server hardening tasks will be Windows specific. On the Microsoft side (if we think of the server world as a Windows-Linux dichotomy), there are security template and group policy considerations. And of course, use NTFS instead of FAT. The Microsoft Baseline Security Analyzer (MBSA) helps identify vulnerabilities on working systems.
TechNet also gives us password tips that other writers mention. Use strong passwords, and set strong password creation rules for users. Rename the administrator account. Create a new administrator account with lower privileges as a decoy. Use a different name for the administrator account on all servers. Disable the guest account. Set an account lockout policy. TechNet also tells us about setting access control on file shares.
What about those unnecessary services? Get rid of them. If your machine is not going to be a DHCP server, then disable DHCP. They say that this practice “reduces the attack surface”. It also helps you run a lean machine and avoid crashes or conflicts. Typical candidates include legacy remote-access services like telnet, file transfer services like FTP, and print services on a machine that will never print.
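On a systemd-based Linux server, the workflow is to survey what is enabled and then disable what the server’s role does not require. The services named below are examples only; verify each one before disabling it:

```shell
# See everything that starts at boot.
systemctl list-unit-files --state=enabled

# Disable and stop services this particular server does not need.
systemctl disable --now cups.service       # no printing on this server
systemctl disable --now avahi-daemon.service  # no mDNS/service discovery needed
```

Each disabled service is one less listening process, one less patch to track, and one less consumer of memory and CPU.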
And of course, you’ll have to shut down ports that you won’t be using. You can do this manually or with commercial software. You can use a hardware firewall. You can use iptables on Linux. But somehow you have to control the traffic to keep out the riff-raff. There are 65,535 ports, and depending on the purpose of your server, you most likely need only a few.
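As one illustration, a minimal default-deny iptables policy for a hypothetical HTTPS server might look like this (adjust the allowed ports to your server’s actual role):

```shell
iptables -P INPUT DROP                       # drop all inbound traffic by default
iptables -A INPUT -i lo -j ACCEPT            # allow loopback traffic
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
                                             # allow replies to connections we started
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # SSH for administration
iptables -A INPUT -p tcp --dport 443 -j ACCEPT  # HTTPS service traffic
```

The default-drop policy is the key design choice: anything you forget to allow stays closed, rather than anything you forget to block staying open.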
These ideas are just to get you started. It’s a good practice to put all this into a document. You want to make sure that you cover everything. And you can use it the next time you have to harden a server. You might think about putting the checklist into a fully developed Method of Procedure as part of a larger change control process.
Server hardening is not just an installation task. You will need to look for ways to protect and improve your machine throughout its lifecycle. That means getting all the security updates for the operating system and any installed applications. You should keep your antivirus definitions up-to-date and keep yourself informed about the latest threats.
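On a Debian- or Ubuntu-style system, for example, that ongoing patching routine can look like this (commands differ on other distributions):

```shell
# Refresh package lists and apply available updates.
apt-get update && apt-get upgrade -y

# Optionally, let unattended-upgrades install security patches automatically.
apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```

Automating security updates trades a small risk of an unexpected change for the much larger risk of running known-vulnerable software.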
But to reiterate the full context here, server hardening is both about protection and performance. You are not helping yourself by overloading the system or running any unnecessary processes or services. Keep it spare and lean. One company had to call tech support after they filled the memory of their server to the point that it wouldn’t function. Another firm saw one of its servers crash after a tech launched a major refresh of data that was somehow too taxing for it.
Computers are just machines. They need to be treated with care and not abused. Streamlining services, restricting access, and limiting vulnerabilities will make your server healthier and you happier. If you don’t harden your server now, you may very well be sorry in the long run.