These are the steps I took to set up and harden a Debian web server before placing it into a DMZ, where it will undergo additional hardening before the port from the WWW is opened to it. Most of the steps below are fairly simple to do, and in doing so, remove a good portion of the low-hanging fruit for nasty entities wanting to gain a foothold on your server, and from there, your network.
Install and Set-up
Debian wheezy, currently stable (supported by the Debian security team for a year or so).
Creating ESXi 5.1 guest
First thing to do is to set up a virtual switch for the host under the Configuration tab. Now I had several quad-port Gbit Ethernet adapters in this server, so I created a virtual switch and assigned a physical adapter to it. Now when you create your VM, you choose the VM Network assigned to the virtual switch you created. Provision your disks. Check “Edit the virtual machine settings before completion” and Continue. You will now be able to modify your settings before you boot the VM. I chose 512MB of RAM at this stage, which is far more than it actually needs. While I’m provisioning and hardening the Debian guest, I have the new virtual switch connected to the client’s LAN.
Once we’re done, we can connect the virtual switch up to the new DMZ physical switch or straight into the router. Upload the Debian .iso that you downloaded to the ESXi datastore. Then edit the VM settings and select the CD/DVD drive. Select the “Datastore ISO File” option, browse to the .iso file and select the “Connect at power on” option.
Kick the VM in the guts and flick to the VM’s Console tab.
Deleted all the current partitions and added the following. / was added to the start and the rest to the end, in the following order.
/, /var, /tmp, /opt, /usr, /home, swap.
Now the sizes should be set up according to your needs. If you have plenty of RAM, make your swap small; if you have minimal RAM (barely sufficient, if at all), you could double the RAM size for your swap. It’s usually a good idea to think about what mount options you want to use for your specific directories, as this may shape how you set up your partitions. For example, you may want the nosuid,noexec options on /var, but you can’t because there are shell scripts in /var/lib/dpkg/info, so you could set up four partitions: /var without nosuid,noexec, and /var/tmp, /var/log, /var/account with nosuid,noexec. Look ahead to the Mounting of Partitions section for more info on this.
In saying this, you don’t need to partition as finely grained as you want options for. You can still mount directories on directories and alter the options at that point. This can be done in the /etc/fstab file and also ad-hoc (using the mount command) if you want to test options out.
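For example, to trial extra options on an already-mounted file system without touching /etc/fstab (the mount point and options here are just an example):

```shell
# remount /tmp with the options you want to trial
sudo mount -o remount,nodev,nosuid,noexec /tmp
# check what the kernel actually applied
mount | grep ' /tmp '
# revert by remounting with the original options (or rebooting)
```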
You can think about changing /opt (static data) to mount read-only in the future as another security measure.
Continuing with the Install
When you’re asked for a mirror to pull packages from, if you have an apt-cacher[-ng] proxy somewhere on your network, this is the chance to make it work for you thus speeding up your updates and saving internet bandwidth. Enter the IP address and port and leave the rest as default. From the Software selection screen, select “Standard system utilities” and “SSH server”.
When prompted to boot into your new system, we need to remove our installation media from the VM’s settings. Under the Device Status settings for your VM (if you’re using ESXi), uncheck “Connected” and “Connect at power on”. Make sure no other boot media are connected at power on. Now the first thing we do is SSH into our new VM, because it’s a right pain working through the VM host’s console. When you first try to SSH to it you’ll be shown the ECDSA key fingerprint, to confirm that the machine you think you are SSHing to is in fact the machine you want to SSH to. Follow the directions here but change that command line slightly to the following:
ssh-keygen -lf ssh_host_ecdsa_key.pub
This will print the key’s fingerprint from the actual machine. Compare that with what you were given from your remote machine. Make sure they match and accept, and you should be in. Now I use terminator so I have a lovely CLI experience. Of course you can take things much further with Screen or Tmux if/when you have the need.
Next I tell apt about the apt-proxy-ng I want it to use to pull its packages from. This will have to be changed once the server is plugged into the DMZ. Create the file /etc/apt/apt.conf if it doesn’t already exist and add the following line:
Acquire::http::Proxy "http://[IP address of the machine hosting your apt cache]:[port that the cacher is listening on]";
Replace the apt proxy references in /etc/apt/sources.list with the internet mirror you want to use, so we contain all the proxy related config in one line in one file. This will allow the requests to be proxied and packages cached via the apt cache on your network when requests are made to the mirror of your choosing.
Update the list of packages, then upgrade them with the following command line. If you’re using sudo, you’ll need to add that to each command:
apt-get update && apt-get upgrade # only run apt-get upgrade if apt-get update is successful (exits with a status of 0)
The steps you take to harden a server that will have many user accounts will be considerably different to this. Many of the steps I’ve gone through here will be insufficient for a server with many users.
The hardening process is not a one time procedure. It ends when you decommission the server. Be prepared to stay on top of your defenses. It’s much harder to defend against attacks than it is to exploit a vulnerability.
After a quick look at this, I can in fact verify that we are shadowing our passwords out of the box. It may be worth looking at and modifying /etc/shadow. Consider changing the “maximum password age” and “password warning period”. Consult the man page for shadow for full details. Check that you’re happy with which encryption algorithms are currently being used. The files you’ll need to look at are /etc/shadow and /etc/pam.d/common-password. The man pages you’ll probably need to read in conjunction with each other are the following:
- crypt 3
Out of the box crypt supports MD5, SHA-256 and SHA-512, with a bit more work for Blowfish via bcrypt. The default of SHA-512 enables salted passwords. How can you tell which algorithm you’re using, salt size, etc.? The crypt 3 man page explains it all.
So by default we’re using SHA-512 which is better than MD5 and the smaller SHA-256.
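As a quick sanity check, you can pull the algorithm id out of a hash yourself. The hash string here is made up for illustration, not a real password entry:

```shell
# fields in a crypt(3) hash are separated by '$':
# $id$rounds=N$salt$hash -- id 6 means SHA-512, 5 means SHA-256, 1 means MD5
hash='$6$rounds=65536$dummysalt$dummyhashvalue'
echo "$hash" | cut -d'$' -f2
```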
Now by default I didn’t have a “rounds” option in my /etc/pam.d/common-password module-arguments. Having a large iteration count (the number of times the hashing algorithm is run (key stretching)), with an attacker not knowing what that number is, will slow down an attack. I’d suggest adding this and re-creating your passwords. As your normal user, run passwd, providing your existing password and then your new one twice. You should now be able to see your password in the /etc/shadow file with the added rounds parameter:
$6$rounds=[chosen number of rounds specified in /etc/pam.d/common-password]$[8 character salt]$0LxBZfnuDue7.n5<rest of string>
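For reference, the relevant line in /etc/pam.d/common-password with a rounds argument added looks something like this. The rounds value here is just an example; choose your own and keep it to yourself:

```
# /etc/pam.d/common-password (relevant line only)
password [success=1 default=ignore] pam_unix.so obscure sha512 rounds=65536
```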
Reboot and check you can still log in as your normal user. If all good, do the same with the root account.
bcrypt, using the deliberately slow Blowfish cipher, is a much slower algorithm, so it’s even better for password hashing, but more work to set up at this stage.
- This post is quite helpful. Especially page 3.
Consider setting a password for GRUB, especially if your server is directly on physical hardware. If it’s on a hypervisor, an attacker has another layer to go through before they can access the guest’s boot screen. If an attacker can access your VM through the hypervisor’s management app, you’re pretty well screwed anyway.
Disable Remote Root Logins
Review /etc/pam.d/login so we’re only permitting local root logins. By default this was setup that way.
Review /etc/security/access.conf. Make sure root logins are limited as much as possible. Un-comment rules that you want. I didn’t need to touch this.
Confirm which virtual consoles and text terminal devices you have by reviewing /etc/inittab then modify /etc/securetty by commenting out all the consoles you don’t need (all of them preferably). Or better just issue the following command to fill the file with nothing:
cat /dev/null > /etc/securetty
I back up this file before I do this.
Now test that you can’t log into any of the text terminals listed in /etc/inittab. Just try logging into the likes of your ESX/i vSphere guest’s console as root. You shouldn’t be able to now.
If your server is not physical hardware but a VM, make sure the host’s password is long and made up of a random mix of upper case, lower case, numbers and special characters.
My feeling after a lot of reading is that currently RSA with large keys (The default RSA size is 2048 bits) is a good option for key pair authentication. Personally I like to go for 4096, but with the current growth of processing power (following Moore’s law), 2048 should be good until about 2030.
Create your key pair if you haven’t already and set up key pair authentication. Key-pair auth is more secure and allows you to log in without a password. Your pass-phrase should be stored in your keyring; you’ll just need to provide your local password once (each time you log into your local machine) when the keyring prompts for it. Of course your pass-phrase needs to be kept secret. If it’s compromised, it won’t matter how much you’ve invested into your hardening effort. To tighten security up considerably, make the necessary changes to your server’s /etc/ssh/sshd_config file. Start with the changes I’ve listed here.
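The sorts of directives worth reviewing in /etc/ssh/sshd_config look like this. The values here are examples only, not prescriptions; pick your own user, host and port:

```
# /etc/ssh/sshd_config excerpts -- example values
Port 2222                    # a non-default port, if you go that way
Protocol 2
PermitRootLogin no
PasswordAuthentication no    # key pair auth only
AllowUsers myuser@10.0.0.5   # a single user from a specific host
MaxAuthTries 3
LoginGraceTime 30
```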
When you change things like setting up AllowUsers, or make any other changes that could potentially lock you out of the server, it’s a good idea to stay logged in via one shell while you exit another and test it. This way, if you have locked yourself out, you’ll still be logged in on one shell to adjust the changes you’ve made. Unless you have a need for multiple users, lock it down to a single user. You can even lock it down to a single user from a specific host.
After a set of changes, issue the following restart command as root or sudo:
service ssh restart
You can check the status of the daemon with the following command:
service ssh status
Consider changing the port that SSH listens on; it may slow down an attacker slightly. Consider whether it’s worth adding the extra characters to your SSH command. Consider keeping the port that sshd binds to below 1025, where only root can bind a process.
We’ll need to tunnel SSH once the server is placed into the DMZ. I’ve discussed that in this post.
Check SSH login attempts. As root or via sudo, type the following to see all failed login attempts:
grep 'sshd.*Invalid' /var/log/auth.log
If you want to see successful logins, type the following:
grep 'sshd.*opened' /var/log/auth.log
Consider installing and configuring denyhosts
Disable Boot Options
All the major hypervisors should provide a way to disable all boot options other than the device you will be booting from. VMware allows you to do this in vSphere Client.
Set BIOS passwords.
Lock Down the Mounting of Partitions
Getting started with your fstab.
Make a backup of your /etc/fstab before you make changes. I ended up needing this later. Read the man page for fstab and also the options section in the mount man page. The Linux File System Hierarchy (FSH) documentation is worth consulting also for directory usages.
Add the noexec mount option to /tmp, but not /var, because executable shell scripts (such as the pre, post and removal maintainer scripts) reside within /var/lib/dpkg/info.
You can also add the nodev and nosuid options to /tmp. You can add the nodev option to /var, /usr, /opt and /home as well, and the nosuid option to /home. You can add ro to /usr.
To add the mount options nosuid,noexec to /var/tmp, /var/log and /var/account, we need to bind the target mount onto an existing directory. The following procedure details how to do this for /var/tmp. As usual, you can do all of this without a reboot. This way you can modify to your heart’s content, then be confident that a reboot will not destroy anything or lock you out of your system.
Any not-yet-mounted entries in your /etc/fstab can be tested like this:
sudo mount -a
Then check the difference with mount (run with no arguments, it lists everything currently mounted and the options in effect).
Mount options can be set up on a directory-by-directory basis for finer-grained control. For example, my /var mount in my /etc/fstab may look like this:
UUID=<block device ID goes here> /var ext4 defaults,nodev 0 2
Then add another line below that in your /etc/fstab that looks like this:
/var /var/tmp none nosuid,noexec,bind 0 2
The file system type above should be specified as none (as stated in the “The bind mounts” section of the mount man page http://man.he.net/man8/mount). The bind option binds the mount. There was a bug with the suidperl package in Debian where setting nosuid created an insecurity; suidperl is no longer available in Debian.
If you want this to take effect before a reboot, execute the following command:
sudo mount --bind /var/tmp /var/tmp
Then to pickup the new options from /etc/fstab:
sudo mount -o remount /var/tmp
For further details consult the remount option of the mount man page.
At any point you can check the options that your directories are mounted with by issuing the mount command and reading the output.
You can test this by putting a script in /var and copying it to /var/tmp, then trying to run each of them. Of course the executable bits should be on. You should only be able to run the one that is in the directory mounted without the noexec option. My file “kimsTest” looks like this:
#!/bin/sh
echo "Testing testing testing kim"
myuser@myserver:/var$ ./kimsTest
Testing testing testing kim
myuser@myserver:/var$ ./tmp/kimsTest
-bash: ./tmp/kimsTest: Permission denied
You can set the same options on the other /var sub-directories (not /var/lib/dpkg/info).
Enable read-only / mount
- Mounting partitions the right way
- mount man page
- Mount /tmp With nodev, nosuid, and noexec Options
- Linux FSH
Work Around for Apt Executing Packages from /tmp
Disable Services we Don’t Need
dpkg-query -l '*portmap*'
portmap is not installed by default, so we don’t need to remove it.
dpkg-query -l '*exim*'
Exim4 is installed.
You can see from the netstat output below (in the “Remove Services” area) that exim4 is listening on localhost and it’s not publicly accessible. Nmap confirms this, but we don’t need it, so let’s disable it. We should probably be using ss too.
When a run level is entered, init executes the target files that start with K with a single argument of stop, followed by the files that start with S with a single argument of start. So by renaming /etc/rc2.d/S15exim4 to /etc/rc2.d/K15exim4 you’re causing init to run the service with the stop argument when it moves to run level 2. Just out of interest’s sake, the scripts at the end of the links with the lower two-digit numbers are executed before scripts at the end of links with the higher two-digit numbers. Now go ahead and check the directories for run levels 3-5 as well and do the same. You’ll notice that all the links in /etc/rc0.d (which are the links executed on system halt) start with ‘K’. Making sense?
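Rather than renaming each link by hand, Debian’s update-rc.d can flip the S links to K links for every run level in one step (exim4 is the service in question here):

```shell
# rename the S links to K links in all run levels
sudo update-rc.d exim4 disable
# stop the running daemon now rather than waiting for a run level change
sudo service exim4 stop
```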
Follow up with
sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
And that’s all we should see.
Additional resources for the above
Disable Network Information Service (NIS). NIS lets several machines in a network share the same account information, such as the password file (Allows password sharing between machines). Originally known as Yellow Pages (YP). If you needed centralised authentication for multiple machines, you could set-up an LDAP server and configure PAM on your machines in order to contact the LDAP server for user authentication. We have no need for distributed authentication on our web server at this stage.
dpkg-query -l '*nis*'
NIS is not installed by default, so we don’t need to remove it.
Additional resources for the above
First thing I did here was run nmap from my laptop
nmap -p 0-65535 <serverImConfiguring>
Now because I’m using a non default port for SSH, nmap thinks some other service is listening. Although I’m sure if I was a bad guy and really wanted to find out what was listening on that port it’d be fairly straight forward.
To obtain a list of currently running servers (determined by LISTEN) on our web server, use either of the following. Not forgetting that man is your friend.
sudo netstat -tap | grep LISTEN
sudo netstat -tlp
I also like to add the ‘n’ option to see the ports. This output was created before I had disabled exim4 as detailed above.
tcp   0   0   *:<my ssh port number>     *:*       LISTEN   2247/sshd
tcp6  0   0   [::]:<my ssh port number>  [::]:*    LISTEN   2247/sshd
Here we see that sunrpc is listening on a port and was started by rpcbind with the PID of 1498.
Now Sun Remote Procedure Call is running on port 111 (also the portmapper port); netstat can tell you the port, confirmed by the nmap scan above. This is used by NFS, and as our server isn’t a file server we don’t need NFS, so we can get rid of the rpcbind package.
dpkg-query -l '*rpc*'
Shows us that rpcbind is installed and gives us other details. Now if you’ve been following along with me and have made the /usr mount read only, some stuff will be left behind when we try to purge:
sudo apt-get purge rpcbind
Following are the outputs of interest:
The following packages will be REMOVED:
  nfs-common* rpcbind*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
Do you want to continue [Y/n]? y
Removing nfs-common ...
[ ok ] Stopping NFS common utilities: idmapd statd.
dpkg: error processing nfs-common (--purge):
 cannot remove `/usr/share/man/man8/rpc.idmapd.8.gz': Read-only file system
Removing rpcbind ...
[ ok ] Stopping rpcbind daemon....
dpkg: error processing rpcbind (--purge):
 cannot remove `/usr/share/doc/rpcbind/changelog.gz': Read-only file system
Errors were encountered while processing:
 nfs-common
 rpcbind
E: Sub-process /usr/bin/dpkg returned an error code (1)
dpkg-query -l '*rpc*'
will result in a status of pH. That’s a desired action of (p)urge and a package status of (H)alf-installed.
Now the easiest thing to do here is rename your /etc/fstab to something else, and rename the /etc/fstab you backed up (before making changes to it) back to /etc/fstab. Then, because you know that fstab is good, remount /usr from it (or just reboot) so /usr is writable again, and run the purge again.
Then try the netstat commands again to make sure rpcbind is gone and of course no longer listening. I had to actually do the purge twice here, as config files were left behind from the first purge.
Also you can remove unused dependencies now after you get the following message:
The following packages were automatically installed and are no longer required:
  libevent-2.0-5 libgssglue1 libnfsidmap2 libtirpc1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
  rpcbind*

sudo apt-get -s autoremove
Because I want to simulate what’s going to be removed, because I’m paranoid; I made stupid mistakes with autoremove years ago and that pain has stuck with me. I autoremoved a meta-package which depended on many other packages. A subsequent autoremove for packages that had a sole dependency on the meta-package meant they would be removed. Yes, it was a painful experience. /var/log/apt/history.log has your recent apt history. I used this to piece back together my system.
Then follow up with the real thing: just remove the -s and run it again. Just remember, the fewer packages your system has, the less code there is for an attacker to exploit.
dpkg-query -l '*telnet*'
sudo apt-get remove telnet
dpkg-query -l '*telnet*'
We’ve got scp, why would we want ftp?
dpkg-query -l '*ftp*'
sudo apt-get remove ftp
dpkg-query -l '*ftp*'
Don’t forget to swap your new fstab back and test that the mounts are mounted as you expect.
The following provide good guidance on securing whatever is left.
Make sure all data and VM images are backed up routinely. Make sure you test that restoring your backups works. Back up system files and whatever else is important to you. There is a good selection of tools here to help. Also make sure you are backing up the entire VM if your machine is a virtual guest, by exporting/importing OVF files. I also like to back up all the VM files. Disk space is cheap. Is there such a thing as being too prepared for disaster? It’s just a matter of time before you’ll be calling on your backups.
Keep up to date
Consider whether it would make sense for you or your admin/s to set-up automatic updates and possibly upgrades. Start out the way you intend to go. Work out your strategy for keeping your system up to date and patched. There are many options here.
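If automatic updates make sense for you, the unattended-upgrades package is one option on Debian. A minimal sketch, using that package’s stock file location:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which origins are allowed to auto-upgrade (e.g. security only) is then tuned in /etc/apt/apt.conf.d/50unattended-upgrades.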
From here on, I’ve made it less detailed and more about just getting you to think about things and ways in which you can improve your security stance. Also, if any of the offerings cost money to buy, I make note of it, because that’s the exception to my rule. Why? Because I prefer free software, especially when it’s FOSS.
Some of the following cross the “logging” boundaries, so in many cases it’s difficult to put them into categorical boxes.
Attackers like to try and cover their tracks by modifying information that’s distributed to the various log files. Make sure you know who has write access to these files and keep the list small. As a Sysadmin you need to read your log files often and familiarise yourself with them so you get used to what they should look like.
SWatch monitors a single log file for each instance you run (or schedule), matches your defined patterns and acts. You can define different message types with different font styles. If you want to monitor a lot of log files, it’s going to be a bit messy.
Logcheck monitors system log files and emails anomalies to an administrator. Once installed it needs to be set up to run periodically with cron. Not a bad wee run-down here. How to use and customise it. Man page and more docs here.
NewRelic is more of a performance monitoring tool than a security tool. It has free plans which are OK; it comes into its own in larger deployments. I’ve used this and it’s been useful for working out what was causing performance issues on the servers.
Advanced Web Statistics (AWStats)
Unlike NewRelic which is a Software as a Service (SaaS), AWStats is FOSS. It kind of fits a similar market space as NewRelic though, but also has Host Intrusion Prevention System (HIPS) features. Docs here.
Similar to NewRelic but not as feature rich.
MultiTail does what its name sounds like: it tails multiple log files at once, providing realtime monitoring of multiple log files. Example here. Great for seeing strange happenings before an intruder has time to modify logs, if you’re watching them that is. Good for a single system if you’ve got a spare screen to throw on the wall.
Targets a similar problem to MultiTail, except that it collects logs from as many servers as you want and copies them off-site to PaperTrail’s service, aggregating them into a single easily searchable web interface. Allows you to set up alerts on anything. Has a free plan, but you only get 100MB per month. The plans are reasonably cheap for the features it provides and can scale as you grow. I’ve used this and have found it to be excellent.
Monitors system logs. Not continuously, so they could be open to modification without you knowing, like SWatch and Logcheck from above. You can configure it to reduce the number of services that it analyses the logs of. It creates a report of what it finds based on your level of paranoia. It’s easy to set-up and get started though. Source and docs here.
Targets a similar problem to logrotate, but goes a lot further in that it routes and has the ability to translate between protocols. Requires Java to be installed.
Bans hosts that cause multiple authentication errors, or just emails events. Of course you need to think about false positives here too: an attacker can spoof many IP addresses, potentially causing them all to be banned, thus creating a DoS.
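If you’re using fail2ban for this, a minimal jail sketch looks something like the following. The port, retry count and ban time are illustrative, and section names vary a little between fail2ban versions:

```
# /etc/fail2ban/jail.local -- illustrative values only
[ssh]
enabled  = true
port     = 2222
maxretry = 5
bantime  = 3600
```

Weigh the bantime against the spoofed-address DoS concern above.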
Configure syslog to send a copy of the most important data to a secure system, as mitigation for an attacker modifying the logs. See the @ option in the syslog.conf man page. Check the /etc/(r)syslog.conf file to determine where syslogd is logging various messages. Some important notes around syslog here, like locking down the users that can read and write to /var/log.
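The @ forwarding looks like this in /etc/rsyslog.conf (or syslog.conf). The log host name here is hypothetical:

```
# forward a copy of auth messages to a hardened log host
# (single @ is UDP; @@ would use TCP instead)
auth,authpriv.*    @loghost.example.com:514
```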
Provides a lot more flexibility than just syslogd. Checkout the comprehensive feature-set.
Some Useful Commands
- Checking who is currently logged in to your server and what they are doing with the who and w commands
- Checking who has recently logged into your server with the last command
- Checking which users have failed login attempts with the faillog command
- Checking the most recent login of all users, or of a given user, with the lastlog command. lastlog comes from the binary file /var/log/lastlog.
This is a list of log files, their names/locations, and their purpose in life.
Is a HIDS that stores a good known state of vital system files of your choosing and can be set up to notify an administrator upon change in the files. Tripwire stores cryptographic hashes in a database and compares them with the files it’s been configured to monitor changes on. Not a bad tutorial here. Most of what you’ll find with Tripwire now are the commercial offerings.
A similar offering to Tripwire. It scans for rootkits, backdoors, checks on the network interfaces and local exploits by running tests such as:
- MD5 hash changes
- Files commonly created by root-kits
- Wrong file permissions for binaries
- Suspicious strings in kernel modules
- Hidden files in system directories
- Optionally scan within plain-text and binary files
Version 1.4.2 (24/02/2014) now checks ssh, sshd and telnet (although you shouldn’t have telnet installed). This could be useful for mitigating non-root users running a modified sshd on a 1025-65535 port. You can run ad-hoc scans, then set them up to be run with cron. Debian Jessie has this release in its repository; any Debian distro before Jessie is on 1.4.0-1 or earlier.
The latest version you can install for Linux Mint Qiana (17) and Rebecca (17.1) from the repositories is 1.4.0-3 (01/05/2012).
It’s a good idea to run a couple of these types of scanners. Hopefully what one misses the other will not. Chkrootkit scans for many system programs, some of which are cron, crontab, date, echo, find, grep, su, ifconfig, init, login, ls, netstat, sshd, top and many more. All the usual targets for attackers to modify. You can specify if you don’t want them all scanned. Runs tests such as:
- System binaries for rootkit modification
- If the network interface is in promiscuous mode
- lastlog deletions
- wtmp and utmp deletions (logins, logouts)
- Signs of LKM trojans
- Quick and dirty strings replacement
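Ad-hoc runs of the two scanners look like this; once you’re happy with the output, schedule the same commands via cron:

```shell
# rkhunter: skip the keypress prompts so it can run unattended
sudo rkhunter --check --skip-keypress
# chkrootkit: runs all its tests by default
sudo chkrootkit
```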
The idea of Stealth is to do a similar job to the above file integrity scanners, but to leave almost no sediment on the tested computer (called the client). A potential attacker therefore has no clue that Stealth is in fact scanning the integrity of its client’s files. Stealth is installed on a different machine (called the controller) and scans over SSH.
Is a HIDS that also has some preventative features. This is a pretty comprehensive offering with a lot of great features.
While not strictly a HIDS, this is quite a useful forensics tool for working with your system if you suspect it may have been compromised.
Unhide is a forensic tool to find hidden processes and TCP/UDP ports by rootkits / LKMs or by another hidden technique. Unhide runs in Unix/Linux and Windows Systems. It implements six main techniques.
- Compare /proc vs /bin/ps output
- Compare info gathered from /bin/ps with info gathered by walking through the procfs. ONLY for the unhide-linux version
- Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning)
- Full PID space occupation (PID brute-forcing). ONLY for the unhide-linux version
- Compare /bin/ps output vs /proc, procfs walking and syscalls. ONLY for the unhide-linux version. A reverse search that verifies that all threads seen by ps are also seen by the kernel.
- Quick compare of /proc, procfs walking and syscalls vs /bin/ps output. ONLY for the unhide-linux version. It’s about 20 times faster than tests 1+2+3 but may give more false positives.
It includes two utilities: unhide and unhide-tcp.
unhide-tcp identifies TCP/UDP ports that are listening but are not listed in /bin/netstat through brute forcing of all TCP/UDP ports available.
Can also be used by rkhunter in its daily scans. Unhide was number one in the top 10 toolswatch.org security tools poll.
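Ad-hoc runs look like this (the test names are as per the unhide man page):

```shell
# run the proc and brute techniques from the list above
sudo unhide proc
sudo unhide brute
# brute force all TCP/UDP ports and compare against netstat's view
sudo unhide-tcp
```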
Web Application Firewalls (WAF’s)
which are just another part in the defense in depth model for web applications, get more specific in what they are trying to protect. They operate at the application layer, so they don’t have to deal with all the network traffic. They apply a set of rules to HTTP conversations. They can also be either network or host based, and are able to block attacks such as Cross Site Scripting (XSS) and SQL injection.
Is a mature and feature-full WAF that is designed to work with such web servers as IIS, Apache2 and NGINX. Loads of documentation. They also look to be open to committers and challengers alike. You can find the OWASP Core Rule Set (CRS) here to get you started, which has the following:
- HTTP Protocol Protection
- Real-time Blacklist Lookups
- HTTP Denial of Service Protections
- Generic Web Attack Protection
- Error Detection and Hiding
Or for about $500US a year you get the following rules:
- Virtual Patching
- IP Reputation
- Web-based Malware Detection
- Webshell/Backdoor Detection
- Botnet Attack Detection
- HTTP Denial of Service (DoS) Attack Detection
- Anti-Virus Scanning of File Attachments
for Node.js. Although it doesn’t look like a lot is happening with this project currently. You could always fork it if you wanted to extend it.
The state of the Node.js ecosystem in terms of security is pretty poor, which is something I’d like to invest time into.
This is one of the last things you should look at when hardening an internet-facing or perimeterless system. Why? Because each machine should be hard enough that it doesn’t need a firewall to cover it like a blanket, with soft and vulnerable services underneath. Rather, all the services should be either un-exposed or patched and securely configured.
Most of the servers and workstations I’ve been responsible for over the last few years I’ve administered as though there was no firewall and they were open to the internet. Most networks are reasonably easy to penetrate, so we really need to think of the machines behind them as being open to the internet. This is what De-perimeterisation (the concept initialised by the Jericho Forum) is all about.
Some thoughts on firewall logging.
Keep your eye on nftables too, it’s looking good!
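A minimal nftables ruleset for a box like this might look like the following sketch. The SSH port is an example, and wheezy-era kernels may not support nft yet, so treat this as forward-looking:

```
# /etc/nftables.conf -- an illustrative sketch only
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        tcp dport { 80, 2222 } accept  # HTTP plus your chosen SSH port
        icmp type echo-request accept
    }
}
```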
Just keep in mind the above links are quite old. A lot of it’s still relevant though.
Machine Now Ready for DMZ
Confirm DMZ has
- Network Intrusion Detection System (NIDS), Network Intrusion Prevention System (NIPS) installed and configured. Snort is a pretty good option for the IDS part, although with some work Snort can help with the Prevention also.
- incoming access from your LAN or where ever you plan on administering it from
- rules for outgoing and incoming access to/from LAN, WAN tightly filtered.
Additional Web Server Preparation
- setup and configure soft web server
- setup and configure caching proxy. Ex:
- deploy application files
- Hopefully you’ve been baking security into your web app right from the start. This is an essential part of defense in depth. Rather than having your application completely rely on other entities to protect it, it should also be standing up for itself and understanding when it’s under attack and actually fighting back.
- set static IP address
- double check that the only open ports on the web server are 80 and whatever you’ve chosen for SSH.
- setup SSH tunnel
- decide on and document VM backup strategy and set it up.
Machine Now In DMZ
Set up your CNAME or whatever type of DNS record you’re using.
Now remember, keeping any machine on a network (not just the internet, but any network) requires constant consideration and effort in keeping the system as secure as possible.
Hack your own server and find the holes before someone else does. If you’re not already familiar with the tricks of how systems on the internet get attacked, read up on the “Attacks and Threats”. Run OpenVAS; run web vulnerability scanners.
From here on is in scope for other blog posts.