Archive for the ‘SSH’ Category

Installation and Hardening of Debian Web Server

December 27, 2014

These are the steps I took to set up and harden a Debian web server before placing it into a DMZ, where it will undergo additional hardening before the port is opened from the WWW to it. Most of the steps below are fairly simple to do, and in doing so you remove a good portion of the low hanging fruit for nasty entities wanting to gain a foothold on your server, and from there, your network.

Install and Set-up

Debian wheezy, currently stable (supported by the Debian security team for a year or so).

Creating ESXi 5.1 guest

First thing to do is to set up a virtual switch for the host under the Configuration tab. I had several quad port Gbit Ethernet adapters in this server, so I created a virtual switch and assigned a physical adapter to it. Now when you create your VM, you choose the VM Network assigned to the virtual switch you created. Provision your disks. Check “Edit the virtual machine settings before completion” and Continue. You will now be able to modify your settings before you boot the VM. I chose 512MB of RAM at this stage, which is far more than it actually needs. While I’m provisioning and hardening the Debian guest, I have the new virtual switch connected to the client’s LAN.

ESX Network Configuration

Once we’re done, we can connect the virtual switch up to the new DMZ physical switch or straight into the router. Upload the Debian .iso that you downloaded to the ESXi datastore. Then edit the VM settings and select the CD/DVD drive. Select the “Datastore ISO File” option, browse to the .iso file, and select the “Connect at power on” option.


Kick the VM in the guts and flick to the VM’s Console tab.

OS Installation

Partitioning

I deleted all the current partitions and added the following, with / at the start of the disk and the rest at the end, in this order:
/, /var, /tmp, /opt, /usr, /home, swap.

Partitioning Disks

Now the sizes should be set up according to your needs. If you have plenty of RAM, make your swap small; if you have minimal RAM (barely sufficient, if at all), you could double the RAM size for your swap. It’s usually a good idea to think about which mount options you want to use for your specific directories, as this may shape how you set up your partitions. For example, you may want the options nosuid,noexec on /var, but you can’t because there are shell scripts in /var/lib/dpkg/info, so you could set up four partitions: /var without nosuid,noexec, and /var/tmp, /var/log, /var/account with nosuid,noexec. Look ahead to the Mounting of Partitions section for more info on this.
In saying this, you don’t need to partition as finely grained as you want options for. You can still mount directories on directories and alter the options at that point. This can be done in the /etc/fstab file and also ad hoc (using the mount command) if you want to test options out.

You can think about changing /opt (static data) to mount read-only in the future as another security measure.

Continuing with the Install

When you’re asked for a mirror to pull packages from, if you have an apt-cacher[-ng] proxy somewhere on your network, this is the chance to make it work for you thus speeding up your updates and saving internet bandwidth. Enter the IP address and port and leave the rest as default. From the Software selection screen, select “Standard system utilities” and “SSH server”.


When prompted to boot into your new system, we need to remove our installation media from the VM’s settings. Under the Device Status settings for your VM (if you’re using ESXi), uncheck “Connected” and “Connect at power on”. Make sure no other boot media are connected at power on. Now the first thing we do is SSH into our new VM, because it’s a right pain working through the VM host’s console. When you first try to SSH to it, you’ll be shown the ECDSA key fingerprint to confirm that the machine you think you are SSHing to is in fact the machine you want to SSH to. Follow the directions here but change that command line slightly to the following:

ssh-keygen -lf ssh_host_ecdsa_key.pub

This will print the key’s fingerprint from the actual machine. Compare that with what you were shown when you first tried to SSH in. Make sure they match and accept, and you should be in. Now I use terminator so I have a lovely CLI experience. Of course you can take things much further with Screen or Tmux if/when you have the need.

Next I tell apt about the apt-proxy-ng I want it to use to pull its packages from. This will have to be changed once the server is plugged into the DMZ. Create the file /etc/apt/apt.conf if it doesn’t already exist and add the following line:

Acquire::http::Proxy "http://[IP address of the machine hosting your apt cache]:[port that the cacher is listening on]";

Replace the apt proxy references in /etc/apt/sources.list with the internet mirror you want to use, so we contain all the proxy related config in one line in one file. This allows the requests to be proxied, and packages cached via the apt cache on your network, when requests are made to the mirror of your choosing.
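For example, a standard pair of wheezy mirror lines in /etc/apt/sources.list might look like this (the mirror choice is just an example; pick one close to you):

deb http://ftp.debian.org/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main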

Update the list of packages, then upgrade them with the following command line. If you’re using sudo, you’ll need to add that to each command:

apt-get update && apt-get upgrade # only run apt-get upgrade if apt-get update is successful (exits with a status of 0)


The steps you take to harden a server that will have many user accounts will be considerably different to this. Many of the steps I’ve gone through here will be insufficient for a server with many users.
The hardening process is not a one time procedure. It ends when you decommission the server. Be prepared to stay on top of your defenses. It’s much harder to defend against attacks than it is to exploit a vulnerability.

Passwords

After a quick look at this, I can in fact verify that we are shadowing our passwords out of the box. It may be worth looking at and modifying /etc/shadow . Consider changing the “maximum password age” and “password warning period”. Consult the man page for shadow for full details. Check that you’re happy with which encryption algorithms are currently being used. The files you’ll need to look at are: /etc/shadow and /etc/pam.d/common-password . The man pages you’ll probably need to read in conjunction with each other are the following:

  • shadow
  • pam.d
  • crypt 3
  • pam_unix

Out of the box, crypt supports MD5, SHA-256 and SHA-512, with a bit more work for Blowfish via bcrypt. The default of SHA-512 enables salted passwords. How can you tell which algorithm you’re using, the salt size, etc.? The crypt 3 man page explains it all.
So by default we’re using SHA-512, which is better than MD5 and the smaller SHA-256.
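From the crypt 3 man page, the ID between the first pair of $ signs in the shadow entry tells you which algorithm is in use:

$1$ MD5
$5$ SHA-256
$6$ SHA-512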

Now by default I didn’t have a “rounds” option in my /etc/pam.d/common-password module-arguments. Having a large iteration count (the number of times the hashing algorithm is run, i.e. key stretching), with an attacker not knowing what that number is, will slow down an attack. I’d suggest adding this and re-creating your passwords.
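On Debian that means appending a rounds argument to the pam_unix line in /etc/pam.d/common-password. A sketch (the count of 65536 is just an example; choose your own and keep it to yourself):

password [success=1 default=ignore] pam_unix.so obscure sha512 rounds=65536

Then, as your normal user, run: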

passwd

providing your existing password, then your new one twice. You should now be able to see your password in the /etc/shadow file with the added rounds parameter:

$6$rounds=[chosen number of rounds specified in /etc/pam.d/common-password]$[8 character salt]$0LxBZfnuDue7.n5<rest of string>

Check /var/log/auth.log .
Reboot and check you can still log in as your normal user. If all is good, do the same with the root account.

Using bcrypt with slowpoke Blowfish gives a much slower algorithm, so it’s even better for password hashing, but it’s more work to set up at this stage.

Some References

Consider setting a password for GRUB, especially if your server is directly on physical hardware. If it’s on a hypervisor, an attacker has another layer to go through before they can access the guest’s boot screen. If an attacker can access your VM through the hypervisor’s management app, you’re pretty well screwed anyway.

Disable Remote Root Logins

Review /etc/pam.d/login so we’re only permitting local root logins. By default this was set up that way.
Review /etc/security/access.conf . Make sure root logins are limited as much as possible. Un-comment rules that you want. I didn’t need to touch this.
Confirm which virtual consoles and text terminal devices you have by reviewing /etc/inittab , then modify /etc/securetty by commenting out all the consoles you don’t need (all of them, preferably). Or better, just issue the following command to fill the file with nothing:

cat /dev/null > /etc/securetty

I back this file up before I do this.
Now test that you can’t log into any of the text terminals listed in /etc/inittab . Just try logging into the likes of your ESX/i vSphere guest’s console as root. You shouldn’t be able to now.

Make sure that if your server is not physical hardware but a VM, the host’s password is long and made up of a random mix of upper case, lower case, numbers and special characters.

Additional Resources

http://www.debian.org/doc/manuals/securing-debian-howto/ch4.en.html#s-restrict-console-login

SSH

My feeling after a lot of reading is that currently RSA with large keys (The default RSA size is 2048 bits) is a good option for key pair authentication. Personally I like to go for 4096, but with the current growth of processing power (following Moore’s law), 2048 should be good until about 2030. Update: I’m not so sure about the 2030 date for this now.

Create your key pair if you haven’t already and set up key pair authentication. Key pair auth is more secure and allows you to log in without a password. Your pass-phrase should be stored in your keyring. You’ll just need to provide your local password once (each time you log into your local machine) when the keyring prompts for it. Of course your pass-phrase needs to be kept secret. If it’s compromised, it won’t matter how much you’ve invested into your hardening effort. To tighten security up considerably, make the necessary changes to your server’s /etc/ssh/sshd_config file. Start with the changes I’ve listed here.
When you change things like setting up AllowUsers, or make any other changes that could potentially lock you out of the server, it’s a good idea to stay logged in via one shell while you exit another and test the change. This way, if you have locked yourself out, you’ll still be logged in on one shell to adjust the changes you’ve made. Unless you have a need for multiple users, lock it down to a single user. You can even lock it down to a single user from a specific host.
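As a starting sketch, the sort of sshd_config directives I’m talking about might look like this (the user name and host are made-up examples):

PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
AllowUsers myuser@10.1.1.5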
After a set of changes, issue the following restart command as root or sudo:

service ssh restart

You can check the status of the daemon with the following command:

service ssh status

Consider changing the port that SSH listens on. It may slow down an attacker slightly. Consider whether it’s worth adding the extra characters to your SSH command. Consider keeping the port that sshd binds to below 1025, as only root can bind a process to those ports.

We’ll need to tunnel SSH once the server is placed into the DMZ. I’ve discussed that in this post.

Additional Resources

Check SSH login attempts. As root or via sudo, type the following to see all failed login attempts:

cat /var/log/auth.log | grep 'sshd.*Invalid'

If you want to see successful logins, type the following:

cat /var/log/auth.log | grep 'sshd.*opened'

Consider installing and configuring denyhosts

Disable Boot Options

All the major hypervisors should provide a way to disable all boot options other than the device you will be booting from. VMware allows you to do this in vSphere Client.

Set BIOS passwords.

Lock Down the Mounting of Partitions

Getting started with your fstab.

Make a backup of your /etc/fstab before you make changes. I ended up needing this later. Read the man page for fstab and also the options section in the mount man page. The Linux File System Hierarchy (FSH) documentation is worth consulting also for directory usages.
Add the noexec mount option to /tmp but not /var, because executable shell scripts such as the package pre, post and removal scripts reside within /var/lib/dpkg/info .
You can also add the nodev,nosuid options to /tmp.
You can add the nodev option to /var, /usr, /opt and /home also.
You can also add the nosuid option to /home .
You can add ro to /usr (see the sketch below).
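Pulling the above together, the relevant /etc/fstab entries might look something like this sketch (UUIDs elided, and your options will differ):

UUID=<...> /tmp  ext4 defaults,nodev,nosuid,noexec 0 2
UUID=<...> /opt  ext4 defaults,nodev 0 2
UUID=<...> /usr  ext4 defaults,nodev,ro 0 2
UUID=<...> /home ext4 defaults,nodev,nosuid 0 2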

To add the mount options nosuid,noexec to /var/tmp, /var/log and /var/account, we need to bind the target mount onto an existing directory. The following procedure details how to do this for /var/tmp. As usual, you can do all of this without a reboot. This way you can modify to your heart’s content, then be confident that a reboot will not destroy anything or lock you out of your system.
The not-yet-mounted entries in your /etc/fstab can be tested like this:

sudo mount -a

Then check the difference with

mount

Mount options can be set up on a directory by directory basis for finer grained control. For example, my /var mount in my /etc/fstab may look like this:

UUID=<block device ID goes here> /var ext4 defaults,nodev 0 2

Then add another line below that in your /etc/fstab that looks like this:

/var /var/tmp none nosuid,noexec,bind 0 2

The file system type above should be specified as none (as stated in the “The bind mounts” section of the mount man page http://man.he.net/man8/mount). The bind option binds the mount. There was a bug with the suidperl package in Debian where setting nosuid created an insecurity. suidperl is no longer available in Debian.

If you want this to take effect before a reboot, execute the following command:

sudo mount --bind /var/tmp /var/tmp

Then to pickup the new options from /etc/fstab:

sudo mount -o remount /var/tmp

For further details consult the remount option of the mount man page.

At any point you can check the options that you have your directories mounted as, by issuing the following command:

mount

You can test this by putting a script in /var and copying it to /var/tmp, then trying to run each of them. Of course the executable bits should be on. You should only be able to run the one that is in the directory mounted without the noexec option. My file “kimsTest” looks like this:

#!/bin/sh
echo "Testing testing testing kim"

Then I…

myuser@myserver:/var$ ./kimsTest
Testing testing testing kim
myuser@myserver:/var$ ./tmp/kimsTest
-bash: ./tmp/kimsTest: Permission denied

You can set the same options on the other /var sub-directories (not /var/lib/dpkg/info).

Enable read-only / mount

There are some contradictions on /run/shm size allocation: increase the size vs don’t increase the size.

Additional Resources

Work Around for Apt Executing Packages from /tmp
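The usual trick (a sketch only; test it carefully before relying on it) is to have apt temporarily remount /tmp with exec before dpkg runs and flip it back afterwards, via an /etc/apt/apt.conf snippet like:

DPkg::Pre-Invoke {"mount -o remount,exec /tmp";};
DPkg::Post-Invoke {"mount -o remount,noexec /tmp";};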

Disable Services we Don’t Need

RPC portmapper

dpkg-query -l '*portmap*'

portmap is not installed by default, so we don’t need to remove it.

Exim

dpkg-query -l '*exim*'

Exim4 is installed.
You can see from the netstat output below (in the “Remove Services” area) that exim4 is listening on localhost only, so it’s not publicly accessible. Nmap confirms this, but we don’t need it, so let’s disable it. We should probably be using ss too.

When a run level is entered, init executes the target files that start with K with a single argument of stop, followed by the files that start with S with a single argument of start. So by renaming /etc/rc2.d/S15exim4 to /etc/rc2.d/K15exim4 you’re causing init to run the service with the stop argument when it moves to run level 2. Just for interest’s sake, the scripts at the end of the links with the lower two digit numbers are executed before scripts at the end of links with the higher two digit numbers. Now go ahead and check the directories for run levels 3-5 as well and do the same. You’ll notice that all the links in /etc/rc0.d (which are the links executed on system halt) start with ‘K’. Making sense?
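As a sketch, assuming the start link really is named S15exim4 on your system (check with ls /etc/rc?.d/ first), the renames for run levels 2-5 could be done like this:

for rl in 2 3 4 5; do sudo mv /etc/rc${rl}.d/S15exim4 /etc/rc${rl}.d/K15exim4; done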

Follow up with

sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0: 0.0.0.0:* LISTEN 1910/sshd
tcp6 0 0 ::: :::* LISTEN 1910/sshd

And that’s all we should see.

Additional resources for the above

Disable the Network Information Service (NIS). NIS lets several machines in a network share the same account information, such as the password file (it allows password sharing between machines). It was originally known as Yellow Pages (YP). If you needed centralised authentication for multiple machines, you could set up an LDAP server and configure PAM on your machines to contact the LDAP server for user authentication. We have no need for distributed authentication on our web server at this stage.

dpkg-query -l '*nis*'

NIS is not installed by default, so we don’t need to remove it.

Additional resources for the above

Remove Services

First thing I did here was run nmap from my laptop

nmap -p 0-65535 <serverImConfiguring>
PORT STATE SERVICE
23/tcp filtered telnet
111/tcp open rpcbind
/tcp open

Now because I’m using a non-default port for SSH, nmap thinks some other service is listening. Although I’m sure if I was a bad guy and really wanted to find out what was listening on that port, it’d be fairly straightforward.

To obtain a list of currently running servers (determined by LISTEN) on our web server, run one of the following. Don’t forget that man is your friend.

sudo netstat -tap | grep LISTEN

or

sudo netstat -tlp

I also like to add the ‘n’ option to see the ports. This output was created before I had disabled exim4 as detailed above.

tcp 0 0 *:sunrpc *:* LISTEN 1498/rpcbind
tcp 0 0 localhost:smtp *:* LISTEN 2311/exim4
tcp 0 0 *:57243 *:* LISTEN 1529/rpc.statd
tcp 0 0 *: *:* LISTEN 2247/sshd
tcp6 0 0 [::]:sunrpc [::]:* LISTEN 1498/rpcbind
tcp6 0 0 localhost:smtp [::]:* LISTEN 2311/exim4
tcp6 0 0 [::]:53309 [::]:* LISTEN 1529/rpc.statd
tcp6 0 0 [::]: [::]:* LISTEN 2247/sshd

Rpcbind

Here we see that sunrpc is listening on a port and was started by rpcbind with the PID of 1498.
Sun Remote Procedure Call is running on port 111 (also the portmapper port); netstat can tell you the port, confirmed with the nmap scan above. This is used by NFS, and as we don’t need NFS because our server isn’t a file server, we can get rid of the rpcbind package.

dpkg-query -l '*rpc*'

Shows us that rpcbind is installed and gives us other details. Now if you’ve been following along with me and have made the /usr mount read only, some stuff will be left behind when we try to purge:

sudo apt-get purge rpcbind

Following are the outputs of interest:

The following packages will be REMOVED:
nfs-common* rpcbind*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
Do you want to continue [Y/n]? y
Removing nfs-common ...
[ ok ] Stopping NFS common utilities: idmapd statd.
dpkg: error processing nfs-common (--purge):
cannot remove `/usr/share/man/man8/rpc.idmapd.8.gz': Read-only file system
Removing rpcbind ...
[ ok ] Stopping rpcbind daemon....
dpkg: error processing rpcbind (--purge):
cannot remove `/usr/share/doc/rpcbind/changelog.gz': Read-only file system
Errors were encountered while processing:
nfs-common
rpcbind
E: Sub-process /usr/bin/dpkg returned an error code (1)

Another

dpkg-query -l '*rpc*'

This will result in pH. That’s a desired action of (p)urge and a package status of (H)alf-installed.
Now the easiest thing to do here is rename your /etc/fstab to something else, and rename the /etc/fstab you backed up before making changes back to /etc/fstab. Then, because you know that fstab is good, reboot.
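A sketch of the rename, assuming your backup was named /etc/fstab.bak (both names here are made up):

sudo mv /etc/fstab /etc/fstab.locked-down
sudo mv /etc/fstab.bak /etc/fstab

Then: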

reboot

Then try the purge, dpkg-query and netstat commands again to make sure rpcbind is gone and of course no longer listening. I had to actually do the purge twice here, as config files were left behind from the first purge.

Also you can remove unused dependencies now after you get the following message:

The following packages were automatically installed and are no longer required:
libevent-2.0-5 libgssglue1 libnfsidmap2 libtirpc1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
rpcbind*

sudo apt-get -s autoremove

Because I want to simulate what’s going to be removed; I’m paranoid, having made stupid mistakes with autoremove years ago, and that pain has stuck with me. I autoremoved a meta-package which depended on many other packages. A subsequent autoremove for packages that had a sole dependency on the meta-package meant they would be removed. Yes, it was a painful experience. /var/log/apt/history.log has your recent apt history. I used this to piece back together my system.

Then follow up with the real thing… just remove the -s and run it again. Just remember, the fewer packages your system has, the less code there is for an attacker to exploit.

Telnet

telnet installed:

dpkg-query -l '*telnet*'
sudo apt-get remove telnet

telnet gone:

dpkg-query -l '*telnet*'

Ftp

We’ve got scp, why would we want ftp?
ftp installed:

dpkg-query -l '*ftp*'
sudo apt-get remove ftp

ftp gone:

dpkg-query -l '*ftp*'

Don’t forget to swap your new fstab back and test that the mounts are mounted as you expect.

Secure Services

The following provide good guidance on securing whatever is left.

Scheduled Backups

Make sure all data and VM images are backed up routinely. Make sure you test that restoring your backups works. Back up system files and whatever else is important to you. There is a good selection of tools here to help. Also make sure you are backing up the entire VM if your machine is a virtual guest, by exporting/importing OVF files. I also like to back up all the VM files. Disk space is cheap. Is there such a thing as being too prepared for disaster? It’s just a matter of time before you’ll be calling on your backups.

Keep up to date

Consider whether it would make sense for you or your admin/s to set-up automatic updates and possibly upgrades. Start out the way you intend to go. Work out your strategy for keeping your system up to date and patched. There are many options here.

Logging, Alerting and Monitoring

From here on, I’ve made it less detailed and more about just getting you to think about things and ways in which you can improve your stance on security. Also, if any of the offerings cost money to buy, I make note of it, because that is the exception to my rule. Why? Because I prefer free software, especially when it’s open source (FOSS).

Some of the following cross the “logging” boundaries, so in many cases it’s difficult to put them into categorical boxes.

Attackers like to try and cover their tracks by modifying information that’s distributed to the various log files. Make sure you know who has write access to these files and keep the list small. As a Sysadmin you need to read your log files often and familiarise yourself with them so you get used to what they should look like.

SWatch

Monitors “a” log file for each instance you run (or schedule), matches your defined patterns and acts. You can define different message types with different font styles. If you want to monitor a lot of log files, it’s going to be a bit messy.

Logcheck

Monitors system log files and emails anomalies to an administrator. Once installed it needs to be set up to run periodically with cron. Not a bad run-down here. How to use and customise it. Man page and more docs here.

NewRelic

Is more of a performance monitoring tool than a security tool. It has free plans which are OK; it comes into its own in larger deployments. I’ve used this and it’s been useful for working out what was causing performance issues on the servers.

Advanced Web Statistics (AWStats)

Unlike NewRelic which is a Software as a Service (SaaS), AWStats is FOSS. It kind of fits a similar market space as NewRelic though, but also has Host Intrusion Prevention System (HIPS) features. Docs here.

Pingdom

Similar to NewRelic but not as feature rich. Update: Recently stumbled into Monit which is a better alternative. Free and open source. I’ve been writing about it here.

Multitail

Does what its name sounds like: tails multiple log files at once, providing realtime multi log file monitoring. Example here. Great for seeing strange happenings before an intruder has time to modify logs, if you’re watching them that is. Good for a single system if you’ve got a spare screen to throw on the wall.

PaperTrail

Targets a similar problem to MultiTail, except that it collects logs from as many servers as you want and copies them off-site to PaperTrail’s service, aggregating them into a single easily searchable web interface. Allows you to set up alerts on anything. Has a free plan, but you only get 100MB per month. The plans are reasonably cheap for the features it provides and can scale as you grow. I’ve used this and have found it to be excellent.

Logwatch

Monitors system logs, though not continuously, so they could be open to modification without you knowing, as with SWatch and Logcheck from above. You can configure it to reduce the number of services that it analyses the logs of. It creates a report of what it finds based on your level of paranoia. It’s easy to set up and get started with though. Source and docs here.

Logrotate

Use logrotate to make sure your logs will be around long enough to examine them. Some usage examples here. Ships with Debian; it’s just a matter of applying any extra config.
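For example, a drop-in file such as /etc/logrotate.d/myapp (the path and values here are made up) might look like:

/var/log/myapp/*.log {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
}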

Logstash

Targets a similar problem to logrotate, but goes a lot further in that it routes logs and has the ability to translate between protocols. Requires Java to be installed.

Fail2ban

Bans hosts that cause multiple authentication errors, or just emails events. Of course you need to think about false positives here too. An attacker can spoof many IP addresses, potentially causing them all to be banned, thus creating a DoS.

Rsyslog

Configure syslog to send a copy of the most important data to a secure system, as mitigation for an attacker modifying the logs. See the @ option in the syslog.conf man page. Check the /etc/(r)syslog.conf file to determine where syslogd is logging various messages. Some important notes around syslog here, like locking down the users that can read and write to /var/log.
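As a sketch, lines like the following in /etc/rsyslog.conf would ship a copy of everything to a log host (the address here is made up):

# UDP:
*.* @10.1.1.20:514
# TCP (rsyslog):
*.* @@10.1.1.20:514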

syslog-ng

Provides a lot more flexibility than just syslogd. Check out the comprehensive feature-set.

Some Useful Commands

  • Checking who is currently logged in to your server and what they are doing with the who and w commands
  • Checking who has recently logged into your server with the last command
  • Checking which users have failed login attempts with the faillog command
  • Checking the most recent login of all users, or of a given user, with the lastlog command. lastlog reads from the binary file /var/log/lastlog. (Examples of these follow below.)
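A few example invocations of the above (the username is a placeholder):

who
w
last -n 20
faillog -a
lastlog -u myuser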

This is a list of log files and their names/locations and purpose in life.

Host-based Intrusion Detection System (HIDS)

Tripwire

Is a HIDS that stores a good known state of vital system files of your choosing and can be set up to notify an administrator upon change in the files. Tripwire stores cryptographic hashes in a database and compares them with the files it’s been configured to monitor changes on. Not a bad tutorial here. Most of what you’ll find with Tripwire now are the commercial offerings.

RkHunter

A similar offering to Tripwire. It scans for rootkits and backdoors, checks the network interfaces, and looks for local exploits by running tests such as:

  • MD5 hash changes
  • Files commonly created by root-kits
  • Wrong file permissions for binaries
  • Suspicious strings in kernel modules
  • Hidden files in system directories
  • Optionally scan within plain-text and binary files

Version 1.4.2 (24/02/2014) now checks ssh, sshd and telnet (although you shouldn’t have telnet installed). This could be useful for mitigating non-root users running a modified sshd on a port in the 1025-65535 range. You can run ad-hoc scans, then set them up to be run with cron. Debian Jessie has this release in its repository. Any Debian distro before Jessie is on 1.4.0-1 or earlier.

The latest version you can install for Linux Mint Qiana (17) and Rebecca (17.1) within the repositories is 1.4.0-3 (01/05/2012)

Change-log here.

Chkrootkit

It’s a good idea to run a couple of these types of scanners; hopefully what one misses, the other will not. Chkrootkit scans many system programs, some of which are cron, crontab, date, echo, find, grep, su, ifconfig, init, login, ls, netstat, sshd, top and many more. All the usual targets for attackers to modify. You can specify if you don’t want them all scanned. It runs tests such as:

  • System binaries for rootkit modification
  • If the network interface is in promiscuous mode
  • lastlog deletions
  • wtmp and utmp deletions (logins, logouts)
  • Signs of LKM trojans
  • Quick and dirty strings replacement

Stealth

The idea of Stealth is to do a similar job to the above file integrity scanners, but to leave almost no sediment on the tested computer (called the client). A potential attacker therefore has no clue that Stealth is in fact scanning the integrity of its client files. Stealth is installed on a different machine (called the controller) and scans over SSH.

Ossec

Is a HIDS that also has some preventative features. This is a pretty comprehensive offering with a lot of great features.

Unhide

While not strictly a HIDS, this is quite a useful forensics tool for working with your system if you suspect it may have been compromised.

Unhide is a forensic tool to find hidden processes and TCP/UDP ports by rootkits / LKMs or by another hidden technique. Unhide runs in Unix/Linux and Windows Systems. It implements six main techniques.

  1. Compare /proc vs /bin/ps output
  2. Compare info gathered from /bin/ps with info gathered by walking through the procfs. ONLY for the unhide-linux version
  3. Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning)
  4. Full PID space occupation (PID brute-forcing). ONLY for the unhide-linux version
  5. Compare /bin/ps output vs /proc, procfs walking and syscalls. ONLY for the unhide-linux version. A reverse search that verifies that all threads seen by ps are also seen in the kernel.
  6. Quick compare of /proc, procfs walking and syscalls vs /bin/ps output. ONLY for the unhide-linux version. It’s about 20 times faster than tests 1+2+3 but may give more false positives.

It includes two utilities: unhide and unhide-tcp.

unhide-tcp identifies TCP/UDP ports that are listening but are not listed by /bin/netstat, through brute forcing of all TCP/UDP ports available.
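A typical ad-hoc run might look like the following sketch (run as root; the test names are from the unhide-linux man page):

sudo unhide quick reverse
sudo unhide-tcp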

Can also be used by rkhunter in its daily scans. Unhide was number one in the top 10 toolswatch.org security tools poll.

Web Application Firewalls (WAFs)

WAFs are just another part in the defense in depth model for web applications, and get more specific in what they are trying to protect. They operate at the application layer, so they don’t have to deal with all the network traffic. They apply a set of rules to HTTP conversations. They can also be either network or host based, and are able to block attacks such as Cross Site Scripting (XSS) and SQL injection.

ModSecurity

Is a mature and feature-full WAF that is designed to work with such web servers as IIS, Apache2 and NGINX. Loads of documentation. They also look to be open to committers and challengers alike. You can find the OWASP Core Rule Set (CRS) here to get you started, which has the following:

  • HTTP Protocol Protection
  • Real-time Blacklist Lookups
  • HTTP Denial of Service Protections
  • Generic Web Attack Protection
  • Error Detection and Hiding

Or for about $500US a year you get the following rules:

  • Virtual Patching
  • IP Reputation
  • Web-based Malware Detection
  • Webshell/Backdoor Detection
  • Botnet Attack Detection
  • HTTP Denial of Service (DoS) Attack Detection
  • Anti-Virus Scanning of File Attachments

Fusker

A WAF for Node.js, although it doesn’t look like a lot is happening with this project currently. You could always fork it if you wanted to extend it.

The state of the Node.js ecosystem in terms of security is pretty poor, which is something I’d like to invest time into.

Fire-walling

This is one of the last things you should look at when hardening an internet facing or perimeterless system. Why? Because each machine should be hard enough that it doesn’t need a firewall to cover it like a blanket with soft and vulnerable services underneath. Rather, all the services should be either un-exposed, or patched and securely configured.

Most of the servers and workstations I’ve been responsible for over the last few years I’ve administered as though there was no firewall and they were open to the internet. Most networks are reasonably easy to penetrate, so we really need to think of the machines behind them as being open to the internet. This is what De-perimeterisation (the concept initialised by the Jericho Forum) is all about.

Some thoughts on firewall logging.

Keep your eye on nftables too, it’s looking good!

Additional Resources

Just keep in mind the above links are quite old. A lot of it’s still relevant though.

Machine Now Ready for DMZ

Confirm DMZ has

  • Network Intrusion Detection System (NIDS), Network Intrusion Prevention System (NIPS) installed and configured. Snort is a pretty good option for the IDS part, although with some work Snort can help with the Prevention also.
  • incoming access from your LAN or wherever you plan on administering it from
  • rules for outgoing and incoming access to/from LAN and WAN, tightly filtered.

Additional Web Server Preparation

  • set up and configure the web server software
  • setup and configure caching proxy. Ex:
    • node-http-proxy
    • TinyProxy
    • Varnish
    • nginx
  • deploy application files
  • Hopefully you’ve been baking security into your web app right from the start. This is an essential part of defense in depth. Rather than having your application completely rely on other entities to protect it, it should also be standing up for itself and understanding when it’s under attack and actually fighting back.
  • set static IP address
  • double check that the only open ports on the web server are 80 and whatever you’ve chosen for SSH.
  • setup SSH tunnel
  • decide on and document VM backup strategy and set it up.

Machine Now In DMZ

Set up your CNAME or whatever type of DNS record you’re using.

Now remember, keeping any machine on any network (not just the internet) requires constant consideration and effort in keeping the system as secure as possible.

Work through using the likes of harden and Lynis for your server and harden-surveillance for monitoring your network.

Consider combining “Port Scan Attack Detector” (psad) with fwsnort and Snort.

Hack your own server and find the holes before someone else does. If you’re not already familiar with the tricks of how systems on the internet get attacked, read up on “Attacks and Threats”, run OpenVAS, and run web vulnerability scanners.

From here on is in scope for other blog posts.

Procurement & Config of Sun Fire V240 & ALOM

October 25, 2014

This is the sequence of events I took to prepare a Sun Fire V240 for hosting pfSense, a free and open source FreeBSD based enterprise grade routing solution, for a client of mine.

Recently I was tasked with setting up a network with what I considered to be enterprise grade hardware and software as cheaply as possible. When I take on these sorts of tasks, security is forefront in my mind, so I often look toward components that are as open as possible and that don’t sport any known (to me at least) back-doors and are able to be easily upgraded and patched at little to no cost.

A requirement was clean shut-downs on power failure events at least for the critical servers.

Procured Kit

  1. APC Smart-UPS 5000 with batteries in good condition. Worth a little under $6k if you’re buying new. I wouldn’t buy new. If you shop around, these can be picked up at a fraction of that cost. From my experience the APC kit is some of the best UPS gear available.
    APC Smart-UPS 5000
  2. AP9630 UPS network management card $92 new. Most of the details around setting these UPS’s up I’ve already posted on. If you search my blog for “APC UPS” you’ll find it.
    APC AP9630
  3. Enterprise grade router/firewall:
    Sun Fire V240 (RISC architecture). 2 x UltraSPARC-IIIi 1.5GHz CPUs. 4 x Gbit on-board Ethernet ports. Lights-out management port. 4GB RAM. 2U. Dual redundant PSUs. 2 x 72GB hot swap 10k SCSI HDDs. With rack mount rails. Currently going for around $1.5k on Ebay. Price paid: $160 incl shipping. I doubt you’d find anything of these specifications off the shelf for under $1000. This is a lot of server for a very small amount of money.
    Sun Fire V240
  4. Firmware: pfSense. Free and open source.

Planning

As part of my planning I evaluated (again) whether or not free software routing solutions are actually up to the task of the enterprise. My research led me to believe some were… based on others that had already been down this route ( PTP 😉 ). Openness is a biggie for me. I like to know that eyes are on the software rather than it being closed up in a proprietary package.

I evaluated m0n0Wall, IPCop (Linux based), Smoothwall and pfSense. pfSense had been used in quite a few large environments successfully. Once I had made my decision on the firmware to use, I went through the hardware requirements and of course started looking for high quality second-hand gear.

For the router hardware I was going to need at least a 1GHz CPU, as I wanted to run Snort as my IDS/IPS, and PCI-X or PCI-e network adapters (which of course I didn’t need to worry about with the Sun Fire server). Snort needs 512MB RAM minimum; preferably at least 1GB.

Gaining Access to the Sun Fire V240

Now I had no idea of how the previous owner had set up the configuration of the ALOM (Advanced Lights Out Management). In fact I hadn’t administered a Sun Fire server before at all. On page 11 of the Sun Fire V210 and V240 Servers Getting Started Guide it states the following:

The system console is directed to ALOM by default and is configured to show server console information on startup.
ALOM enables you to monitor and control your server over either a serial connection (using the SERIAL MGT port), or Ethernet connection (using the NET MGT port).
For information about configuring an Ethernet connection, refer to the Sun Advanced Lights Out Manager Software User’s Guide.”

The NET MGT port can also be disabled, and in my case it turned out it was, but I’ll get to that later. I didn’t have a spare DB-9 to RJ-45 adapter lying around to wire it up and connect to the SERIAL MGT port.

Sun Fire V240 rear

Telnet?

(but didn’t get that far)

Since I was going to go down the path of trying to connect to the ALOM console via the NET MGT Ethernet port, I thought telnet would probably be the path of least resistance.

Page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide” stated the following:

The 10-Mbyte Ethernet port enables you to access ALOM from within your company network. You can connect to ALOM remotely using any standard Telnet client.” On the V240, the ALOM Ethernet port is referred to as the NET MGT port.

Using a laptop with Kali Linux installed (because it has lots of great tools for network reconnaissance), running

ethtool eth0

told me that my NIC supported:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full

Wireshark?

Tried connecting directly to the NET MGT port with wireshark running on my laptop. Didn’t get any packets from the device. At the time I thought it may have been because my laptop’s NIC was using 100baseT, but later on I found out that the NET MGT port was disabled.

Tried pinging my broadcast address ping -b 255.255.255.255 then checked my arp table arp -a. No results that looked like what I was looking for. Of course this strategy would have taken quite some time to complete… and in my case it would have yielded no results anyway.

NMap?

I started with the private IPv4 address spaces. Using Wi-fi on my Kali box, tried the 16 bit block:

nmap -sn 192.168.*.*

Got a false positive of a cable modem. How did I work out that it was a false positive?

nmap -A <falsePositiveIPOfCableModem> # Gave me the model and everything I needed to know about the device to rule it out.

Next up the 20 bit block

nmap -sn 172.16.0.0/12
Nmap done: 1048576 IP addresses (0 hosts up) scanned in 108670.97 seconds

In earlier releases of nmap the -sn switch was known as -sP

I decided I needed to try and speed up the scan, so I connected directly to the V240 NET MGT port with a Cat5 patch cable (ethtool told me my laptop’s NIC had MDI-X on (force crossover mode)) and made sure my network card supported 10baseT, which the “Sun Advanced Lights Out Manager Software User’s Guide” told me it needed for the NET MGT port. It turns out the NET MGT port didn’t support 10baseT. Details a bit further down.

Added a static IP address to the /etc/network/interfaces. Currently it looked like:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet dhcp

So I commented out the auto wlan0 and iface wlan0 inet dhcp and added the following:

auto eth0
iface eth0 inet static
address 10.1.1.6
netmask 255.255.255.0
broadcast 10.1.1.255
#gateway 10.1.1.1 # Make sure you don't add a gateway, as we're connecting directly to the V240

followed by:

service networking restart

then changed managed=true to managed=false in my /etc/NetworkManager/NetworkManager.conf, so Network Manager didn’t keep interfering with my interfaces.

I followed this with a

service network-manager restart

followed with ifconfig to make sure my network interface was using the correct IP address, netmask and broadcast. It wasn’t, so…

ifdown eth0
ifup eth0
ifconfig

Success, it now was.

Now to make sure my network card was communicating in a manner that the V240’s NET MGT port would understand.

Using ethtool

ethtool eth0

told me 10baseT was supported, but it also told me my current speed was 100Mb/s. So I tried changing the speed with

ethtool -s eth0 speed 10

and received Cannot advertise speed 10. So I made the following temporary changes, which will be lost on reboot. I changed the duplex by running the following:

ethtool -s eth0 speed 10 duplex half

Now with a:

ethtool eth0

I got:

Speed: unknown!
Duplex: Unknown! (255)

So turned the auto negotiation off:

ethtool -s eth0 speed 10 duplex half autoneg off

Now with a:

ethtool eth0

I got:

Speed: 10Mb/s
Duplex: Half
Auto-negotiation: off
#and some other settings.

Some useful ethtool resources:

With these settings the NET MGT port didn’t have its green link LED on, so I kept playing with the settings. It turns out it would only work with speed 100 duplex full, contrary to page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide”.
These were the settings that gave me link:

Supported pause frame use: No #Don't think I fiddled with this.
Supports auto-negotiation: Yes
Advertised link modes: Not reported #Don't think I fiddled with this.
Advertised pause frame use: Symmetric #Don't think I fiddled with this.
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: full
Port: Twisted Pair #Don't think I fiddled with this.
PHYAD: 1 #Don't think I fiddled with this.
Transceiver: internal #Don't think I fiddled with this.
Auto-negotiation: off
MDI-X: on
Supports Wake-on: g #Don't think I fiddled with this.
Wake-on: d #Don't think I fiddled with this.
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: yes

I was now confident that if the Sun Fire V240 NET MGT port was enabled, we’d find its IP address if it was using one from the private space. It was time to try the last and largest private address space. Oh, I also used Wireshark to make sure nmap was doing what I expected on my laptop when I ran:

nmap -v -sn 10.0.0.0/8

I was a little confused to start with, as nmap told me Scanning 4096 hosts. I soon realised, after checking the CIDR (Classless Inter-Domain Routing) notation and the output nmap produced, that nmap was doing the scanning in chunks. As there was going to be a lot of results, I set up the output to files:

nmap -v -sn -oA 'scan-%Y-%m-%d_%H-%M' 10.0.0.0/8

This produces the output in all three formats as discussed here.

SERIAL MGT Port?

This private address range was going to take a few days to scan, so I decided to have a poke at the SERIAL MGT port on the Sun Fire V240.

To use the SERIAL MGT port, an RJ-45 patch cable connected to a DB-9 adapter ($4.50 from globalpc) is required, unless you get the official Sun adaptor “530-3100-01”, or still have the one that came in the new box. So I splashed out and went with the $4.50 option. It cost me more in gas to get to the shop than to buy the part. I wired it up according to page 25 of the “Sun Fire V210 and V240 Servers Installation Guide“.

RJ-45 to DB-9 Adapter Crossovers
SERIAL MGT Port (RJ-45) Pin    Adapter (DB-9) Pin
1 (RTS)                        8 (CTS)
2 (DTR)                        6 (DSR)
3 (TXD)                        2 (RXD)
4 (Signal Ground)              5 (Signal Ground)
5 (Signal Ground)              5 (Signal Ground)
6 (RXD)                        3 (TXD)
7 (DSR)                        4 (DTR)
8 (CTS)                        7 (RTS)

Red wire in with green.


Installed minicom and setserial and did pretty much the same as I did here. Plugged the console cable in and tried to establish a connection.

I then found that by default ALOM only communicates through the SERIAL MGT port at startup (of ALOM, I thought, but it seems at power on of the server also).

At the {1} ok prompt, I typed #. (that’s hash followed by dot) to escape from the system console to the ALOM prompt sc>

I then entered the showsc command and found that the NET MGT port was disabled.
I then ran a

usershow

to see which user accounts existed, and was prompted to set a password for the admin user.
“When you connect to ALOM for the first time, you are automatically connected as the admin account.”
So obviously the seller of the system had reset ALOM.


Also audited the user accounts, and the details on the permission levels are here.

Ran the following script. A nice little dialog from Ramesh here (see step 4) too.

setupsc
  • Turned NET MGT port on
  • Changed the default if_connection from none to ssh
  • Answered no to email alerts (only for logged in users)
  • Yes to configure the network interfaces
  • No to DHCP
  • Entered the IP address for the NET MGT port
  • Entered the netmask for the NET MGT port
  • Entered the gateway for the NET MGT port
  • Should powerstate memory be enabled [y]? y
  • Enabled power on sequencing

Then we need to restart the ALOM to apply the new settings.

resetsc -y

If you still have minicom running, it’ll show you what happens during the boot sequence and then present you with a login prompt.

Extra Resources

SSH

At this point I plugged the Ethernet cable from my test switch (10 Mbit/s capable) back into the NET MGT port of the Sun Fire V240 and tested that ALOM was responding on the IP address that I set the NET MGT port to.

ping <myNetMgtIP>

It was answering. So I attempted to SSH in on a different machine.

ssh admin@<myNetMgtIP>

I was presented with the host’s key fingerprint:

The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myExistingHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)?

I wanted to know I was connecting to what I thought I was connecting to, so I answered no.
Then in minicom I queried the host’s key fingerprint:

ssh-keygen -l -t rsa

I was provided with the key fingerprint that matched what I was presented with when I attempted to SSH, so I knew I was actually communicating with the server I thought I was.

I then regenerated the host’s key:

ssh-keygen -r -t rsa

and was provided with the new key. A restart of the SSH daemon is required to load the new host key.

sc> restartssh

Then SSH in. Confirm when prompted that the host key matches the newly provided key.

ssh admin@<myNetMgtIP>
The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myNewHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<myNetMgtIP>' (RSA) to the list of known hosts.

Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

Sun(tm) Advanced Lights Out Manager <versionHere> ()

Please login: admin
Please Enter password: *********

sc>

We’re in!

At any time for a list of commands, you can type help.

logout
Connection to <myNetMgtIP> closed.

We’re out!

Establishing your SSH Server’s Key Fingerprint

February 16, 2013

When you connect to a remote host via SSH that you haven’t established a trust relationship with before,
you’re going to be told that the authenticity of the host you’re attempting to connect to can’t be established.

me@mybox ~ $ ssh me@10.1.1.40
The authenticity of host '10.1.1.40 (10.1.1.40)' can't be established.
RSA key fingerprint is 23:d9:43:34:9c:b3:23:da:94:cb:39:f8:6a:95:c6:bc.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no':

Do you type yes to continue without actually knowing that it is the host you think it is? Well, if you do, you should be more careful. The fingerprint that’s being put in front of you could be that of a Man In The Middle (MITM). You can query the target (from its shell of course) for the fingerprint of its key easily. On Debian you’ll find the keys in /etc/ssh/

Running

ls /etc/ssh/

you should get a listing that reveals the private and public keys. Run the following command on the appropriate key to reveal its fingerprint. For example, if SSH is using rsa:

ssh-keygen -lf ssh_host_rsa_key.pub

For example if SSH is using dsa:

ssh-keygen -lf ssh_host_dsa_key.pub

If you try the command on either the private or public key you’ll be given the public key’s fingerprint, which is exactly what you need for verifying the authenticity from the client side.

Sometimes you may need to force the output of the fingerprint_hash algorithm, as ssh-keygen may be displaying it in a different form than is shown when you try to SSH for the first time. The default when using ssh-keygen to show the key fingerprint is sha256, but in order to compare apples with apples you may need to specify md5 if that’s what’s being shown when you attempt to log in. You would do that like the following:

ssh-keygen -lE md5 -f ssh_host_dsa_key.pub

Details on the man page for the options.

Do not connect remotely and then run the above command, as the machine you’re connected to is still untrusted. The command could be dishing you up any string replacement if it’s an attacker’s machine. You need to run the command on the physical box, or get someone you trust (your network admin) to do this and hand you the fingerprint.

Now when you try to establish your SSH connection for the first time, you can check that the remote host is actually the host you think it is by comparing the output of one of the previous commands with what SSH on your client is telling you the remote host’s fingerprint is. If it’s different, it’s time to start tracking down the origin of the host masquerading as the address you’re trying to hook up with.

Now, when you get the following message when attempting to SSH to your server, due to something or somebody changing the host’s key fingerprint:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
23:d9:43:34:9c:b3:23:da:94:cb:39:f8:6a:95:c6:bc.
Please contact your system administrator.
Add correct host key in /home/me/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/me/.ssh/known_hosts:6
  remove with: ssh-keygen -f "/home/me/.ssh/known_hosts" -R 10.1.1.40
RSA host key for 10.1.1.40 has changed and you have requested strict checking.
Host key verification failed.

The same applies: check that the fingerprint is indeed the intended target host’s key fingerprint. If it is, run the specified command.

Data Centre in a Rack

June 9, 2012

I recently took the plunge to install some of my more used networking components into a server rack.
I’d been putting this off for a few years.
Most of these components have been projects of mine which I’ve already blogged on in various places on this blog.
The obvious places are the following

There are also many other topics I’ve blogged on that form part of the work gone into these components and set-up of.
Check them out.

There’s also a home made router in an old $30 desktop pc run from a CF card.

Small Data Centre

Home Rack Server

There’s an HPR podcast on a bunch of good tools useful for setting up and maintaining an open source data centre:
ep0366 :: The Open Source Data Center

Questions welcome.
I’m happy to provide directions and insights from my experience.

copying with scp

March 25, 2012

I was having some trouble today copying a file (1.5GB .iso) from a notebook to a file server.
The notebook I was using was running Linux Ubuntu.
The server FreeBSD.
I was trying to copy this file using SMB/CIFS via Nautilus.
I tried several times, it failed each time.
Then I thought, what are you doing… drop to the command line.

scp to the rescue

The command I used:

From the directory on my local machine I was copying the file from

scp -P <MyPortNumberHere> MyFile.iso <MyUserName>@<MyServer>:/Path/To/Where/I/Want/MyFile/ToGo/MyFile.iso

This also took about half the time to copy that SMB took, and SMB didn’t even complete. Not to mention the transfer is secure (SSH).

Some additional resources

http://www.linuxtutorialblog.com/post/ssh-and-scp-howto-tips-tricks

http://amath.colorado.edu/computing/software/man/scp.html

Also don’t forget to check the man page out 😉

man scp

OpenSSH from Linux to Windows 7 via tunneled RDP

December 27, 2011

I recently acquired a second hand Asus laptop from my work,
which will be performing a handful of responsibilities on one of my networks.

This is the process I took to set up OpenSSH on Cygwin running on the Windows 7 box.

I won’t be going over the steps to tunnel RDP as I’ve already done this in another post

Make sure your LAN Manager Authentication Level is set as high as practical,
keeping in mind that some networked printers using SMB may struggle with these permissions set too high.

  1. Windows Firewall -> Allowed Programs -> checked Remote Desktop.
  2. System Properties -> Remote tab -> turn radio button on to at least “Allow connections from computers running any version of Remote Desktop”
    If you like, this can be turned off once SSH is set-up, or you can just turn the firewall rule off that lets RDP in.

CopSSH, which I used on my last set of Linux to Windows RDP via SSH set-ups, is no longer free,
so I’m not paying for something I can get for free, but with a little extra work involved.

So I looked at some other Windows SSH offerings

  1. freeSSHd which looked like a simple set-up, but it didn’t appear to be currently maintained.
  2. OpenSSH the current latest version of 5.9 released September 6, 2011
    A while back OpenSSH wasn’t being maintained. Looks like that’s changed.

OpenSSH is part of Cygwin, so you need to create a
c:\cygwin directory and download setup.exe into it.

    1. Right click on c:\cygwin\setup.exe and select “Run as Administrator”.
      Click Next.
    2. If Install from Internet is not checked, check it. Then click Next.
    3. Accept the default “Root Directory” of C:\cygwin. Accept the default for “Install For” as All Users.
    4. Accept the default “Local Package Directory” of C:\cygwin.
    5. Accept the default “Select Your Internet Connection” of “Direct Connection”. Click Next.
    6. Select the closest mirror to you. Click Next.
    7. You can expand the list by clicking the View button, or just expand the Net node.
    8. Find openssh and click the Skip text, so that the Bin check box for the item is on.
    9. Find tcp_wrappers and click the Skip text, so that the Bin check box for the item is on.

If you selected tcp_wrappers and get the “ssh-exchange-identification: Connection closed by remote host” error,
you’ll need to edit /etc/hosts.allow and add the following two lines before the PARANOID line.

ALL: 127.0.0.1/32: allow
ALL: [::1]/128: allow

These lines were already in the /etc/hosts.allow

(Optional) Find the package “diffutils” and click on the word “Skip” so that an x appears in Column B,
then find the package “zlib” and click on the word “Skip” (it should be already selected) so that an x appears in Column B.

Click Next to start the install.
Click Next again at the Resolving Dependencies screen, keeping the default “Select required packages…” checked.
At the end of the install, I got the “Program Compatibility Assistant” stating that this program might not have installed correctly.
I clicked “This program installed correctly”.

Add c:\cygwin\bin to your system’s Path environment variable:
edit the Path and append ;c:\cygwin\bin

Right click the new Cygwin Terminal shortcut and Run as administrator.
Make sure the following files have the correct permissions.

/etc/passwd -rw-r--r--
/etc/group -rw-r--r--
/var drwxr-xr-x

Create a sshd.log file in /var/log/

touch /var/log/sshd.log
chmod 664 /var/log/sshd.log

Run ssh-host-config

  1. Cygwin will then ask Should privilege separation be used? Answer Yes
  2. Cygwin will then ask Should this script create a local user ‘sshd’ on this machine? Answer Yes
  3. Cygwin will then ask Do you want to install sshd as service? Answer Yes
  4. Cygwin will then ask for the value of CYGWIN for the daemon: []? Answer ntsec tty
  5. Cygwin will then ask Do you want to use a different name? Answer no
  6. Cygwin will then ask Please enter a password for new user cyg_server? Enter a password twice and remember it.

Replicate your Windows user credentials with Cygwin:

mkpasswd -cl > /etc/passwd
mkgroup --local > /etc/group

I think (although I haven’t tried it yet) that when you change your user password, which you should do regularly,
you should be able to run the above two commands again to update it.
As I haven’t done this yet, I would take a backup of these files before running the commands.
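For example (a sketch; the .bak names are just my choice):

cp /etc/passwd /etc/passwd.bak
cp /etc/group /etc/group.bak
mkpasswd -cl > /etc/passwd
mkgroup --local > /etc/group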

To start the service, type the following:

net start sshd

Test SSH

ssh localhost

When you make changes to /etc/sshd_config,
you’ll need to make them as the file’s owner, cyg_server.
I added the following line to the end of the file:

Ciphers blowfish-cbc,aes128-cbc,3des-cbc

This is because it sounds like Blowfish runs faster than the default AES-128.

There is also a collection of changes to be made to /etc/sshd_config,

for example:

  • Change LoginGraceTime to as small a number as possible.
  • PermitRootLogin no
  • Set PasswordAuthentication to no once you get key pair auth set-up.
  • PermitEmptyPasswords no
  • You can also setup AllowUsers and DenyUsers.

The options available are here in the man page (link updated 2013-10-06).
This is also helpful, I used this for my CopSSH setup.
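As a rough sketch (not my literal config), the relevant lines of a hardened /etc/sshd_config might end up looking something like this; adjust the values to your environment:

LoginGraceTime 30
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
AllowUsers MyUserName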

Open TCP port 22 on your firewalls, and close the RDP port once SSH is working.

As my blog post says:
ssh-copy-id MyUserName@MyWindows7Box

I already had a key pair with a pass phrase, so I used that.
Now we should be able to SSH without being prompted for a password, using key pair auth instead.

http://pigtail.net/LRP/printsrv/cygwin-sshd.html
http://www.petri.co.il/setup-ssh-server-vista.htm
http://www.scottmurphy.info/open-ssh-server-sshd-cygwin-windows

Centerim, Irssi, Alpine on Screen

November 27, 2011

I’ve recently acquired access to my own shell from anapnea.net

This allows me to carry out development, testing, and any on-line activity anonymously.
All via SSH.

One of the tasks I needed to do
was to set my date/time to my local time zone.
Rather than set the system wide time,
because there are many users on this machine,
I needed to set the time zone on a per user basis.

The behaviour of your interactive shell is defined by your ~/.bashrc and ~/.bash_profile files.
Edit one of these files and append or alter the TZ as follows:

 vim /home/myuser/.bashrc

where myuser is just that, my user name.

Append the following:

export TZ="/usr/share/zoneinfo/yourcountry"

Where yourcountry is one of the country files in /usr/share/zoneinfo/
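For example, to use New Zealand time (assuming the NZ zoneinfo file exists on the host, as it does on most Linux systems):

export TZ="/usr/share/zoneinfo/NZ"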

Screen

Screen is a Linux shell session manager.
It’s great, because you can leave multiple sessions running and switch between them,
all in a single console.
Then you can just detach from screen, leaving your programmes running on it.
Terminate your SSH session, and re-connect from another machine,
re-attach to screen, and carry on working where you left off,
with your programmes all still running.

This is a quick run down on what it is and how to use it.

Create a new screen session:

screen

List screens:

screen -ls

Detaching:

Ctrl-a, d

To re-attach to a screen:

screen -r

Or

screen -raAd

Reattach (-r), do some sizing stuff (a,A), and detach (d) before reattaching if necessary.
If your screen session is attached elsewhere, using -raAd will detach that session, and reattach it here.

Cycle through each screen:

Ctrl-a n
Ctrl-a p

You can kill a screen by typing exit.

Terminate a screen:

screen -X -S ID kill

Where ID is the id of the screen you want to terminate.
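For example, to find and kill a detached session (the session ID and host name below are illustrative):

screen -ls
# There is a screen on:
#         12345.pts-0.anapnea     (Detached)
screen -X -S 12345 kill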

Useful links
http://quadpoint.org/articles/irssi
Full list of commands and their usage http://www.math.utah.edu/docs/info/screen_5.html

CenterIM

CenterIM is a Linux command line instant messenger client.
Getting started with CenterIM

Setting up GTalk in CenterIM:
Assuming you have centerim installed.
cd into your .centerim directory and edit the config file.

vim config

Add the following to the file:

jab_nick MyUser@gmail.com
jab_pass
jab_server talk.google.com:5223
jab_osinfo 1
jab_prio 4
jab_ssl 1

Enter vim’s command mode by pressing the Esc key.

:wq

This will write and quit.
Run centerim:

centerim

or better, run it in screen…

screen centerim

Press F4 for the general menu.
Select Accounts..

Under the Jab protocol, you will now see the connection details reflected.

Irssi

Irssi is a Linux command line IRC client.
When I use Irssi,
these are the links I use most commonly.
http://pthree.org/2010/02/02/irssis-channel-network-server-and-connect-what-it-means/
http://quadpoint.org/articles/irssi
http://linuxreviews.org/software/irc/irssi/#toc6
IRC command reference http://www.ircle.com/reference/commands.shtml
and full help for commands http://static.quadpoint.org/irssi-docs/help-full.html
For the beginner
The Full manual
Splitting Windows

I’ll probably end up adding more to this.

Alpine

Alpine is a Linux command line mail client.
Here
is an accurate guide on how to setup your GMail accounts using IMAP in alpine.
I used this for my first account setup.

When you need to setup multiple accounts,
you have to do a little bit more configuration.
I followed this.

Then create a Role.

I run all my external shell apps on screen.
So I run the following command…

screen alpine

You should be presented with the Main Menu.

Press S (Setup), L (collectionLists)

Press A (Add Cltn)
Add a Nickname that makes sense to you to reference your account by,
and the Server, as you did in the initial account setup,
save as you did in the initial setup.
Your Setup Collection List should look similar to the following.

From the Main Menu, press S (Setup), C (Config).
Scroll down until you find “Enable Incoming Folders Collection” and turn the radio button on.

Press E (Exit), and Y (Yes) to the Commit changes prompt.
You should be back on the Main Menu now.
Now you need to add a role for each account you’ve just setup.
Press S (Setup), R (Rules).

Then choose R (Roles).
Press A (Add).
Setup each role like the following.

Press E (Exit Setup), and Y to the save prompt.

Again in the S (Setup), C (Config).
Some of the settings that need to be turned on are:

  • alternate-compose-menu is optional
  • confirm-role-even-for-default

I set the following fields, so they show up in new messages you are composing.

Create a new message

There are a few ways you can compose a new email message.
This depends on where you start the process from.
If you’re in one of your mail folders,
you can press C (Compose).
You’ll be asked which role you would like to use to compose the message.
These are the roles you set up before,
each one applies to one of your email accounts.
Once you choose one,
you’ll see a template with the fields you set up before.
Fill out the fields.
When you’re done composing your message,
press Ctrl-X to send.

Move a message from folder to another folder

  1. Select the message you want to move.
  2. Press the S (Save) key.
  3. If you have multiple email accounts, press Ctrl+N (Next Collection) or Ctrl+P (Prev Collection) to cycle through your accounts.
  4. Press Ctrl+T (To Folders).
    You will be presented with the collection of your email folders for your account.
  5. Select which folder you want to put your message into.
  6. Press enter, unless you have to move the message down another level.
  7. If this is the case, press ‘/’ (the slash key).
  8. Then either the Tab key twice, or Ctrl+X (List matches).
    This will show you the next layer of folders to choose from.
    Either select the folder you want to move your message to and press Enter,
    or to go to another level, repeat steps 5 to 8.
  9. Once you’ve located the target folder (and selected it) to save (move) your message to,
    you’ll be provided with the path that you are about to save to.
  10. Press Enter. The message [Saving DONE] will be displayed.
    Your message is now moved.
    When you return to the source folder,
    you will be asked if you want the message that is there deleted,
    so that you have moved, not copied, the message.
    You have the option to either copy or move.

Multi selecting (Selecting multiple emails)

  1. Select the email and press the ‘;’ (semicolon) key.
  2. You will be prompted to choose selection criteria.
    I selected C (just select current message).
    When you do this, zoom will come into effect.
    So you will only see the currently selected messages.
  3. To un-zoom, so you can see all messages from the folder you were in, just press Z.
    You will now see an ‘X’ next to the messages you have multi selected.
  4. Press the Z key again to zoom to the selected messages.
  5. Press A (Apply), then select the command you want to apply and that’s it.
To open a link in a message:

  1. Select the link.
  2. Press Enter.
  3. Right click the link and select “Open link”.

Enable Spell Check in Alpine

First check that it’s not enabled

When composing a message, press Ctrl+T.
If you don’t get spell check, you’ll need to do the following.

Make sure you have aspell installed

On a debian based system, you can run

dpkg-query -l '*aspell*'

This will show you the aspell components installed.

Or, more precisely, just query the aspell package itself:

dpkg -l aspell

Once you find it, you can run

dpkg-query -W -f='${Status} ${Version}\n' aspell

This will tell you whether or not it’s installed.
If it’s not, you’ll need to install it:

sudo apt-get install aspell

From the Main menu in Alpine, S (Setup), C (Config).
Look for “spell”.
You can press ‘W’ to search and type in “spell” without the quotes.
Press Enter.
The first option you will find should be “Spell Check Before Sending”.
You can turn this on if you like.
Press ‘W’ again, accept the default, press Enter.
You should now see the option “Speller”.
Press Enter, and type in

aspell -c

Press Enter to accept.
Press ‘E’ to exit config.
Press ‘Y’ to the Commit changes prompt.

If you run the following at the command prompt

aspell

You should get a little information about what the -c switch does.

rsync over SSH from Linux workstation to FreeNAS

March 6, 2011

I’ve been intending for quite some time to set up an automated, or at least a thoughtless
one-click, backup procedure from my family members’ PCs to a file server.
Now if you put files/directories in the place we are going to rsync to, and then run the command we’re going to set up, those new files/directories will be deleted.
So in this case, we have a master/slave model.
You can also set it up so that no files/directories are automatically deleted (see the sketch below). That’s not what I’m doing here though.
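The difference between the two models is essentially whether you pass --delete. A minimal sketch, with made-up paths:

# Mirror: files/directories on the destination that are not on the source get deleted.
rsync -a --delete /path/to/source/ /path/to/dest/

# Additive: nothing is ever deleted from the destination.
rsync -a /path/to/source/ /path/to/dest/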

Links I found helpful

rsync man page
SSH man page
Ken Fallon’s “A private data cloud” podcast

I wanted to set up the script to mirror the local disk, or several directories on it, to the file server.
So the local disk would be the master.
I often use the file server as an intermediate step to pass files around my network.
So I just need to be aware not to put files in the directories that are going to get written to on the file server, but to use alternative ones instead,
otherwise they will be overwritten when rsync runs.

Objective

Provide a regular (on the hour) or one click sync of files (once the fileserver is on a decent UPS) from:

  1. My external drive to the file server.
  2. My wife’s thumb drive to the file server.
  1. /media/EXTERNAL/Applications to MyFileServer/MyShare/ExternalBackup/Applications
    /media/EXTERNAL/Documents to MyFileServer/MyShare/ExternalBackup/Documents
    /media/EXTERNAL/media/Books to MyFileServer/media/Books
    /media/EXTERNAL/media/EducationalMedia to MyFileServer/media/EducationalMedia
    /media/EXTERNAL/media/Images to MyFileServer/media/Images
  2. /media/disk to MyFileServer/WifesShare/disk

…all of the above via SSH.

Until the file server is powered by a UPS I can set up shutdown scripts for
(so that when we’re not about, it will still shut down gracefully on a power outage),
we’ll be running the rsync scripts manually,
as I don’t want an hourly script syncing data to the file server when the power gets cut.
Why? Because RAID arrays often get destroyed by being written to when they lose power.
Currently, if we lose power, the file server is on a small UPS, so we can halt any sync scripts interacting with the file server before she goes down,
and manually shut the file server down gracefully.

You need to take good precautions with rsync, as you can erase data easily.
I like to use --dry-run (or -n) until I’m happy that the command I’ve got is going to actually do what I think it is.
You can use -v, the verbose option, with levels of verbosity up to -vvv for debugging rsync. Generally -vv is heaps.
Archive mode -a is actually -rlptgoD. Check the man page for details.
--delete deletes extraneous files from the dest dirs that are not on the source.
--force will delete directories from the dest even if they are not empty.
It’s a good idea to set up some test directories for source and dest.
You can also (if you want to be extra careful) mount your dest and source, or just your dest directory, read-only.
Put a copy of some files and directories in each, and make some changes to source and/or dest.
Then once you run the command, you can check that the sync has done what you expected.

My initial test command after I created the rsyncTestSource and rsyncTestDest dirs:

rsync -vva --dry-run --delete --force /media/EXTERNAL/rsyncTestSource/ /media/EXTERNAL/rsyncTestDest/

Perform checks.

Then remove the --dry-run.

Perform checks again.
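One simple way to perform the checks is to diff the two trees recursively, using the test directories from above:

diff -r /media/EXTERNAL/rsyncTestSource /media/EXTERNAL/rsyncTestDest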

Now to the file server:

You’ll have to set up SSH on your file server, if you haven’t already.
You can follow the steps in my post here for that if you like…

The initial command I used:

rsync -vva --dry-run --delete --force -e 'ssh -p 2222' /media/EXTERNAL/rsyncTestSource/ myUser@myFileServer:/mnt/FileServer/myUserDir/rsyncTestDest/

You can specify the -e option followed by the remote shell.
rsync must be installed on both source and dest machines.
By default FreeNAS already has rsync, as does a standard debian install.

Then remove the --dry-run.

Perform checks again.

Now for the first real backup, add the dry run to start with:

rsync -vva --dry-run --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Applications/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Applications/

Then remove the --dry-run.

Perform checks again.

I added a collection of these commands, one per directory, to a file (rsync_EXTERNAL_to_fileserver) and saved it to my ~ directory; a sketch of it follows.
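A rough sketch of what that file might look like, using the directory pairs from the objective above (the destination paths are my assumptions; adjust them to your shares):

#!/bin/bash
# rsync_EXTERNAL_to_fileserver: mirror directories on the external drive to the file server.
rsync -vva --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Applications/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Applications/
rsync -vva --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Documents/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Documents/
rsync -vva --delete --force -e 'ssh -p 2222' /media/EXTERNAL/media/Books/ myUser@myFileServer:/mnt/FileServer/media/Books/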

Turn the executable bit on.
Make sure owner and group is correct.

chmod 750 rsync_EXTERNAL_to_fileserver
chown MyUserName:MyGroupName rsync_EXTERNAL_to_fileserver

Add a command drawer to the task bar.
Add a Custom Application Launcher to the drawer that points to the rsync_EXTERNAL_to_fileserver file.
You can even add an image that makes sense to the drawer.
Mine looks like this, with 1 command launcher.

Ok, it’s 2 clicks for me, but you don’t have to use a drawer 🙂

There are also other ways to do this.
Like this video.

Installation of SSH on 64bit Windows 7 to tunnel RDP

August 26, 2010

This post covers two scenarios.

Scenario one

With this setup I have a Windows 7 VM (the server) on the same network segment as the client PC which will be taking over any work I would normally do on my Windows XP box.
My existing XP box is used for any development that is easier to do on a Windows machine than a *nix machine.
Mostly .Net development.

Scenario two

Includes tunneling to a NATed Windows 7 machine on a different network

Access to my existing Windows XP box is by way of an RDP session tunneled through SSH,
the SSH link being established from one of my Debian eeepc’s (the computer I use most of the time) to the existing Windows XP machine.

I used OpenSSH for Windows (http://sshwindows.sourceforge.net/, which is no longer supported) on the existing Windows XP machine,
though I couldn’t get key pair authentication working when I set it up.

I thought I’d give OpenSSH a try on the Windows 7 machine and see how far we could get.
Once I’d followed all the directions in the ssh readme.txt and compared with the setup on my existing Windows XP box,
the OpenSSH Server service wouldn’t start.
I followed the directions here,
tried everything I could think of, and still couldn’t get the service to start.

So, going on others’ advice, I decided to give copSSH a try, as it is an implementation of OpenSSH that is currently being maintained.
Thanks to Tevfik Karagülle.
This worked out well and was a very easy setup.
The version of CopSSH used for this was 4.1.0 from here.

Initial sites used for copSSH install

http://www.sevenforums.com/customization/19864-ssh-windows-7-a.html
http://www.itefix.no/i2/copssh

Installation of copSSH

When you add a user to the CopSSH Control Panel, make sure you run the CopSSH Control Panel as an administrator (probably best to run it as administrator for any actions),
else the user appears to be added, but when you try to SSH to the server, you get something along the lines of:
Unable to authenticate
Failed password for invalid user
See http://www.itefix.no/i2/node/12494#comments

Setup for the tunnel

Create a file in your ~ dir, TunnelToWin7Box for example, and put the following command in it.

ssh -v -f -L 3391:localhost:3389 -N MyUserName@MyWindows7Box

Turn the executable bit on.
Make sure owner and group is correct.

chmod 750 TunnelToWin7Box
chown MyUserName:MyGroupName TunnelToWin7Box

Add a command drawer to the task bar.
Add a Custom Application Launcher to the drawer that points to the TunnelToWin7Box file.
You can even add an image that makes sense to the drawer.
Mine looks like this, with 3 command launchers…

The first port there can be any port not currently in use.
The second port is the port that RDP listens on in Windows.
You also need to add an inbound rule to open port 22 or a port of your choosing on the Windows Firewall.
Also close the Remote Desktop port TCP 3389 on the Windows box.
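
With the tunnel up, point your RDP client at the local end of the tunnel rather than at the Windows box itself. I use Gnome-RDP (covered below), but with rdesktop, for example, it would be:

rdesktop localhost:3391
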
If the server you are trying to tunnel to is behind a NAT and not on your network (i.e. you are trying to tunnel to your work machine from home, for example), there is a little more involved in setting up the firewall rule, plus a change to the sshd_config.
You’ll need to add an inbound rule; I called it SSH. In the Programs and Services tab, I selected “All programs that meet the specified conditions”.
For the Service Settings, the only option that would work was “Apply to services only”. I thought it would be best to select only the ssh service, but this wouldn’t allow SSH in.
The General tab just had Enabled on. The Computers tab was untouched. Users and Scope were untouched. On the Advanced tab I only needed to select the Private check box.
On the “Protocols and ports” tab, the Protocol type is TCP, the Local port is 22, and the Remote port is All Ports.
Edit the C:\Program Files (x86)\ICW\etc\sshd_config as an administrator.
Add the line… GatewayPorts yes
Or uncomment it and set it to yes rather than no if it already exists.

Command I used for the NATed scenario

ssh -v -f -L 3392:localhost:3389 -N User@YourWorksGateway.com -p 2222

The port is the one your network admin has set up to forward to the machine you want to tunnel to.

When I ran the command to try to establish the tunnel, I was getting an error message.
I made a post here.
So I un-installed copSSH and re-installed a few times, trying different things.
Before the last un-install, I removed the users that copSSH adds, because it doesn’t remove them on un-install,
and deleted the OpenSSHServer service using the “sc delete OpenSSHServer” command in a cmd.exe shell running as administrator.
I installed again using all defaults.
It appears that even though SSH gives the message that it won’t tunnel, if you then try to open the port forwarded RDP session, it works.
In saying that, sometimes it didn’t work.
This happens if you click the command launcher more than once and end up with more than one tunnel established,
in which case you just kill one of them and you’re away laughing.

Setup your Remote Desktop Session now

I’ve been using Gnome-RDP for my RDP sessions.
Set the session up to look like this.

Once done, click Connect, and you should have your RDP session from your Linux box to your Windows 7 box secured courtesy of SSH

Setup Key pair authentication

On the Debian eeepc, or any other Debian machine for that matter

Copy the existing public key I used for SSHing to other servers to MyWindows7Box.
This is considerably more difficult if you want to scp the key to a NATed machine on another network.
Read the likes of this if you’re interested.
It’s the public key, so sniffing it is not such a big deal.

scp ~/.ssh/id_rsa.pub MyUserName@MyWindows7Box:

Make sure you have the colon at the end of the above command, else the file won’t be copied.
You may receive a prompt that the authenticity of the server you are trying to scp to can’t be established and you want to continue.
The server you are trying to connect to is added to the list of known hosts on the local machine.
That’s /home/MyUserName/.ssh/known_hosts.
I didn’t get that with scp’ing to MyWindows7Box because my known_hosts already knew about MyWindows7Box from my previous OpenSSH install.

On MyWindows7Box

In the dir C:\Program Files (x86)\ICW\home\MyUserName\.ssh\
I copied (renamed) the authorized_keys file to authorized_keys-OrigWithInstall.
I wasn’t allowed to edit the authorized_keys file for some reason, so I opened a Bash shell that comes with copSSH
and edited ~/.ssh/authorized_keys with nano, deleting the public key.
When I tried to open this file in file explorer, it didn’t appear to have been edited.
This is because the file I thought I had edited (C:\Program Files (x86)\ICW\home\MyUserName\.ssh\authorized_keys)
was actually C:\Users\MyUserName\AppData\Local\VirtualStore\Program Files (x86)\ICW\home\MyUserName

From C:\Program Files (x86)\ICW\home\MyUserName\.ssh (or at least what I thought was there),
the public key needs to be put into the list of authorized clients that may connect to the ssh daemon.
You can do this using the Bash shell that comes with copSSH.

$ cat id_rsa.pub >> .ssh/authorized_keys

You can now delete the id_rsa.pub on the target machine.

Copied C:\Users\MyUserName\AppData\Local\VirtualStore\Program Files (x86)\ICW\home\MyUserName\authorized_keys
to C:\Program Files (x86)\ICW\home\MyUserName\.ssh\authorized_keys

With scenario two, there were a few differences,
some of which I’m thinking were probably due to a more recent version of CopSSH (4.1.0).
For starters there was no authorized_keys file anywhere, so I created one (in C:\Program Files (x86)\ICW\home\User\.ssh).
As stated above, it’s considerably more difficult to scp the id_rsa.pub from a remote pc to a NATed server.
I put id_rsa.pub in C:\Program Files (x86)\ICW\home\User\.ssh along with the authorized_keys I created, and from the bash shell
(accessible from the Copssh folder in the start menu), whose root dir is C:\Program Files (x86)\ICW\,
ran the cat command shown above.

This is probably a better way to copy the public key:

ssh-copy-id MyUserName@MyWindows7Box

Anapnea showed me this.

I could now connect via key pair auth.

I made the usual changes to C:\Program Files (x86)\ICW\etc\sshd_config on MyWindows7Box,
i.e. turned root access off, turned password auth off,
and set
AllowUsers MyUserName
(although this is done by the CopSSH Control Panel in version 4.1.0).
I think a service restart is required to reload the changes.
When you make changes to the sshd_config, you’ll need to do them as an administrator (similar to how you would on a *nix system as root).
This site has an example of setting up SSH to be even more secure by modifying the sshd_config.
It’s specific to copSSH.
There are many items on the net that show and describe the options when it comes to the sshd_config.
The available options are in the man page http://unixhelp.ed.ac.uk/CGI/man-cgi?sshd_config+5

Enjoy!

A few steps to secure a FreeNAS server

April 6, 2010

Change the web gui admin user name in System|General under WebGUI->Username.

Change the default password in System|General|Password.

Set up key pair authentication for SSH and secure FreeNAS.

Clean out any existing files in ~/.ssh on your client machine.
At command prompt on client:

$ ssh-keygen -t rsa

Agree to the location where ssh-keygen wants to store the keys… ~/.ssh
Enter a pass phrase twice to confirm. This is the pass phrase that protects your private key.
The keys are now in ~/.ssh
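You should now see the key pair (these file names are the ssh-keygen defaults for -t rsa):

$ ls ~/.ssh
id_rsa  id_rsa.pub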

I created the home directory in /mnt/FileServer and chown’d it to root:wheel.

mkdir /mnt/FileServer/home
chown root:wheel /mnt/FileServer/home

Created the myuser directory in /mnt/FileServer/home.
In the web UI, under Access|Users|Edit for my user, I set the Home directory to /mnt/FileServer/home/myuser/
The reason we can’t use the default ~ directory of /mnt is because everything outside /mnt/FileServer (the mount point of my RAID) is part of the FreeNAS ROM;
it’s destroyed on each reboot. Matt Rude brought this to my attention here.

Log in to FreeNAS using SSH

ssh myuser@nameoffileserver

Create the .ssh directory in /mnt/FileServer/home/myuser/,
and as myuser, create the authorized_keys file in /mnt/FileServer/home/myuser/.ssh if it doesn’t already exist:

$ mkdir -p ~/.ssh
$ touch ~/.ssh/authorized_keys

Copy the public key to the file server

scp ~/.ssh/id_rsa.pub myuser@nameoffileserver:

Make sure you have the colon at the end of the above command, else the file won’t be copied.
Type yes to the prompt that the authenticity of the server you are trying to scp to can’t be established and you want to continue.
The server you are trying to connect to is added to the list of known hosts on the local machine.
That’s /home/myuser/.ssh/known_hosts.
On the server, from the ~ directory (that’s /mnt/FileServer/home/myuser in our case),
the public key needs to be put into the list of authorized clients that may connect to the sshd.

$ cat id_rsa.pub >> .ssh/authorized_keys

Although this is a better way to copy the public key (with the user and host adjusted for this server):

ssh-copy-id myuser@nameoffileserver

We need to change some permissions,
as your home directory on the server (/mnt/FileServer/home/myuser) may have the wrong ones. We need to remove the write perms for group and other.

$ su root
# chmod go-w /mnt/FileServer/home/myuser

/mnt/FileServer/home/myuser/.ssh was already 755, so

# chmod go-w /mnt/FileServer/home/myuser/.ssh

had no effect.
/mnt/FileServer/home/myuser/.ssh/authorized_keys needed to be chmod 600. In fact, anything/everything in the ~/.ssh dir (if there is anything else) needs to be chmod 600.
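That is (still as root):

# chmod 600 /mnt/FileServer/home/myuser/.ssh/authorized_keys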

You also need to:

nameoffileserver:/mnt/FileServer/home/myuser/.ssh# chown myuser authorized_keys

We can now remove ~/id_rsa.pub from the server, now that the key is in ~/.ssh/authorized_keys:

$ rm ~/id_rsa.pub

Should now be able to log in using key pair authentication.

Turn password authentication off, and change the default SSH port, in the web gui under Services|SSH.

Turn SSL on for access to the web gui, in System|General Setup.

When I open up the FreeNAS server to the internet, it’ll be by way of SSH tunnel rather than just opening up the firewall to https to the server.

Looks like there is a pretty simple guide here to do that.

Used the following resources:

http://www.learnfreenas.com/blog/
http://phanvinhthinh.blogspot.com/2010/02/how-to-secure-your-freenas-server.html
http://www.freenaskb.info/kb/?View=entry&EntryID=257
http://www.learnfreenas.com/blog/2009/07/22/how-to-connect-to-your-freenas-server-via-ssh-without-a-password-password-free-logins-via-public-key-authentication/
http://www.freebsd.org/doc/en/articles/committers-guide/ssh.guide.html