Archive for the ‘FreeBSD’ Category

Keeping Your Linux Server/s In Time With Your Router

March 28, 2015

Your NTP Server

With this set-up, we’ve got one or more Linux servers on a network that all want to be synced with the same upstream Network Time Protocol (NTP) server/s that your router (or whatever server you choose to be your NTP authority) uses.

On your router, or whatever your NTP server host is, add the NTP server pools. Now how you do this really depends on what you’re using for your NTP server, so I’ll leave this part out of scope. There are many NTP pools you can choose from. Pick one or a collection that’s as close to your NTP server as possible.

If your NTP daemon is running on your router, you’ll need to decide and select which router interfaces you want the NTP daemon supplying time to. You almost certainly won’t want it on the WAN interface (unless you’re a pool member) if you have one on your router.

Make sure you restart your NTP daemon.

Your Client Machines

If you have ntpdate installed, /etc/default/ntpdate says to look at /etc/ntp.conf which doesn’t exist without ntp being installed. It looks like this:

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.
NTPDATE_USE_NTP_CONF=yes

but you’ll see that it also has a default NTPSERVERS variable set which is overridden if you add your time server to /etc/ntp.conf. If you enter the following and ntpdate is installed:

dpkg-query -W -f='${Status} ${Version}\n' ntpdate

You’ll get output like:

install ok installed 1:4.2.6.p5+dfsg-3ubuntu2

Otherwise install the ntp package:

apt-get install ntp

The public NTP server/s can be added straight to the bottom of the /etc/ntp.conf file, but because we want to use our own NTP server, we add the IP address of our server that’s configured with our NTP pools to the bottom of the file.

server <IP address of your local NTP server here>

Now if your NTP daemon is running on your router, hopefully you have everything blocked on its interface/s by default and are using a white-list for egress filtering.

In which case you’ll need to add a firewall rule to each interface of the router that you want NTP served up on.

NTP talks over UDP and listens on port 123 by default.
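If your router happens to run iptables, the rule would look something like the sketch below (the LAN interface name and subnet here are hypothetical; a pfSense or web-UI based router will have its own way of expressing the same thing):

# Allow NTP queries from the LAN to the router's ntpd (hypothetical interface/subnet)
iptables -A INPUT -i eth1 -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT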

After any configuration changes to your ntpd make sure you restart it. On most routers this is done via the web UI.

On the client (Linux) machines:

sudo service ntp restart

Now issuing the date command on your Linux machine will provide the current time, yes with seconds.

Trouble-shooting

The main two commands I use are:

sudo ntpq -c lpeer

Which should produce output like:

            remote                       refid         st t when poll reach delay offset jitter
===============================================================================================
*<server name>.<domain name> <upstream ntp ip address> 2  u  54   64   77   0.189 16.714 11.589

and the standard NTP query program, followed by the as command at its prompt:

ntpq

Which will drop you at ntpq’s prompt:

ntpq> as

Which should produce output like:

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 15720  963a   yes   yes  none  sys.peer    sys_peer  3

Now in the first output, the * in front of the remote means the server is getting its time successfully from the upstream NTP server/s, which needs to be the case in our scenario. Often you may also get a refid of .INIT. which is one of the “Kiss-o’-Death Codes” and means “The association has not yet synchronized for the first time”. See the NTP parameters. I’ve found that sometimes you just need to be patient here.

In the second output, if you get a condition of reject, it’s usually because your local ntpd can’t access the NTP server you set up. Check your firewall rules etc.
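A quick way to check whether a client can actually reach your NTP server is a query-only probe with ntpdate; the -u flag uses an unprivileged source port so it shouldn’t clash with a running ntpd:

ntpdate -u -q <IP address of your local NTP server here>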

Now check all the times are in sync with the date command.

Procurement & Config of Sun Fire V240 & ALOM

October 25, 2014

This is the sequence of events I took to prepare a Sun Fire V240 for hosting pfSense which is a free and open source FreeBSD based enterprise grade routing solution for a client of mine.

Recently I was tasked with setting up a network with what I considered to be enterprise grade hardware and software as cheaply as possible. When I take on these sorts of tasks, security is forefront in my mind, so I often look toward components that are as open as possible and that don’t sport any known (to me at least) back-doors and are able to be easily upgraded and patched at little to no cost.

A requirement was clean shut-downs on power failure events at least for the critical servers.

Procured Kit

  1. APC Smart-UPS 5000 with batteries in good condition. Worth a little under $6k if you’re buying new. I wouldn’t buy new. If you shop around, these can be picked up at a fraction of that cost. From my experience the APC kit is some of the best UPS gear available.
    APC Smart-UPS 5000
  2. AP9630 UPS network management card $92 new. Most of the details around setting these UPS’s up I’ve already posted on. If you search my blog for “APC UPS” you’ll find it.
    APC AP9630
  3. Enterprise grade router/firewall:
    Sun Fire V240 (RISC architecture). 2 x UltraSparc-IIIi 1.5Ghz CPU. 4Gbit on-board Ethernet ports. Lights-out management port. 4GB RAM. 2U. Dual redundant PSU’s. 2 x 72GB Hot Swap 10k SCSI HDD’s. With rack mount rails. Currently going for around $1.5k on Ebay. Price paid: $160 incl shipping. I doubt you’d find anything of these specifications off the shelf for under a $1000. This is a lot of server for a very small amount of money.
    Sun Fire V240
  4. Firmware: pfSense. Free and open source.

Planning

As part of my planning I evaluated (again) whether or not free software routing solutions are actually up to the task of the enterprise. My research led me to believe some were… based on others that had already been down this route ( PTP 😉 ). Openness is a biggie for me. I like to know that eyes are on the software rather than it being closed up in a proprietary package.

I evaluated m0n0Wall, ipCop (Linux based), smoothwall and pfSense. pfSense had been used in quite a few large environments successfully. When I had made my decision on the firmware to use, I went through the hardware requirements and of course started looking for high quality second-hand gear.

For the router hardware I was going to need at least a 1GHz CPU, as I wanted to run Snort as my IDS/IPS, and PCI-X or PCI-e network adapters (which of course I didn’t need to worry about with the Sun Fire server). Snort needs 512MB RAM minimum. Preferably at least 1GB.

Gaining Access to the Sun Fire V240

Now I had no idea how the previous owner had set up the configuration of the ALOM (Advanced Lights Out Management). In fact I hadn’t administered a Sun Fire server before at all. On page 11 of the Sun Fire™ V210 and V240 Servers Getting Started Guide it states the following:

“The system console is directed to ALOM by default and is configured to show server console information on startup.
ALOM enables you to monitor and control your server over either a serial
connection (using the SERIAL MGT port), or Ethernet connection (using the NET MGT port).
For information about configuring an Ethernet connection, refer to the Sun Advanced Lights Out Manager Software User’s Guide.”

The NET MGT port can also be disabled, and in my case it turned out it was, but I’ll get to that later. I didn’t have a spare DB-9 to RJ-45 adapter lying around to wire it up and connect to the SERIAL MGT port.

Sun Fire V240 rear

Telnet?

(but didn’t get that far)

Since I was going to go down the path of trying to connect to the ALOM console via the NET MGT Ethernet port, I thought telnet would probably be the path of least resistance.

Page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide” stated the following:

“The 10-Mbyte Ethernet port enables you to access ALOM from within your company
network. You can connect to ALOM remotely using any standard Telnet client.”

On the V240, the ALOM Ethernet port is referred to as the NET MGT port.

Using a laptop with Kali Linux installed (because it has lots of great tools for network reconnaissance), running

ethtool eth0

told me that my NIC supported:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full

Wireshark?

Tried connecting directly to the NET MGT port with Wireshark running on my laptop. Didn’t get any packets from the device. At the time I thought it may have been because my laptop’s NIC was using 100baseT, but later on I found out that the NET MGT port was disabled.

Tried pinging the broadcast address (ping -b 255.255.255.255) then checked my ARP table (arp -a). No results that looked like what I was looking for. Of course this strategy would have taken quite some time to complete… and in my case it would have yielded no results anyway.

NMap?

I started with the private IPv4 address spaces. Using Wi-fi on my Kali box, tried the 16 bit block:

nmap -sn 192.168.*.*

Got a false positive of a cable modem. How did I work out that it was a false positive?

nmap -A <falsePositiveIPOfCableModem> # Gave me the model and everything I needed to know about the device to rule it out.

Next up the 20 bit block

nmap -sn 172.16.0.0/12
Nmap done: 1048576 IP addresses (0 hosts up) scanned in 108670.97 seconds

In earlier releases of nmap the -sn switch was known as -sP

I decided I needed to try and speed up the scan, so I connected directly to the V240 NET MGT port with a Cat5 patch cable (ethtool told me my laptop’s NIC had MDI-X on (force crossover mode)), and made sure my network card supported 10baseT, which the “Sun Advanced Lights Out Manager Software User’s Guide” told me it needed for the NET MGT port. Turns out the NET MGT port didn’t support 10baseT. Details a bit further down.

Added a static IP address to the /etc/network/interfaces. Currently it looked like:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet dhcp

So I commented out the auto wlan0 and iface wlan0 inet dhcp and added the following:

auto eth0
iface eth0 inet static
address 10.1.1.6
netmask 255.255.255.0
broadcast 10.1.1.255
#gateway 10.1.1.1 # Make sure you don't add a gateway, as we're connecting directly to the V240

followed by:

service networking restart

then changed managed=true to managed=false in my /etc/NetworkManager/NetworkManager.conf, so NetworkManager didn’t keep interfering with my interfaces.

I followed this with a

service network-manager restart

followed with ifconfig to make sure my network interface was using the correct IP address, netmask and broadcast. It wasn’t, so…

ifdown eth0
ifup eth0
ifconfig

Success, it now was.

Now to make sure my network card was communicating in a manner that the V240’s NET MGT port would understand.

Using ethtool

ethtool eth0

told me 10baseT was supported, but it also told me my current speed was 100Mb/s. So I tried changing the speed with

ethtool -s eth0 speed 10

and received Cannot advertise speed 10. So I made the following temporary changes (they’ll be lost on reboot), changing the duplex as well:

ethtool -s eth0 speed 10 duplex half

Now with a:

ethtool eth0

I got:

Speed: unknown!
Duplex: Unknown! (255)

So turned the auto negotiation off:

ethtool -s eth0 speed 10 duplex half autoneg off

Now with a:

ethtool eth0

I got:

Speed: 10Mb/s
Duplex: Half
Auto-negotiation: off
#and some other settings.

With these settings the NET MGT port didn’t have its green link LED on. So I kept playing with the settings. Turns out it would only work with speed 100 duplex full, contrary to page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide”.
These were the settings that gave me link:

Supported pause frame use: No #Don't think I fiddled with this.
Supports auto-negotiation: Yes
Advertised link modes: Not reported #Don't think I fiddled with this.
Advertised pause frame use: Symmetric #Don't think I fiddled with this.
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: full
Port: Twisted Pair #Don't think I fiddled with this.
PHYAD: 1 #Don't think I fiddled with this.
Transceiver: internal #Don't think I fiddled with this.
Auto-negotiation: off
MDI-X: on
Supports Wake-on: g #Don't think I fiddled with this.
Wake-on: d #Don't think I fiddled with this.
Current message level: 0x000000ff (255)
drv prove link timer ifdown ifup rx_err tx_err
Link detected: yes

I was now confident that if the Sun Fire V240 NET MGT port was enabled, we’d find its IP address if it was using one from the private space. It was time to try the last and largest private address space. Oh, I also used Wireshark to make sure nmap was doing what I expected on my laptop when I ran:

nmap -v -sn 10.0.0.0/8

I was a little confused to start with, as nmap told me Scanning 4096 hosts. After checking the CIDR (Classless Inter-Domain Routing) notation and the output nmap produced, I soon realised that nmap was doing the scanning in chunks. As there were going to be a lot of results, I set up the output to files:

nmap -v -sn -oA 'scan-%Y-%m-%d_%H-%M' 10.0.0.0/8

This produces the output in all three formats as discussed here.
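The -oA base name gives you .nmap, .gnmap and .xml files; the greppable .gnmap one is handy for pulling out any live hosts later on, with something like:

grep 'Status: Up' scan-*.gnmap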

SERIAL MGT Port?

This private address range was going to take a few days to scan, so I decided to have a poke at the SERIAL MGT port on the Sun Fire V240.

To use the SERIAL MGT port, an RJ-45 patch cable connected to a DB-9 adapter ($4.50 from globalpc) is required, unless you get the official Sun adaptor “530-3100-01”, or still have the one that came in the new box. So I splashed out and went with the $4.50 option. It cost me more in gas to get to the shop than the part itself. I wired it up according to page 25 of the “Sun Fire V210 and V240 Servers Installation Guide“.

RJ-45 to DB-9 Adapter Crossovers
SERIAL MGT Port Pin    Adapter (DB-9) Pin
1 (RTS) 8 (CTS)
2 (DTR) 6 (DSR)
3 (TXD) 2 (RXD)
4 (Signal Ground) 5 (Signal Ground)
5 (Signal Ground) 5 (Signal Ground)
6 (RXD) 3 (TXD)
7 (DSR) 4 (DTR)
8 (CTS) 7 (RTS)

Red wire in with green.


Installed minicom and setserial and did pretty much the same as I did here. Plugged the console cable in and tried to establish a connection.

Then found that by default ALOM only communicates through the SERIAL MGT port at startup (of ALOM, I thought), but it seems this also applies at power-on of the server.

At the {1} ok prompt, I typed #. (that’s hash followed by dot) to escape from the system console to the ALOM sc> prompt.

I then entered the showsc command and found that the NET MGT port was disabled.
I then ran a

usershow

to see which user accounts existed, and was prompted to set a password for the admin user.
“When you connect to ALOM for the first time, you are automatically connected as the admin account.”
So obviously the seller of the system had reset ALOM.


Also audited the user accounts, and the details on the permission levels are here.

Ran the following script. A nice little dialog from Ramesh here (see step 4) too.

setupsc
  • Turned NET MGT port on
  • Changed the default if_connection from none to ssh
  • Answered no to email alerts (only for logged in users)
  • Yes to configure the network interfaces
  • No to DHCP
  • Entered the IP address for the NET MGT port
  • Entered the netmask for the NET MGT port
  • Entered the gateway for the Net Mgt port
  • Should powerstate memory be enabled [y]? y
  • Enabled power on sequencing

Then we need to restart the ALOM to apply the new settings.

resetsc -y

If you still have minicom running, it’ll show you what happens during the boot sequence and then present you with a login prompt.


SSH

At this point I plugged the Ethernet cable from my test switch (10 Mbit/s capable) back into the NET MGT port of the Sun Fire V240 and tested that ALOM was responding on the IP address that I set the NET MGT port to.

ping <myNetMgtIP>

It was answering. So I attempted to SSH in on a different machine.

ssh admin@<myNetMgtIP>

I was presented with the host’s key fingerprint:

The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myExistingHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)?

I wanted to know I was connecting to what I thought I was connecting to, so answered no.
Then in minicom I queried the host key fingerprint:

ssh-keygen -l -t rsa

I was provided with the key fingerprint that matched what I was presented with when I attempted to SSH, so I knew I was actually communicating with the server I thought I was.

I then regenerated the host key:

ssh-keygen -r -t rsa

and was provided with the new key. A restart of the SSH daemon is required to load the new host key.

sc> restartssh

Then SSH in. Confirm when prompted that the host key matches the newly provided key.

ssh admin@<myNetMgtIP>
The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myNewHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<myNetMgtIP>' (RSA) to the list of known hosts.

Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

Sun(tm) Advanced Lights Out Manager <versionHere> ()

Please login: admin
Please Enter password: *********

sc>

We’re in!

At any time for a list of commands, you can type help.

logout
Connection to <myNetMgtIP> closed.

We’re out!

copying with scp

March 25, 2012

I was having some trouble today copying a file (1.5GB .iso) from a notebook to a file server.
The notebook I was using was running Linux Ubuntu.
The server FreeBSD.
I was trying to copy this file using SMB/CIFS via Nautilus.
I tried several times, it failed each time.
Then I thought, what are you doing… drop to the command line.

scp to the rescue

The command I used:

From the directory on my local machine I was copying the file from

scp -P <MyPortNumberHere> MyFile.iso <MyUserName>@<MyServer>:/Path/To/Where/I/Want/MyFile/ToGo/MyFile.iso

This also took about half the time to copy that SMB took, and the SMB copy didn’t even complete. Not to mention the transfer is secure (SSH).

Some additional resources

http://www.linuxtutorialblog.com/post/ssh-and-scp-howto-tips-tricks

http://amath.colorado.edu/computing/software/man/scp.html

Also don’t forget to check the man page out 😉

man scp

rsync over SSH from Linux workstation to FreeNAS

March 6, 2011

I’ve been intending for quite some time to set up an automated, or at least a thoughtless
one-click, backup procedure from my family members’ PCs to a file server.
Now if you put files/directories in the place we are going to rsync to, and run the command we’re going to set up, those new files/directories will be deleted.
So in this case, we have a master/slave model.
You can also set it up so that no files/directories are automatically deleted. That’s not what I’m doing here though.

Links I found helpful

rsync man page
SSH man page
Ken Fallon’s “A private data cloud” podcast

I wanted to set up the script to mirror the local disk, or several directories on it, to the file server.
So the local disk would be the master.
I often use the file server as an intermediate step to pass files around my network.
So I just need to be aware not to put files in the directories that are going to get written to on the file server, but use alternative ones instead.
Otherwise they will be overwritten when rsync runs.

Objective

Provide a regular (on the hour) or one click sync of files (once the fileserver is on a decent UPS) from:

  1. My external drive to the file server.
  2. My wife’s thumb drive to the file server.
  1. /media/EXTERNAL/Applications to MyFileServer/MyShare/ExternalBackup/Applications
    /media/EXTERNAL/Documents to MyFileServer/MyShare/ExternalBackup/Documents
    /media/EXTERNAL/media/Books to MyFileServer/media/Books
    /media/EXTERNAL/media/EducationalMedia to MyFileServer/media/EducationalMedia
    /media/EXTERNAL/media/Images to MyFileServer/media/Images
  2. /media/disk to MyFileServer/WifesShare/disk

——via SSH

Until the file server is being powered by a UPS I can set up shutdown scripts for,
so when we’re not about, it will still shut down gracefully on a power outage,
we’ll be running the rsync scripts manually,
as I don’t want an hourly script syncing data to the file server when the power gets cut.
Why? Because RAID arrays often get destroyed by being written to when they lose power.
Currently if we lose power, the file server is on a small UPS and we can halt any sync scripts interacting with the file server before she goes down.
We can then manually shut down the file server gracefully.

You need to take good precautions with rsync as you can erase data easily.
I like to use --dry-run or -n until I’m happy that the command I’ve got is actually going to do what I think it will.
You can use -v, the verbose option, with levels of verbosity up to -vvv for debugging rsync. Generally -vv is heaps.
Archive mode -a is actually -rlptgoD. Check the man page for details.
--delete deletes extraneous files from dest dirs that are not on the source.
--force will delete directories from dest even if not empty.
It’s a good idea to set up some test directories for source and dest.
You can also (if you want to be extra careful) mount your dest and source, or just your dest directory, read only.
Put a copy of some files and directories in each, and make some changes to source and/or dest.
Then once you run the command, you can check that the sync has done what you expected.

My initial test command after I created the rsyncTestSource and rsyncTestDest dirs:

rsync -vva --dry-run --delete --force /media/EXTERNAL/rsyncTestSource/ /media/EXTERNAL/rsyncTestDest/

Perform checks.

Then remove the --dry-run.

Perform checks again.

Now to file server:

You’ll have to, if you haven’t already, set up SSH on your file server.
You can follow the steps on my post here for that if you like…

The initial command I used:

rsync -vva --dry-run --delete --force -e 'ssh -p 2222' /media/EXTERNAL/rsyncTestSource/ myUser@myFileServer:/mnt/FileServer/myUserDir/rsyncTestDest/

You can specify the -e option followed by the remote shell.
rsync must be installed on both source and dest machines.
By default FreeNAS already has rsync, as does a standard debian install.

Then remove the --dry-run.

Perform checks again.

Now for the first real backup, add the dry run to start with:

rsync -vva --dry-run --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Applications/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Applications/

Then remove the --dry-run.

Perform checks again.

I added a collection of these commands to a file (rsync_EXTERNAL_to_fileserver) to run for each directory and saved to my ~ directory.
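The file is just a plain shell script with one rsync line per directory; a trimmed-down sketch (same options, port and paths as above) looks something like this:

#!/bin/bash
# Mirror the external drive's directories to the file server over SSH (port 2222 as above).
rsync -va --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Applications/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Applications/
rsync -va --delete --force -e 'ssh -p 2222' /media/EXTERNAL/Documents/ myUser@myFileServer:/mnt/FileServer/myUserDir/External-Backup/Documents/
# ...and so on, one line per directory from the objective above.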

Turn the executable bit on.
Make sure owner and group is correct.

chmod 750 rsync_EXTERNAL_to_fileserver
chown MyUserName:MyGroupName rsync_EXTERNAL_to_fileserver
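Once the file server is on the UPS, the hourly option from the objective could be as simple as a crontab entry on the workstation (the path here is just an example; point it at wherever you saved the script):

0 * * * * /home/MyUserName/rsync_EXTERNAL_to_fileserver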

Add a command drawer to the task bar.
Add a Custom Application Launcher to the drawer that points to the rsync_EXTERNAL_to_fileserver file.
You can even add an image that makes sense to the drawer.
Mine looks like this, with 1 command launcher.

Ok, it’s 2 clicks for me, but you don’t have to use a drawer 🙂

There are also other ways to do this.
Like this video.

Distributed Version Control the solution?

October 3, 2010

I’m starting to need a Version Control System at home for my own work, and the company I currently work for during the day could potentially benefit from a real Version Control System.

I’ve set out to do an R&D spike on what is available and what would best suit the above-mentioned needs.
I’ve looked at a large range of products available.

At this stage, due to my research and in talking to some highly regarded technical friends and other people about their experiences with different systems, I’ve narrowed them down to the following.

Subversion, Git and Mercurial (or hg)
Subversion is server based.
Git and hg are distributed (Distributed Version Control System (DVCS)).

The two types of VCS and some of their attributes.

Centralised (or traditional)

  • Is better than no version control.
  • Serves as a single backup.
  • Server maintenance can be time consuming and costly.
  • You should be able to be confident that the server has your latest changeset.

Distributed

  • Maintenance needs are significantly reduced, due to a number of reasons. One of which is… No central server is required.
  • Each peer’s working copy of the codebase is a complete clone.
  • There is no need to be connected to a central network. Which means users can work productively, even when network connectivity is unavailable.
  • Uses a peer-to-peer approach rather than a client-server approach that the likes of Subversion use.
  • Removes the need to rely on a single machine as a single point of failure.
    Although it is often a good idea to have a server that is always online and ready to accept changesets.
    As you don’t always know whether another peer has accepted all your changes or is online.
  • Most operations are much faster than the centralised model, as no network is involved.
  • Each copy of the repository effectively acts as a remote backup. Which has multiple benefits.
  • There is no canonical code base, only working copies.
  • Operations such as commits, viewing history and rolling back are fast, because there is no need to communicate with a central server.
  • A web of trust is used to merge code from disparate repositories.
  • Branching and Merging made easier.
  • No forced structure: a central server can be implemented or peers can control the codebase.
  • Although I don’t see huge benefits for a central server in my target scenario.
  • Buddy builds. A team member can pass a changeset to another member to try before committing to a central location.
    This would stop broken CI builds (see the sketch after this list).
  • There is a huge amount of flexibility with your layout.
  • With a well planned layout a Distributed Version Control System can do anything a Centralised system can do, with the additional benefit of easy merges.
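To make the peer-to-peer / buddy-build idea above concrete, here’s a minimal Mercurial sketch (the repository name, paths and the colleague’s hostname are all hypothetical):

hg init myproject                                  # start a repository on my machine
cd myproject
echo "first file" > README
hg add README                                      # track the new file
hg commit -m "First cut"                           # assumes ui.username is set in ~/.hgrc
hg clone . /tmp/experiment                         # cheap local clone for a throw-away experiment
hg pull ssh://colleague-box//home/dev/myproject    # pull a buddy's changesets to try before they hit CI
hg update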

In weighing up the pros and cons of distributed versus the centralised model.

I think for my target requirements,
a distributed system has more to offer in the way of time savings and hardware savings.
This page has a good explanation of the differences between Centralised and Distributed.
Here is a detailed list of comparisons of some of the more common systems.

Mercurial is ticking quite a few boxes for me.
Mercurial has a VisualStudio plug-in.
There is a GUI available for windows platforms and others that integrates Mercurial directly into your explorer.
It’s free, open, and being actively maintained.
Projects using Mercurial.

Mercurial is written in Python, which is another plus for me.
Binaries are freely available for Windows, GNU/Linux, Mac OS X, OpenSolaris.
The source is also available, so you can build it for most platforms.

Plenty of documentation here, plus the book.

Installation and Configuration. Covering Windows, Debian and more.
TortoiseHg has binaries for Windows and Debian, but only for Squeeze onwards by the look of it.
If you’re running Lenny, you can just use hg: apt-get install mercurial.
When I downloaded and installed the 64 bit version of TortoiseHg (v1.1.3 hg v1.6.3), it came with 4 comprehensive documents.

  1. Mercurial: The Definitive Guide 2010-02-21 as pdf
  2. TortoiseHg v1.1.3 Documentation in both pdf and chm
  3. Mercurial Command Reference

Very nice!
Turn off the indexing service on the working copies and repositories, and exclude them from virus scans.
You can also get TortoiseHg here (for Debian, TortoiseHg isn’t available for Lenny).
Click the Tutorial link for the Quick start guide to TortoiseHg.

Once installed, start working through the following links.
http://tortoisehg.bitbucket.org/manual/1.1/quick.html
http://mercurial.aragost.com/kick-start/basic.html

Comments or thoughts?

Setting up a NFS share in FreeNAS

May 16, 2010

This setup is quite different to how you would normally set up NFS on a *nix server.
I only use NFS in read-only mode due to security concerns with NFS.
There are very few options you can configure, and there is no point in modifying /etc/rc.conf or /etc/exports, or adding /etc/hosts.deny and /etc/hosts.allow,
as they will be removed on server reboot. Hopefully these options will be added in the future, or at least a work-around made available.
Ideally I’d like to add the

-mapall=myuser:myusergroup

option to the /etc/exports but there is no point as it’s not persisted to hard disk.

In the Web UI under Services|NFS, leave Number of servers at the default of 4 and check the enable box. This option will allow 4 concurrent users to be logged into the share.

In the Web UI under Services|NFS|Shares add a share with Path of /mnt/FileServer/myNFSshare Network 192.168.0.0/24

I had to set Map all users to root to Yes. This is the same as including the no_root_squash option that can be put in /etc/exports on a *nix box. Normally I’d choose root_squash, but that doesn’t work well for mounting at boot without the

-mapall=myuser:myusergroup

option in the /etc/exports
Set up my authorised network, and set All dirs and Read only to yes.

Added the following lines to /etc/rc.conf in FreeNAS as per this link

rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_flags="-r"

Didn’t need to add the line below to the client machines’ /etc/rc.conf, although this said I did.

nfs_client_enable="YES"

After I restarted the server, the

mountd_flags="-r"

line was removed and the /mnt/.ssh dir was removed.
I no longer had key pair auth for SSH.
So I had to go through the process of setting that up again.
The problem was any changes to /etc are not persisted to disk, so after a reboot they’re reset, as /etc is part of the FreeNAS ROM.
Matt Rude helped out with this

What I did was copy the /etc/rc.conf to my ~ which is /mnt/FileServer/home/myuser
Add the options again in /mnt/FileServer/home/myuser/rc.conf
Only the last option was actually not present and needed to be added.
Create a link from /etc/rc.conf to /mnt/FileServer/home/myuser/rc.conf

ln -s /mnt/FileServer/home/myuser/rc.conf /etc/rc.conf

Renamed the /etc/exports on the file server
Check the exports man page for the options…
Created an exports in /mnt/FileServer/home/myuser/ and added the following lines:

/mnt/FileServer/media -alldirs,ro -mapall=myuser:family -network 192.168.0.0 -mask 255.255.255.0
/mnt/FileServer/media -alldirs,ro -mapall=otheruser:family -network 192.168.0.0 -mask 255.255.255.0

Link the /etc/exports to /mnt/FileServer/home/myuser/exports

ln -s /mnt/FileServer/home/myuser/exports /etc/exports

None of the above symlinks worked, as they are removed on server reboot.
So basically the only options you have are on the Services|NFS web UI.

From here I created the /mnt/myfileserver/media directory on my client machines and set the myfileserver and media dir ownership and perms to:
/mnt/myfileserver was drwxrw---- myuser myusergroup
/mnt/myfileserver/media was drwxr-x--- myuser users
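Before mounting, you can sanity-check what the server is actually exporting with showmount (assuming it’s installed on the client):

showmount -e myfreenasservername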

Tried to mount the exported nfs share:

# mount myfreenasservername:/mnt/FileServer/media /mnt/myfileserver/media

This worked. So unmounted it.

# umount /mnt/myfileserver/media

Updated the /etc/fstab on the client machines so myfreenasservername:/mnt/FileServer/media would be mounted to /mnt/myfileserver/media on the client machines at boot.
Add this to your client machines’ /etc/fstab:

myfileservername:/mnt/FileServer/media /mnt/myfileserver/media nfs ro,hard,intr 0 0

A few steps to secure a FreeNAS server

April 6, 2010

Change the web gui admin user name in System|General under WebGUI->Username.

Change the default password in System|General|Password.

Setup key pair authentication for SSH and secure FreeNAS.

Clean out any existing files in ~/.ssh on your client machine.
At command prompt on client:

$ ssh-keygen -t rsa

Agree to the location where ssh-keygen wants to store the keys… ~/.ssh
Enter a pass phrase twice to confirm. This is the pass phrase for the private key.
Keys are now in ~/.ssh

I created the home directory in /mnt/FileServer and chown’d it to root:wheel.

mkdir /mnt/FileServer/home
chown root:wheel /mnt/FileServer/home

Created the myuser directory in /mnt/FileServer/home.
In the web UI Access|Users|Edit for my user, I set the Home directory to /mnt/FileServer/home/myuser/
The reason we can’t use the default ~ directory of /mnt is because everything in front of /mnt/FileServer (the mount point of my RAID) is part of the FreeNAS ROM.
It’s destroyed on each reboot. Matt Rude brought this to my attention here
Log in to FreeNAS using SSH

ssh myuser@nameoffileserver

Create the .ssh directory in /mnt/FileServer/home/myuser/
As myuser, create the authorized_keys file in /mnt/FileServer/home/myuser/.ssh if it doesn’t already exist:

$ touch authorized_keys

Copy the public key to the file server

scp ~/.ssh/id_rsa.pub myuser@nameoffileserver:

Make sure you have the colon at the end of the above command, else the file won’t be copied.
Type yes to the prompt that the authenticity of the server you are trying to scp to can’t be established and you want to continue.
The server you are trying to connect to is added to the list of known hosts on the local machine.
That’s /home/myuser/.ssh/known_hosts
On the server, from the ~ directory (that’s /mnt/FileServer/home/myuser in our case),
the public key needs to be put into the list of authorized keys that may be used to connect to the sshd.

$ cat id_rsa.pub >> .ssh/authorized_keys

Although this is a better way to copy the public key:

ssh-copy-id MyUserName@MyWindows7Box

We need to change some permissions: your home directory on the server (/mnt/FileServer/home/myuser) may have the wrong permissions. We need to remove the write perms for group and other.

$ su root
# chmod go-w /mnt/FileServer/home/myuser

The /mnt/FileServer/home/myuser/.ssh currently had 755 so

# chmod go-w /mnt/FileServer/home/myuser/.ssh

had no effect.
/mnt/FileServer/home/myuser/.ssh/authorized_keys needed to be chmod 600. In fact anything/everything in the ~/.ssh dir (if there is anything else) needs to be chmod 600
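So, in our case that’s:

# chmod 600 /mnt/FileServer/home/myuser/.ssh/authorized_keys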

Also need to

nameoffileserver:/mnt/FileServer/home/myuser/.ssh# chown myuser authorized_keys

We can now remove the ~/id_rsa.pub from the server, now that the key is in ~/.ssh/authorized_keys

$ rm ~/id_rsa.pub

Should now be able to log in using key pair authentication.

Turn password authentication off, and change the default SSH port in the web GUI under Services|SSH.

Turn SSL on to access the web GUI in System|General Setup.

When I open up the FreeNAS server to the internet, it’ll be by way of SSH tunnel rather than just opening up the firewall to https to the server.

Looks like there is a pretty simple guide here to do that.

Used the following resources:

http://www.learnfreenas.com/blog/
http://phanvinhthinh.blogspot.com/2010/02/how-to-secure-your-freenas-server.html
http://www.freenaskb.info/kb/?View=entry&EntryID=257
http://www.learnfreenas.com/blog/2009/07/22/how-to-connect-to-your-freenas-server-via-ssh-without-a-password-password-free-logins-via-public-key-authentication/
http://www.freebsd.org/doc/en/articles/committers-guide/ssh.guide.html

Adding disks, CIFS/SMB shares to FreeNAS

March 27, 2010

Add Disks:

What I did was add a disk at a time (one each week), and stressed it for the entire week.
This way the wear on the disks should be staggered, and we are less likely to have all drives fail at the same time.
Once I’d physically added all disks (ended up adding 4 x WD7500AACS for now).

Follow directions here.
This set of directions is also useful: http://freenas.org/contrib/sloan/freenas1.htm
I used software RAID 5.
I was keen to set up a raid-z using ZFS, but it’s still only an experimental release.
Plus when I install the new RAID card, I’ll have to rebuild the array anyway, and by then, hopefully ZFS will be production ready (thanks to Olivier Cochard-Labbé and iXsystems).
For each disk I added, I chose to set the Hard disk standby time to 60 minutes.
Turned the S.M.A.R.T. monitoring on.
Chose Unformated for the Preformatted file system for each disk I added as they were new disks.

Format Disks:

Format each of the disks for Software RAID.
Again following directions here

Create the software RAID array:

While the RAID is building you can continue to the next step.
It took about 12 hrs to build the array.

Format the software RAID array:

Format the array as UFS (GPT and Soft Updates).
This is BSD’s native file system.

Create the mount point:

Partition type set to GPT partition.
File system set to UFS.
Called my Share Name “FileServer”.
This will mount the array on /mnt/FileServer

Add the groups and users in the Web GUI

Access|Users:

groups:

family, sons-name, my-name, wifes-name

users:

guest:
Primary group
——guest
Additional group
——none
Other settings as default.
——enter passwords.
sons-name:
Primary group
——sons-name
Additional group
——family
Other settings as default.
——enter passwords.
my-name:
Primary group
——my-name
Additional group
——family, wheel (wheel is like admin in windows)
Other settings as default.
——enter passwords.
——enable bash Shell so I can ssh
wifes-name:
Primary group
——wifes-name
other settings same as sons-name

Enable SSH in web gui:

Services|SSH

Login to the file server and create the directories you will be sharing:

You can do this via the Web GUI (Advanced|File Manager (make sure you log in as admin)) or just SSH to the shell.
I find going directly to the shell easier.

ssh [your user name]@<hostname>
Create the directories (family, media, etc) I want to share and set appropriate ownership and permissions.
I set my ownerships and perms up the same as my existing file server. I also had these recorded in a text document.
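For example, something along these lines (the exact ownership and masks will depend on your own setup; these values are just illustrative):

mkdir /mnt/FileServer/family /mnt/FileServer/media
chown my-name:family /mnt/FileServer/family /mnt/FileServer/media
chmod 770 /mnt/FileServer/family
chmod 775 /mnt/FileServer/media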

Enable CIFS/SMB In the Settings:

Authentication set to Local User.
Local Master Browser set to Yes.
Time server set to No, as I have another server doing the honors.
In Auxiliary parameters, I added some of the params I used in a smb.conf file from my existing file server.
Some of these parameters go in the global section.

Create the smb shares on top of FileServer (family, media, etc).
As is stated in this thread:

Set permissions in the following places:

The disk mount point; the file/directory creation masks; and the override inheritable permissions option in the CIFS/SMB share itself.
The creation masks I used came from a smb.conf I already had set up on another file server (mouse).
These go into the Auxiliary parameters on each share.
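I won’t reproduce my exact masks here, but the sort of thing that ends up in a share’s Auxiliary parameters looks like this (values are illustrative only, not a recommendation):

create mask = 0660
directory mask = 0770
inherit permissions = yes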

Setup Email alerts on disk failure and disk heat:

This is done in Disks|Management|S.M.A.R.T.
Heat on each of my first 3 disks only gets to around 30 deg C tops (in summer, with a room temp of 24 deg C). The bottleneck is the 100Mb port on the switch. This only allows 100Mb total to/from the file server.
So the disks never really get a chance to heat up at this stage.
The last (4th) disk I added was getting to around 33 deg C, as it wasn’t sitting behind a fan. So I added an old 80/20mm fan I had and stuck it in front of it; now the drive runs cooler than all the others.
Enable self monitoring.
Set Check interval to 300 (5 min).
Power mode Standby (I only want the disks checked if they are spinning).
Temperature monitoring
Difference set to 5 deg c
Informal set to 33 deg c
Critical 36 deg c
Setup Scheduled self-tests in order to receive email alerts if a disk is offline.
If it’s offline I need to add another disk and re-build the array.
Directions for replacing a failed hard drive here.
Add each disk and select all hours, all days, all months, all week days and choose Offline Immediate Test.
Set the email address you want alerts to be sent to, and select Send TEST warning email on startup until you’re happy you have it all set up correctly.
You’ll also need to set up the email settings in System|Advanced|Email.
The From email is the same as the email recipient.
If using gmail…
Outgoing mail server: smtp.googlemail.com
Port: 465
Security SSL
Username: this will be your email address.
Enter password.
Authentication method: Login
Save and Send test email.
Then back in Disks|Management|S.M.A.R.T.
Save and Restart samba.

Tested this configuration over a week.
Disks never seemed to spin down.
According to Diagnostics|Information|Disks (ATA),
APM (Advanced Power Management) is not supported on my disks (WD7500AACS),
in which case there is no point in setting the Advanced Power Management or Acoustic level on Disks|Management|Disk|Edit for each disk.

Initial setup of (FreeNAS) file server

February 7, 2010

Components used:

AData Speedy Compact Flash card: NZ$30
Lian Li PC-A06FB Aluminium Case: NZ$170
ASUS p5kpl/epu Mobo: NZ$96.40
Celeron 1.8Ghz single core #430: NZ$70
Corsair 2GB KIT (2x1GB) DDR2 800Mhz DIMM PC6400 – Desktop RAM – TWIN2X2048-6400C4: NZ$104
P/S ZM750-HP: NZ$257.33
2 x HDD swap trays. 3 SATA 3.5″ in 2 5.25″ bays just under NZ$300 incl shipping (haven’t got these yet).
5 x WD7500AACS HDD’s (already had these)(only using 3 for now).
Cold cathode tubes that were lying around.

The Lian Li case I chose had 4 x 5.25″ bays for HDD trays.
Using the 3 in 2 hot swap trays, I can get 6 HDD’s into 4 x 5.25″ bays.

At this stage I didn’t get the 3 in 2 hot swap trays due to lack of funds.
Plus I’ll only install 3 750GB drives (I already have) at this stage.
I’ll put more drives in once I acquire a decent RAID card.
Something similar to the Adaptec RAID 3805.
The p5kpl/epu has a Gbit LAN interface, which is essential for me, as my ESX server guests will have most of their data on it.
Further down the track I’d like to get another Gbit NIC (maybe with several ports) and use LACP (Link Aggregation Control Protocol) to share the load between the NIC’s.
My current Cisco switch only has 2 Gbit ports though so I’ll need an extra Gbit switch that supports LACP.
Or may use Roundrobin or Loadbalance as the aggregation protocol in FreeNAS.
Yet to be decided.

Had quite a bit of trouble trying to install to an AData Compact Flash 2GB card in a CF Card to IDE HDD adapter.

The BIOS (latest revision) wouldn’t detect it.
Tried another adapter/16MB CF SanDisk from one of my other embedded project machines in the file server and it was recognised fine.
Tried previous adapter (the one I purchased for this project) and another 16MB CF SanDisk from one of my other machines in the file server and it was recognised fine.
Tried the previous adapter (the one I purchased for this project) and 2GB CF card in another old machine and it was recognised fine.
So looks like the P5KPL/EPU BIOS has a problem detecting the 2GB AData CF card.

I had an old 1GB USB thumb drive I decided to use; this worked.
I’d rather use a CF Card to IDE HDD adapter with CF card as it’s all hidden inside the case.
May end up trying a SanDisk 128MB CF card.
They have 2 x packs on eBay for US $24 incl shipping.
All I have to do once I acquire a compatible CF card is redo the install (10 seconds)
and replace the config file that I’ll save once I’ve got FreeNAS set up and configured.

The file I used to do the install was from source forge.
You can find it from http://freenas.org/downloads -> http://sourceforge.net/projects/freenas/files/
I got a copy of the FreeNAS-amd64-LiveCD-*.iso.
Burnt the image to a CD.
And used an old CD ROM drive to do the honors.
I chose option 9) Install/Upgrade to hard drive/flash device, etc.
Then option 1) Install ’embedded’ OS on HDD/Flash/USB
I don’t need swap as I have 2GB of RAM, and I don’t want to be writing to my flash memory.
Installed in approx 10 seconds.
Removed CD and rebooted to FreeNAS.

Now to setup the NIC’s and set the LAN IP address.
Choose option 1) Assign Interface and follow the prompts.
Choose option 2) Set LAN IP address.
Once you’ve done this, you can log in to the Web UI: http://<the IP address you chose>
Default username is admin. Default password is freenas.
Make sure you change these credentials as soon as you can.
I was using an old version of the installer, so I downloaded the FreeNAS-amd64-embedded-*.img from sourceforge.
From the FreeNAS WebUI System menu -> Firmware I chose the img I downloaded and hit Upgrade firmware.
It’s important not to interrupt the upgrade while it’s working.

Once you have everything setup and configured, you can save the FreeNAS config to a safe place for a restoration at a later stage if the need arises.

Most of the details I used were here:

http://freenas.org/contrib/sloan/freenas1.htm