Posts Tagged ‘wireshark’

Web Server Log Management

April 25, 2015

As part of the ongoing work around preparing a Debian web server to host applications accessible from the WWW, I did some research and analysis, made decisions along the way, and implemented a first-stage logging strategy. I’ve done similar set-ups many times before, but thought it worth sharing the experience so everyone can learn something from it and/or provide input, recommendations and corrections to the process, so we all get to improve.

The main system loggers I looked into

  • GNU syslogd, which I don’t think is being developed any more (correct me if I’m wrong). Most Linux distributions no longer ship with it. It only supports UDP and is a bit lacking in features; from what I gather it’s single-threaded. I didn’t spend long looking at this, as there wasn’t much point. The following two offerings are the main players.
  • rsyslog: ships with Debian and most other Linux distros now, I believe. I like to do as little as possible, and rsyslog fits that description for me. The documentation seems pretty good. Rainer Gerhards wrote rsyslog and his blog provides some good insights. Supports UDP and TCP, and can send over TLS. There is also the Reliable Event Logging Protocol (RELP), which Rainer created.
    rsyslog is great at gathering, transporting and storing log messages, and includes some really neat functionality for dividing the logs. It’s not designed to alert on logs; that’s where something like the Simple Event Correlator (SEC) comes in. Rainer discusses why TCP isn’t as reliable as many think here.
  • syslog-ng: I didn’t spend too long here, as I didn’t see any features I needed that were better than the rsyslog default. Can correlate log messages, both real-time and off-line. Supports reliable and encrypted transport using TCP and TLS, plus message filtering, sorting, pre-processing and log normalisation.

There are a few comparisons around. Most of the ones I’ve seen are a bit biased and often out of date.

Aims

  • Record events and have them securely transferred to another syslog server in real-time, or as close to it as possible, so that potential attackers don’t have time to modify them on the local system before they’re replicated to another location
  • Reliability (resilience / ability to recover connectivity)
  • Extensibility: ability to add more machines and be able to aggregate events from many sources on many machines
  • Receive notifications from the upstream syslog server of specific events. No HIDS is going to remove the need to reinstall your system if you are not notified in time and an attacker plants and activates their root-kit.
  • Receive notifications from the upstream syslog server of a lack of events (the network is down, for example)

Environmental Considerations

A couple of servers in the mix:

FreeNAS File Server

Recent versions can send their syslog events to a syslog server. With some work, it looks like FreeNAS can be set up to act as a syslog server.

pfSense Router

Can send log events, but only via UDP by the look of it.

Following are the two strategies that emerged. You can see by the detail that I went down the path of the first one initially; it was the path of least resistance / quickest to set up. I’m going to be moving away from papertrail toward strategy two, mainly because I’ve had a few issues where messages have been getting lost that have been very hard to track down (I’ve spent over a week on it). As the sender, you have no insight into what papertrail is doing, and the support team don’t provide a lot of insight into their service when you have to trouble-shoot things. They have been as helpful as they can be, but I’ve expressed concern around them being unable to trouble-shoot their own services.

Outcomes

Strategy One

Rsyslog, TCP, local queuing, TLS, papertrail for your syslog server (PT doesn’t support RELP, but says that’s because their clients haven’t seen any issues with reliability using plain TCP over TLS with local queuing). My guess is they haven’t looked hard enough. I must be the first then. Beware!

As I was setting this up and watching both ends, we had an internet outage of just over an hour. At that stage we had very few events being generated, so it was trivial to verify both ends. I noticed that once the ISP’s router was back on-line and the events from the queue had moved to papertrail, there was in fact one missing.

Why did Rainer Gerhards create RELP if TCP with queues was good enough? That question played on me for a while. In the end, it was obvious that TCP without RELP isn’t good enough.
At this stage it looks like the queues may lose messages. Rainer says things like “In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that the action is not ready, which means the remote server is off-line. This can be detected with plain TCP syslog and RELP”, but it can be detected without RELP.

You can aggregate log files with rsyslog or by using papertrail’s remote_syslog daemon.

Alerting is available, including for inactivity of events.

Papertrail’s documentation is good and support is reasonable, although due to the huge amounts of traffic they have to deal with, they are unable to trouble-shoot any issues you may have. If you still want to go down the papertrail path, to get started, work through this, which sets up your rsyslog to use UDP (specified in /etc/rsyslog.conf by a single @ sign in front of the target syslog server). I want something more reliable than that, so I use two @ signs (@@), which specifies TCP.
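In /etc/rsyslog.conf the forwarding line ends up looking something like this (the host and port are the ones papertrail assigns you; these match the examples later in this post):

# Single @ = UDP (what the getting-started doc sets up)
#*.* @logs2.papertrailapp.com:39871
# Double @@ = plain TCP (what I'm using)
*.* @@logs2.papertrailapp.com:39871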

As we’re going to be sending our logs over the internet for now, we need TLS. Check papertrail’s CA server bundle for integrity:

curl https://papertrailapp.com/tools/papertrail-bundle.pem | md5sum

Should be: c75ce425e553e416bde4e412439e3d09

If all is good, throw the contents of that URL into a file called papertrail-bundle.pem.
Then scp the papertrail-bundle.pem into the web server’s /etc directory. The command for that will depend on whether you’re already on the web server and want to pull, or you’re somewhere else and want to push.
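For example (user and host names here are placeholders):

# Pushing from wherever you grabbed the bundle:
scp papertrail-bundle.pem you@webserver:/etc/
# Or pulling from the web server itself:
scp you@workstation:~/papertrail-bundle.pem /etc/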

Then make sure the ownership is correct on the pem file:

chown root:root /etc/papertrail-bundle.pem

install rsyslog-gnutls

apt-get install rsyslog-gnutls

Add the TLS config

$DefaultNetstreamDriverCAFile /etc/papertrail-bundle.pem # trust these CAs
$ActionSendStreamDriver gtls # use gtls netstream driver
$ActionSendStreamDriverMode 1 # require TLS
$ActionSendStreamDriverAuthMode x509/name # authenticate by host-name
$ActionSendStreamDriverPermittedPeer *.papertrailapp.com

to your /etc/rsyslog.conf. Create egress rule for your router to let traffic out to dest port 39871.

sudo service rsyslog restart

To generate a log message that uses your system syslogd config /etc/rsyslog.conf, run:

logger "hi"

should log “hi” to /var/log/messages and also to papertrail, but it wasn’t showing up in papertrail.

# Show a live update of the last 10 lines (by default) of /var/log/messages
sudo tail -f [-n <number of lines to tail>] /var/log/messages

OK, so let’s run rsyslog in config-checking mode:

/usr/sbin/rsyslogd -f /etc/rsyslog.conf -N1

If all is good, the output looks like:

rsyslogd: version <the version number>, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.

Trouble-shooting

  1. https://www.loggly.com/docs/troubleshooting-rsyslog/
  2. http://help.papertrailapp.com/
  3. http://help.papertrailapp.com/kb/configuration/troubleshooting-remote-syslog-reachability/
  4. /usr/sbin/rsyslogd -version will provide the installed version and supported features.

These didn’t help a lot, as I don’t have telnet installed, I can’t ping from the DMZ as ICMP is not allowed out, and I’m not going to install tcpdump or strace on a production server. The more you have running, the more surface area you have, and the greater the opportunities to exploit.

So how do we tell if rsyslogd is actually running if it doesn’t appear to be doing anything useful?

pidof rsyslogd

or

/etc/init.d/rsyslog status

Showing which files rsyslogd has open can be useful:

lsof -p <rsyslogd pid>

or just combine it with the result of pidof rsyslogd:

sudo lsof -p $(pidof rsyslogd)

To start with I had a line like:

rsyslogd 3426 root 8u IPv4 9636 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (SYN_SENT)

Which obviously showed that rsyslogd‘s SYN packets were not getting through. I’ve had some discussion with Troy from PT support around the reliability of plain TCP over TLS without RELP. I think if the server is business critical, then strategy two may be the better option. Troy has assured me that they’ve never had any issues with logs being lost due to lack of reliability without RELP. Troy also pointed me to their recommended local queue options (a sketch of these is below). After adding the queue tweaks and an rsyslogd restart, it resulted in:

rsyslogd 3615 root 8u IPv4 9766 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (ESTABLISHED)

I could now see events in the papertrail web UI in real-time.
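For reference, the queue tweaks were along these lines; the exact values papertrail recommends may differ, and the disk-backed parts assume you have a $WorkDirectory set:

# Disk-assisted in-memory queue for the forwarding action (values illustrative)
$ActionQueueType LinkedList
# Spills to disk under $WorkDirectory when the in-memory queue fills
$ActionQueueFileName papertrailqueue
$ActionQueueMaxDiskSpace 1g
# Persist queued messages across restarts
$ActionQueueSaveOnShutdown on
# Retry forever if the remote end is down
$ActionResumeRetryCount -1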

Socket Statistics (ss) (the better netstat) should also show the established connection.
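For example (the port being the one from the lsof output above):

sudo ss -tnp | grep 39871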

By default papertrail accepts TCP over TLS (TLS encryption check-box on, Plain text check-box off) and UDP. So if your TLS isn’t setup properly, your events won’t be accepted by papertrail. I later confirmed this to be true.

Check that our Logs are Commuting over TLS

Now, without installing anything on the web server or router, or physically touching the server sending packets to papertrail or the router; using a switch (ubiquitous) rather than a hub; no wire tap or multi-network-interfaced computer; no switch monitoring port available as on expensive enterprise grade switches (along with the much needed access). We’re basically down to the two approaches I can think of, and I really couldn’t be bothered getting up out of my chair.

  1. MAC flooding with the help of macof, which is a utility from the dsniff suite. This essentially causes your switch to go into a “fail-open” mode where it acts like a hub and broadcasts its packets to every port.

    MAC Flooding

    Or…
  2. Man in the Middle (MiTM) with some help from ARP spoofing or poisoning. I decided to choose the second option, as it’s a little more elegant.

    ARP Spoofing

On our MitM box, I set a static IP (address, netmask, gateway) in /etc/network/interfaces and added domain, search and nameservers to /etc/resolv.conf.

Follow that up with a service network-manager restart

On the web server run:

ifconfig -a

to get its MAC: <web server MAC>. On the MitM box run the same command to get its MAC: <MitM box MAC>.
On web server run:

ip neighbour

to find the MACs associated with IPs (the local ARP table). The router was: <router MAC>.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> REACHABLE
<router IP> dev eth0 lladdr <router MAC> REACHABLE

Now you need to turn your MitM box into a router temporarily. On the MitM box run

cat /proc/sys/net/ipv4/ip_forward

You’ll see a ‘1’ if forwarding is on. If it’s not, throw a ‘1’ into the file:

echo 1 > /proc/sys/net/ipv4/ip_forward

and check again to make sure. Now on the MitM box run

arpspoof -t <web server IP> <router IP>

This will continue to notify <web server IP> that our (MitM box) MAC address belongs to <router IP>. Essentially, we (the MitM box) are <router IP> to the <web server IP> box, but our IP address doesn’t change. Now on the web server you can see that its ARP table has been updated, and because arpspoof keeps running, it keeps telling <web server IP> that our MitM box is the router.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> STALE
<router IP> dev eth0 lladdr <MitM box MAC> REACHABLE

Now on our MitM box, while arpspoof continues to run, we start Wireshark listening on our eth0 interface (or whatever interface you’re using), and you can see that all the packets the web server is sending, we are intercepting and forwarding (routing) on to the gateway.

Now Wireshark clearly showed that the data was encrypted. I commented out the five TLS config lines in the /etc/rsyslog.conf file -> saved -> restarted rsyslog -> turned on “Plain text” in papertrail and could now see the messages in clear text. Now when I turned off “Plain text” papertrail would no longer accept syslog events. Excellent!

One of the nice things about arpspoof is that it re-applies the original ARP entries once it’s done.

You can also tell arpspoof to poison the router’s ARP table. This way any traffic going to the web server via the router, not originating from the web server, will be routed through our MitM box also.
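Something along these lines, in a second terminal, while the first arpspoof keeps running (the interface name is an assumption):

arpspoof -i eth0 -t <router IP> <web server IP>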

Don’t forget to revert the change to /proc/sys/net/ipv4/ip_forward.
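That is, write a ‘0’ back into it:

echo 0 > /proc/sys/net/ipv4/ip_forward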

Exporting Wireshark Capture

You can use the File->Save As… option here for a collection of output types, or the way I usually do it is:

  1. First completely expand all the frames you want visible in your capture file
  2. File->Export Packet Dissections->as “Plain Text” file…
  3. Check the “All packets” check-box
  4. Check the “Packet summary line” check-box
  5. Check the “Packet details:” check-box and select “As displayed”
  6. Click OK

Trouble-shooting messages that papertrail never shows

To run rsyslogd in debug

Check to see which arguments get passed into rsyslogd to run as a daemon in /etc/init.d/rsyslog and /etc/default/rsyslog. You’ll probably see a RSYSLOGD_OPTIONS="". There may be some arguments between the quotes.

sudo service rsyslog stop
sudo /usr/sbin/rsyslogd [your options here] -dn >> ~/rsyslog-debug.log

The debug log can be quite useful for trouble-shooting. Also keep your eye on the stderr as you can see if it’s writing anything out (most system start-up scripts throw this away).
Once you’ve finished collecting the log:
Ctrl+C

sudo service rsyslog start

To see if rsyslog is running

pidof rsyslogd
# or
/etc/init.d/rsyslog status

Turn on the impstats module

The stats it produces show when you run into errors with an output, and also the state of the queues.
You can also run impstats on the receiving machine if it’s in your control. Papertrail obviously is not.
Put the following into your rsyslog.conf file at the top and restart rsyslog:

# Turn on some internal counters to trouble-shoot missing messages
# (log.syslog="off" - need to turn log stream logging off)
module(load="impstats"
       interval="600"
       severity="7"
       log.syslog="off"
       log.file="/var/log/rsyslog-stats.log")
# End turn on some internal counters to trouble-shoot missing messages

Now if you get an error like:

rsyslogd-2039: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]

You can just change /dev/xconsole to /dev/console.
xconsole is still in the config file for legacy reasons; it should have been cleaned up by the package maintainers.
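From memory, the offending rule in a stock Debian /etc/rsyslog.conf looks roughly like the following, so either comment the whole rule out or point it at /dev/console:

# was ... |/dev/xconsole
daemon.*;mail.*;\
    news.err;\
    *.=debug;*.=info;\
    *.=notice;*.=warn |/dev/console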

GnuTLS error in rsyslog-debug.log

By running rsyslogd manually in debug mode, I found an error when the message failed to send:

unexpected GnuTLS error -53 in nsd_gtls.c:1571

Standard Error when running rsyslogd manually produces:

GnuTLS error: Error in the push function

With some help from the GnuTLS mailing list:

“That means that send() returned -1 for some reason. You can enable more output by adding an environment variable GNUTLS_DEBUG_LEVEL=9 prior to running the application, and that should at least provide you with the errno.” This didn’t actually provide any more detail to stderr. However, thanks to Rainer, we now have a debug.gnutls parameter in the rsyslog code: if you specify this global variable in rsyslog.conf and assign it a value between 0 and 10, you’ll have GnuTLS debug output going to rsyslog’s debug log.
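So for more GnuTLS detail you can do something like the following. The exact rsyslog syntax here is from memory, so treat it as a sketch and check the docs for your version:

# In rsyslog.conf: GnuTLS debug output (0-10) written to rsyslog's debug log
global(debug.gnutls="10")

# And/or the environment variable suggested on the GnuTLS list, when running rsyslogd manually:
sudo GNUTLS_DEBUG_LEVEL=9 /usr/sbin/rsyslogd -dn >> ~/rsyslog-debug.log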

Strategy Two

Rsyslog, TCP, local queuing, TLS, RELP, SEC, syslog server on local network. Notification for inactivity of events could be performed by cron and SEC?
LogAnalyzer, also created by Rainer Gerhards (the rsyslog author), is more work to set up than an on-line service you don’t have to set up; in saying that, you would have greater control and security, which for me is the big win here.
Normalisation is another area it looks like Rainer has his finger in.

In theory, adding RELP to TCP with local queues is a step up in terms of reliability. Others have said the reliability of TCP over TLS with local queues is excellent anyway; I’ve yet to confirm its excellence. At the time of writing this post, I’m seriously considering moving toward RELP to help solve my reliability issues.
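A sketch of what the RELP pieces look like with the newer config syntax; the target host and port here are placeholders:

# Sending side (the web server)
module(load="omrelp")
*.* action(type="omrelp" target="syslog.example.lan" port="2514")

# Receiving side (the syslog server on the local network)
module(load="imrelp")
input(type="imrelp" port="2514")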

Additional Resource

gentoo rsyslog wiki

Procurement & Config of Sun Fire V240 & ALOM

October 25, 2014

This is the sequence of events I took to prepare a Sun Fire V240, for a client of mine, to host pfSense, a free and open source FreeBSD-based enterprise grade routing solution.

Recently I was tasked with setting up a network with what I considered to be enterprise grade hardware and software as cheaply as possible. When I take on these sorts of tasks, security is at the forefront of my mind, so I often look toward components that are as open as possible, that don’t sport any known (to me at least) back-doors, and that are able to be easily upgraded and patched at little to no cost.

A requirement was clean shut-downs on power failure events at least for the critical servers.

Procured Kit

  1. APC Smart-UPS 5000 with batteries in good condition. Worth a little under $6k if you’re buying new. I wouldn’t buy new. If you shop around, these can be picked up at a fraction of that cost. From my experience the APC kit is some of the best UPS gear available.
    APC Smart-UPS 5000
  2. AP9630 UPS network management card, $92 new. Most of the details around setting these UPSs up I’ve already posted on; if you search my blog for “APC UPS” you’ll find it.
    APC AP9630
  3. Enterprise grade router/firewall:
    Sun Fire V240 (RISC architecture). 2 x UltraSPARC IIIi 1.5GHz CPUs. 4 x Gbit on-board Ethernet ports. Lights-out management port. 4GB RAM. 2U. Dual redundant PSUs. 2 x 72GB hot-swap 10k SCSI HDDs. With rack mount rails. Currently going for around $1.5k on Ebay. Price paid: $160 incl shipping. I doubt you’d find anything of these specifications off the shelf for under $1000. This is a lot of server for a very small amount of money.
    Sun Fire V240
  4. Firmware: pfSense. Free and open source.

Planning

As part of my planning I evaluated (again) whether or not free software routing solutions are actually up to the task of the enterprise. My research led me to believe some were… based on others that had already been down this route ( PTP 😉 ). Openness is a biggie for me. I like to know that eyes are on the software rather than it being closed up in a proprietary package.

I evaluated m0n0Wall, ipCop (Linux based), smoothwall and pfSense. pfSense had been used in quite a few large environments successfully. When I had made my decision on the firmware to use, I went through the hardware requirements and of course started looking for high quality second-hand gear.

For the router hardware I was going to need at least a 1GHz CPU, as I wanted to run Snort as my IDS/IPS, and PCI-X or PCI-e network adapters (which of course I didn’t need to worry about with the Sun Fire server). Snort needs 512MB RAM minimum, preferably at least 1GB.

Gaining Access to the Sun Fire V240

Now I had no idea how the previous owner had set up the configuration of the ALOM (Advanced Lights Out Management). In fact I hadn’t administered a Sun Fire server before at all. On page 11 of the Sun Fire V210 and V240 Servers Getting Started Guide it states the following:

“The system console is directed to ALOM by default and is configured to show server console information on startup. ALOM enables you to monitor and control your server over either a serial connection (using the SERIAL MGT port), or Ethernet connection (using the NET MGT port). For information about configuring an Ethernet connection, refer to the Sun Advanced Lights Out Manager Software User’s Guide.”

The NET MGT port can also be disabled, and in my case it turned out it was, but I’ll get to that later. I didn’t have a spare DB-9 to RJ-45 adapter lying around to wire it up and connect to the SERIAL MGT port.

Sun Fire V240 rear

Telnet?

(but didn’t get that far)

Since I was going to go down the path of trying to connect to the ALOM console via the NET MGT Ethernet port, I thought telnet would probably be the path of least resistance.

Page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide” stated the following:

“The 10-Mbyte Ethernet port enables you to access ALOM from within your company network. You can connect to ALOM remotely using any standard Telnet client.” On the V240, the ALOM Ethernet port is referred to as the NET MGT port.

Using a laptop with Kali Linux installed (because it has lots of great tools for network reconnaissance), running:

ethtool eth0

told me that my NIC supported:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full

Wireshark?

Tried connecting directly to the NET MGT port with wireshark running on my laptop. Didn’t get any packets from the device. At the time I thought it may have been because my laptop’s NIC was using 100baseT, but later on I found out that the NET MGT port was disabled.

Tried pinging the broadcast address (ping -b 255.255.255.255), then checked my ARP table (arp -a). No results that looked like what I was looking for. Of course this strategy would have taken quite some time to complete… and in my case it would have yielded no results anyway.

NMap?

I started with the private IPv4 address spaces. Using Wi-fi on my Kali box, tried the 16 bit block:

nmap -sn 192.168.*.*

Got a false positive of a cable modem. How did I work out that it was a false positive?

nmap -A <falsePositiveIPOfCableModem> # Gave me the model and everything I needed to know about the device to rule it out.

Next up the 20 bit block

nmap -sn 172.16.0.0/12
Nmap done: 1048576 IP addresses (0 hosts up) scanned in 108670.97 seconds

In earlier releases of nmap the -sn switch was known as -sP

I decided I needed to try and speed up the scan, so I connected directly to the V240 NET MGT port with a Cat5 patch cable (ethtool told me my laptop’s NIC had MDI-X on (force crossover mode)) and made sure my network card supported 10baseT, which the “Sun Advanced Lights Out Manager Software User’s Guide” told me it needed for the NET MGT port. Turns out the NET MGT port didn’t support 10baseT; details a bit further down.

Added a static IP address to the /etc/network/interfaces. Currently it looked like:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet dhcp

So I commented out the auto wlan0 and iface wlan0 inet dhcp and added the following:

auto eth0
iface eth0 inet static
address 10.1.1.6
netmask 255.255.255.0
broadcast 10.1.1.255
#gateway 10.1.1.1 # Make sure you don't add a gateway, as we're connecting directly to the V240

followed by:

service networking restart

then changed managed=true to managed=false in /etc/NetworkManager/NetworkManager.conf, so that Network Manager didn’t keep interfering with my interfaces.
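On a Debian-derived install like Kali, the stanza in question lives under the [ifupdown] section:

[ifupdown]
managed=false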

I followed this with a

service network-manager restart

followed with ifconfig to make sure my network interface was using the correct IP address, netmask and broadcast. It wasn’t, so…

ifdown eth0
ifup eth0
ifconfig

Success, it now was.

Now to make sure my network card was communicating in a manner that the V240’s NET MGT port would understand.

Using ethtool

ethtool eth0

told me 10baseT was supported, but it also told me my current speed was 100Mb/s. So I tried changing the speed with

ethtool -s eth0 speed 10

and received Cannot advertise speed 10. So I made the following temporary changes (they’ll be lost on reboot), changing the speed and duplex. I ran the following:

ethtool -s eth0 speed 10 duplex half

Now with a:

ethtool eth0

I got:

Speed: unknown!
Duplex: Unknown! (255)

So turned the auto negotiation off:

ethtool -s eth0 speed 10 duplex half autoneg off

Now with a:

ethtool eth0

I got:

Speed: 10Mb/s
Duplex: Half
Auto-negotiation: off
#and some other settings.

Some useful ethtool resources:

With these settings the NET MGT port didn’t have its green link LED on, so I kept playing with the settings. Turns out it would only work with speed 100, duplex full, contrary to page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide”.
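So the one-liner that finally gave me link was along these lines (interface name will differ):

ethtool -s eth0 speed 100 duplex full autoneg off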
These were the settings that gave me link:

Supported pause frame use: No #Don't think I fiddled with this.
Supports auto-negotiation: Yes
Advertised link modes: Not reported #Don't think I fiddled with this.
Advertised pause frame use: Symmetric #Don't think I fiddled with this.
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: full
Port: Twisted Pair #Don't think I fiddled with this.
PHYAD: 1 #Don't think I fiddled with this.
Transceiver: internal #Don't think I fiddled with this.
Auto-negotiation: off
MDI-X: on
Supports Wake-on: g #Don't think I fiddled with this.
Wake-on: d #Don't think I fiddled with this.
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: yes

I was now confident that if the Sun Fire V240 NET MGT port was enabled, we’d find its IP address if it was using one from the private space. It was time to try the last and largest private address space. Oh, I also used Wireshark to make sure nmap was doing what I expected on my laptop when I ran:

nmap -v -sn 10.0.0.0/8

I was a little confused to start with, as nmap told me Scanning 4096 hosts. I soon realised, after checking the CIDR (Classless Inter-Domain Routing) notation and the output nmap produced, that nmap was doing the scanning in chunks. As there was going to be a lot of results, I set up the output to files:

nmap -v -sn -oA 'scan-%Y-%m-%d_%H-%M' 10.0.0.0/8

This produces the output in all three formats as discussed here.

SERIAL MGT Port?

This private address range was going to take a few days to scan, so I decided to have a poke at the SERIAL MGT port on the Sun Fire V240.

To use the SERIAL MGT port, an RJ-45 patch cable connected to a DB-9 adapter ($4.50 from globalpc) is required, unless you get the official Sun adaptor “530-3100-01” or still have the one that came in the new box. So I splashed out and went with the $4.50 option; it cost me more in gas to get to the shop than to buy the part. I wired it up according to page 25 of the “Sun Fire V210 and V240 Servers Installation Guide“.

RJ-45 to DB-9 Adapter Crossovers

SERIAL MGT Port Pin     Adapter (DB-9) Pin
1 (RTS)                 8 (CTS)
2 (DTR)                 6 (DSR)
3 (TXD)                 2 (RXD)
4 (Signal Ground)       5 (Signal Ground)
5 (Signal Ground)       5 (Signal Ground)
6 (RXD)                 3 (TXD)
7 (DSR)                 4 (DTR)
8 (CTS)                 7 (RTS)

Red wire in with green.

RJ45-DB9 RJ45

Installed minicom and setserial and did pretty much the same as I did here. Plugged the console cable in and tried to establish a connection.
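That was roughly the following; the device name depends on your serial adapter, and ALOM’s serial defaults are 9600 baud, 8N1, no flow control:

sudo apt-get install minicom setserial
sudo minicom -D /dev/ttyUSB0 -b 9600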

Then I found that by default ALOM only communicates through the SERIAL MGT port at startup (startup of ALOM, I thought, but it seems this applies at power-on of the server as well).

At the {1} ok prompt, I typed #. (that’s hash followed by dot) to escape from the system console to the ALOM sc> prompt.

I then entered the showsc command and found that the NET MGT port was disabled.
I then ran:

usershow

to see which user accounts existed, and was prompted to set a password for the admin user:
“When you connect to ALOM for the first time, you are automatically connected as the admin account.”
So obviously the seller of the system had reset ALOM.

SettingAdminPassword

Also audited the user accounts, and the details on the permission levels are here.

Ran the following script. A nice little dialog from Ramesh here (see step 4) too.

setupsc
  • Turned NET MGT port on
  • Changed the default if_connection from none to ssh
  • Answered no to email alerts (only for logged in users)
  • Yes to configure the network interfaces
  • No to DHCP
  • Entered the IP address for the NET MGT port
  • Entered the netmask for the NET MGT port
  • Entered the gateway for the Net Mgt port
  • Should powerstate memory be enabled [y]? y
  • Enabled power on sequencing

Then we need to restart the ALOM to apply the new settings.

resetsc -y

If you still have minicom running, it’ll show you what happens during the boot sequence and then present you with a login prompt.

Extra Resources

SSH

At this point I plugged the Ethernet cable from my test switch (10 Mbit/s capable) back into the NET MGT port of the Sun Fire V240 and tested that ALOM was responding on the IP address that I set the NET MGT port to.

ping <myNetMgtIP>

It was answering, so I attempted to SSH in from a different machine.

ssh admin@<myNetMgtIP>

I was presented with the host’s key fingerprint:

The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myExistingHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)?

I wanted to know I was connecting to what I thought I was connecting to, so I answered no.
Then in minicom I queried the host’s key fingerprint:

ssh-keygen -l -t rsa

I was provided with a key fingerprint that matched what I was presented with when I attempted to SSH in, so I knew I was actually communicating with the server I thought I was.

I then regenerated the host key:

ssh-keygen -r -t rsa

and was provided with the new key. A restart of the SSH daemon is required to load the new host key.

sc> restartssh

Then SSH in. Confirm when prompted that the host key matches the newly provided key.

ssh admin@<myNetMgtIP>
The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myNewHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<myNetMgtIP>' (RSA) to the list of known hosts.

Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

Sun(tm) Advanced Lights Out Manager <versionHere> ()

Please login: admin
Please Enter password: *********

sc>

We’re in!

At any time for a list of commands, you can type help.

logout
Connection to <myNetMgtIP> closed.

We’re out!

Running Wireshark as non-root user

April 13, 2013

As part of my journey with Node.js I decided I wanted to see exactly what was happening on the wire. I decided to use Burp Suite as the HTTP proxy interceptor and Wireshark as the network sniffer (not an interceptor). Wireshark can’t alter the traffic, and it can’t decrypt SSL traffic unless the encryption key can be provided and Wireshark is compiled against GnuTLS.

This post is targeted at getting Wireshark running on Linux. If you’re a windows user, you can check out the Windows notes here.

When you first install Wireshark and try to start capturing packets, you will probably notice the error “You didn’t specify an interface on which to capture packets.”

When you try to specify an interface from which to capture, you will probably notice the error “There are no interfaces on which a capture can be done.”

You can try running Wireshark as root: gksudo wireshark

Wireshark as root

This will work, but of course it’s not a good idea to run a comprehensive tool like Wireshark (over 1’500’000 lines of code) as root.

So what’s actually happening here?

We have dumpcap and we have wireshark. dumpcap is the executable responsible for the low level data capture of your network interface. wireshark uses dumpcap. Dumpcap needs to run as root, wireshark does not need to run as root because it has Privilege Separation.

If you look at the above suggested “better way” here, this will make a “little” more sense. In order for it to make quite a lot more sense, I’ll share what I’ve just learnt.

Wireshark has implemented Privilege Separation which means that the Wireshark GUI (or the tshark CLI) can run as a normal user while the dumpcap capture utility runs as root. Why can’t this just work out of the box? Well there is a discussion here on that. It doesn’t appear to be resolved yet. Personally I don’t think that anybody wanting to use wireshark should have to learn all these intricacies to “just use it”. As the speed of development gets faster, we just don’t have time to learn everything. Although on the other hand, a little understanding of what’s actually happening under the covers can help in more ways than one. Anyway, enough ranting.

How do we get this all to “just work”?

from your console:

sudo dpkg-reconfigure wireshark-common

You’ll be prompted:

Configuring wireshark-common

Respond yes.

The wireshark group will be added

If the Linux Filesystem Capabilities are not present at the time of installing wireshark-common (Debian GNU/kFreeBSD, Debian GNU/Hurd), the installer will fall back to set the set-user-id bit to allow non-root users to capture packets. Custom built kernels may lack Linux Capabilities.

The help text also warns about a security risk which isn’t an issue because setuid isn’t used. Rather what actually happens is the following:

addgroup --quiet --system wireshark
chown root:wireshark /usr/bin/dumpcap
setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap

You will then have to manually add your user to the wireshark group.

sudo adduser kim wireshark # replacing kim with your user

or

sudo usermod -a -G wireshark kim # replacing kim with your user

log out then back in again.

I wanted to make sure that what I thought was happening was actually happening. You’ll notice that if you run the following before and after the reconfigure:

ls -liah /usr/bin/dumpcap | less

You’ll see:

-rwxr-xr-x root root /usr/bin/dumpcap initially
-rwxr-xr-x root wireshark /usr/bin/dumpcap after

And for a before and after of my users and groups, I ran:

cat /etc/passwd | cut -d: -f1
cat /etc/group | cut -d: -f1

As an alternative to using the following as shown above, which gives us a nice abstraction (if that’s what you like):

sudo dpkg-reconfigure wireshark-common

We could just run the following:

sudo addgroup wireshark
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

The following will confirm the capabilities you just set.

getcap /usr/bin/dumpcap
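Depending on your libcap version, the output will be something along the lines of:

/usr/bin/dumpcap = cap_net_admin,cap_net_raw+eip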

What’s with the setcap?

For full details, run:

man setcap
man capabilities

setcap sets the capabilities of each specified filename to the capabilities specified (thank you man ;-))

For sniffing we need two of the capabilities listed in the capabilities man page.

  1. CAP_NET_ADMIN Perform various network-related operations (e.g., setting privileged socket options, enabling multicasting, interface configuration, modifying routing tables). This allows dumpcap to set interfaces to promiscuous mode.
  2. CAP_NET_RAW Use RAW and PACKET sockets. Gives dumpcap raw access to an interface.

For further details check out Jeremy Stretch’s explanation on Linux Filesystem Capabilities and using setcap. There’s also some more info covering the “eip” in point 2 here and the following section.

man capabilities | grep -A24 "File Capabilities"

Let’s run Wireshark as our usual low privilege user

Now that you’ve done the above steps including the log off/on, you should be able to run wireshark as your usual user and configure your listening interfaces and start capturing packets.

Also, before we forget… ensure Wireshark works only for root and for a user in the “wireshark” group. You can add a temp user to verify this (command shown above).

Log in as the temp user and try running wireshark. You should have the same issues as you had initially. Then remove the temp user:

userdel -r tempuser