Holistic Info-Sec for Web Developers

July 24, 2015

Most of my spare energy is going into my new book for a while. I'm going to be tweeting as I write it, so please follow @binarymist. You can also keep up with my change-sets on GitHub, and discuss progress, or what you would find helpful as a web developer with a focus on information security, where it's all happening.

Holistic InfoSec for Web Developers

Lack of Visibility in Web Applications

November 26, 2015


I see this as an indirect risk to the asset of web application ownership (that is, the assumption that you will always own your web application).

Not being able to introspect your application at any given time, or know its health status, is not a comfortable place to be, and there is no reason you should be there.

Insufficient Logging and Monitoring


Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy, for example circumventing client-side input sanitisation.

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?


As Bruce Schneier said: “Detection works where prevention fails and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging


When it comes to logging in NodeJS, you can't really go past winston. It has a lot of functionality, and what it does not have is either provided by extensions or you can create your own. It is fully featured, reliable and easy to configure, much like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

   "dependencies": {
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",

winston-email also depends on nodemailer.

Opening UDP port

Using winston-syslog seems to be what a lot of people are doing. I think that may be because winston-syslog was the first package that worked well with winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

# Load the UDP module.
$ModLoad imudp
# Listen on all network addresses (the default) on the standard syslog port.
# Specify an address to listen on localhost only.
$UDPServerRun 514
# Or the new style configuration:
# Address <IP>
# Port <port>
# Logging for your app.
local0.* /var/log/yourapp.log

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use POSIX syscalls, which are faster and safer. The nmap output below shows the open UDP syslog port.

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the output below that the syslog port is no longer open:

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslogPosixTransportOptions in the default.js configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog compatible protocol severities.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, // 5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
};

In development I have chosen not to use syslog; you can see the override below. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file, or modify it to be similar to the default.js configuration above, then add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comment within syslogPosixTransportOptions in the default.js config file.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
};

In production we log to syslog, and because of that we do not need the file transport that you can see configured in the default.js configuration file above, so we set it to null in the prodbox-production.js file below.

I have gone into more depth about how we handle syslogs here, where all of our logs, including these ones, get streamed to an off-site syslog server. This provides easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real-time, and also makes it easy to go back in time to visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.

// prodbox-production.js
module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
};

// local.js
// Build creates this file.
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
};

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;

winston.emitErrs = true;

var logger = new winston.Logger({
   // Alternatively: set to winston.config.syslog.levels
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); There are many ways
   // to do the same thing with winston.
   colors: loggerConfig.colours,
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function (config) {
   logger.add(winston.transports.Console, config);
   return this;
};

logger.addFile = function (config) {
   logger.add(winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function (config) {
   logger.add(winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function (config) {
   logger.add(winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      'error-code:' + err.code + '. '
      + 'error-message:' + err.message + '. '
      + 'error-response:' + err.response + '. logger-level:'
      + err.transport.level + '. transport:' + err.transport.name
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile(loggerConfig.fileTransportOptions);
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole(loggerConfig.consoleTransportOptions);
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog(loggerConfig.syslogPosixTransportOptions);
   if (loggerConfig.emailTransportOptions)
      logger.addEmail(loggerConfig.emailTransportOptions);
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger, as shown in the application entry point below.

var express = require('express');
var morganLogger = require('morgan');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var errorHandler = require('errorhandler'); // Assumed: the error handler middleware used below.
var path = require('path');
var http = require('http');
var app = express();

logger.init();

app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
// In order to utilise the connect/express logger module in our third party logger,
// pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
app.use(express.static(path.join(__dirname, 'public')));

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
}
if ('production' == app.get('env')) {
   // Production specific middleware goes here.
}

http.createServer(app).listen(app.get('port'), function () {
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

A couple of other winston features worth noting:

  • You can also optionally log JSON metadata.
  • You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message.

Here are some examples of how you can use the logger. The logger.log('<level>', ...) calls can be replaced with logger.<level>(...), where level is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross-cutting concerns like logging using Aspect Oriented Programming (AOP).
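A minimal sketch of what hiding logging behind an aspect could look like follows. The withLogging helper and the createOrder function are purely illustrative and not from any particular AOP library:

var logger = require('./util/logger');

function withLogging(name, fn) {
   // Wrap any function so that entry, exit and errors are logged,
   // keeping the business logic itself free of logging statements.
   return function () {
      logger.debug(name + ' called');
      try {
         var result = fn.apply(this, arguments);
         logger.debug(name + ' returned');
         return result;
      } catch (err) {
         logger.error(name + ' threw: ' + err.message);
         throw err;
      }
   };
}

// Usage: the calling code never mentions logging.
var createOrder = withLogging('createOrder', function (order) {
   // Business logic only.
});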

Insufficient Monitoring


There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It is easy to configure, has excellent, short documentation that is easy to understand, and the configuration file has lots of examples commented out, ready for you to take as-is and modify to suit your environment. I have personally had excellent success with Monit.
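To give a flavour of the dark cockpit approach, a Monit check for a NodeJS web app might look something like the following. The pid file location, service commands, port and thresholds are assumptions for illustration, not a drop-in configuration:

check process yourapp with pidfile /var/run/yourapp.pid
   start program = "/etc/init.d/yourapp start"
   stop program  = "/etc/init.d/yourapp stop"
   # Stay quiet while the app is healthy; only act when something is wrong.
   if failed host 127.0.0.1 port 3000 protocol http then restart
   if 5 restarts within 5 cycles then timeout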


Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the new-found information you now have. There will be no more blissful ignorance, if there was before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging, as in wrapping libraries, initialising, and adding logging statements or hiding them using AOP.


Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days' work in order to have a really good logging system throughout your code base that is going to show you errors fast in development, and then show different errors in the places your DevOps need to see them in production.

The same goes for monitoring. Find a tool that is a pleasure to work with. There are just about always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, free and open source tools can be excellent for keeping ongoing costs down, especially mature tools that are also well maintained, like Monit.

Additional Resources

Consuming Free and Open Source

October 29, 2015



This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than ever before. Much of the code we are pulling into our projects is never intentionally used, but it still adds attack surface. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying
    on developers we do not know a lot about not to have introduced defects. Most developers are more focused on building than breaking; they do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right, many of the packages we are consuming are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs that could and do include vulnerabilities. Anyone can write code and publish to an open source repository. Much of this code ends up in our package management repositories which we consume.
  • Does not undergo the same requirement analysis, defining the scope, acceptance criteria, test conditions and sign off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more; it provides the opportunity for many vulnerabilities to be sitting, waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls' talk on consuming all the open source.

Running install scripts, or any scripts, from non-local sources without first downloading and inspecting them can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or carry out any number of other criminal activities.


Countermeasures (prevention easy)


Dibbe Edwards discusses some excellent initiatives on how they do it at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, manual and automated code review
  • Maintain a list containing all the libraries that have been approved for use. If a library is not on the list, make a request, and it should go through the same process.
  • Once the libraries are in your product, they should be treated as part of your own code, so that they get the same rigour applied to them as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing that is not on the approved list is included (see the sketch below this list)
  • Consider automating some of the suggested tooling options below
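
A minimal sketch of such an automated check is shown below. The approved-libraries.json file name and its shape are assumptions; in practice this would run in a CI build or pre-commit hook and fail the build when an unapproved dependency is found:

var fs = require('fs');

// e.g. { "express": true, "winston": true }
var approved = JSON.parse(fs.readFileSync('./approved-libraries.json', 'utf8'));
var pkg = JSON.parse(fs.readFileSync('./package.json', 'utf8'));

var dependencies = Object.keys(pkg.dependencies || {})
   .concat(Object.keys(pkg.devDependencies || {}));

var notApproved = dependencies.filter(function (name) {
   return !approved[name];
});

if (notApproved.length) {
   console.error('Dependencies not on the approved list: ' + notApproved.join(', '));
   process.exit(1); // Fail the build.
}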

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing in regards to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this: wget or curl first, then make sure what you have just downloaded is not malicious before running it.

Inspect the code before you run it, because:

1. The repository could have been tampered with.
2. The transmission from the repository to you could have been intercepted and modified.

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl or fetch in any way and pipe what you think is an installer or any script to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option is to:

  1. Verify the source that you are about to download, if all good
  2. Download it
  3. Check it again, if all good
  4. Only then should you run it (see the example below)
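
As a concrete illustration of those steps, using the placeholder URL from the earlier examples:

# 1. & 2. Download the script without running it.
wget https://raw.github.com/account/repo/install.sh
# 3. Inspect it, and verify any published checksum or signature.
less install.sh
sha256sum install.sh
# 4. Only run it once you are satisfied it is not malicious.
sh install.sh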

npm install

As part of an npm install, package creators, maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check whether a package has hooks (before installation) that will run scripts by issuing the following command:

npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download, if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it and inspect it. You can download the tarball directly from the npm registry, which does not run any install scripts.

The most important step here is downloading and inspecting before you run.

Doppelganger Packages

Similarly to doppelganger domains, people often mistype the name of the package they want to install. If you were someone who wanted to do something malicious, like have consumers of your package destroy or modify their systems, send sensitive information to you, or carry out any number of other criminal activities (ideally identified in the Identify Risks section; if not already there, add it), doppelganger packages are an excellent avenue for raising the likelihood that someone will install your malicious package by mistyping its name, when it has a very similar name to another legitimate package. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.


For NodeJS developers: Keep your eye on the nodesecurity advisories. Identified security issues can be posted to NodeSecurity report.

RetireJS is useful to help you find JavaScript libraries with known vulnerabilities. RetireJS has the following:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        ↳ jquery 1.4.4.min has known vulnerabilities:
    • To install RetireJS locally in your project and run it as a git pre-commit hook:
      There is an NPM package that can help us with this, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project's repository. This will allow us to run any scripts immediately before a commit is issued.
      Install both packages locally and save them to the devDependencies of your package.json. This will make sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev

      If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the RetireJS documentation for options.

         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run. validate runs our retire command.

         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         "pre-commit": ["lint", "validate"]

      Keep in mind that pre-commit hooks can be very useful for all sorts of checking of things immediately before your code is committed. For example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. An online tool: simply enter your web application's URL and the resources will be analysed


There is also an offering that provides “intentful auditing as a stream of intel for bithound”. I guess watch this space; in speaking with Adam Baldwin, there does not appear to be much happening here yet.


In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages, some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions.
  3. Many packages are still consuming the vulnerable, unpatched versions of the packages that have published patched versions.
    • So although a much larger number of vulnerabilities could have been removed, because packages persist in depending on unpatched versions, they have not been. I think this mostly comes down to lack of visibility, awareness and education. This is exactly what I'm trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured not to analyse some files. Very large repositories are prevented from being analysed due to large-scale performance issues.

bithound analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure. Assuming this is based on the known vulnerabilities (41 node advisories at the time of writing this)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your projects and global packages and check that none of them appear in the advisories, but that would be more work, and who is going to remember to do it all the time?

For .NET developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are the tests testing that something can not happen? That is where the likes of RetireJS comes in.


There is a danger of implementing too much manual process, thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so developers have less to think about, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), if they have to context switch while a legal review and/or manual code review takes place, then this will cause friction and reduce the team's performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member doing a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside of the team who does not have skin in the game or the localised understanding that the people working on the project do.

Maintaining a list of the approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.


Using the likes of pre-commit hooks, the other tooling options detailed in the Countermeasures section and creating scripts to do most of the work for us is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers' way. A good way to do this is to ask the developers how it should be done. They know what will get in their way. In order for the process to be a success, the person(s) mandating it will need to get solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, Product Owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone's life harder than it already is. Everyone has their own agenda. Rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.


Additional Resources


Risks and Countermeasures to the Management of Application Secrets

September 17, 2015


  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise


The reason I’ve tagged this as moderate is because if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison's entire business relied on its commitment to keep its clients' data (37 million of them) secret, and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security, but ironically, still proudly displays a graphic with the phrase “trusted security award” on its homepage.

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users' passwords cracked in days (why so fast? because they used simple hashing, specifically SHA-1), and eBay, with 145 million active buyers. Many others come to light regularly.

Are you using well-salted and quality strong key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords (see the sketch below).
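
A minimal sketch of that last point follows. The file name and list source are assumptions; any published list of commonly used passwords will do:

var fs = require('fs');

// One password per line, e.g. a published "most common passwords" list.
var commonPasswords = {};
fs.readFileSync('./common-passwords.txt', 'utf8')
   .split('\n')
   .forEach(function (line) {
      commonPasswords[line.trim().toLowerCase()] = true;
   });

function isCommonPassword(candidate) {
   return commonPasswords[candidate.trim().toLowerCase()] === true;
}

// Reject, or at least warn, on registration and password change when this returns true.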


Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make compromise harder, like many of the layers already discussed in my book.

Find out how secret the data that is supposed to be secret being sent over the network actually is, and consider your internal network just as malicious as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.


You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. Then, when it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it: essentially just making discovery a little more inconvenient, like SSHing on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files


Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files, use different configuration files for different deployments, and consider keeping them out of source control.

Here are some examples using the node-config module.


node-config is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install: From the command line within the root directory of your NodeJS application, run:

npm install config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

// Due to a bug in node-config, the if statement is required before config is required.
// https://github.com/lorenwest/node-config/issues/202
var path = require('path');
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Wherever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order, which you specify by file naming convention. This provides a lot of flexibility and caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. These files are to be managed by external configuration management tools, build scripts, etc. Thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
+-- routes
| |
| +-- home.js
| |
| +-- ...
+-- app.js (entry point)
+-- ...

The contents of the above example configuration files may look like the following:

// default.js
module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // supported-services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
};

// devbox1-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
};

// devbox2-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
};

// local.js - Build creates this file.
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
};

node-config also:

  • Provides command line overrides, thus allowing you to override configuration values at application start from the command line
  • Allows for the overriding of environment variables with custom environment variables from a custom-environment-variables.json file (see the example below)
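
For example, a custom-environment-variables.json along the following lines maps an environment variable onto the enquiries password used above (the variable name is just an example):

{
   "enquiries": {
      "pass": "YOURAPP_ENQUIRIES_PASS"
   }
}

With that in place, setting YOURAPP_ENQUIRIES_PASS in the environment overrides config.enquiries.pass without the secret having to live in a file.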

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that.
There are different answers for different platforms, none of which provide complete security (if there is such a thing); instead they focus on different levels of obscurity.


On Windows, you can store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential, then use a SQL Server connection string with Trusted_Connection=yes.

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut down. The hashes can be sniffed from the wire in transit, or pulled from a running machine's memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash; trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

You can encrypt sections of web, executable, machine-level or application-level configuration files with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (which uses DPAPI) or RSAProtectedConfigurationProvider (which uses RSA). The -pd switch is used to decrypt. Or programmatically:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ToString();
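
For reference, the aspnet_regiis invocations referred to above generally take the following shape; the application path and section name are illustrative:

aspnet_regiis -pe "connectionStrings" -app "/YourApp" -prov "RSAProtectedConfigurationProvider"
aspnet_regiis -pd "connectionStrings" -app "/YourApp"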

Of course there is a problem with this also. DPAPI uses LSASS, from which, again, an attacker can extract the hash from memory. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways, using PSCredential and SecureString. They all seem to rely on DPAPI, which as mentioned uses LSASS, which is open to exploitation.

Credential Guard and Device Guard leverage virtualisation-based security, by the look of it still using LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better: tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run on physical hardware directly; it can not run on virtual machines. This alone rules out many deployment scenarios.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.


With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and
a joy to use


These seem like bold claims.

Also worth considering is that Microsoft's new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.


Containers also help to provide some form of isolation. Allowing you to only have the user accounts to do what is necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web application's files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run, and a very limited set of users have any sort of access to the machine. If you are using Linux Containers, then you can reduce this even more if it is not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like those from all of the closed source OS vendors trying to pimp their latest money-spinning product. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time, if it has not happened already.

Least Privilege


An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data store, create that connection with a user's credentials that are only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions to the application to run stored queries. This way, if/when an attacker is able to
compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot with it anyway.



Put your services like data-stores on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible, each with the least privileges possible.

Data-store Compromise


As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • All following logins should be instructed to change passwords

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the following key derivation functions (KDFs) for all sensitive data. Not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths. Encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow, used in a process commonly known as key stretching. How long the key stretching takes can be tuned by increasing or decreasing the number of cycles used, often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users' credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats' hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received along with a unique salt. The salt should be stored with the hashed data.

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged with one of the mentioned KDFs. Why? Because hashing speed can not be slowed as hardware continues to get faster. Many organisations that have had their data-stores stolen, and they continue to be stolen on a weekly basis, could have avoided their secrets being compromised simply by using a decent KDF with salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 and the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user replaceable parts like PBKDF2. The PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain an understanding of at least all three KDFs. PBKDF2 is the oldest, so it is the most battle-tested, but there have also been lessons learnt from it that have been taken into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt, from the Blowfish cipher, to be very slow to initiate, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts, often found in the GPUs of the day (1999).
The hashing functions that PBKDF2 uses were a lot easier to speed up, due to the ease of parallelisation, as opposed to the Eksblowfish cipher attributes such as: far greater memory required for each hash, and small and frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware such as:

The sensitive data stored within a data-store should be the output of one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs.
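
For example, in NodeJS the bcrypt package exposes a simple API. The work factor of 12 here is just an illustrative starting point; tune it to your hardware and acceptable response time:

var bcrypt = require('bcrypt');

var workFactor = 12; // Key stretching cost; higher is slower.
var usersPassword = 'correct horse battery staple';

// Hashing: the generated salt and cost are embedded in the resulting hash string.
bcrypt.hash(usersPassword, workFactor, function (err, hash) {
   // Store hash in your data-store, then later, at login time:
   bcrypt.compare(usersPassword, hash, function (err, isMatch) {
      // isMatch is true when the supplied password is correct.
   });
});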

bcrypt brute-forcing

With well ordered rainbow tables and hardware with high FPGA counts, brute-forcing bcrypt is now feasible:

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

With moving any secrets from source code to configuration files, there is a possibility that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the secrets are still in source control.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, etc. There is a risk that you do not use this potential power and flexibility to your best advantage. Learn the ins and outs of whatever system you are using, and leverage its features to do the best at obscuring your secrets and, if possible, securing them.


node-config is an excellent configuration package with lots of great features. There is no security provided with node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure surrounding layers have beefed up security.


As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around the Microsoft so-called security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.


Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use


There is a risk that people will believe this.


As with Microsoft's "virtualisation-based security", Linux containers may slow system compromise down, but a determined attacker will find other ways to get around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker from moving forward.
Even when technical security is very good, an experienced attacker will use other mediums to get what they want, like social engineering, physical compromise, both, or some other attack vectors. Defence in depth is crucial in achieving good security: concentrate on the lowest hanging fruit first and work your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. Yes, it is probably not that hard to do, but it does require a breadth of thought and time. Some of the areas discussed could be missed. Having more than one person working on the task is often effective, as each person can bounce ideas off the other, and the other person is likely to notice areas that you may have missed, and vice versa.


Segmentation is useful, and a common technique for helping to build resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but if implemented it could increase complacency/over-confidence. Always be on your guard. Always expect that although you have done a lot to increase your security stance, a determined and experienced attacker is going to push buttons you may have never realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post compromise. Create a process (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways of obtaining the missing pieces of information they need. Think about paper shredders and the associated reassembly competitions. With patience, most puzzles can be cracked. If the compromise is an opportunistic type of attack, they will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs here that force us to focus on the adjacent layers. This is never a bad thing though. It helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.


You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developer's job more cumbersome. Some of the options provided may be worthwhile doing.


Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Least privilege is something you should be at least considering, and probably doing, in every case. It is one of those considerations that is worthwhile applying to most layers.


Segmenting resources is a common and effective measure to take, at least for slowing down attacks, and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here go without saying, although many organisations do not do them well if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any data that is sensitive in your data-store(s).

TL-WN722N on Kali VM on Linux Host

September 3, 2015

The following is the process I found to set up the pass-through of the very common USB TP-LINK TL-WN722N Wifi adapter (which is known to work well with Linux) to a VirtualBox Kali Linux 1.1.0 guest (the same process works for 2.0), bypassing the Linux Mint 17.1 (Rebecca) host.


VirtualBox 4.3.18_r96516

Wifi adapter

TP-LINK TL-WN722N Version 1.10

  • chip-set: Atheros ar9271
  • Vendor ID: 0cf3
  • Product ID: 9271
  • Module (driver): ath9k_htc


Useful commands

  • iwconfig
  • ifconfig
  • sudo lshw -C network
  • iwlist scan
  • lsusb
  • dmesg | grep -e wlan -e ath9
  • contents of /var/log/syslog
  • lsmod
  • Release DHCP assigned IP. Similar to Windows ipconfig /release
    dhclient -r [interface-name]
  • Renew DHCP assigned IP. Similar to Windows ipconfig /renew
    dhclient [interface-name]


I want to be able to access the internet on my laptop at the same time that I'm penetration testing a client network. I use my phone as a wireless hot-spot to access the internet. The easiest way to do this is to use the laptop's on-board wireless interface to connect to the phone's wireless hot-spot and pass the USB Wifi adapter straight through to the guest.

Taking the following statement: “The preferred way to get Internet over wlan into a VM is to use the WLAN adapter on the host and using normal NAT for the VM. Passing USB WLAN adapters to the guest is almost untested.” from here, I like to think of it as more of a challenge than anything else. It is, however, something to keep in mind; if you're prepared to persevere, you'll get it working.



When you plug the Wifi adapter into your laptop and run lsusb, you should see a line that looks like:

ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n

The first four hex digits are the Vendor ID and the second four hex digits are the Product ID.

If you have a look, from the bottom up, at the /var/log/syslog file, you'll see output similar to the following:

kernel: [ 98.212097] usb 2-2: USB disconnect, device number 3
kernel: [ 102.654780] usb 1-1: new high-speed USB device number 2 using ehci_hcd
kernel: [ 103.279004] usb 1-1: New USB device found, idVendor=0cf3, idProduct=7015
kernel: [ 103.279014] usb 1-1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
kernel: [ 103.279020] usb 1-1: Product: UB95
kernel: [ 103.279025] usb 1-1: Manufacturer: ATHEROS
kernel: [ 103.279030] usb 1-1: SerialNumber: 12345
kernel: [ 103.597849] usb 1-1: ath9k_htc: Transferred FW: htc_7010.fw, size: 72992
kernel: [ 104.596310] ath9k_htc 1-1:1.0: ath9k_htc: Target is unresponsive
kernel: [ 104.596328] Failed to initialize the device
kernel: [ 104.605694] ath9k_htc: probe of 1-1:1.0 failed with error -22

Provide USB privileges to guest

First of all you need to add the user that controls the guest to the vboxusers group on the host, so that VMs can control USB devices, then log out of and back in to the host.
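
On the host, that looks something like the following (substitute your own user name):

sudo usermod -aG vboxusers <your-username>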

Provide USB recognition to guest

Install the particular VirtualBox Extension Pack on to the host as discussed here. These packs can be found here. If you have an older version of VirtualBox, you can find them here. Don’t forget to checksum the pack before you add the extension.

  1. apt-get update
  2. apt-get upgrade
  3. apt-get dist-upgrade
  4. apt-get install linux-headers-$(uname -r)
  5. Shutdown Linux guest OS
  6. Apply extension to VirtualBox in the host at: File -> Preferences -> Extensions

Blacklist Wifi Module on Host

Unload the ath9k_htc module so the change takes effect immediately, and blacklist it so that it doesn't load on boot. The module needs to be blacklisted on the host in order for the guest to be able to load it. First, check to see if the module is loaded on the host with the following command:

lsmod | grep -e ath

We’re looking for ath9k_htc. If it is visible in the output produced from previous command, unload it with the following command:

modprobe -r ath9k_htc

Now you’ll need to create a blacklist file in /etc/modprobe.d/. Create /etc/modprobe.d/blacklist-ath9k.conf and add the following text into it and save:

blacklist ath9k_htc
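Or create the file from the terminal in one hit:

echo 'blacklist ath9k_htc' | sudo tee /etc/modprobe.d/blacklist-ath9k.conf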

Now go into the settings of your VM -> USB -> and add a Device Filter. I name this tl-wn722n and add the Vendor and Product IDs we discovered with lsusb. Make sure the “Enable USB 2.0 (EHCI) Controller” is enabled also.


Upgrade Driver on Guest

Start the VM.

Install the latest firmware-atheros package

On the guest, check to see which version of firmware-atheros is installed:

dpkg-query -l '*atheros*'

Will probably be 0.44kali whether you’re on Kali Linux 1.0.0 or 2.

aptitude show firmware-atheros

Will provide lots more information if you’re interested. So now we need to remove this old package:

apt-get remove --purge firmware-atheros

Add the jessie-backports (that’s Debian 8.0) repository to your /etc/apt/sources.list in the following form:

deb http://ftp.nz.debian.org/debian jessie-backports main contrib non-free

Change the country prefix to your country if you like and follow it up with an update:

apt-get update

Then install the later package from the new repository we just added:

apt-get install -t jessie-backports firmware-atheros

Now if you run the dpkg-query -l '*atheros*' command again, your package should be on version 0.44~bp8+1


Plug your Wifi adapter into your laptop.

In the Devices menu of your guest -> USB Devices, you should be able to select the “ATHEROS USB2.0 WLAN” adapter.

Run dmesg | grep htc and you should see something similar to the following printed:

[ 4.648701] usb 2-1: ath9k_htc: Firmware htc_9271.fw requested
[ 4.648805] usbcore: registered new interface driver ath9k_htc
[ 4.649951] usb 2-1: firmware: direct-loading firmware htc_9271.fw
[ 4.966479] usb 2-1: ath9k_htc: Transferred FW: htc_9271.fw, size: 50980
[ 5.217395] ath9k_htc 2-1:1.0: ath9k_htc: HTC initialized with 33 credits
[ 5.860808] ath9k_htc 2-1:1.0: ath9k_htc: FW Version: 1.3

You should now be able to select the phone’s wireless hot-spot you want to connect to in network manager.

Additional Resources

  1. ath9k_htc Debian Module
  2. VirtualBox information around setting up the TL-WN722N
  3. TP-LINK TL-WN722N wiki
  4. Loading and unloading Linux Kernel Modules
  5. Kernel Module Blacklisting

Keeping Your NodeJS Web App Running on Production Linux

June 27, 2015

All the following offerings that I’ve evaluated target different scenarios. I’ve listed the pros and cons for each of them and where I think they fit into a potential solution to monitor your web applications (I’m leaning toward NodeJS) and make sure they keep running. I’ve listed the goals I was looking to satisfy.

For me I have to have a good knowledge of the landscape before I commit to a decision and stand behind it. I like to know I’ve made the best decision based on all the facts that are publicly available. Therefore, as always, it’s my responsibility to make sure I’ve done my research in order to make an informed and, ideally, the best decision possible. I’m pretty sure my evaluation was unbiased, as I hadn’t used any of the offerings other than forever before.

I looked at quite a few more than what I’ve detailed below, but the following candidates I felt were worth spending some time on.

Keep in mind, that everyone’s requirements will be different, so rather than tell you which to use because I don’t know your situation, I’ve listed the attributes (positive, negative and neutral) that I think are worth considering when making this choice. After my evaluation I make some decisions and start the configuration.

Evaluation criteria

  1. Who is the creator? I favour teams rather than individuals, as individuals move on; where does that leave the product?
  2. Does it do what we need it to do? Goals address this.
  3. Do I foresee any integration problems with other required components?
  4. Cost in money. Is it free? I usually gravitate toward free software. It’s usually an easier sell to clients and management. Are there catches once you get further down the road? Usually open source projects are marketed as is.
  5. Cost in time. Is the set-up painful?
  6. How well does it appear to be supported? What do the users say?
  7. Documentation. Is there any / much? What is its quality?
  8. Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have a valid reason)?
  9. Release schedule. How often are releases being made? When was the last release?
  10. Gut feeling, intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criteria.

Goals

  1. Application should start automatically on system boot
  2. Application should be re-started if it dies or becomes un-responsive
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy (Nginx, node-http-proxy, Tinyproxy, Squid, Varnish, etc)
    2. Clustering and providing load balancing for your single threaded application
    3. Visibility of application statistics.
  4. Enough documentation to feel comfortable consuming the offering
  5. The offering should be production ready. This means: mature with a security conscious architecture.

Sysvinit, Upstart, systemd & Runit

You’ll have one of these running on your Linux box.

These are system and service managers for Linux. Upstart and the later systemd were developed as replacements for the traditional init daemon (Sysvinit); they all depend on init, an essential package that pulls in the default init system. In Debian, starting with Jessie, systemd is your default system and service manager.

There’s some quite helpful info on the differences between Sysvinit and systemd here.


As I have systemd installed out of the box on my test machine (Debian Jessie), I’ll be using this for my set-up.


  1. Well written comparison with Upstart, systemd, Runit and even Supervisor.

Running the likes of the below commands will provide some good details on how these packages interact with each other:

aptitude show sysvinit
aptitude show systemd
# and any others you think of

These system and service managers all run as PID 1 and start the rest of the system. Your Linux system will more than likely be using one of these to start tasks and services during boot, stop them during shutdown and supervise them while the system is running. Ideally you’re going to want to use something higher level to look after your NodeJS app. See the following candidates…


forever, and its web UI, can run any kind of script continuously (whether it is written in NodeJS or not). This wasn’t always the case though. It was originally targeted toward keeping NodeJS applications running.

Requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface area. Unless it’s essential, I’d rather do without NPM on a production server where we’re actively working to reduce the installed package count and disable everything else we can. I could install forever on a development box and then copy to the production server, but it starts to turn the simplicity of a node module into something not as simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

NPM Details

Does it Meet our Goals?

  1. Not without an extra script: crontab or similar (see the sketch following this list)
  2. The application will be re-started if it dies, but if its response times go up, there’s not much forever is going to do about it. It has no way of knowing.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing
    3. Visibility of application statistics could be added later with the likes of Monit or something else. But if you used Monit, then there wouldn’t really be a need for forever, as Monit does the little that forever does and is capable of so much more, yet is not pushy on what to do and how to do it. All the behaviour is defined with quite a nice syntax in a config file, or as many config files as you like.
  4. I think there is enough documentation to feel comfortable consuming it, as forever doesn’t do a lot, which doesn’t have to be a bad thing.
  5. The code itself is probably production ready, but I’ve heard quite a bit about stability issues. You’re also expected to have NPM installed (more attack surface) when we already have native package managers on the server(s).
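A rough sketch of the crontab approach mentioned in point 1, assuming forever was installed globally via NPM (so lives somewhere like /usr/local/bin/forever) and that /srv/my-nodejs-app/index.js is your entry point; both paths are placeholders:

# crontab -e as the user that should own the process, then add:
@reboot /usr/local/bin/forever start /srv/my-nodejs-app/index.js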

Overall Thoughts

For me, I’m looking for a tool set that does a bit more. Forever doesn’t satisfy my requirements. There’s often a balancing act between not doing enough and doing too much.



PM2 is younger than forever, but seems to have quite a few more features and does actually look quite good. I’m not sure about production ready though?

As mentioned on the github page: “PM2 is a production process manager for Node.js applications with a built-in load balancer”. This sounds, and at first glance looks, shiny. Very quickly though, you should realise there are a few security issues you need to be aware of.

The word “production” is used, but it requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface area. Unless it’s essential, and it shouldn’t be, I’d rather do without it on a production system. I could install PM2 on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not as simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

At the time of writing this, it’s less than a year old and in nodejs land, that means it’s very much in the immature realm. Do you really want to use something that young on a production server? I’d personally advise against it.

Yes, it’s very popular currently. That doesn’t tell me it’s ready for production though. It tells me the marketing is working.

Is your production server ready for PM2? That phrase alone tells me the mind-set behind the project. I’d much sooner see it the other way around: is PM2 ready for my production server? You’re going to need a staging server for this, unless you honestly want development tools installed on your production server (git, build-essential, NVM and an unstable version of node, 0.11.14 at the time of writing) and to run test scripts on it. Not for me or my clients, thanks.

If you’ve considered the above concerns and can justify adding the additional attack surface area, check out the features if you haven’t already.

Features that Stood Out

They’re also listed on the github repository. Just beware of some of the caveats, like for the load balancing: “we recommend the use of node#0.11.15+ or io.js#1.0.2+. We do not support node#0.10.* cluster module anymore!” 0.11.15 is unstable, but hang on, I thought PM2 was a “production” process manager? OK, so we’re happy to mix unstable in with something we label as production?

On top of NodeJS, PM2 will run the following scripts: bash, python, ruby, coffee, php, perl.

Start-up Script Generation

Although I’ve heard a few stories that this is fairly unreliable at the time of writing, which doesn’t surprise me, as the project is very young.


  1. Advanced Readme

Does it Meet our Goals?

  1. The feature exists; I’m unsure of how reliable it currently is though.
  2. Restarting the application if it dies shouldn’t be a problem. PM2 can also restart your application if it reaches a certain memory threshold. I haven’t seen anything around restarting based on response times or other application health issues.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Clustering and load-balancing is integrated but immature.
    3. PM2 provides a small collection of viewable statistics. Personally I’d want more, but I don’t see any reason why you’d have to swap PM2 because of this.
  4. There is reasonable official documentation for the age of the project. The community supplied documentation will need to catch up a bit, although there is a bit of that too. After working through all of the offerings and edge-cases, I feel as I usually do with NodeJS projects. The documentation doesn’t cover all the edge-cases and the development itself misses edge cases. Hopefully with time it’ll get better though as the project does look promising.
  5. I haven’t seen much that would make me think PM2 is production ready. It’s not yet mature. I don’t agree with its architecture.

Overall Thoughts

For me, the architecture doesn’t seem to be heading in the right direction to be used on a production web server where less is better. I’d like to see this change. If it did, I think it could be a serious contender for this space.


The following are better suited to monitoring and managing your applications. Other than Passenger, they should all be in your repository, which means trivial installs and configurations.



Supervisor is a process manager with a lot of features and a higher level of abstraction than the likes of the above Sysvinit, Upstart, systemd, Runit, etc, so it still needs to be run by an init daemon itself.

From the docs: “It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.” Supervisor monitors the state of processes, whereas a tool like Monit can perform many more types of tests and take whatever actions you define.

It’s in the Debian repositories  (trivial install on Debian and derivatives).


  1. Main web site
  2. There’s a good short comparison here.


Does it Meet our Goals?

  1. Application should start automatically on system boot: Yip. That’s what Supervisor does well.
  2. Application will be re-started if it dies, or becomes un-responsive. It’s often difficult to get accurate up/down status on processes on UNIX, and pidfiles often lie. Supervisord starts processes as subprocesses, so it always knows the true up/down status of its children. If your application becomes unresponsive, or can’t connect to its database or any other service/resource it needs to work as expected, then in order to monitor these events and respond accordingly your application can expose a health-check interface, like GET /healthcheck. If everything goes well it should return HTTP 200, if not then HTTP 5**. In some cases a restart of the process will solve the issue. httpok is a Supervisor event listener which makes GET requests to the configured URL; if the check fails or times out, httpok will restart the process. To enable httpok, a few lines have to be placed in supervisord.conf (see the sketch after this list).
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. This would be completely separate to supervisor.
    3. Visibility of application statistics could be added later with the likes of Monit or something else. For me, Supervisor doesn’t do enough; Monit does. Plus, if you need what Monit offers, you end up with three packages to think about: the init system, something like Supervisor (which is not an init system, so it kind of sits in the middle of the stack), and Monit. My way of thinking is: use the init system you already have to do the low level lifting, then something small to take care of everything else on your server that the init system is not really designed for, and Monit has done this job really well. Just keep in mind, this is not based on any bias; I hadn’t used Monit before this exercise.
  4. Supervisor is a mature product. It’s been around since 2004 and is still actively developed. The official and community provided docs are good.
  5. Yes it’s production ready. It’s proven itself.
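The httpok event listener mentioned in point 2 comes from the superlance package. A minimal sketch of the supervisord.conf entry; the program name, port and health-check path are assumptions, so match them to your own [program:x] section:

[eventlistener:httpok]
; On every TICK_60 event (once a minute) GET the health-check URL and
; restart the my-nodejs-app program if the request fails or times out.
command=httpok -p my-nodejs-app http://localhost:3000/healthcheck
events=TICK_60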


Overall Thoughts

The documentation is quite good, easy to read and understand, and I felt that the config was quite intuitive also. I already had systemd installed out of the box and didn’t see much point in installing Supervisor, as systemd appeared to do everything Supervisor could do, plus systemd is an init system (it sits at the bottom of the stack). In most scenarios you’re going to have Sysvinit or a replacement of it (that runs with a PID of 1), so in many cases Supervisor, although quite nice, is kind of redundant; and of course Ubuntu has Upstart.

Supervisor is better suited to running multiple scripts with the same runtime, for example a bunch of different client applications running on Node. This can be done with systemd and the others, but Supervisor is a better fit for this sort of thing.




Monit is a utility for monitoring and managing daemons or similar programs. It’s mature, actively maintained, free, open source and licensed with the GNU AGPL.

It’s in the Debian repositories (trivial install on Debian and derivatives). The home page told me the binary was just under 500kB. The install however produced a different number:

After this operation, 765 kB of additional disk space will be used.

Monit provides an impressive feature set for such a small package.

Monit provides far more visibility into the state of your application, and more control, than any of the offerings mentioned above. It’s also generic: it’ll manage and/or monitor anything you throw at it, and it has the right level of abstraction. Often when you start working with a product you find its limitations; they stop you moving forward and you end up settling for imperfection, or you swap the offering for something else providing you haven’t already invested too much effort into it. For me Monit hit the sweet spot and never seems to stop you in your tracks. There always seems to be an easy, or relatively easy, way to get any monitor-and-take-action sort of task done. What I also really like is that moving away from Monit should be relatively painless: the time investment is small and much of it will be transferable in many cases, as it’s just config in the control file.

Features that Stood Out

  • Ability to monitor files, directories, disks, processes, the system and other hosts.
  • Can perform emergency logrotates if a log file suddenly grows too large too fast
  • File Checksum Testing. This is good so long as the compromised server hasn’t also had the tool you’re using to perform your verification (md5sum or sha1sum) modified, which would be common. That’s why in cases like this, tools such as Stealth can be a good choice.
  • Testing of other attributes like ownership and access permissions. These are good, but again can easily be modified.
  • Monitoring directories using time-stamps. Good idea, but don’t rely solely on this. Time-stamps are easily modified with touch -r … providing you do it between Monit’s cycles, and you don’t necessarily know when they are unless you have permissions to look at Monit’s control file.
  • Monitoring space of file-systems
  • Has a built-in lightweight HTTP(S) interface you can use to browse the Monit server and check the status of all monitored services. From the web-interface you can start, stop and restart processes and disable or enable monitoring of services. Monit provides fine grained control over who/what can access the web interface or whether it’s even active or not. Again an excellent feature that you can choose to use or not even have the extra attack surface.
  • There’s also an aggregator (m/monit) that allows sys-admins to monitor and manage many hosts at a time. It also works well on mobile devices and is available at a one-off cost (reasonable price) to monitor all hosts.
  • Once you install Monit you have to actively enable the http daemon in the monitrc in order to run the Monit CLI and/or access the Monit http web UI. At first I thought “is this broken?” I couldn’t even run monit status (it’s a Monit command), yet ps told me Monit was running. Then I realised… it’s secure by default. You have to actually think about it in order to expose anything. It was this that confirmed Monit for me.
  • The Control File
  • Just like SSH, to protect the security of your control file and passwords the control file must have read-write permissions no more than 0700 (u=xrw,g=,o=); Monit will complain and exit otherwise.


The following is the documentation I used, in the order that I found most helpful:

  1. Main web site
  2. Official Documentation
  3. Source and links to other documentation including a QUICK START guide of about 6 lines.
  4. Adding Monit to systemd
  5. Release notes

Does it Meet our Goals?

  1. Application can start automatically on system boot
  2. Monit has a plethora of different types of tests it can perform and then follow up with actions based on the outcomes. HTTP is but one of them.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Yes, I don’t see any issues here
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. Monit will still monitor, restart and do what ever else you tell it to do.
    3. Monit provides application statistics to look at if that’s what you want, but it also goes further and provides directives for you to declare behaviour based on conditions that Monit checks for.
  4. Plenty of official and community supplied documentation
  5. Yes it’s production ready. It’s proven itself. Some extra education around some of the points I raised above with some of the security features would be good. If you could trust the host’s hashing programme (and other commonly trojanised programmes like find, ls, etc) that Monit uses, perhaps because you were monitoring it from a stealth controller (which had already taken a known good copy and produced its own bench-mark hash) or similar, then yes, you could use that feature of Monit with greater assurance that the results it was producing were in fact accurate. In saying that, you don’t have to use the feature, but it’s there if you want it, which I see as very positive so long as you understand what could go wrong and where.


Overall Thoughts

The accepted answer here is a pretty good mix and approach to using the right tools for each job. Monit has a lot of capabilities, none of which you must use, so it doesn’t get in your way, as many opinionated tools do that like to dictate how you do things and what you must use in order to do them. Monit allows you to leverage whatever you already have in your stack. You don’t have to install package managers or increase your attack surface other than [apt-get|aptitude] install monit. It’s easy to configure and has lots of good documentation.



I’ve looked at Passenger before and it looked quite good then. It still does, with one main caveat: it’s trying to do too much. One can easily get lost in the official documentation (compare the Monit install, a handful of commands covering all Linux distros on one page, with the Passenger install at approximately 10 pages). “Passenger is a web server and application server, designed to be fast, robust and lightweight. It runs your web apps with the least amount of hassle by taking care of almost all administrative heavy lifting for you.” I’d like to see the actual weight rather than just the relative term “lightweight”. To me it doesn’t look lightweight. The feeling I got when evaluating Passenger was similar to the feeling produced by my Ossec evaluation.

The learning curve is quite a bit steeper than for all the previous offerings. Passenger has strong opinions that, once you buy into them, could make it hard to use the tools you may want to swap in and out. I’m not seeing the UNIX philosophy here.

If you look at the Phusion Passenger Philosophy we see some note-worthy comments. “We believe no good software has bad documentation“. If your software is 100% intuitive, the need for documentation should be minimal. Few software products are 100% intuitive, because we only have so much time to develop it. The comment around “the Unix way” is interesting also. At this stage I’m not sure they’ve done better. I’d like to spend some time with someone or some team that has Passenger in production in a diverse environment and see how things are working out.

Passenger isn’t in the Debian repositories, so you would need to add the apt repository.

Passenger is six years old at the time of writing this, but the NodeJS support is only just over a year old.

Features that Didn’t Really Stand Out

Sadly there weren’t many that stood out for me.

  • Handle more traffic looked similar to Monit resource testing but without the detail. If there’s something Monit can’t do well, it’ll say “Hey, use this other tool and I’ll help you configure it to suit the way you want to work. If you don’t like it, swap it out for something else.” With Passenger it seems to integrate into everything rather than providing tools that communicate loosely, essentially locking you into a way of doing something that hopefully you like. It also talks about “Uses all available CPU cores”. If you’re using Monit you can use the NodeJS cluster module to take care of that, again leaving the best tool for the job to do what it does best.
  • Reduce maintenance
    • Keep your app running, even when it crashes. “Phusion Passenger supervises your application processes, restarting them when necessary. That way, your application will keep running, ensuring that your website stays up. Because this is automatic and builtin, you do not have to setup separate supervision systems like Monit, saving you time and effort.” But this is what Monit excels at, and it’s a much easier set-up than Passenger. This sort of marketing doesn’t sit right with me.
    • Host multiple apps at once. “Host multiple apps on a single server with minimal effort.” If we’re talking NodeJS web apps, then they are their own server; they host themselves. In this case it looks like Passenger is trying to solve a problem that doesn’t exist?
  • Improve security
    • Privilege separation. “If you host multiple apps on the same system, then you can easily run each app as a different Unix user, thereby separating privileges.” The Monit documentation says this: “If Monit is run as the super user, you can optionally run the program as a different user and/or group.” and goes on to provide examples of how it’s done. So again I don’t see anything new here. Other than the “Slow client protections”, which has side effects, that’s it for security considerations with Passenger. From what I’ve seen Monit has more in the way of security related features.
  • What I saw happening here was a lot of stuff that I actually didn’t need. Your mileage may vary.


Phusion Passenger is a commercial product that comes in enterprise, custom and open source (which is free and still has loads of features) variants.


The following is the documentation I used, in the order that I found most helpful:

  1. NodeJS tutorial (This got me started with how it could work with NodeJS)
  2. Main web site
  3. Documentation and support portal
  4. Design and Architecture
  5. User Guide Index
  6. Nginx specific User Guide
  7. Standalone User Guide
  8. Twitter, blog
  9. IRC: #passenger at irc.freenode.net. I was on there for several days. There was very little activity.


Does it Meet our Goals?

  1. Application should start automatically on system boot. There is no doubt that Passenger goes way beyond this aim.
  2. Application should be re-started if it dies or becomes un-responsive. There is no doubt that Passenger goes way beyond this aim.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Passenger provides Integrations into Nginx, Apache and stand-alone (provide your own proxy)
    2. Passenger scales up NodeJS processes and automatically load balances between them
    3. Passenger is advertised as offering easily viewable statistics.
  4. There is loads of official documentation. Not as much community contributed though, as it’s still young.
  5. From what I’ve seen so far, I’d say Passenger is production ready. I would like to see more around how security was baked into the architecture though before I committed to using it.

Overall Thoughts

I spent quite a while reading the documentation. I just think it’s doing too much. I prefer stronger, single-focused tools that do one job, do it well and play nicely with all the other kids in the sand pit. You pick the tool up and it’s just intuitive how to use it, and you end up reading the docs only to confirm how you think it should work. For me, this is not how Passenger is.

If you’re looking for something even more comprehensive, check out Zabbix. If you like to pay for your tools, check out Nagios if you haven’t already.

At this point it was fairly clear which components I’d be using and configuring to keep my NodeJS application monitored, alive and healthy, along with any other scripts or processes: systemd and Monit. If you’re on Ubuntu, you’d probably use Upstart instead of systemd, as it should already be your default init system. Going with the default for the init system should give you a quick start and provide plenty of power. Plus it’s well supported, reliable, feature rich and you can manage anything/everything you want without installing extra packages. For the next level up, I’d choose Monit. I’ve now used it in production and it’s taken care of everything above the init system. I feel it has a good level of abstraction, plenty of features, doesn’t get in the way and integrates nicely into your production OS.

Getting Started with Monit

So we’ve installed Monit with an apt-get install monit and we’re ready to start configuring it.

ps aux | grep -i monit

Will reveal that Monit is running:

/usr/bin/monit -c /etc/monit/monitrc

Now if you issue a sudo service monit restart, it won’t work as you can’t access the Monit CLI due to the httpd not running.

The first thing we need to do is make some changes to the control file (/etc/monit/monitrc in Debian). The control file has sensible defaults already. At this stage I don’t need a web UI accessible via localhost or any other hosts, but it still needs to be turned on and accessible by at least localhost. Here’s why:

Note that HTTP support is required for almost all Monit CLI interface operations, as CLI commands (such as “monit status”) are handled by communicating with the Monit background process via the HTTP interface. So basically you should have this enabled, though you can bind the HTTP interface to localhost only so Monit is not accessible from the outside.

In order to turn on the httpd, all you need in your control file for that is:

set httpd port 2812 and use address localhost # only accept connection from localhost
allow localhost # allow localhost to connect to the server

If you want to receive alerts via email, then you’ll need to configure that too. Then on reload you should get a start event alert, and a stop event when you quit.
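A minimal sketch of what that might look like in your monitrc; the mail server, credentials and address here are placeholders only:

set mailserver smtp.your.domain port 587
   username "monit-alerts" password "your-password"
set alert your-name@your.domain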

sudo monit reload

Now if you issue a curl localhost:2812 you should get the web UI’s response of an HTML page. Now you can start to play with the Monit CLI.

Now to stop the Monit background process use:

monit quit

Oh, you can find all the arguments you can throw at Monit here, or just issue a:

monit -h # will list all options.

To check the control file for syntax errors:

sudo monit -t

Also keep an eye on your log file which is specified in the control file: set logfile /var/log/monit.log

Right. So what happens when Monit dies…

Keep Monit Alive

Now you’re going to want to make sure your monitoring tool, which can be configured to take all sorts of actions, never just stops running, leaving you flying blind. No noise from your servers means all good, right? Not necessarily. Your monitoring tool just has to keep running. So let’s make sure of that now.

When Monit is apt-get install‘ed on Debian it gets installed and configured to run as a daemon. This is defined in Monit’s init script.
Monit’s init script is copied to /etc/init.d/ and the run levels are set up for it. This means whenever a run level is entered the init script will be run, taking either the single argument of stop (example: /etc/rc0.d/K01monit) or start (example: /etc/rc2.d/S17monit). Further details on run levels here.

systemd to the rescue

Monit is pretty stable, but if for some reason it dies, it won’t be automatically restarted.
This is where systemd comes in. systemd is installed out of the box on Debian Jessie onwards. Ubuntu uses Upstart, which is similar. Both SysV init and systemd can act as drop-in replacements for each other or even work alongside each other, which is the case in Debian Jessie. If you add a unit file which describes the properties of the process that you want to run, then issue some magic commands, the systemd unit file will take precedence over the init script (/etc/init.d/monit).

Before we get started, lets get some terminology established. The two concepts in systemd we need to know about are unit and target.

  1. A unit is a configuration file that describes the properties of the process that you’d like to run. There are many examples of these, and I’ll point you in their direction soon. They should have a [Unit] directive at a minimum. The syntax of the unit files and the target files was derived from Microsoft Windows .ini files. Now I think the idea is that if you want to have a [Service] directive within your unit file, then you would append .service to the end of your unit file name.
  2. A target is a grouping mechanism that allows systemd to start up groups of processes at the same time. This happens at every boot as processes are started at different run levels.

Now in Debian there are two places that systemd looks for unit files… In order from lowest to highest precedence, they are as follows:

  1. /lib/systemd/system/ (prefix with /usr dir for archlinux) unit files provided by installed packages. Have a look in here for many existing examples of unit files.
  2. /etc/systemd/system/ unit files created by the system administrator

As mentioned above, systemd should be the first process started on your Linux server. systemd reads the different targets and runs the scripts within the specific target’s “target.wants” directory (which just contains a collection of symbolic links to the unit files). For example, the target we’ll be working with is the multi-user.target (actually we don’t touch it, systemctl does that for us, as per the magic commands mentioned above). Just as systemd has two locations in which it looks for unit files, I think this is probably the same for the target files, although there weren’t any target files in the system administrator defined unit location, but there were some target.wants directories there.

systemd Monit Unit file

I found a template that Monit had already provided for a unit file in /usr/share/doc/monit/examples/monit.service. There’s also one for Upstart. Copy it to where the system administrator unit files should go (/etc/systemd/system/) and make the change (for example Restart=always) so that systemd restarts Monit if it dies for whatever reason. Check the Restart= options on the systemd.service man page. The following is what my initial unit file looked like:

[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/monit -I -c /etc/monit/monitrc
ExecStop=/usr/bin/monit -c /etc/monit/monitrc quit
ExecReload=/usr/bin/monit -c /etc/monit/monitrc reload
Restart=always

[Install]
WantedBy=multi-user.target

Now, some explanation. Most of this is pretty obvious. The After= directive just tells systemd to make sure the network.target unit has been acted on first, and of course network.target has After=network-pre.target, which doesn’t have a lot in it. I’m not going to go into this now, as I don’t really care too much about it. It works; it means the network interfaces have to be up first. If you want to know how and why, check this documentation. Type=simple: again, check the systemd.service man page.
Now, to have systemd control Monit, Monit must not run as a background process (the default). To do this, we can either add the set init statement to Monit’s control file or add the -I option to the command systemd runs, which is exactly what we’ve done above. The WantedBy= is the target that this specific unit is part of.

Now we need to tell systemd to create the symlinks in multi-user.target.wants directory and other things. See the man page for more details about what enable actually does if you want them. You’ll also need to start the unit.

Now what I like to do here is:

systemctl status /etc/systemd/system/monit.service

Then compare this output once we enable the service:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; disabled)
   Active: inactive (dead)
sudo systemctl enable /etc/systemd/system/monit.service
# systemd now knows about monit.service
systemctl status /etc/systemd/system/monit.service


● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; enabled)
   Active: inactive (dead)

Now start the service:

sudo systemctl start monit.service # there's a stop and restart also.

Now you can check the status of your Monit service again. This shows terse runtime information about the units or PID you specify (monit.service in our case).

sudo systemctl status monit.service

By default this command will show you 10 lines of output. The number of lines can be controlled with the --lines= option:

sudo systemctl --lines=20 status monit.service

Now try killing the Monit process. At the same time, you can watch the output of Monit in another terminal. tmux or screen is helpful for this:

sudo tail -f /var/log/monit.log
sudo kill -SIGTERM $(pidof monit)
# SIGTERM is a safe kill and is the default, so you don't actually need to specify it. Be patient, this may take a minute or two for the Monit process to terminate.

Or you can emulate a nastier termination with SIGKILL or even SEGV (which may kill monit faster).
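For example (a signal Monit gets no chance to handle):

sudo kill -SIGKILL $(pidof monit)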

Now when you run another status command you should see the PID has changed. This is because systemd has restarted Monit.

When you need to make modifications to the unit file, you’ll need to run the following command after save:

sudo systemctl daemon-reload

When you need to make modifications to the running services configuration file
/etc/monit/monitrc for example, you’ll need to run the following command after save:

sudo systemctl reload monit.service
# because systemd is now in control of Monit, rather than the before mentioned: sudo monit reload


Keep NodeJS Application Alive

Right, we know systemd is always going to be running, so let’s use it to take care of the coarse grained service control; that is, keeping your NodeJS application service alive.

Using systemd

systemd my-nodejs-app.service Unit file

You’ll need to know where your NodeJS binary is. The following will provide the path:

which nodejs # or: which node, depending on how NodeJS was installed

Now create a systemd unit file my-nodejs-app.service

[Unit]
Description=My amazing NodeJS application

[Service]
# systemctl start my-nodejs-app # to start the NodeJS script
ExecStart=[where nodejs binary lives] [where your app.js/index.js lives]
# systemctl stop my-nodejs-app # to stop the NodeJS script
# SIGTERM (15) - Termination signal. This is the default and safest way to kill a process.
# SIGKILL (9) - Kill signal. Use SIGKILL as a last resort to kill a process. This will not save data or cleanly kill the process.
ExecStop=/bin/kill -SIGTERM $MAINPID
# systemctl reload my-nodejs-app # to perform a zero-downtime restart.
# SIGHUP (1) - Hangup detected on controlling terminal or death of controlling process. Use SIGHUP to reload configuration files and open/close log files.
ExecReload=/bin/kill -HUP $MAINPID
# Restart the application automatically if it dies.
Restart=always
User=my-nodejs-app
# Group= is not really needed unless it's different, as the default group of the user is chosen without this option. Self documenting though, so I like to have it present.
Group=my-nodejs-app

[Install]
WantedBy=multi-user.target


Add the system user and group so systemd can actually run your service as the user you’ve specified.

sudo groupadd --system my-nodejs-app # this is not needed if you adduser like below...
getent group # to verify which groups exist.
sudo adduser --system --no-create-home --group my-nodejs-app # This will create a system group with the same name and ID of the user.
groups my-nodejs-app # to verify which groups the new user is in.

Now as we did above, go through the same procedure enable‘ing, start‘ing and verifying your new service.

Make sure you have your directory permissions set-up correctly and you should have a running NodeJS application that when it dies will be restarted automatically by systemd.

Don’t forget to backup all your new files and changes in case something happens to your server.

We’re done with systemd for now. Following are some useful resources I’ve used:


Using Monit

Now just configure your Monit control file. You can spend a lot of time here tweaking a lot more than just your NodeJS application. There are loads of examples around and the control file itself has lots of commented out examples also. You’ll find the following the most helpful:

There are a few things that had me stuck for a bit. By default Monit only sends alerts on change, not on every cycle if the condition stays the same. If you want an alert on every cycle while the condition persists, then when you set up your

set alert your-name@your.domain

append receive all alerts, so that it looks like this:

set alert your-name@your.domain receive all alerts

There are quite a few things you just work out as you go. The main part I used to health-check my NodeJS app was:

check host myhost with address
   start program = "/bin/systemctl start my-nodejs-app.service"
   stop program = "/bin/systemctl stop my-nodejs-app.service"
   if failed ping then alert
   if failed
      port 80 and
      protocol http and
      status = 200 # The default without status is failure if status code >= 400
      request /testdir with content = "some text on my web page" and
         then restart
   if 5 restarts within 5 cycles then alert
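For the HTTP test above to pass, the application has to answer on the path with the content Monit requests. A minimal sketch of such a route using Express (assuming that’s what your app is built on); the path, port and response text here are placeholders that must match your Monit test and reverse proxy set-up:

var express = require('express');
var app = express();

// Monit (or any other watcher) polls this route. Only return 200 with the
// expected content when the app considers itself healthy.
app.get('/testdir', function (req, res) {
  res.status(200).send('some text on my web page');
});

// Listen on whatever port your reverse proxy forwards to, or 80 if the app faces Monit directly.
app.listen(3000);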

I carry on and check things like:

  1. cpu and memory usage
  2. load averages
  3. File system space on all the mount points
  4. Check SSH to make sure it hasn’t been restarted by anything other than Monit (potentially swapping the binary or its config). Of course if an attacker kills Monit, systemd immediately restarts it and we get Monit alert(s). We also get real-time logging, hopefully to an off-site syslog server. Ideally your off-site syslog server also has alerts set up on particular log events. On top of that you should also have inactivity alerts set up so that if your log files are not generating the events you expect, you also receive alerts. Services like Dead Man’s Snitch or packages like Simple Event Correlator with Cron are good for this. On top of all that, if you have a file integrity checker that resides on another system that your host reveals no details of, and you’ve got it configured to check all the right file check-sums, dates, permissions, etc, you’re removing a lot of low hanging fruit for someone wanting to compromise your system.
  5. Directory permissions, uid, gid and checksums. Of course you’re also going to have to make sure the tools that Monit uses to do these checks haven’t been modified.


If you find anything I haven’t explained clearly, or you need a hand with any of this just leave a comment. Cheers.

Evaluation of Host Intrusion Detection Systems (HIDS)

May 30, 2015

Followed up with a test deployment and drive.

The best time to install a HIDS is on a fresh install before you open the host up to the internet or even your LAN if it’s corporate. Of course if you don’t have that luxury, there are a bunch of tools that can help you determine if you’re already owned. Be sure to run one or more over your target system before your HIDS bench-marks it.

The reason I chose Stealth and OSSEC to take further into an evaluation was because they rose to the top of a preliminary evaluation I performed during a recent Debian Web server hardening process where I looked at several other HIDS as well.




ossec-hids on github

Who’s Behind Ossec

  • Many developers, contributors, managers, reviewers, translators (in fact the OSSEC team looks almost as large as the Stealth user base)


Lots of documentation. Not always the easiest to navigate. Lots of buzz on the inter-webs.
Several books.

Community / Communication

IRC channel #ossec at irc.freenode.org, although it’s not very active.


  • Manager (sometimes called server): does most of the work monitoring the Agents. It stores the file integrity checking databases, the logs, events and system auditing entries, rules, decoders, major configuration options.
  • Agents: small collections of programs installed on the machines we are interested in monitoring. Agents collect information and forward it to the manager for analysis and correlation.

There are quite a few other ancillary components also.


You can also go the agent-less route which may allow the Manager to perform file integrity checks using agent-less scripts. As with Stealth, you’ve still got the issue of needing to be root in order to read some of the files.

Agents can be installed on VMware ESX but from what I’ve read it’s quite a bit of work.

Features (What does it do)

  • File integrity checking
  • Rootkit detection
  • Real-time log file monitoring and analysis (you may already have something else doing this)
  • Intrusion Prevention System (IPS) features as well: blocking attacks in real-time
  • Alerts can go to a database (MySQL or PostgreSQL) or other types of outputs
  • There is a PHP web UI that runs on Apache if you would rather look at pretty outputs vs log files.

What I like

To me, the ability to scan in real-time off-sets the fact that the agents need binaries installed. This hinders the attacker from covering their tracks.

Can be configured to scan systems in realtime based on inotify events.
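A rough sketch of the sort of syscheck entry in ossec.conf that turns this on; the directories and frequency here are placeholders only:

<syscheck>
  <!-- Full scheduled scan as a fall-back (seconds). -->
  <frequency>43200</frequency>
  <!-- realtime="yes" uses inotify, so changes to these directories are reported as they happen. -->
  <directories realtime="yes" check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
</syscheck>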

Backed by a large company Trend Micro.

Options. Install options for starters. You’ve got the options of:

  • Agent-less installation as described above
  • Local installation: Used to secure and protect a single host
  • Agent installation: Used to secure and protect hosts while reporting back to a
    central OSSEC server
  • Server installation: Used to aggregate information

Can install a web UI on the manager, so you need Apache, PHP, MySQL.

What I don’t like

  • Unlike Stealth, something has to be installed on the agents
  • The packages are not in the standard repositories. The downloads, PGP keys and directions are here.
  • I think Ossec may be doing too much, and if you don’t like the way it does one thing, you may be stuck with it. Personally I really like the idea of a tool doing one thing, doing it well and providing plenty of configuration options to change the way it does its one thing. This provides huge flexibility and minimises your dependency on a suite of tools and/or libraries
  • Information overload. There seems to be a lot to get your head around to get it set-up. There are a lot of install options documented (books, interwebs, official docs). It takes a bit to work out exactly the best procedure for your environment. Following are some of the resources I used:
    1. Some official documentation
    2. Perving in the repository
    3. User take one install
    4. User take two install
    5. User take three install


Stealth file integrity checker



Who’s Behind Stealth

Author: Frank B. Brokken. An admirable job for one person. Frank is not a fly-by-nighter though. Stealth was first presented to Congress in 2003. It’s still actively maintained and used by a few. It’s one of GNU/Linux’s dirty little secrets I think. It’s a great idea, makes a tricky job simple and does it in an elegant way.


  • 3.00.00 (2014-08-29)
  • 2.11.03 (2013-06-18)
    • Once you install Stealth, all the documentation can be found by sudo updatedb && locate stealth. I most commonly used: HTML docs (/usr/share/doc/stealth-doc/manual/html/) and (/usr/share/doc/stealth-doc/manual/pdf/stealth.pdf) for easy searching across the HTML docs
    • man page (/usr/share/doc/stealth/stealthman.html)
    • Examples: (/usr/share/doc/stealth/examples/)
  • More covered in my demo set-up below


  • 3.00.00 (2014-08-29) is in the repository for Debian Jessie
  • 2.11.03-2 (2013-06-18) This is as recent as you can go for Linux Mint Qiana (17) and Rebecca (17.1) within the repositories, unless you want to go out of band. There are quite a few dependencies ldd will show you:
    libbobcat.so.3 => /usr/lib/libbobcat.so.3 (0xb765d000)
    libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xb7641000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb754e000)
    libm.so.6 => /lib/i386-linux-gnu/i686/cmov/libm.so.6 (0xb7508000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74eb000)
    libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb7340000)
    libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xb71ee000)
    libcrypto.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7022000)
    libreadline.so.6 => /lib/i386-linux-gnu/libreadline.so.6 (0xb6fdc000)
    libmilter.so.1.0.1 => /usr/lib/i386-linux-gnu/libmilter.so.1.0.1 (0xb6fcb000)
    /lib/ld-linux.so.2 (0xb7748000)
    libxcb.so.1 => /usr/lib/i386-linux-gnu/libxcb.so.1 (0xb6fa5000)
    libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xb6fa0000)
    libtinfo.so.5 => /lib/i386-linux-gnu/libtinfo.so.5 (0xb6f7c000)
    libXau.so.6 => /usr/lib/i386-linux-gnu/libXau.so.6 (0xb6f78000)
    libXdmcp.so.6 => /usr/lib/i386-linux-gnu/libXdmcp.so.6 (0xb6f72000)
  • 2.10.01 (2012-10-04) is in the repository for Debian Wheezy

Community / Communication

No community really. I see it as one of the dirty little secrets that I’m surprised many diligent sysadmins haven’t jumped on. The author is happy to answer emails. The author doesn’t market.



The controller is the computer initiating the scan.

Needs two kinds of outgoing services:

  1. ssh to reach the clients
  2. mail transport agent (MTA)(sendmail, postfix)

Considerations for the controller:

  1. No public access.
  2. All inbound services should be denied.
  3. Access only via its console.
  4. Physically secure location (one would think goes without saying, but you may be surprised).
  5. Sensitive information of the clients is stored on the controller.
  6. Password-less access to the clients for anyone who gains controller root access, unless ssh-cron is used, which appears to have been around since 2014-05.


The clients are the computer/s being scanned. I don’t see any reason why a Stealth solution couldn’t be set-up to look after many clients.


The controller stores one to many policy files, each of which is specific to a single client and contains use directives and commands. It’s recommended practice to copy the client programmes that are used extensively during the integrity scans, such as the hashing programme sha1sum, find and others, to the controller and take bench-mark hashes of them. Subsequent runs do the same and compare with the initial hashes stored.

Features (What does it do)

File integrity tests leaving virtually no sediments on the tested client.

Stealth subscribes to the “dark cockpit” approach. I.E. no mail is sent when no changes were detected. If you have a MTA, Stealth can be configured to send emails on changes it finds.

What I like

  • Its simplicity. There is one package to install on the controller and nothing to install on the client machines. The client just needs to have the controller’s SSH public key (see the ssh-copy-id sketch after this list). You will need a Mail Transfer Agent on your controller if you don’t already have one; my test machine (Linux Mint) didn’t have one.
  • Rather than just modifying the likes of sha1sum on the clients that Stealth uses to perform its integrity checks, an attacker would somehow have to fool Stealth into thinking that the changed hash of the sha1sum it’s just copied to the controller is the same as the previously recorded hash. If the previously recorded hash is removed or does not match the current hash, then Stealth will fire off an alert.
  • It’s in the Debian repositories. Which is a little surprising considering I couldn’t find any test suite results.
  • The whole idea behind it. Systems being monitored give little appearance that they’re being monitored, other than I think the presence of a single SSH login when Stealth first starts in the auth.log. This could actually be months ago, as the connection remains active for the life of Stealth. The login could be from a user doing anything on the client. It’s very discrete.
  • Unpredictability of Stealth runs is offered through Stealth’s --random-interval and --repeat options. E.g., --repeat 60 --random-interval 30 results in new Stealth-runs on average every 75 seconds.
  • Subscribes to the Unix philosophy: “do one thing and do it well”
  • Stealth’s author is very approachable and open. After talking with Frank and suggesting some ideas to promote Stealth and it’s community, Frank started a discussion list.
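Getting the controller’s public key onto a client can be as simple as the following; the account name is an assumption (and, as discussed under “What I don’t like” below, you may prefer a low-privileged account over root):

ssh-copy-id -i ~/.ssh/id_rsa.pub stealth-user@client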

What I don’t like

  • Lack of visible code reviews and testing. Yes, it’s in Debian, but so were OpenSSL and Bash
  • One man band. Support is provided by one person alone, via email. Compare this with the likes of Ossec which has …
  • Lack of use cases. I don’t see anyone using/abusing it. Although Frank did send me some contacts of other people that are using it, so again, a very helpful author. Can’t find much on the interwebs. The documentation has clear signs that it’s been written and is targeting people already familiar with the tool. This is understandable as the author has been working on this project for nine years and could possibly be disconnected with what’s involved for someone completely new to the project to dive in and start using it. In saying that, that’s what I did and so far it worked out well.
  • This tells me that either very few are using it or it has no bugs and the install and configuration is stupidly straight forward or both.
  • Small user base. This is how many people are happy to reveal they are using Stealth.
  • Reading through the userguide, the following put me off a little: “preferably the public ssh-identity key of the controller should be placed in the client’s root .ssh/authorized_keys file, granting the controller root access to the client. Root access is normally needed to gain access to all directories and files of the client’s file system.” I never allow SSH root access to servers, so I’m not about to start. What’s worse is that Stealth SSHing from controller to client with a key-pair can only do so automatically if the pass-phrase is blank. If it’s not blank then someone has to drive Stealth, and the whole idea of Stealth (as far as I can tell) is to be an automatic file integrity checker.
    There are however some work-arounds to this. ssh-cron can run scheduled Stealth jobs, but needs to acquire the pass-phrase from the user once. It does this via ssh-askpass, which is an X11 based pass-phrase input dialog. If you’re running ssh-cron and Stealth from a server (which would be a large number of potential users, I would think) you won’t have X11, so in that case ssh-cron is out of the question. At least that’s how I understand it from the man page and emails from Frank. Frank mentioned in an email: “Stealth itself could perfectly well be started `by hand’ setting up the connection using, e.g., an existing ssh private-key, which could thereafter completely be removed from the system running Stealth. The running Stealth process would then continue to use the established connection for as long as it’s running. This may be a very long time if the --daemon option is used.” I’ve used Monit to do any checks on the client that need root access. This way Stealth can run as a low privileged user.
  • In order to use an SSH key-pair with a passphrase and have your controller resume scans on reboot, you’re going to need ssh-cron. Some distros like Mint and Ubuntu only have very old versions of libbobcat (3.19.01 (December 2013)) in their repositories. You could re-build if you fancy the dependency issues it may bring. Failing that, use Debian, which is way more up to date, or just fire Stealth off manually each time you reboot your controller machine and run it as a daemon with arguments such as --keep-alive (or --daemon if running stealth >= 4.00.00) and --repeat. This will cause Stealth to keep running (sleeping) most of the time, then doing its checks at the intervals you define.


In making all of my considerations, I changed my mind quite a few times on which offerings were most suited to which environments. I see this as a good thing, as it means my evaluations were based on the real merits of each offering rather than on any biases.

If you already have real-time logging to an off-site syslog server set up, along with alerting, then OSSEC would largely provide redundant features.

If you don’t already have real-time logging to an off-site syslog server, then OSSEC can help here.

The simplicity of Stealth, its flatter learning curve and its overall philosophy are what won me over.

Stealth Up and Running

I tested this out on a Mint installation.

Installed stealth and stealth-doc (2.11.03-2) via synaptic package manager. Then just did a locate for stealth to find the docs and other example files. The following are the files I used for documentation, how I used them and the tab order that made sense to me:

  1. The main documentation index: file:///usr/share/doc/stealth-doc/manual/html/stealth.html
  2. Chapter one introduction: file:///usr/share/doc/stealth-doc/manual/html/stealth01.html
  3. Chapter three to help build up a policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth03.html
  4. Chapter five for running Stealth and building up policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth05.html
  5. Chapter six for running Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth06.html
  6. Chapter seven for arguments to pass to Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth07.html
  7. Chapter eight for error messages: file:///usr/share/doc/stealth-doc/manual/html/stealth08.html
  8. The Man page: file:///usr/share/doc/stealth/stealthman.html
  9. Policy file examples: file:///usr/share/doc/stealth/examples/
  10. Useful scripts to use with Stealth: file:///usr/share/doc/stealth/scripts/usr/bin/
  11. All of the documentation in simple text format (good for searching across chapters for strings): file:///usr/share/doc/stealth-doc/manual/text/stealth.txt

Files I would need to copy and modify were:

  • /usr/share/doc/stealth/scripts/usr/bin/stealthcleanup.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthcron.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthmail.gz

Files I used for reference to build up a policy file:

  • /usr/share/doc/stealth/examples/demo.pol.gz
  • /usr/share/doc/stealth/examples/localhost.pol.gz
  • /usr/share/doc/stealth/examples/simple.pol.gz

As mentioned above, providing you have a working MTA, Stealth will just do its thing when you run it. The next step is to schedule its runs; this can also be done (as mentioned above) with a pseudo-random interval (a sketch follows).
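
For example, a controller crontab entry along these lines (a sketch only; the destination I copied stealthcron to and the interval are assumptions, bash is needed for $RANDOM, and the % is escaped because cron treats it specially):

SHELL=/bin/bash
# Sleep a pseudo-random 0-899 seconds so scan times are harder to predict,
# then kick off the Stealth run via the stealthcron helper script.
0 * * * * sleep $((RANDOM \% 900)) && /usr/local/bin/stealthcron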

Feel free to leave a comment if you need help setting Stealth up. It did take a bit of fiddling, but it does what it says on the box very well.

Web Server Log Management

April 25, 2015

As part of the ongoing work around preparing a Debian web server to host applications accessible from the WWW, I performed some research and analysis, made decisions along the way, and implemented a first-stage logging strategy. I’ve done similar set-ups many times before, but thought it worth sharing my experience for all to learn something from it and/or provide input, recommendations and corrections to the process, so we all get to improve.

The main system loggers I looked into

  • GNU syslogd, which I don’t think is being developed any more? Correct me if I’m wrong. Most Linux distributions no longer ship with this. It only supports UDP and is also a bit lacking in features; from what I gather it is single-threaded. I didn’t spend long looking at this as there wasn’t much point. The following two offerings are the main players.
  • rsyslog: which ships with Debian and most other Linux distros now I believe. I like to do as little as possible and rsyslog fits this description for me. The documentation seems pretty good. Rainer Gerhards wrote rsyslog and his blog provides some good insights. Supports UDP, TCP. Can send over TLS. There is also the Reliable Event Logging Protocol (RELP) which Rainer created.
    rsyslog is great at gathering, transporting, storing log messages and includes some really neat functionality for dividing the logs. It’s not designed to alert on logs. That’s where the likes of Simple Event Correlator (SEC) comes in. Rainer discusses why TCP isn’t as reliable as many think here.
  • syslog-ng: I didn’t spend too long here, as I didn’t see any features that I needed that were better than the defaults of rsyslog. It can correlate log messages, both real-time and off-line, and supports reliable and encrypted transport using TCP and TLS, message filtering, sorting, pre-processing and log normalisation.

There are a few comparisons around. Most of the ones I’ve seen are a bit biased and often out of date.


The requirements I was working toward:

  • Record events and have them securely transferred to another syslog server in real-time, or as close to it as possible, so that potential attackers don’t have time to modify them on the local system before they’re replicated to another location
  • Reliability (resilience / ability to recover connectivity)
  • Extensibility: ability to add more machines and be able to aggregate events from many sources on many machines
  • Receive notifications from the upstream syslog server of specific events. No HIDS is going to remove the need to reinstall your system if you are not notified in time and an attacker plants and activates their root-kit.
  • Receive notifications from the upstream syslog server of lack of events. The network is down for example.

Environmental Considerations

A couple of servers in the mix:

FreeNAS File Server

Recent versions can send their syslog events to a syslog server. With some work, it looks like FreeNAS can be set up to act as a syslog server.

pfSense Router

Can send log events, but only by UDP by the look of it.

Following are the two strategies that emerged. You can see by the detail that I went down the path of the first one initially. It was the path of least resistance / quickest to set up. I’m going to be moving away from papertrail toward strategy two, mainly because I’ve had a few issues where messages have been getting lost, which have been very hard to track down (I’ve spent over a week on it). As the sender, you have no insight into what papertrail is doing. The support team don’t provide a lot of insight into their service when you have to trouble-shoot things. They have been as helpful as they can be, but I’ve expressed concern around them being unable to trouble-shoot their own services.


Strategy One

Rsyslog, TCP, local queuing, TLS, papertrail for your syslog server (PT doesn’t support RELP, but they say that’s because their clients haven’t seen any issues with reliability using plain TCP over TLS with local queuing). My guess is they haven’t looked hard enough. I must be the first then. Beware!

As I was setting this up and watching both ends, we had an internet outage of just over an hour. At that stage we had very few events being generated, so it was trivial to verify both ends. I noticed that once the ISP’s router was back on-line and the events from the queue had moved to papertrail, there was in fact one missing.

Why did Rainer Gerhards create RELP if TCP with queues was good enough? That question played on me for a while. In the end, it was obvious that TCP without RELP isn’t good enough.
At this stage it looks like the queues may lose messages. Rainer says things like “In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that the action is not ready, which means the remote server is off-line. This can be detected with plain TCP syslog and RELP“, but it can be detected without RELP.

You can aggregate log files with rsyslog or by using papertrail’s remote_syslog daemon.

Alerting is available, including for inactivity of events.

Papertrail’s documentation is good and support is reasonable, though due to the huge amounts of traffic they have to deal with, they are unable to trouble-shoot any issues you may have. If you still want to go down the papertrail path, to get started, work through this, which sets up your rsyslog to use UDP (specified in /etc/rsyslog.conf by a single @ in front of the target syslog server). I want something more reliable than that, so I use @@, which specifies TCP.
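
For reference, the forwarding rule in /etc/rsyslog.conf looks something like this (the host and port below are the ones from my set-up; use whatever papertrail assigns you):

# UDP (single @):
# *.* @logs2.papertrailapp.com:39871
# Plain TCP (double @@), which is what I use:
*.* @@logs2.papertrailapp.com:39871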

As we’re going to be sending our logs over the internet for now, we need TLS. Check papertrail’s CA server bundle for integrity:

curl https://papertrailapp.com/tools/papertrail-bundle.pem | md5sum

Should be: c75ce425e553e416bde4e412439e3d09

If all is good, throw the contents of that URL into a file called papertrail-bundle.pem.
Then scp the papertrail-bundle.pem into the web server’s /etc dir. The exact command depends on whether you’re already on the web server and want to pull, or whether you’re somewhere else and want to push; a hedged example of the push variant follows. Once the file is in place, make sure the ownership on it is correct (the chown below).
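
For example, pushing from wherever you downloaded the bundle (the user and IP are placeholders):

scp ./papertrail-bundle.pem me@<web server IP>:/etc/papertrail-bundle.pem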

chown root:root papertrail-bundle.pem

install rsyslog-gnutls

apt-get install rsyslog-gnutls

Add the TLS config

$DefaultNetstreamDriverCAFile /etc/papertrail-bundle.pem # trust these CAs
$ActionSendStreamDriver gtls # use gtls netstream driver
$ActionSendStreamDriverMode 1 # require TLS
$ActionSendStreamDriverAuthMode x509/name # authenticate by host-name
$ActionSendStreamDriverPermittedPeer *.papertrailapp.com

to your /etc/rsyslog.conf. Create an egress rule on your router to let traffic out to destination port 39871. Then restart rsyslog:

sudo service rsyslog restart

To generate a log message that uses your system syslogd config /etc/rsyslog.conf, run:

logger "hi"

This should log “hi” to /var/log/messages and also to papertrail, but in my case it wasn’t reaching papertrail.

# Show a live update of the last 10 lines (by default) of /var/log/messages
sudo tail -f [-n <number of lines to tail>] /var/log/messages

OK, so let’s run rsyslog in config checking mode:

/usr/sbin/rsyslogd -f /etc/rsyslog.conf -N1

If all is good, the output looks like:

rsyslogd: version <the version number>, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.


Some resources I worked through while trouble-shooting:

  1. https://www.loggly.com/docs/troubleshooting-rsyslog/
  2. http://help.papertrailapp.com/
  3. http://help.papertrailapp.com/kb/configuration/troubleshooting-remote-syslog-reachability/
  4. /usr/sbin/rsyslogd -version will provide the installed version and supported features.

These didn’t help a lot, as I don’t have telnet installed, I can’t ping from the DMZ as ICMP is not allowed out, and I’m not going to install tcpdump or strace on a production server. The more you have running, the more attack surface you have and the greater the opportunities to exploit it.

So how do we tell if rsyslogd is actually running if it doesn’t appear to be doing anything useful?

pidof rsyslogd


/etc/init.d/rsyslog status

Showing which files rsyslogd has open can be useful:

lsof -p <rsyslogd pid>

or just combine it with the result of pidof rsyslogd:

sudo lsof -p $(pidof rsyslogd)

To start with I had a line like:

rsyslogd 3426 root 8u IPv4 9636 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (SYN_SENT)

Which obviously showed rsyslogd’s SYN packets were not getting through. I’ve had some discussion with Troy from PT support around the reliability of plain TCP over TLS without RELP. I think if the server is business critical, then strategy two “may be” the better option. Troy has assured me that they’ve never had any issues with logs being lost due to lack of reliability without RELP. Troy also pointed me to their recommended local queue options. After adding the queue tweaks and an rsyslogd restart, it resulted in:

rsyslogd 3615 root 8u IPv4 9766 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (ESTABLISHED)

I could now see events in the papertrail web UI in real-time.
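
For the record, the queue tweaks were along these lines (a sketch of rsyslog’s disk-assisted action queue settings, not necessarily papertrail’s exact recommendation; they go above the forwarding rule):

$ActionQueueType LinkedList           # in-memory linked-list queue for the forwarding action
$ActionQueueFileName papertrail_queue # spool to disk when the queue can't be drained
$ActionQueueMaxDiskSpace 1g           # cap the on-disk spool
$ActionQueueSaveOnShutdown on         # persist queued messages across restarts
$ActionResumeRetryCount -1            # retry forever if the destination is unreachable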

Socket Statistics (ss) (the better netstat) should also show the established connection.
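
For example (a quick check; the output format will vary):

sudo ss -tnp | grep rsyslogd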

By default papertrail accepts TCP over TLS (TLS encryption check-box on, Plain text check-box off) and UDP. So if your TLS isn’t set up properly, your events won’t be accepted by papertrail. I later confirmed this to be true.

Check that our Logs are Commuting over TLS

Now, the constraints: without installing anything on the web server or router, without physically touching the server sending packets to papertrail or the router, using a switch (ubiquitous) rather than a hub, with no wire tap or multi-network-interfaced computer, and with no switch monitoring port available (those are found on expensive enterprise grade switches, along with the much needed access). We’re basically down to two approaches I can think of, and I really couldn’t be bothered getting up out of my chair.

  1. MAC flooding with the help of macof, which is a utility from the dsniff suite. This essentially causes your switch to go into a “failopen mode” where it acts like a hub and broadcasts its packets to every port.

    MAC Flooding

  2. Man in the Middle (MiTM) with some help from ARP spoofing or poisoning. I decided to choose the second option, as it’s a little more elegant.

    ARP Spoofing

On our MitM box, I set a static IP: address, netmask, gateway in /etc/network/interfaces and add domain, search and nameservers to the /etc/resolv.conf.

Follow that up with a service network-manager restart

On the web server run:

ifconfig -a

to get its MAC: <web server MAC>. On the MitM box run the same command to get its MAC: <MitM box MAC>.
On web server run:

ip neighbour

to find the MACs associated with IPs (the local ARP table). The router was: <router MAC>.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> REACHABLE
<router IP> dev eth0 lladdr <router MAC> REACHABLE

Now you need to turn your MitM box into a router temporarily. On the MitM box run

cat /proc/sys/net/ipv4/ip_forward

You’ll see a ‘1’ if forwarding is on. If it’s not, throw a ‘1’ into the file:

echo 1 > /proc/sys/net/ipv4/ip_forward

and check again to make sure. Now on the MitM box run

arpspoof -t <web server IP> <router IP>

This will continue to notify <web server IP> that our (MitM box) MAC address belongs to <router IP>. Essentially, we (the MitM box) are <router IP> to the <web server IP> box, but our IP address doesn’t change. Now on the web server you can see that its ARP table has been updated, and because arpspoof keeps running, it keeps telling <web server IP> that our MitM box is the router.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> STALE
<router IP> dev eth0 lladdr <MitM box MAC> REACHABLE

Now on our MitM box, while arpspoof continues to run, we start Wireshark listening on our eth0 interface (or whatever interface you’re using), and you can see that all packets the web server is sending, we are intercepting and forwarding (routing) on to the gateway.

Now Wireshark clearly showed that the data was encrypted. I commented out the five TLS config lines in the /etc/rsyslog.conf file -> saved -> restarted rsyslog -> turned on “Plain text” in papertrail and could now see the messages in clear text. Now when I turned off “Plain text” papertrail would no longer accept syslog events. Excellent!

One of the nice things about arpspoof is that it re-applies the original ARP’s once it’s done.

You can also tell arpspoof to poison the router’s ARP table. This way any traffic going to the web server via the router, not originating from the web server, will be routed through our MitM box as well.
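
That would be something like the following, run alongside the first arpspoof (the IPs are placeholders, as above):

arpspoof -t <router IP> <web server IP>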

Don’t forget to revert the change to /proc/sys/net/ipv4/ip_forward.

Exporting Wireshark Capture

You can use the File->Save As… option here for a collection of output types, or the way I usually do it is:

  1. First completely expand all the frames you want visible in your capture file
  2. File->Export Packet Dissections->as “Plain Text” file…
  3. Check the “All packets” check-box
  4. Check the “Packet summary line” check-box
  5. Check the “Packet details:” check-box and the “As displayed”
  6. OK

Trouble-shooting messages that papertrail never shows

To run rsyslogd in debug

Check to see which arguments get passed into rsyslogd to run as a daemon in /etc/init.d/rsyslog and /etc/default/rsyslog. You’ll probably see a RSYSLOGD_OPTIONS="". There may be some arguments between the quotes.

sudo service rsyslog stop
sudo /usr/sbin/rsyslogd [your options here] -dn >> ~/rsyslog-debug.log

The debug log can be quite useful for trouble-shooting. Also keep your eye on stderr, as you can see if it’s writing anything out (most system start-up scripts throw this away).
Once you’ve finished collecting the log:

sudo service rsyslog start

To see if rsyslog is running

pidof rsyslogd
# or
/etc/init.d/rsyslog status

Turn on the impstats module

The stats it produces show when you run into errors with an output, and also the state of the queues.
You can also run impstats on the receiving machine if it’s in your control; papertrail obviously is not.
Put something like the following near the top of your rsyslog.conf file and restart rsyslog (the interval and stats file path are just examples):

# Turn on some internal counters to trouble-shoot missing messages
module(load="impstats"
       interval="600"
       severity="7"
       log.syslog="off"
       /* need to turn log stream logging off */
       log.file="/var/log/rsyslog-stats.log")
# End turn on some internal counters to trouble-shoot missing messages

Now if you get an error like:

rsyslogd-2039: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]

You can just change the /dev/xconsole to /dev/console.
xconsole is still in the config file for legacy reasons; it should have been cleaned up by the package maintainers.

GnuTLS error in rsyslog-debug.log

By running rsyslogd manually in debug mode, I found an error when the message failed to send:

unexpected GnuTLS error -53 in nsd_gtls.c:1571

Standard Error when running rsyslogd manually produces:

GnuTLS error: Error in the push function

With some help from the GnuTLS mailing list:

“That means that send() returned -1 for some reason.” You can enable more output by adding an environment variable GNUTLS_DEBUG_LEVEL=9 prior to running the application, and that should at least provide you with the errno. This didn’t actually provide any more detail to stderr. However, thanks to Rainer, we now have a debug.gnutls parameter in the rsyslog code: if you specify this global variable in the rsyslog.conf and assign it a value between 0-10, you’ll have gnutls debug output going to rsyslog’s debug log.
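
Applying the mailing list suggestion to the debug invocation from earlier looks something like this (a sketch; as noted, it didn’t add any detail in my case):

sudo service rsyslog stop
sudo env GNUTLS_DEBUG_LEVEL=9 /usr/sbin/rsyslogd [your options here] -dn >> ~/rsyslog-debug.log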

Strategy Two

Rsyslog, TCP, local queuing, TLS, RELP, SEC, syslog server on local network. Notification for inactivity of events could be performed by cron and SEC?
LogAnalyzer, also created by Rainer Gerhards (the rsyslog author), could cover the viewing side, but it’s more work to set up than an on-line service you don’t have to host yourself. In saying that, you would have greater control and security, which for me is the big win here.
Normalisation also looks like an area where Rainer has his finger in the pie.

In theory, adding RELP to TCP with local queues is a step up in terms of reliability. Others have said the reliability of TCP over TLS with local queues is excellent anyway; I’ve yet to confirm that excellence. At the time of writing this post, I’m seriously considering moving toward RELP to help solve my reliability issues.
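
On the sending side, the rsyslog change involved looks something like this (a sketch only; the target host and port are placeholders, and the rsyslog-relp package provides the module):

# Load the RELP output module and forward everything to the local syslog server.
module(load="omrelp")
*.* action(type="omrelp" target="<syslog server IP>" port="2514")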

Additional Resource

gentoo rsyslog wiki

Keeping Your Linux Server/s In Time With Your Router

March 28, 2015

Your NTP Server

With this set-up, we’ve got one-to-many Linux servers in a network that all want to be synced with the same up-stream Network Time Protocol (NTP) server/s that your router (or what ever server you choose to be your NTP authority) uses.

On your router, or whatever your NTP server host is, add the NTP server pools. Now, how you do this really depends on what you’re using for your NTP server, so I’ll leave this part out of scope. There are many NTP pools you can choose from. Pick one, or a collection, that’s as close to your NTP server as possible.

If your NTP daemon is running on your router, you’ll need to decide and select which router interfaces you want the NTP daemon supplying time to. You almost certainly won’t want it on the WAN interface (unless you’re a pool member) if you have one on your router.

Make sure you restart your NTP daemon.

Your Client Machines

If you have ntpdate installed, /etc/default/ntpdate says to look at /etc/ntp.conf which doesn’t exist without ntp being installed. It looks like this:

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.

but you’ll see that it also has a default NTPSERVERS variable set, which is overridden if you add your time server to /etc/ntp.conf. If ntpdate is installed and you enter the following:

dpkg-query -W -f='${Status} ${Version}\n' ntpdate

You’ll get output like:

install ok installed 1:4.2.6.p5+dfsg-3ubuntu2

Otherwise install the ntp package, which provides ntpd and /etc/ntp.conf:

apt-get install ntp

The public NTP server/s could be added straight to the bottom of the /etc/ntp.conf file, but because we want to use our own NTP server, we instead add the IP address of the server that’s configured with our NTP pools to the bottom of the file:

server <IP address of your local NTP server here>

Now if your NTP daemon is running on your router, hopefully you have everything blocked on its interface/s by default and are using a white-list for egress filtering.

In which case you’ll need to add a firewall rule to each interface of the router that you want NTP served up on.

NTP talks over UDP and listens on port 123 by default.
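
If your NTP host happens to be a Linux box rather than a router configured through a web UI, the rule is conceptually something like this hypothetical iptables example (eth1 standing in for the LAN-facing interface):

# Allow NTP queries in on the LAN interface only.
iptables -A INPUT -i eth1 -p udp --dport 123 -j ACCEPT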

After any configuration changes to your ntpd make sure you restart it. On most routers this is done via the web UI.

On the client (Linux) machines:

sudo service ntp restart

Now issuing the date command on your Linux machine will provide the current time, yes with seconds.


The main two commands I use are:

sudo ntpq -c lpeer

Which should produce output like:

            remote                       refid         st t when poll reach delay offset jitter
*<server name>.<domain name> <upstream ntp ip address> 2  u  54   64   77   0.189 16.714 11.589

and the standard NTP query program (ntpq) on its own, followed by the as command at its prompt:

sudo ntpq
Which will drop you at ntpq’s prompt:

ntpq> as

Which should produce output like:

ind assid status  conf reach auth condition  last_event cnt
  1 15720  963a   yes   yes  none  sys.peer    sys_peer  3

Now, in the first output, the * in front of the remote means the server is getting its time successfully from the upstream NTP server/s, which needs to be the case in our scenario. Often you may also get a refid of .INIT., which is one of the “Kiss-o’-Death codes” and means “The association has not yet synchronized for the first time”. See the NTP parameters. I’ve found that sometimes you just need to be patient here.

In the second output, if you get a condition of reject, it’s usually because your local ntpd can’t reach the NTP server you set up. Check your firewall rules etc.

Now check all the times are in sync with the date command.

350Z Hi-fi Install

February 28, 2015

When we acquired the 2004 350Z, it had a low quality hi-fi installed in it. The FM radio broadcast band in Japan is 76-90 MHz, which misses a large portion of the New Zealand stations.

It’s been a few years since I performed a car hi-fi installation, so I leant on some friends, specifically Matthew Fung, who sold me the AVK6’s and the LRx 5.1MT.

Previous Hi-fi install

Now this install was in my VY Ute. This image shows the sub woofer enclosure in place under the deck right next to the fuel tank. The metal deck panel gets fixed over this and then the tray liner goes over top of that.

VY sub woofer enclosure in place


Unlike the 350Z install, this enclosure didn’t have to be pretty at all because it would never be seen.

VY sub woofer and enclosure


I learnt a lot from this install that I would take forward to future installs, namely the 350Z install.

350Z Hi-fi Install

Some of Matthew’s advice was to spend in the following fashion:

35% of your budget on speakers, 35% on amplifier, 20% head unit and 10% on sub woofer. This is good advice.

Required Components

Other than the 350Z

  • Head unit: Alpine CDE-148EBT From Quality Car Audio
  • DIN/DDIN kit: 99-7402 from Quality Car Audio
  • Power amplifier: Italian made Audison LRx 5.1MT from Matthew Fung
  • Front speakers: Audison AVK6 6.5’s from Matthew Fung
  • Rear speakers: Existing factory from Matthew Fung
  • Sub woofers: 2 x Alpine SWR-12D4 From Quality Car Audio. One of the reasons for two sub woofers was appearance.
  • Sub woofer box: Fabricated with some old 19mm MDF cover sheets that have been in my garage since I left the carpentry trade about 15 years ago.
  • Square drive (because they are far superior to posi) super screws
  • Sound deadening: Single layer of Soundstream Deathmat from HyperDrive. Once finished about 15kg will be applied to floors, both front and boot and front doors.
  • In-line fuse block: 80amp from JayCar
  • Battery terminals: from JayCar
  • Power cable: 8m of STINGER SPW14TR 4 AWG GAUGE from Ebay
  • Speaker cable: Stinger SHW512B 100 ft Roll of HPM 12 Gauge Matte Blue Car Speaker Wire run to all speakers from Ebay
  • RCA cables: 3 x Stinger SI8217 Audio RCA Interconnect Cable 2 Channels 8000 Series 17 ft from Ebay
  • Internet of course
  • Carbon fiber vinyl wrap
  • Front speaker mounts: Rubber drain pipe transitions
  • Approximately one week labour

Yes, I was tempted to skimp on the likes of the power cable, speaker wire and RCA cables, because the good ones are much dearer than the cheap ones. This is where you really get what you pay for. If you want to produce a great result, aim for the best you can get here. A few hundred dollars really makes a big difference. I’ve used cheaper parts here on previous installs and the overall difference is very noticeable. It also gives you room to upgrade components in the future without having to run all your wires again, which is where most of the install time gets sucked up.

Day One

Design and Construct Sub Woofer Enclosure

Generally speaking if you are running two of the exact same sub woofers in the same air space you can just combine the air spaces, but if you swap one of the sub woofers for one that behaves differently, then you are likely to run into issues where the sound waves interfere with each other. Also if you have a sub woofer enclosure that has an air space for each sub woofer you also get bracing for free from the divider. Bracing is all about making the enclosure more rigid so that the sound waves don’t cause the casing to vibrate/move with the waves more than it should. The more rigid you can make the enclosure the more you help the enclosure do what it’s designed to do… produce accurate low frequencies.

sub woofer enclosure

I chose to have one enclosure with two separate air spaces, thus providing a rigid casing. There are three distinct volume calculations here.

  1. One for the main air space
  2. One for the arch way (Looks like a ‘D’ rotated 90° counter clockwise) you can see cut in the rear of the enclosure
  3. The space behind the rear of the main enclosure. This had to be done to accommodate the cast aluminium frame of the SWR-12D4.

I also allowed for adjustment in air space with the two ends of the enclosure, as they could be moved in or out depending on whether I needed more or less air space. As you can see in the image above, the rear and the front of the enclosure have tapered cuts down toward the bottom of the enclosure. These tapered cuts were made once A) the box had been constructed, B) the ends had been adjusted and fixed in place and C) I was certain of the internal air space.

All joints are wood glued and sealed with an acrylic sealant. Solvent based sealants can play havoc with some of the sub woofer internals, thus causing premature failure. The screws are varying lengths depending on the angle of the join. All screw holes in the piece of MDF being retained were pre-drilled with a bit the diameter of the thread, so that the thread doesn’t grip onto the MDF being retained. The MDF that the screws actually bite into was then pre-drilled with a bit the diameter of the shank (which is a little smaller than the thread). If you don’t do this second part, the MDF will just split, which means very little grip and essentially a very weak joint.

All screws were countersunk with a… countersink bit. This is done so that when you cover the box in your chosen covering, it all appears flat. Two circular holes cut with jig saw. The holes that the speaker cable passes through are also sealed with acrylic sealant.

I also chose not to use terminals on the sub woofer enclosure to fix speaker wires to, but rather to run the speaker wires straight from the power amplifier to the sub woofer, thus removing one of the couplings. The sub woofer enclosure can still be disconnected at the power amplifier and at the speaker. If you’re planning on swapping sub woofers in and out frequently though, terminals on the enclosure may be the more convenient option.

As I’m a carpenter by trade (from aprx 1990 – 2000), this construction was trivial.

For those of you unfamiliar with the 350Z, this is what the target space looks like (plan view) with the dimensions:

350Z sub woofer space


You’ll also notice the cross section arrows. They should actually be facing the other direction; the A:A section is the below image in reverse. If I was to go full width with the enclosure, it wouldn’t be able to be installed, as the rear hatch opening is considerably narrower than the enclosure’s target location. There ends up being just over a 100mm gap on each side of the enclosure once installed. I just fill this space with plumbers foam pipe lagging, which actually looks fine. There is also a small gap in front of the enclosure (between enclosure and rear speakers) and behind it (between enclosure and strut brace). I’ve just pushed 12mm black PEF rod down the gap and it again looks fine. One of the rules in carpentry is: if you can’t hide a transition, then emphasise it. That’s what we’ve done with these gaps. The enclosure doesn’t need to be fixed in place in case of a head on collision; it can’t move forward, and while it can tilt forward a little, the roof is too low to allow it to move far, so it’s safe. Just make sure those huge sub woofers are well fixed, as they would be like cannon balls in an accident.

Volume Calculations

The gross internal target volume for the sub woofer was 0.85 ft³, with an acceptable range of 0.23 ft³ either way. I ended up being just below 0.85 ft³, so I was happy.

  1. For the main air space, that’s the largest one, I just worked out the area of the quadrilateral (bottom, front, top, rear), then multiplied by the depth. All in cm, then converted cm³ to ft³. As I’m lazy, the easiest way is to just throw your values into the calculator here or here. These dimensions are all internal and specified in mm unless stated otherwise.
  2. For the semi-circle (‘D’ rotated 90° counter clockwise) it’s just πr²/2 for the area, then multiplied by the depth of 1.9cm (a quick worked example with a made-up radius follows this list).
  3. The following was the calculation for the triangle behind the ‘D’. This space is also used to mount the power amplifier. The outer panel here carries across two of these internal spaces. Hope that makes sense? You’ll see an image below anyway of how the power amplifier is mounted on the rear of the enclosure. I also had the very rear panel of the enclosure over-hang the internal dimension by about 50mm. This was so that I could mount some LED strip lighting if I wanted to. The Audison LRx 5.1MT has its logo light up, which has for now satisfied my geek addiction for LED mood lighting.
    enclosure triangle volume calculation
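
As a worked example of the second calculation with a made-up radius: if the ‘D’ cut-out had a radius of 10cm, its area would be π × 10² / 2 ≈ 157 cm², and at 1.9cm deep that’s ≈ 298 cm³, or roughly 0.011 ft³ (1 ft³ ≈ 28,317 cm³), to be added to the main air space.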


SWR-12D4 Dimensions

SWR-12D4 dimensions

SWR-12D4 Design

SWR-12D4 design


Day Two

Just more of the same thing plus fitment and making small adjustments. I found that by draping a towel over the rear strut brace and just sliding the enclosure over it and tilting it forward it’d just slot into place nicely.

Day Three

Removal of Internals

Removed all the internal panels that I’d need to in order to run cables and apply sound deadening. I numbered each piece I pulled off in order, so that it was easier to work out the re-fit order when the time came. I also tape any plugs or screws etc. to their panel so they don’t get lost.

  • Remove center console
  • Remove panels to get to rear speakers and to run power cable and speaker wires. I also decided to make use of the empty space which is about 2 ft³ behind the drivers seat. For now this is still left open. I’m looking around wreckers for another cubby-hole like the one behind the passenger seat, as they are identical. This will allow me to claim back the space that we’ve lost due to the enclosure going into the limited boot space.
  • Removed door sill panels and kick panels. Both of which can be pulled off gently.
  • Remove seats. Also followed the sequence in this post to reset the windows going all the way up.

Once I was done with the full install and started the car, the Engine Management Light (EML) came on. This will continue to happen until you reset it. First thing to do though is find out what the fault is if there is one. Follow the sequence of events here. Diagnose the fault code based on the flashes detailed here and here. My code ended up being “P0000” (No Self Diagnostic Failure Indicated). That’s:

10 (0) Long Blinks
10 (0) Short Blinks
10 (0) Short Blinks
10 (0) Short Blinks

So I reset the ECU and I was done.

Apply Sound Deadening

The 350Zs have a lot of road noise by default. If you want to keep that out so it doesn’t interfere with your new sounds, and keep your sounds inside the car, it’s a good idea to apply this liberally. Plus it’s a good time to do this when you have most of your internal panels removed. I’ve found that it made quite a big difference. The sounds are barely audible outside the vehicle even when played at a modest level, and believe it or not, that includes the lower frequencies. When cranked, you can still feel the low frequencies outside more than you can hear them. I applied this a little higher than you can see in the below image. All the rattles you hear in a lot of vehicles with large sub woofer set-ups are gone. This means the low frequencies are not losing energy in the panels but are instead going into the cabin space, which is exactly where we want them.

Sound Deadening to floors

Day Four

Power from Battery to Power Amplifier

Run power cable from battery through firewall to boot of car where amplifier will sit. This link shows a left hand drive 350Z. For right hand drive Z’s, it’s pretty similar.

Run RCA Leads

from the front console to where the sub woofer enclosure will live. Be careful to keep speaker and RCA leads as far away from power cables as possible to reduce interference.

Here you can see the power cable on the left of the drive-shaft tunnel (top of image) and the three RCA leads on the right of the drive-shaft tunnel (bottom of image) zip-tied together and passing through the body just behind where the carpet finishes. You can also see the RCA lead ends looped around the gear stick in the image waiting to be plugged into the new head unit once installed.

Also notice the fold-out work light at the far left of the image. This is cordless and has magnets on the rear of both LED panels and the middle shaft. The LED panels can be pivoted around the shaft. This is one of the most handy work lights I’ve ever used. It also has a torch light in the middle shaft. It can be run with A) one panel running, B) two panels running, or C) just the torch light running. It’s rechargeable and seems to run for as long as I need it each day. I think it’s supposed to run for three hours with only one panel running, but it seemed to run for longer than that. $30 from Bunnings.

RCA leads and power cable

Run Speaker Wires

All except front left door and sub woofers.

For the front speakers, the hardest part of just about the entire install is getting the 12 gauge speaker wire through the plastic door harness without joiners. This post provided the detail I needed. Previous car hi-fi installs have been similar in that this task has been the most frustrating. I started with the right door (drivers door) which is harder than the passenger door for several reasons.

  • There is next to no visibility from underneath the steering wheel looking at where the speaker wire must pass through the plastic harness
  • The plastic harness has a greater population of used pins that you have to be very careful not to hit with your knife or drill. I used both tools.

The other thing to keep in mind is that the plastic harness that clicks into place on the car body must go in top first from the inside, not bottom first. This cost me a bit of time (I bent the metal retainer out of shape because I put bottom in first) and I ended up coming back to the driver side after I’d done the passenger side successfully and learnt a few more tricks.

For the front speakers, I ran the speaker wires down the door sills right next to the plastic clips that hold the sill covers on. Then through the holes at the back of the sill and through into the boot near where the power amplifier would be mounted.

I also ran 12 gauge wire for the rear speakers. There are plenty of holes to thread them through and keep them well concealed. Keeping in mind that they need to stay as far away from power cables as possible.


Factory Speaker Wire Colours

I took note of factory speaker wire colours in case I needed them later. I didn’t, but here they are anyway for anyone needing them:

Front Left

  • Red with silver loops
  • Blue with silver loops

Front Right: Didn’t capture these

Rear Left

  • Light green with silver loops
  • Light brown with dark brown stripes

Rear Right

  • Light orange with silver loops
  • Black and pink stripes with silver loops


Day Five

Fit Tweeters

See the image on Day Six for how the tweeter looks mounted. The small plastic triangle panel that the tweeter is fixed to, from memory, needs to be pulled in toward the car where it meets the window pane and then slid down. Mine had an existing round grill that looked like a tweeter was mounted underneath, but all it was was a grill. I drilled a hole just big enough to pass the tweeter’s existing wire through it. At this stage I’ve used double sided sponge tape. The tweeters are fairly heavy, so we’ll wait and see if the tape continues to hold them on; if it doesn’t I’ll have to resort to some glue. Currently it’s been about a week and they’re still staying put. I used some acetone to clean the surface of both the rear of the tweeter and the area that it was going to be fixed to. Just be careful with this though, as it will start to melt the plastic, so make sure you only use it where it’s not going to be seen.

As you can see in the image, no wires are visible and it’s a good position for the sound stage. I’ve got the crossovers set to the highest gain for the tweeters. I thought this may have been too much, but it seems perfect. The lower frequencies are surprisingly well handled by the Audison AVK6 6.5″ drivers.

Run Speaker Wire to Left Front Door

Similar to the right door.


Day Six

Apply Sound Deadening to Side Doors

A lot of road noise comes in through the doors on the 350Zs. Also, with your hi-fi set-up, you’d lose a lot of energy through your doors. Applying sound deadening liberally to the doors as well as the floors stops a lot of this. It’ll also kill any rattles you may have had otherwise.

Fit Front Speakers with Crossovers

There is a sunken space where I fixed my crossovers. I soldered and crimped terminals to the speaker wires that get screwed to the crossover. I soldered and applied heat-shrink sleeving to the tweeter wires / 12 gauge wire junction.

Audison AVK6 6.5


Below is a closer look at what the 6.5″ and tweeter mounts look like. I used a rubber drainage transition pipe for the 6.5″ driver. These are great for mitigating vibration and can be cut to exactly the right size.
Now, you have to get the off-set right here. The speaker from rear of rim to back is 70mm. I cut my rubber mounts to 40mm, which allowed 30mm to sit inside the door panel. The window pane, when it’s wound down, goes behind the speaker, so you don’t have a lot of room there. I think you’ve got about 35mm, so I had about 5mm clearance. If it’s not enough, the glass is going to hit the rear of your speaker, which could be disastrous. 40mm mounts were actually too wide; it was quite hard to fit the door panels on as they were touching the speaker rim. I’d suggest making your mounts anything over 35mm (don’t go less); the target should be about 36-37mm. So as you can see, this has got to be fairly accurate, else either your window will hit the rear of the speaker or your door panel will be up against the speaker rim.
There is an excellent step by step guide to most of this here.


Fit Rear Speakers

I was using the existing factory speakers as they don’t really matter that much. Why don’t they matter much? Most of your sound stage should be coming from the front of you.

In my case there were no signs of which speaker terminal was positive or negative. The best way to work this out is to use a 9v battery and touch wires from the battery’s + and – terminals to each of the speaker’s terminals. If the speaker cone pops out slightly when connected, it means you’ve connected the battery’s + to the speaker’s +. If the cone pops in slightly, you’ve connected the battery’s + to the speaker’s –. This way you can tell which terminal you should connect your designated positive speaker wire to. This method doesn’t harm your speaker as the voltage is too low.


Day Seven

Install Head Unit

Most of the steps you would need are here.

Fit DIN/DDIN kit 99-7402 to the dash assembly. As the CDE-148EBT is a single DIN unit, we get some space back in the form of another compartment below the head unit.

The Alpine CDE-148EBT comes with a harness. I only needed to solder and heat-shrink the following harness wires to the existing wires. The existing wires were all factory labelled with tags, which was obviously very helpful. I had already located a wiring guide for this operation that I would now not need; the strange thing was that the guide’s colours were incorrect for my Z.

  • Orange (head unit illumination)
  • Red (ignition)
  • Yellow (battery)
  • Black (ground) just replaced existing ground wire
  • Blue/white (remote turn on, supplies 12v to power amp to tell it to turn on when head unit is on) I missed this one and had to pull the console out again and connect it. I go through this in day eight.

Connect Aux in, USB extension lead, all six RCA’s.

There are a lot of wires behind the head unit and it can be a bit tricky getting everything back in. Be gentle and patient.

Re-fit Seats and Some Panels

Self explanatory.

Apply Sound Deadening

Around where sub woofer enclosure is going to live. I applied it to the sides also.


Fit 80 amp In-line Fuse

I fitted the fuse block about 150mm from the battery. The end of the in-line fuse block is just visible in the below image.

80 amp inline fuse

Install Sub Woofer Box


Fit Sub Woofers

If you plan on swapping your subs in and out a bit then I’d advise using the likes of these inserts to screw into. Screw these into your MDF then screw into them:


It was about here that I realised that I should have gone with the SWR-12D2 (the 2 ohm version) sub woofers. As I hadn’t actually used the sub woofers yet, I hoped to just exchange the 4 ohm subs for the 2 ohm subs, but Quality Car Audio refused to provide an exchange for the still new subs. Both the 4 ohm and 2 ohm models were exactly the same price.

Some reading on under-powering sub woofers:

Connected Power Cable to Amplifier

Ran Earth Cable to Bare Metal

Unscrewed a bracket that came through the left side of the car in the spare wheel well and fitted the terminal I had soldered and crimped onto my 4 gauge earth cable between the bracket and the body of the car after I sanded the paint off for good connection. Cranked that bolt up nice and tight.

Connected all Speaker Wires to Power Amplifier

Connected RCA Leads

Test Front and Rear Speakers

Re-install internal panels


Day Eight

Power Amplifier Not Auto Powering On

Take the power amp in to be tested, as it wouldn’t power on. The car power amplifiers I’d used in the past would turn on when they received signal from the pre-outs, that is, signal that’s pre-amplified. It turns out a plastic jumper carrying a single 12v remote turn-on wire from the head unit was needed on the bottom left pin of the power amplifier. So out with the head unit again to run the extra wire. That’s why you want to leave re-fitting of panels as late as possible.

Cleaned up

Charged Battery

Initial Tuning

Power Amplifier

Currently I have the Audison LRx 5.1MT gains set to (where 0 is 0 and 10 is max):

  • Fronts: 8
  • Rears: 3 (existing very crappy factory speakers)
  • mono Sub woofer channel: 8

Head unit

  • Turn off internal power amp as it’s not used and helps to reduce power supply interference/noise
  • The Alpine CDE-148EBT has a lot of options and features when it comes to tuning your sound. This will keep me busy for a long time I’d say.

Sub Woofer Configuration

  1. I first tried a 2 ohm set-up with one speaker as a benchmark, because I knew that it was ideal
  2. I then tried parallel for 1 ohm, which I don’t think provided enough energy to the sub woofers
  3. The third configuration I tried was in series for 4 ohm, which worked quite well (the arithmetic behind these impedances is below).
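
For anyone wondering where those figures come from: the SWR-12D4 is a dual 4 ohm voice coil driver, so wiring a single sub’s coils in parallel gives (4 × 4) / (4 + 4) = 2 ohm; running both subs wired that way in parallel on the mono channel halves it again to 1 ohm; wiring each sub’s coils in series gives 4 + 4 = 8 ohm, and the two 8 ohm subs in parallel come back to 4 ohm.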

I’ve also noticed that having the sub woofers pointing forward, so I can actually hear them, may not have been the best design decision, especially with two of them running in mono, as it affects the sound stage a bit. It probably would have produced a slightly better result if they had been pointing toward the rear of the car under the strut brace, so that the bass could be felt more than heard. In saying that, it’s still early days and I have a lot of tuning to do to get the levels just right. Also, as I didn’t want to remove the spare wheel, which sits under the boot base, in order to mount the power amplifier under there, one of the only places left was to mount it on the rear of the sub woofer enclosure. Although it is rather a good looking piece of hardware, I think 12″ sub woofers look better.


A bit more reading on parallel versus series wiring of your sub woofers.

Mounted Power Amplifier to Rear of Sub Woofer Enclosure

Audison LRx 5.1MT

Overall Sound

I’m kind of surprised that I’ve managed to achieve such a truthful reproduction of my recordings. I wasn’t sure this was possible in a car. Think of studio monitors and that’s what this system reminds me of. All of a sudden many recordings sound bad and the good recordings sound outstanding, producing the emotion that high quality music (recording, mix, mastering) is renowned for. For example, my personal recordings that I did a few years ago sound amazing, as I had help from one of New Zealand’s best sound engineers / producers (Ian McAllister).

I think it also helps that I didn’t take any short-cuts (that I’m aware of). It’s all the small details that add up, as well as using great components.


Still on the to-do list: hunting down boot carpet for the reduced space. The factory carpet in the Z is not actually carpet but just felt, so I’m going to get some real carpet with the ‘Z’ embroidered into it. There are a few places that actually supply this.


