Archive for the ‘Web’ Category

Captcha Considerations

December 31, 2015

Risks

Exploiting Captcha

A lack of captchas is a risk, but so are captchas themselves…

Let’s look at the problem here. What are we trying to stop with captchas?

Bots submitting. Whatever it is, whether:

  • Advertising
  • Creating an unfair advantage over real humans
  • Link creation in attempt to increase SEO
  • Malicious code insertion

You are more than likely not interested in accepting it.

What do we not want to block?

People submitting genuinely innocent input. If a person is prepared to fill out a form manually, even if it is spam, then a person can view the submission and very quickly delete the validated, filtered and possibly sanitised message.

Countermeasures

Prevention: VERY EASY

Types

Text Recognition

recaptcha uses this technique. See below for details.

Image Recognition

Uses images which users have to perform certain operations on, like dragging them to another image. For example: “Please drag all cat images to the cat mat.”, or “Please select all images of things that dogs eat.” sweetcaptcha is an example of this type of captcha. This type completely rules out visually impaired users.

Friend Recognition

Pioneered by… you guessed it. Facebook. This type of captcha focuses on human hackers, the idea being that they will not know who your friends are.

Instead of showing you a traditional captcha on Facebook, one of the ways we may help verify your identity is through social authentication. We will show you a few pictures of your friends and ask you to name the person in those photos. Hackers halfway across the world might know your password, but they don’t know who your friends are.

I disagree with that statement. A determined hacker will usually be able to find out who your friends are. There is another problem: do you know who all of your friends are? Every acquaintance? I am terrible with names and so are many people. This is supposed to be used to authenticate you, so you have to be able to answer the questions before you can log in.

Logic Questions

This is what textcaptcha uses. Simple logic questions designed for the intelligence of a seven year old child. These are more accessible than image and textual image recognition, but they can take longer than image recognition to answer, unless the user is visually impaired. The questions are usually also language specific, usually targeting English.

User Interaction

This is a little like image recognition. Users have to perform actions that virtual intelligence cannot work out… yet, like dragging a slider a certain number of notches.
If an offering gets popular, creating some code to perform the action may not be that hard and would definitely be worth the effort for bot creators.
This is obviously not going to work for the visually impaired or for people with impaired motor skills.

 

In NPM land, as usual there are many options to choose from. The following were the offerings I evaluated, none of which really felt like a good fit:

Offerings

  • total-captcha. Depends on node-canvas. Have to install cairo first, but why? No explanation. Very little of anything here. Move on. How does this work? Do not know. What type is it? Presume text recognition.
  • easy-captcha is a text recognition offering generating images
  • simple-captcha looks like another text recognition offering. I really do not want to be writing image files to my server.
  • node-captcha Depends on canvas. By the look of the package this is another text recognition in a generated image.
  • re-captcha was one of the first captcha offerings, created at Carnegie Mellon University by Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum, who invented the term captcha. Google acquired it in September 2009. recaptcha is a text recognition captcha that uses scanned text that optical character recognition (OCR) technology has failed to interpret, which has the added benefit of helping to digitise text for The New York Times and Google Books.
    recaptcha
  • sweetcaptcha uses the sweetcaptcha cloud service of which you must abide by their terms and conditions, requires another node package, and requires some integration work. sweetcaptcha is an image recognition type of captcha.
    sweetcaptcha
  • textcaptcha is a logic question captcha relying on an external service for the questions and the md5 hashes of the correct lower-cased answers. This looks pretty simple to set up (see the sketch below), but again expects your users to use their brain on things they should not have to.
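To make the logic-question flow concrete, here is a minimal sketch of how checking a textcaptcha answer might look, assuming the MD5 hashes of the acceptable answers were stashed in the user’s session when the question was fetched (the function and variable names are hypothetical):

var crypto = require('crypto');

// answerHashes: the MD5 hashes supplied by textcaptcha along with the question,
// stored in the user's session when the form was served.
function isAnswerCorrect(userAnswer, answerHashes) {
   var hash = crypto.createHash('md5')
      .update(userAnswer.trim().toLowerCase())
      .digest('hex');
   return answerHashes.indexOf(hash) !== -1;
}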

 

After some additional research I worked out why the above types and offerings didn’t feel like a good fit. It pretty much came down to user experience.

Why should genuine users/customers of your web application be disadvantaged by having to jump through hoops because you have decided you want to stop bots spamming you? Would it not make more sense to make life harder for the bots rather than for your genuine users?

I had some other considerations too: ideally I wanted a simple solution requiring few or ideally no external dependencies, no JavaScript, no reliance on the browser or anything else outside of my control, no images, and it definitely should not cost any money.

Alternative Approaches

  • Services like Disqus can be good for commenting. Obviously the comments are all stored somewhere in the cloud out of your control and this is an external dependency. For simple text input, this is probably not what you want. Similar services, such as all the social media authentication services, can take things a bit too far I think. They remove freedoms from your users. Why should your users be disadvantaged for leaving a comment or posting a message on your web application? Disqus tracks users’ activities from hosting website to website whether they have an account, are logged in or not. Any information it collects, such as IP address, web browser details, installed add-ons, referring pages and exit links, may be disclosed to any third party. When this data is aggregated it is useful for de-anonymising users. If users choose to block the Disqus script, the comments are not visible. Disqus has also published its registered users’ entire commenting histories, along with a list of connected blogs and services, on publicly viewable user profile pages. Disqus also engages in ad targeting and blackhat SEO techniques from the websites on which its script is installed.
  • Services like Akismet and Mollom take user input and analyse it for spam signatures. Mollom sometimes presents a captcha if it is unsure. These two services learn from their mistakes if they mark something as spam and you unmark it, but of course you are going to have to be watching for that. Matt Mullenweg created Akismet so that his mother could blog in safety. “His first attempt was a JavaScript plugin which modified the comment form and hid fields, but within hours of launching it, spammers downloaded it, figured out how it worked, and bypassed it. This is a common pitfall for anti-spam plugins: once they get traction”. My advice is not to use a common plugin, but to create something custom. I discuss this soon.

The above solutions are excellent targets for creating exploits that will have a large pay off due to the fact that so many websites are using them. There are exploits discovered for these services regularly.

Still not cutting it

Given the fact that many clients count on conversions to make money, not receiving 3.2% of those conversions could put a dent in sales. Personally, I would rather sort through a few SPAM conversions instead of losing out on possible income.

Casey Henry: Captchas’ Effect on Conversion Rates

Spam is not the user’s problem; it is the problem of the business that is providing the website. It is arrogant and lazy to try and push the problem onto a website’s visitors.

Tim Kadlec: Death to Captchas

User Time Expenditure

Another technique is to record how long it takes from form fetch to submit. For example, if the time span is under five seconds, the submitter is more than likely a bot, so handle the message accordingly.
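A minimal sketch of the idea, assuming an Express app with session middleware in place (the five second threshold and the names are only illustrative):

// On GET of the form, record when it was served.
function contactForm(req, res) {
   req.session.formFetchedAt = Date.now();
   res.render('contact', { title: 'Contact' });
}

// On POST, measure how long the submitter took.
function contact(req, res) {
   var fetchedAt = req.session.formFetchedAt;
   var elapsedSeconds = (Date.now() - fetchedAt) / 1000;
   if (!fetchedAt || elapsedSeconds < 5) {
      // More than likely a bot. Act as though all went well, but discard the message.
      return res.redirect('/');
   }
   // Handle the genuine submission...
}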

Bot Pot

Spamming bots operating on custom mechanisms will in most cases just try, then move on. If you decide to use one of the common offerings from above, exploits will be more common, depending on how widespread the offering is. This is one of the cases where going custom is a better option. Worst case, you get some spam and you modify your technique, but you get to keep things simple and tailored to your web application and your users’ needs, with no external dependencies and no monthly fees. This is also the simplest technique and requires very little work to implement.

Spam bots:

  • Love to populate form fields
  • Usually ignore CSS. For example, if you have some CSS that hides a form field and especially if the CSS is not inline on the same page, they will usually fail at realising that the field is not supposed to be visible.

So what we do is create a field that is not visible to humans and is supposed to be kept empty. On the server once the form is submitted, we check that it is still empty. If it is not, then we assume a bot has been at it.

This is so simple, does not get in the way of your users, yet very effective at filtering bot spam.

Client side:

form .bot-pot {
   display: none;
}
<form>
   <!--...-->
   <div>
      <input type="text" name="bot-pot" class="bot-pot">
   </div>
   <!--...-->
</form>

Server side:

The validation middleware for the contact route is shown below, applied in the app.post('/contact', validate(), contact) line. The validation itself is the maxLength(0) check on the bot-pot field.

var form = require('express-form');
var fieldToValidate = form.field;
//...

function home(req, res) {
   res.redirect('/');
}

function index(req, res) {
   res.render('home', { title: 'Home', id: 'home', brand: 'your brand' });
}

function validate() {
   return form(
      // Bots love to populate everything.
      fieldToValidate('bot-pot').maxLength(0)
   );
}

function contact(req, res) {
   if (req.form.isValid) {
      // We know the bot-pot is of zero length. So no bots.
      // Handle the genuine submission here...
   }
   //...
}

module.exports = function (app) {
   app.get('/', index);
   app.get('/home', home);
   app.post('/contact', validate(), contact);
};

So as you can see, a very simple solution. You could even consider combining the above two techniques.

Lack of Visibility in Web Applications

November 26, 2015

Risks

I see this as an indirect risk to the asset of web application ownership (That’s the assumption that you will always own your web application).

Not being able to introspect your application at any given time, or not knowing what its health status is, is not a comfortable place to be, and there is no reason you should be there.

Insufficient Logging and Monitoring

Average | Widespread | Very Easy | Moderate

Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy. For example circumventing client side input sanitisation.

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?

Countermeasures

As Bruce Schneier said: “Detection works where prevention fails and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging

Prevention: AVERAGE

When it comes to logging in NodeJS, you can’t really go past winston. It has a lot of functionality and what it does not have is either provided by extensions, or you can create your own. It is fully featured, reliable and easy to configure like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

{
   ...
   "dependencies": {
      ...,
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",
      ...
   }
}

winston-email also depends on nodemailer.

Opening UDP port

with winston-syslog seems to be what a lot of people are doing. I think that may be because winston-syslog was the first package that worked well with winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

$ModLoad imudp
# Listen on all network addresses. This is the default.
$UDPServerAddress 0.0.0.0
# Listen on localhost.
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Or the new style configuration.
Address <IP>
Port <port>
# Logging for your app.
local0.* /var/log/yourapp.log
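On the application side, wiring up winston-syslog over UDP might look something like the following. The option names are from my reading of the winston-syslog README at the time, so treat them as assumptions and verify against the package documentation:

var winston = require('winston');
// Requiring the package registers winston.transports.Syslog.
require('winston-syslog').Syslog;

winston.add(winston.transports.Syslog, {
   protocol: 'udp4',   // Send over UDP.
   host: 'localhost',  // The rsyslog listener configured above.
   port: 514,
   facility: 'local0'  // Matches the local0.* rule in rsyslog.conf.
});

winston.info('yourapp is up');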

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use posix syscalls, which are faster and safer. The nmap output below shows the open port.

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
PORT STATE SERVICE REASON VERSION
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the nmap output below that the syslog port is no longer open:

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
PORT STATE SERVICE REASON VERSION
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslogPosixTransportOptions transport in the default.js configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog compatible protocol severities.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, //5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress 127.0.0.1 // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
}

In development I have chosen not to use syslog; you can see this in the devbox1-development.js override below, where syslogPosixTransportOptions is set to null. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file or modify it to be similar to the above, then add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comment within syslogPosixTransportOptions in the default.js config file above.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
}

In production we log to syslog, and because of that we do not need the file transport configured in fileTransportOptions in the default.js configuration file above, so we set it to null in the prodbox-production.js file below.

I have gone into more depth about how we handle syslogs here, where all of our logs including these ones get streamed to an off-site syslog server, thus providing easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real time, and the ability to easily go back in time to visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.

module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
}
// Build creates this file.
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
}

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;
require('winston-syslog-posix').SyslogPosix;
require('winston-email').Email;

winston.emitErrs = true;

var logger = new winston.Logger({
   // Alternatively: set to winston.config.syslog.levels
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); There are many ways
   // to do the same thing with winston
   colors: loggerConfig.colours,
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function(config) {
   logger.add (winston.transports.Console, config);
   return this;
};

logger.addFile = function(config) {
   logger.add (winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function(config) {
   logger.add (winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function(config) {
   logger.add (winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      JSON.stringify(
         'error-code:' + err.code + '. '
         + 'error-message:' + err.message + '. '
         + 'error-response:' + err.response + '. logger-level:'
         + err.transport.level + '. transport:' + err.transport.name
      )
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile( loggerConfig.fileTransportOptions );
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole( loggerConfig.consoleTransportOptions );
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog( loggerConfig.syslogPosixTransportOptions );
   if (loggerConfig.emailTransportOptions)
      logger.addEmail( loggerConfig.emailTransportOptions );
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger with the logger.init() call shown below.

//...
var express = require('express');
var morganLogger = require('morgan');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var app = express();
//...
logger.init();
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
//...
// In order to utilise connect/express logger module in our third party logger,
// Pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
//...
app.use(express.static(path.join(__dirname, 'public')));
//...
require('./routes')(app);

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
   //...
}
if ('production' == app.get('env')) {
   app.use(errorHandler());
   //...
}

http.createServer(app).listen(app.get('port'), function(){
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

  • You can also optionally log JSON metadata
  • You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message.

Here are some examples of how you can use the logger. The logger.log('<level>', ...) calls can be replaced with logger.<level>(...), where level is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross-cutting concerns like logging using Aspect Oriented Programming (AOP).
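This is not full-blown AOP, but a minimal sketch of the idea in plain JavaScript: wrap existing functions so the logging concern stays out of the business logic (the names are illustrative):

var logger = require('./util/logger');

// Returns a version of fn that logs entry, exit and failures,
// keeping the logging concern out of fn itself.
function withLogging(name, fn) {
   return function () {
      logger.debug('entering ' + name);
      try {
         var result = fn.apply(this, arguments);
         logger.debug('leaving ' + name);
         return result;
      } catch (err) {
         logger.error(name + ' threw: ' + err.message);
         throw err;
      }
   };
}

// Usage: app.post('/contact', withLogging('contact', contact));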

Insufficient Monitoring

Prevention: EASY

There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It is easy to configure, has excellent short documentation that is easy to understand, and its configuration file has lots of examples commented out, ready for you to take as is and modify to suit your environment. I’ve personally had excellent success with Monit.
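As a taste of the dark cockpit approach, a Monit check for a NodeJS app might look something like the following. The paths, port and service name are assumptions for illustration; adapt them to your environment and check the Monit documentation for the exact syntax of your version:

# /etc/monit/conf.d/yourapp
check process yourapp matching "node .*app.js"
   start program = "/bin/systemctl start yourapp"
   stop program = "/bin/systemctl stop yourapp"
   # Treat the app as unhealthy if it stops answering HTTP.
   if failed host 127.0.0.1 port 3000 protocol http
      request "/" with timeout 10 seconds for 3 cycles
   then restart
   if 5 restarts within 10 cycles then alert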

 

Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the newfound information you now have. There will be no more blissful ignorance, if there was before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging as in wrapping libraries, initialising and adding logging statements or hiding them using AOP.

 

Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days’ work in order to have a really good logging system throughout your code base, one that shows you errors fast in development and then shows the errors your DevOps people need to see in the places they need to see them in production.

The same goes for monitoring. Find a tool that you find a pleasure to work with. There are almost always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, the free and open source tools can be excellent for keeping ongoing costs down, especially mature, well maintained tools like Monit.

Additional Resources

Risks and Countermeasures to the Management of Application Secrets

September 17, 2015

Risks

  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise

Exploitability

The reason I’ve tagged this as moderate is that if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison’s entire business relied on its commitment to keep its clients’ data (37 million of them) secret and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security but, ironically, still proudly displays a graphic with the phrase “trusted security award” on its homepage.

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users’ passwords cracked in days (why so fast? because they used simple hashing, specifically SHA-1), and eBay, with 145 million active buyers affected. Many others come to light regularly.

Are you using well salted and quality strong key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords collected.
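A minimal sketch of that last suggestion, assuming you keep a file of the most commonly used passwords, one per line (the path is hypothetical):

var fs = require('fs');

// One password per line, for example one of the freely available
// "most common passwords" lists.
var commonPasswords = fs.readFileSync('./config/common-passwords.txt', 'utf8')
   .trim()
   .split('\n');

function isCommonPassword(candidate) {
   return commonPasswords.indexOf(candidate) !== -1;
}

// At registration: if (isCommonPassword(newUser.password)) reject it with a
// helpful message about choosing a stronger password.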

Countermeasures

Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make it harder for compromise. Like many of the layers already discussed in my book.

Find out how secret the data being sent over the network (the data that is supposed to be secret) actually is, and consider your internal network to be just as malicious as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.

defence in depth

You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. When it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it, essentially just making discovery a little more inconvenient, like SSHing on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files

Prevention

Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files and use different configuration files for different deployments and consider keeping them out of source control.

Here are some examples using the node-config module.

node-config

is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install: From the command line within the root directory of your NodeJS application, run:

npm install config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

// Due to bug in node-config the if statement is required before config is required
// https://github.com/lorenwest/node-config/issues/202
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Where ever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});
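The transporter can then be used in the route handler that receives the enquiry. A minimal sketch follows; the form field name and redirect paths are assumptions:

function contact(req, res) {
   transporter.sendMail({
      from: enquiriesEmail,
      to: enquiriesEmail,
      subject: 'Website enquiry',
      text: req.body.message // Assumes body parsing middleware and a "message" field.
   }, function (err, info) {
      if (err) {
         // Log the failure and let the user know something went wrong.
         return res.redirect('/contact?sent=false');
      }
      res.redirect('/contact?sent=true');
   });
}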

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order which you specify by file naming convention, which provides a lot of flexibility and which caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. These files are to be managed by external configuration management tools, build scripts, etc. Thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

OurApp/
|
+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
|
+-- routes
| |
| +-- home.js
| |
| +-- ...
|
+-- app.js (entry point)
|
+-- ...
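To illustrate the loading order with that structure: as I understand the node-config conventions (verify against its file load order documentation), starting the app on the devbox1 host in development would merge the following files, with later files overriding earlier ones:

# Host: devbox1, deployment: development
# default.js -> devbox1-development.js -> local.js
NODE_ENV=development node app.js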

The contents of the above example configuration files may look like the following:

module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // supported-services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
}
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
}
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
}
{
}
{
}
// Build creates this file.
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
}

node-config also:

  • Provides command line overrides, thus allowing you to override configuration values at application start from the command line
  • Allows configuration values to be overridden by environment variables, mapped in a custom-environment-variables.json file (see the sketch below)
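For the second point, a custom-environment-variables.json that maps environment variables onto configuration properties might look like this (the variable name is hypothetical):

{
   "enquiries": {
      "pass": "ENQUIRIES_PASS"
   }
}

With that in place, setting ENQUIRIES_PASS in the environment before starting the app overrides config.enquiries.pass.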

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that.
There are different answers for different platforms, none of which provide complete security (if there is such a thing); instead they focus on different levels of obscurity.

Windows

Store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential. Use a SqlServer connection string with Trusted_Connection=yes

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut-down. The hashes can be sniffed from the wire in transit. The hashes can be pulled from the running machines memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash. Trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

Encrypt sections of web, executable, machine-level, or application-level configuration files with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (uses DPAPI) or RSAProtectedConfigurationProvider (uses RSA). The -pd switch is used to decrypt. Or programmatically:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ToString();

Of course there is a problem with this also. DPAPI uses LSASS, from which, again, an attacker can extract the hash in memory. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways using PSCredential and SecureString. They all seem to rely on DPAPI which as mentioned uses LSASS which is open for exploitation.

Credential Guard and Device Guard leverage virtualisation-based security, and by the look of it they still use LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better: tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run directly on physical hardware; it cannot run on virtual machines. This alone rules out many deployments.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.

Bromium

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

These seem like bold claims.

Also worth considering is that Microsoft’s new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.

Linux

Containers also help to provide some form of isolation, allowing you to have only the user accounts necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web applications files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run and a very limited set of users have any sort of access to the machine. If you are using Linux Containers then you can reduce this even more if it is not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like those from the closed source OS vendors trying to pimp their latest money-spinning product. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time if it hasn’t happened already.

Least Privilege

Prevention

An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data store, create that connection with a user’s credentials that are only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions to the application to run stored queries. This way, if/when an attacker is able to compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot anyway.

Location

Prevention

Put your services like data-stores on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible and with the least privileges as possible.

Data-store Compromise

Prevention

As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • All following logins should be instructed to change passwords

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the following key derivation functions (KDFs) for all sensitive data. Not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths. Encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow, used in a process commonly known as key stretching. How long the key stretching takes can be tuned by increasing or decreasing the number of cycles used, often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users’ credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats’ hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received along with a unique salt. The salt should be stored with the hashed data.
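As a concrete NodeJS illustration, here is a minimal sketch using the built-in crypto module’s PBKDF2, assuming a Node version whose crypto.pbkdf2 accepts the digest argument. The iteration count and key length are placeholders to be tuned for your hardware, as per the OWASP guidance quoted above:

var crypto = require('crypto');

// Placeholders: tune the iteration count so that verification takes an
// acceptable but non-trivial amount of time on your hardware.
var iterations = 100000;
var keyLength = 64;

function protectPassword(password, callback) {
   // A unique salt per credential, stored alongside the derived key.
   var salt = crypto.randomBytes(32).toString('hex');
   crypto.pbkdf2(password, salt, iterations, keyLength, 'sha256',
      function (err, derivedKey) {
         if (err) return callback(err);
         callback(null, { salt: salt, hash: derivedKey.toString('hex') });
      }
   );
}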

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged within one of the mentioned KDFs. Why? Because hashing speed cannot be slowed as hardware continues to get faster. Many organisations have had their data-stores stolen, and they continue to be stolen on a weekly basis; these organisations could avoid their secrets being compromised simply by using a decent KDF with salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 and the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user replaceable parts like PBKDF2. The PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain understanding into at least all three KDFs. PBKDF2 is the oldest, so it is the most battle tested, but lessons learnt from it have also been taken into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt from the Blowfish cipher to be very slow to initialise, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts or on the GPUs of the day (1999).
The hashing functions that PBKDF2 uses were a lot easier to extract speed increases from, due to ease of parallelisation, as opposed to the Eksblowfish cipher’s attributes: far greater memory requirements for each hash and small, frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware such as:

The sensitive data stored within a data-store should be the output of one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs.

bcrypt brute-forcing

With well ordered rainbow tables and hardware with high FPGA counts, brute-forcing bcrypt is now feasible:

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

With moving any secrets from source code to configuration files, there is a possibility that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the secrets are still in source control.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, etc. There is a risk that you do not use this potential power and flexibility to your best advantage. Learn the ins and outs of whatever system it is you are using and leverage its features to do the best job of obscuring your secrets and, if possible, securing them.

node-config

Is an excellent configuration package with lots of great features. There is no security provided with node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure surrounding layers have beefed up security.

Windows

As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around Microsoft’s so-called security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

Bromium

Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

There is a risk that people will believe this.

Linux

As with Microsoft’s “virtualisation-based security”, Linux containers may slow system compromise down, but a determined attacker will find other ways to get around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker moving forward.
Even when technical security is very good, an experienced attacker will use other mediums to gain what they want, like social engineering, physical compromise, both, or some other attack vectors. Defence in depth is crucial in achieving good security. Concentrating on the lowest hanging fruit first and working your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. Yes, it is probably not that hard to do, but it does require a breadth of thought and time. Some of the areas discussed could be missed. Having more than one person working on the task is often effective, as each person can bounce ideas off the other, and the other person is likely to notice areas that you may have missed, and vice versa.

Location

Segmentation is useful, and a common technique for helping to build resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but if implemented could increase complacency/over confidence. Always be on your guard. Always expect that although you have done a lot to increase your security stance, a determined and experienced attacker is going to push buttons you may have never realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post compromise. Create process (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways around obtaining the missing pieces of information they need. Think about the paper shredders and the associated competitions. With patience, most puzzles can be cracked. If the compromise is an opportunistic type of attack, they will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs here that force us to focus on the adjacent layers. This is never a bad thing though. It helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.

Windows

You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developers job more cumbersome. Some of the options provided may be worthwhile doing.

Linux

Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Is something you should be at least considering and probably doing in every case. It is one of those considerations that is worth while applying to most layers.

Location

Segmenting of resources is a common and effective measure to take for at least slowing down attacks and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here go without saying, although many organisations do not do them well if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any data that is sensitive in your data-store(s).

Holistic Info-Sec for Web Developers

July 24, 2015

Quick update: Fascicle 0 is now considered Done. Available as an ebook on LeanPub and hard copy on Amazon.

Holistic InfoSec for Web Developers

 

Fascicle 1 is now content complete.

Most of my spare energy is going to be going into my new book for a while. I’m going to be tweeting as I write it, so please follow @binarymist. You can also keep up with my release notes at github. You can also discuss progress or even what you would find helpful as a web developer with a focus on information security, where it’s all happening.

HolisticInfoSecForWebDevelopers

I’ve split the book up into three fascicles to allow the content to be released sooner.

 

Keeping Your NodeJS Web App Running on Production Linux

June 27, 2015

All the following offerings that I’ve evaluated target different scenarios. I’ve listed the pros and cons for each of them and where I think they fit into a potential solution to monitor your web applications (I’m leaning toward NodeJS) and make sure they keep running. I’ve listed the goals I was looking to satisfy.

For me, I have to have a good knowledge of the landscape before I commit to a decision and stand behind it. I like to know I’ve made the best decision based on all the facts that are publicly available. Therefore, as always, it’s my responsibility to make sure I’ve done my research in order to make an informed and ideally… best decision possible. I’m pretty sure my evaluation was unbiased, as I hadn’t used any of the offerings other than forever before.

I looked at quite a few more than what I’ve detailed below, but the following candidates I felt were worth spending some time on.

Keep in mind, that everyone’s requirements will be different, so rather than tell you which to use because I don’t know your situation, I’ve listed the attributes (positive, negative and neutral) that I think are worth considering when making this choice. After my evaluation I make some decisions and start the configuration.

Evaluation criterion

  1. Who is the creator? I favour teams rather than individuals, as individuals move on, and then where does that leave the product?
  2. Does it do what we need it to do? Goals address this.
  3. Do I foresee any integration problems with other required components?
  4. Cost in money. Is it free? I usually gravitate toward free software. It’s usually an easier sell to clients and management. Are there catches once you get further down the road? Usually open source projects are marketed as is.
  5. Cost in time. Is the set-up painful?
  6. How well does it appear to be supported? What do the users say?
  7. Documentation. Is there any / much? What is its quality?
  8. Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have a valid reason)?
  9. Release schedule. How often are releases being made? When was the last release?
  10. Gut feeling / intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criterion.

Goals

  1. Application should start automatically on system boot
  2. Application should be re-started if it dies or becomes un-responsive
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy (Nginx, node-http-proxy, Tinyproxy, Squid, Varnish, etc)
    2. Clustering and providing load balancing for your single threaded application
    3. Visibility of application statistics.
  4. Enough documentation to feel comfortable consuming the offering
  5. The offering should be production ready. This means: mature with a security conscious architecture.

Sysvinit, Upstart, systemd & Runit

You’ll have one of these running on your Linux box.

These are system and service managers for Linux. Upstart and the later systemd were developed as replacements for the traditional init daemon (Sysvinit), which all depend on init. Init is an essential package that pulls in the default init system. In Debian, starting with Jessie, systemd is your default system and service manager.

There’s some quite helpful info on the differences between Sysvinit and systemd here.

systemd

As I have systemd installed out of the box on my test machine (Debian Jessie), I’ll be using this for my set-up.

Documentation

  1. Well written comparison with Upstart, systemd, Runit and even Supervisor.

Running the likes of the below commands will provide some good details on how these packages interact with each other:

aptitude show sysvinit
aptitude show systemd
# and any others you think of

These system and service managers all run as PID 1 and start the rest of the system. Your Linux system will more than likely be using one of these to start tasks and services during boot, stop them during shutdown and supervise them while the system is running. Ideally you’re going to want to use something higher level to look after your NodeJS app. See the following candidates…

forever

and its web UI. forever can run any kind of script continuously (whether it is written in node.js or not). This wasn’t always the case though; it was originally targeted toward keeping NodeJS applications running.
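For reference, day to day forever usage looks something like this (flags are from my reading of the forever help output, so double check against forever --help):

# Installed globally via NPM (see the concerns below).
npm install -g forever

# Start, inspect and stop your app.
forever start -l forever.log -o out.log -e err.log app.js
forever list
forever stop app.js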

Requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface area. Unless it’s essential, I’d rather do without NPM on a production server where we’re actively working to reduce the installed package count and disable everything else we can. I could install forever on a development box and then copy to the production server, but it starts to turn the simplicity of a node module into something not as simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

NPM Details

Does it Meet our Goals?

  1. Not without an extra script. Crontab or similar
  2. Application will be re-started if it dies, but if its response times go up, there’s not much forever is going to do about it. It has no way of knowing.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing
    3. Visibility of application statistics could be added later with the likes of Monit or something else, but if you used Monit, then there wouldn’t really be a need for forever as Monit does the little that forever does and is capable of so much more, but is not pushy on what to do and how to do it. All the behaviour is defined with quite a nice syntax in a config file or as many as you like.
  4. I think there is enough documentation to feel comfortable consuming it, as forever doesn’t do a lot, which doesn’t have to be a bad thing.
  5. The code itself is probably production ready, but I’ve heard quite a bit about stability issues. You’re also expected to have NPM installed (more attack surface) when we already have native package managers on the server(s).
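If you did settle on forever, goal one could be covered with a cron @reboot entry along these lines. This is just a minimal sketch: the install path and app location are hypothetical, and you’d want to add forever’s options for log files etc:

# crontab -e (as the user that should own the app process)
# Start the app under forever once, at boot. Paths are examples only.
@reboot /usr/local/bin/forever start /srv/my-nodejs-app/app.js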

Overall Thoughts

For me, I’m looking for a tool set that does a bit more. Forever doesn’t satisfy my requirements. There’s often a balancing act between not doing enough and doing too much.

PM2

PM2

Younger than forever, but it seems to have quite a few more features and does actually look quite good. I’m not sure about production readiness though.

As mentioned on the github page: “PM2 is a production process manager for Node.js applications with a built-in load balancer“. This sounds, and at first glance looks, shiny. Very quickly though, you should realise there are a few security issues you need to be aware of.

The word “production” is used, but it requires NPM to install globally. We already have a package manager on Debian and all other mainstream Linux distros, and installing NPM just adds more attack surface. Unless it’s essential, and it shouldn’t be, I’d rather do without it on a production system. I could install PM2 on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not so simple, which makes offerings like Supervisor, Monit and Passenger look even more attractive.

At the time of writing this, it’s less than a year old and in nodejs land, that means it’s very much in the immature realm. Do you really want to use something that young on a production server? I’d personally advise against it.

Yes, it’s very popular currently. That doesn’t tell me it’s ready for production though. It tells me the marketing is working.

“Is your production server ready for PM2?” That phrase alone tells me the mind-set behind the project. I’d much sooner see it the other way around: is PM2 ready for my production server? You’re going to need a staging server for this, unless you honestly want development tools installed on your production server (git, build-essential, NVM and an unstable version of node, 0.11.14 at the time of writing) and to run test scripts on your production server. Not for me or my clients, thanks.

If you’ve considered the above concerns and can justify adding the additional attack surface area, check out the features if you haven’t already.

Features that Stood Out

They’re also listed on the github repository. Just beware of some of the caveats. For the load balancing: “we recommend the use of node#0.11.15+ or io.js#1.0.2+. We do not support node#0.10.* cluster module anymore!” 0.11.15 is unstable, but hang on, I thought PM2 was a “production” process manager? OK, so we’re happy to mix unstable in with something we label as production?

On top of NodeJS, PM2 will run the following scripts: bash, python, ruby, coffee, php, perl.

Start-up Script Generation

I’ve heard a few stories that this is fairly unreliable at the time of writing, which doesn’t surprise me, as the project is very young.
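For reference, this is roughly how PM2’s start-up script generation is driven. A sketch only: app.js is a placeholder, and the platform argument (and whether sudo is needed) depends on your distro, so check the PM2 docs for your set-up:

pm2 start app.js     # run your application under PM2
pm2 startup systemd  # generate and install a start-up script for your init system
pm2 save             # save the current process list so it is resurrected on boot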

Documentation

  1. Advanced Readme

Does it Meet our Goals?

  1. The feature exists; I’m unsure how reliable it currently is though.
  2. Restarting the application if it dies shouldn’t be a problem. PM2 can also restart your application if it reaches a certain memory threshold. I haven’t seen anything around restarting based on response times or other application health issues.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Clustering and load-balancing is integrated but immature.
    3. PM2 provides a small collection of viewable statistics. Personally I’d want more, but I don’t see any reason why you’d have to swap PM2 because of this.
  4. There is reasonable official documentation for the age of the project. The community supplied documentation will need to catch up a bit, although there is a bit of that too. After working through all of the offerings and edge-cases, I feel as I usually do with NodeJS projects. The documentation doesn’t cover all the edge-cases and the development itself misses edge cases. Hopefully with time it’ll get better though as the project does look promising.
  5. I haven’t seen much that would make me think PM2 is production ready. It’s not yet mature, and I don’t agree with its architecture.

Overall Thoughts

For me, the architecture doesn’t seem to be heading in the right direction to be used on a production web server where less is better. I’d like to see this change. If it did, I think it could be a serious contender for this space.

 


The following are better suited to monitoring and managing your applications. Other than Passenger, they should all be in your repository, which means trivial installs and configurations.

Supervisor

Supervisord

Supervisor is a process manager with a lot of features and a higher level of abstraction than the likes of Sysvinit, Upstart, systemd, Runit, etc above, so it still needs to be run by an init daemon itself.

From the docs: “It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.” Supervisor monitors the state of processes, whereas a tool like Monit can perform many more types of tests and take whatever actions you define.

It’s in the Debian repositories  (trivial install on Debian and derivatives).

Documentation

  1. Main web site
  2. There’s a good short comparison here.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot: Yip. That’s what Supervisor does well.
  2. Application will be re-started if it dies or becomes un-responsive. It’s often difficult to get accurate up/down status on processes on UNIX; pidfiles often lie. Supervisord starts processes as subprocesses, so it always knows the true up/down status of its children. Your application may also become unresponsive, or lose its connection to its database or any other service/resource it needs to work as expected. To be able to monitor these events and respond accordingly, your application can expose a health-check interface, like GET /healthcheck. If everything goes well it should return HTTP 200, if not then HTTP 5**. In some cases restarting the process will solve the issue. httpok is a Supervisor event listener which makes GET requests to the configured URL; if the check fails or times out, httpok will restart the process. To enable httpok, a few lines have to be placed in supervisord.conf (see the sketch after this list).
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. This would be completely separate to supervisor.
    3. Visibility of application statistics could be added later with the likes of Monit or something else. For me, Supervisor doesn’t do enough here; Monit does. Plus, if you need what Monit offers, you end up with three packages to think about, because something like Supervisor is not an init system, so it kind of sits in the middle of the stack. My way of thinking is: use the init system you already have to do the low level lifting, then something small to take care of everything else on your server that the init system is not really designed for. Monit has done this job really well. Keep in mind this is not based on any bias; I hadn’t used Monit before this exercise.
  4. Supervisor is a mature product. It’s been around since 2004 and is still actively developed. The official and community provided docs are good.
  5. Yes it’s production ready. It’s proven itself.
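As mentioned in point 2, httpok comes from the superlance package. A minimal sketch of what its event listener configuration might look like in supervisord.conf; the program name, port and health-check path are placeholders:

[eventlistener:httpok]
; On every TICK_60 event (once a minute), GET the health-check URL and
; restart the my-nodejs-app program if the request fails or times out.
command=httpok -p my-nodejs-app http://localhost:3000/healthcheck
events=TICK_60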

 

Overall Thoughts

The documentation is quite good, easy to read and understand, and I felt the config was quite intuitive too. I already had systemd installed out of the box though, and didn’t see much point in installing Supervisor, as systemd appeared to do everything Supervisor could, plus systemd is an init system (it sits at the bottom of the stack). In most scenarios you’re going to have Sysvinit or a replacement of it (running with a PID of 1), so in many cases Supervisor, although quite nice, is kind of redundant; and of course Ubuntu has Upstart.

Supervisor is better suited to running multiple scripts with the same runtime, for example a bunch of different client applications running on Node. This can be done with systemd and the others, but Supervisor is a better fit for this sort of thing.

 

Monit

monit

Is a utility for monitoring and managing daemons or similar programs. It’s mature, actively maintained, free, open source and licensed with GNU AGPL.

It’s in the debian repositories (trivial install on Debian and derivatives). The home page told me the binary was just under 500kB. The install however produced a different number:

After this operation, 765 kB of additional disk space will be used.

Monit provides an impressive feature set for such a small package.

Monit provides far more visibility into the state of your application, and more control, than any of the offerings mentioned above. It’s also generic: it’ll manage and/or monitor anything you throw at it, and it has the right level of abstraction. Often when you start working with a product you find its limitations; they stop you moving forward and you end up settling for imperfection, or you swap the offering for something else, providing you haven’t already invested too much effort in it. For me Monit hit the sweet spot and never seems to stop you in your tracks. There always seems to be an easy, or relatively easy, way to get any monitor-then-take-action sort of task done. What I also really like is that moving away from Monit should be relatively painless; the time investment is small and much of it is transferable. It’s just config in the control file.

Features that Stood Out

  • Ability to monitor files, directories, disks, processes, the system and other hosts.
  • Can perform emergency logrotates if a log file suddenly grows too large too fast
  • File checksum testing (a minimal sketch follows this list). This is good so long as the compromised server hasn’t also had the tool you’re using to perform the verification (md5sum or sha1sum) modified, which would be common. That’s why, in cases like this, tools such as stealth can be a good choice.
  • Testing of other attributes like ownership and access permissions. These are good, but again can easily be modified.
  • Monitoring directories using time-stamps. Good idea, but don’t rely solely on this: time-stamps are easily modified with touch -r …, providing it’s done between Monit’s cycles, and an attacker doesn’t necessarily know when those are unless they have permission to read Monit’s control file.
  • Monitoring space of file-systems
  • Has a built-in lightweight HTTP(S) interface you can use to browse the Monit server and check the status of all monitored services. From the web-interface you can start, stop and restart processes and disable or enable monitoring of services. Monit provides fine grained control over who/what can access the web interface or whether it’s even active or not. Again an excellent feature that you can choose to use or not even have the extra attack surface.
  • There’s also an aggregator (M/Monit) that allows sys-admins to monitor and manage many hosts at a time. It also works well on mobile devices and is available for a one-off, reasonably priced cost to monitor all hosts.
  • Once you install Monit you have to actively enable the HTTP daemon in monitrc in order to run the Monit CLI and/or access the Monit HTTP web UI. At first I thought “is this broken?”. I couldn’t even run monit status (a Monit command); ps told me Monit was running. Then I realised… it’s secure by default. You have to actually think about it in order to expose anything. It was this that confirmed Monit for me.
  • The Control File
  • Just like SSH, to protect the security of your control file and passwords the control file must have read-write permissions no more than 0700 (u=xrw,g=,o=); Monit will complain and exit otherwise.
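As referenced in the checksum point above, a rough sketch of what a checksum (plus ownership and permission) check could look like in the Monit control file. The path and the chosen actions are just examples:

check file sshd_binary with path /usr/sbin/sshd
   # Monit records the checksum on its first cycle, then alerts if it changes.
   if failed checksum then alert
   if failed uid root then alert
   if failed gid root then alert
   if failed permission 755 then alert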

Documentation

The following is the documentation I used, in the order I found it most helpful:

  1. Main web site
  2. Official Documentation
  3. Source and links to other documentation including a QUICK START guide of about 6 lines.
  4. Adding Monit to systemd
  5. Release notes

Does it Meet our Goals?

  1. Application can start automatically on system boot
  2. Monit has a plethora of different types of tests it can perform and then follow up with actions based on the outcomes. Http is but one of them.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Yes, I don’t see any issues here
    2. Integrate NodeJS’s core cluster module into your NodeJS application for load balancing (a minimal sketch follows this list). Monit will still monitor, restart and do whatever else you tell it to do.
    3. Monit provides application statistics to look at if that’s what you want, but it also goes further and provides directives for you to declare behaviour based on conditions that Monit checks for.
  4. Plenty of official and community supplied documentation
  5. Yes, it’s production ready; it’s proven itself. Some extra education around some of the security points I raised above would be good. If you could trust the host’s hashing programme (and other commonly trojanised programmes like find, ls, etc) that Monit uses, perhaps because you were monitoring it from a stealth controller (which had already taken a known good copy and produced its own benchmark hash) or similar, then you could use that feature of Monit with greater assurance that the results it produces are in fact accurate. In saying that, you don’t have to use the feature, but it’s there if you want it, which I see as very positive, so long as you understand what could go wrong and where.
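The cluster integration mentioned above lives entirely inside your application, so Monit (or systemd) only ever sees a single master process. A minimal sketch, where ./app is a hypothetical web app module that calls server.listen:

// cluster.js: fork one worker per CPU core.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
   os.cpus().forEach(function () {
      cluster.fork();
   });
   cluster.on('exit', function (worker) {
      console.log('worker ' + worker.process.pid + ' died. Forking a replacement.');
      cluster.fork();
   });
} else {
   // Each worker runs the actual web app; the workers share the listening socket.
   require('./app');
}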

 

Overall Thoughts

The accepted answer here is a pretty good mix and approach to using the right tools for each job. Monit has a lot of capabilities, none of which you must use, so it doesn’t get in your way as many opinionated tools do, dictating how you do things and what you must use in order to do them. Monit allows you to leverage whatever you already have in your stack. You don’t have to install package managers or increase your attack surface beyond [apt-get|aptitude] install monit. It’s easy to configure and has lots of good documentation.

Passenger

Passenger

I’ve looked at Passenger before and it looked quite good then. It still does, with one main caveat: it’s trying to do too much. One can easily get lost in the official documentation (compare the Monit install, a handful of commands covering all Linux distros on one page, with the Passenger install at approximately 10 pages). “Passenger is a web server and application server, designed to be fast, robust and lightweight. It runs your web apps with the least amount of hassle by taking care of almost all administrative heavy lifting for you.” I’d like to see the actual weight rather than just the relative term “lightweight”; to me it doesn’t look lightweight. The feeling I got when evaluating Passenger was similar to the feeling produced by my Ossec evaluation.

The learning curve is quite a bit steeper than all the previous offerings. Passenger has strong opinions that once you buy into could make it hard to use the tools you may want to swap in and out. I’m not seeing the UNIX Philosophy here.

If you look at the Phusion Passenger Philosophy we see some note-worthy comments. “We believe no good software has bad documentation“. If your software is 100% intuitive, the need for documentation should be minimal. Few software products are 100% intuitive, because we only have so much time to develop it. The comment around “the Unix way” is interesting also. At this stage I’m not sure they’ve done better. I’d like to spend some time with someone or some team that has Passenger in production in a diverse environment and see how things are working out.

Passenger isn’t in the Debian repositories, so you would need to add the apt repository.

Passenger is six years old at the time of writing this, but the NodeJS support is only just over a year old.

Features that Didn’t Really Stand Out

Sadly there weren’t many that stood out for me.

  • Handle more traffic looked similar to Monit’s resource testing, but without the detail. If there’s something Monit can’t do well, it’ll essentially say “Hey, use this other tool and I’ll help you configure it to suit the way you want to work. If you don’t like it, swap it out for something else”. Passenger, on the other hand, seems to integrate into everything rather than providing tools that communicate loosely, essentially locking you into a way of doing something that hopefully you like. It also talks about “Uses all available CPU cores”; if you’re using Monit you can use the NodeJS cluster module to take care of that, again leaving the best tool for the job to do what it does best.
  • Reduce maintenance
    • Keep your app running, even when it crashesPhusion Passenger supervises your application processes, restarting them when necessary. That way, your application will keep running, ensuring that your website stays up. Because this is automatic and builtin, you do not have to setup separate supervision systems like Monit, saving you time and effort.” but this is what Monit excels at and it’s a much easier set-up than Passenger. This sort of marketing doesn’t sit right with me.
    • Host multiple apps at once. Host multiple apps on a single server with minimal effort. ” If we’re talking NodeJS web apps, then they are their own server. They host themselves. In this case it looks like Passenger is trying to solve a problem that doesn’t exist?
  • Improve security
    • Privilege separation “If you host multiple apps on the same system, then you can easily run each app as a different Unix user, thereby separating privileges.“. The Monit documentation says this: “If Monit is run as the super user, you can optionally run the program as a different user and/or group.” and goes on to provide examples of how it’s done. So again I don’t see anything new here. Other than the “Slow client protections”, which has side effects, that’s it for security considerations with Passenger. From what I’ve seen, Monit has more in the way of security related features.
  • What I saw happening here was a lot of stuff that I actually didn’t need. Your mileage may vary.

Offerings

Phusion Passenger is a commercial product that has enterprise, custom and open source offerings (the open source version is free and still has loads of features).

Documentation

The following is the documentation I used, in the order I found it most helpful:

  1. NodeJS tutorial (This got me started with how it could work with NodeJS)
  2. Main web site
  3. Documentation and support portal
  4. Design and Architecture
  5. User Guide Index
  6. Nginx specific User Guide
  7. Standalone User Guide
  8. Twitter, blog
  9. IRC: #passenger at irc.freenode.net. I was on there for several days. There was very little activity.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot. There is no doubt that Passenger goes way beyond this aim.
  2. Application should be re-started if it dies or becomes un-responsive. There is no doubt that Passenger goes way beyond this aim.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Passenger provides Integrations into Nginx, Apache and stand-alone (provide your own proxy)
    2. Passenger scales up NodeJS processes and automatically load balances between them
    3. Passenger is advertised as offering easily viewable statistics.
  4. There is loads of official documentation. Not as much community contributed though, as it’s still young.
  5. From what I’ve seen so far, I’d say Passenger is production ready. I would like to see more around how security was baked into the architecture though before I committed to using it.

Overall Thoughts

I spent quite a while reading the documentation. I just think it’s doing too much. I prefer stronger, single focused tools that do one job, do it well and play nicely with all the other kids in the sand pit. You pick the tool up and it’s just intuitive how to use it; you end up reading docs only to confirm how you think it should work. For me, this is not how Passenger is.

If you’re looking for something even more comprehensive, check out Zabbix. If you like to pay for your tools, check out Nagios if you haven’t already.


At this point it was fairly clear which components I’d be using and configuring to keep my NodeJS application monitored, alive and healthy, along with any other scripts or processes: systemd and Monit. If you’re on Ubuntu you’d probably use Upstart instead of systemd, as it should already be your default init system. Going with the default init system gives you a quick start and plenty of power; it’s well supported, reliable, feature rich, and you can manage anything you want without installing extra packages. For the next level up I’d choose Monit. I’ve now used it in production and it’s taken care of everything above the init system. It has a good level of abstraction, plenty of features, doesn’t get in the way and integrates nicely into your production OS.

Getting Started with Monit

So we’ve installed Monit with an apt-get install monit and we’re ready to start configuring it.

ps aux | grep -i monit

Will reveal that Monit is running:

/usr/bin/monit -c /etc/monit/monitrc

Now if you issue a sudo service monit restart, it won’t work as you can’t access the Monit CLI due to the httpd not running.

The first thing we need to do is make some changes to the control file (/etc/monit/monitrc in Debian). The control file has sensible defaults already. At this stage I don’t need a web UI accessible via localhost or any other hosts, but it still needs to be turned on and accessible by at least localhost. Here’s why:

Note that HTTP support is required for almost all Monit CLI interface operation, as CLI commands (such as “monit status”) are handled by communicating with the Monit background process via the HTTP interface. So basically you should have this enabled, though you can bind the HTTP interface to localhost only so Monit is not accessible from the outside.

In order to turn on the httpd, all you need in your control file for that is:

set httpd port 2812 and
   use address localhost  # only accept connection from localhost
   allow localhost        # allow localhost to connect to the server

If you want to receive alerts via email, then you’ll need to configure that too; a minimal sketch follows. Once you reload you should get start and stop events (when you quit).
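The mail-related directives look something like this; the mail server, credentials and address are placeholders, so substitute your own:

set mailserver smtp.example.com port 587
   username "monit" password "an-example-password"
set alert you@your.domain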

sudo monit reload

Now if you issue a curl localhost:2812 you should get the web UI’s response of an HTML page, and you can start to play with the Monit CLI.

Now to stop the Monit background process use:

monit quit

Oh, you can find all the arguments you can throw at Monit here, or just issue a:

monit -h # will list all options.

To check the control file for syntax errors:

sudo monit -t

Also keep an eye on your log file which is specified in the control file: set logfile /var/log/monit.log

Right. So what happens when Monit dies…

Keep Monit Alive

Now you’re going to want to make sure your monitoring tool, which can be configured to take all sorts of actions, never just stops running, leaving you flying blind. No noise from your servers means all good, right? Not necessarily. Your monitoring tool just has to keep running, so let’s make sure of that now.

When Monit is apt-get install‘ed on Debian it gets installed and configured to run as a daemon. This is defined in Monit’s init script.
Monit’s init script is copied to /etc/init.d/ and the run levels are set-up for it. This means whenever a run level is entered the init script will be run, taking either the single argument of stop (example: /etc/rc0.d/K01monit) or start (example: /etc/rc2.d/S17monit). Further details on run levels here.
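If you want to see those run level symlinks on your own box, something like the following should list them (the exact names will vary between systems):

ls -l /etc/rc?.d/*monit*
# e.g. /etc/rc0.d/K01monit -> ../init.d/monit and /etc/rc2.d/S17monit -> ../init.d/monit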

systemd to the rescue

Monit is pretty stable, but if for some reason it dies, then it won’t be automatically restarted again.
This is where systemd comes in. systemd is installed out of the box on Debian Jessie onwards; Ubuntu uses Upstart, which is similar. SysV init and systemd can act as drop-in replacements for each other, or even work alongside each other, which is the case in Debian Jessie. If you add a unit file which describes the properties of the process that you want to run, then issue some magic commands, the systemd unit file will take precedence over the init script (/etc/init.d/monit).

Before we get started, lets get some terminology established. The two concepts in systemd we need to know about are unit and target.

  1. A unit is a configuration file that describes the properties of the process that you’d like to run. There are many examples of these, and I’ll point you in their direction soon. They should have a [Unit] section at a minimum. The syntax of the unit files and the target files was derived from Microsoft Windows .ini files. The idea is that if you want to have a [Service] section within your unit file, then you append .service to the end of your unit file name.
  2. A target is a grouping mechanism that allows systemd to start up groups of processes at the same time. This happens at every boot as processes are started at different run levels.

Now in Debian there are two places that systemd looks for unit files… In order from lowest to highest precedence, they are as follows:

  1. /lib/systemd/system/ (prefixed with /usr on Arch Linux): unit files provided by installed packages. Have a look in here for many existing examples of unit files.
  2. /etc/systemd/system/ unit files created by the system administrator

As mentioned above, systemd should be the first process started on your Linux server. systemd reads the different targets and runs the scripts within the specific target’s “target.wants” directory (which just contains a collection of symbolic links to the unit files). For example, the target we’ll be working with is multi-user.target (actually we don’t touch it; systemctl does that for us, via the magic commands mentioned above). Just as systemd has two locations in which it looks for unit files, I think this is probably the same for target files, although there weren’t any target files in the system administrator defined unit location; there were some *.target.wants directories there though.

systemd Monit Unit file

I found a template that Monit had already provided for a unit file in /usr/share/doc/monit/examples/monit.service. There’s also one for Upstart. Copy that to where the system administrator unit files should go, and make the change so that systemd restarts Monit if it dies for whatever reason. Check the Restart= options on the systemd.service man page. The following is what my initial unit file looked like:

[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/monit -I -c /etc/monit/monitrc
ExecStop=/usr/bin/monit -c /etc/monit/monitrc quit
ExecReload=/usr/bin/monit -c /etc/monit/monitrc reload
Restart=always

[Install]
WantedBy=multi-user.target

Now, some explanation. Most of this is pretty obvious. The After= directive just tells systemd to make sure the network.target unit has been acted on first; network.target in turn has After=network-pre.target, which doesn’t have a lot in it. I’m not going to go into this now as I don’t really care too much about it; it works, and it means the network interfaces have to be up first. If you want to know how and why, check this documentation. Type=simple: again, check the systemd.service man page.
Now, to have systemd control Monit, Monit must not run as a background process (the default). To do this, we can either add the set init statement to Monit’s control file or add the -I option to the monit invocation in the unit file, which is exactly what we’ve done above. The WantedBy= is the target that this specific unit is part of.

Now we need to tell systemd to create the symlinks in multi-user.target.wants directory and other things. See the man page for more details about what enable actually does if you want them. You’ll also need to start the unit.

Now what I like to do here is:

systemctl status /etc/systemd/system/monit.service

That outputs the following, which we can then compare with the output once we enable the service:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; disabled)
   Active: inactive (dead)
sudo systemctl enable /etc/systemd/system/monit.service
# systemd now knows about monit.service
systemctl status /etc/systemd/system/monit.service

Outputs:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; enabled)
   Active: inactive (dead)

Now start the service:

sudo systemctl start monit.service # there's a stop and restart also.

Now you can check the status of your Monit service again. This shows terse runtime information about the units or PID you specify (monit.service in our case).

sudo systemctl status monit.service

By default this command will show you 10 lines of output. The number of lines can be controlled with the --lines= option:

sudo systemctl --lines=20 status monit.service

Now try killing the Monit process. At the same time, you can watch the output of Monit in another terminal. tmux or screen is helpful for this:

sudo tail -f /var/log/monit.log
sudo kill -SIGTERM $(pidof monit)
# SIGTERM is a safe kill and is the default, so you don't actually need to specify it. Be patient, this may take a minute or two for the Monit process to terminate.

Or you can emulate a nastier termination with SIGKILL or even SEGV (which may kill monit faster).

Now when you run another status command you should see the PID has changed. This is because systemd has restarted Monit.

When you need to make modifications to the unit file, you’ll need to run the following command after save:

sudo systemctl daemon-reload

When you need to make modifications to the running service’s configuration file,
/etc/monit/monitrc for example, you’ll need to run the following command after save:

sudo systemctl reload monit.service
# because systemd is now in control of Monit, rather than the before mentioned: sudo monit reload

 

Keep NodeJS Application Alive

Right, we know systemd is always going to be running. So let’s use it to take care of the coarse grained service control; that is, keeping your NodeJS application service alive.

Using systemd

systemd my-web-app.service Unit file

You’ll need to know where your NodeJS binary is. The following will provide the path:

which nodejs # on Debian; on other set-ups the binary may just be called node

Now create a systemd unit file my-nodejs-app.service

[Unit]
Description=My amazing NodeJS application
After=network.target

[Service]
# systemctl start my-nodejs-app # to start the NodeJS script
ExecStart=[where nodejs binary lives] [where your app.js/index.js lives]
# systemctl stop my-nodejs-app # to stop the NodeJS script
# SIGTERM (15) - Termination signal. This is the default and safest way to kill process.
# SIGKILL (9) - Kill signal. Use SIGKILL as a last resort to kill process. This will not save data or cleaning kill the process.
ExecStop=/bin/kill -SIGTERM $MAINPID
# systemctl reload my-nodejs-app # to perform a zero-downtime restart.
# SIGHUP (1) - Hangup detected on controlling terminal or death of controlling process. Use SIGHUP to reload configuration files and open/close log files.
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=my-nodejs-app
User=my-nodejs-app
# Group is not really needed unless it's different, as the default group of the user is chosen without this option.
# Self documenting though, so I like to have it present. (systemd doesn't allow comments on the same line as a directive, so it sits above.)
Group=my-nodejs-app
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

Add the system user and group so systemd can actually run your service as the user you’ve specified.

sudo groupadd --system my-nodejs-app # this is not needed if you adduser like below...
getent group # to verify which groups exist.
sudo adduser --system --no-create-home --group my-nodejs-app # This will create a system group with the same name and ID of the user.
groups my-nodejs-app # to verify which groups the new user is in.

Now as we did above, go through the same procedure enable‘ing, start‘ing and verifying your new service.
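Assuming the unit file above was saved as /etc/systemd/system/my-nodejs-app.service, that procedure looks just like it did for Monit:

sudo systemctl enable /etc/systemd/system/my-nodejs-app.service
sudo systemctl start my-nodejs-app.service
sudo systemctl status my-nodejs-app.service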

Make sure you have your directory permissions set-up correctly and you should have a running NodeJS application that when it dies will be restarted automatically by systemd.

Don’t forget to backup all your new files and changes in case something happens to your server.

We’re done with systemd for now. Following are some useful resources I’ve used:

 

Using Monit

Now just configure your Monit control file. You can spend a lot of time here tweaking a lot more than just your NodeJS application. There are loads of examples around and the control file itself has lots of commented out examples also. You’ll find the following the most helpful:

There are a few things that had me stuck for a bit. By default Monit only sends alerts on change, not on every cycle while the condition stays the same. If you want the latter, then where you set up your alert recipient:

set alert your-name@your.domain

Append receive all alerts, so that it looks like this:

set alert your-name@your.domain receive all alerts

There’s quite a few things you just work out as you go. The main part I used to health-check my NodeJS app was:

check host myhost with address 1.2.3.4
   start program = "/bin/systemctl start my-nodejs-app.service"
   stop program = "/bin/systemctl stop my-nodejs-app.service"
   if failed ping then alert
   if failed
      port 80 and
      protocol http and
      status = 200 # The default without status is failure if status code >= 400
      request /testdir with content = "some text on my web page" and
         then restart
   if 5 restarts within 5 cycles then alert

I carry on and check things like the following (a rough sketch of the first few follows this list):

  1. cpu and memory usage
  2. load averages
  3. File system space on all the mount points
  4. Check SSH: that it hasn’t been restarted by anything other than Monit (potentially indicating the binary or its config has been swapped). Of course if an attacker kills Monit, systemd immediately restarts it and we get Monit alert(s). We also get real-time logging, hopefully to an off-site syslog server. Ideally your off-site syslog server also has alerts set-up on particular log events. On top of that, you should have inactivity alerts set-up, so that if your log files are not generating the events you expect, you also receive alerts; services like Dead Man’s Snitch or packages like Simple Event Correlator with Cron are good for this. On top of all that, if you have a file integrity checker that resides on another system that your host reveals no details of, and you’ve got it configured to check all the right file check-sums, dates, permissions, etc, you’re removing a lot of low hanging fruit for someone wanting to compromise your system.
  5. Directory permissions, uid, gid and checksums. Of course you’re also going to have to make sure the tools that Monit uses to do these checks haven’t been modified.
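The first few of those look roughly like the following in the control file. The host name and thresholds are arbitrary examples, so tune them to your own machine:

check system my-host.example.com
   if loadavg (5min) > 2 then alert
   if memory usage > 80% then alert
   if cpu usage (user) > 70% for 3 cycles then alert
   if cpu usage (system) > 30% for 3 cycles then alert

check filesystem rootfs with path /
   if space usage > 85% then alert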

 

If you find anything I haven’t explained clearly, or you need a hand with any of this just leave a comment. Cheers.

Journey To Self Hosting

November 29, 2014

I was recently tasked with working out the best options for hosting web applications and their data for a client. This was their foray into deciding whether to throw all their stuff into the cloud or to build their own infrastructure to host everything on.

Hosting Options

There are a lot of options available now, most of which are derivatives of either external cloud or internal (possibly private cloud) hosting. All of them come with features and price tags that need to be weighed up. I’ve been collecting resources on providers and their offerings (both cloud and in-house) for quite a while, so I didn’t have to go far to pull them together for comparison.

All sites and apps require a different amount of each resource type to be allocated to them. For example, many web sites are still predominantly static, which requires more network bandwidth than any other resource, some memory, a little processing power and, provided they’re being cached on the server, not a lot else. These resources are very cheap.

If you’re running an e-commerce site, then you potentially need to add more disk I/O (usually the first bottleneck), processing power and space for your data store, plus redundancy, backups and the administration of them.
Fast disks (or let’s just call it storage) are cheap. In fact most hardware is cheap.

Administration of redundancy, backups and staying on top of security starts to cost more. Staying on top of security will need to be done whether you’re on someone else’s hardware or on your own; it’s just a lot easier on your own, because you’re in control and dictate the amount of visibility you have.

The Cloud

The Cloud

Pros

It’s out of your hands.
Indeed it is, in more ways than one. Your trust is going to have to be honoured here (or not). Yes, you have SLAs, but what guarantee do the SLAs give you that the people working on your system and data are not having a bad day? Maybe they’ve broken up with their girlfriend, or whatever. It takes very little to miss something that could drastically compromise your system and/or data.

VPSs can be spun up quickly, but remember, good things take time. Everything has a cost. Things are quick and easy for a reason; think about what those (often hidden) costs are.

In some cases it can be cheaper, but you get what you pay for.

Cons

You are trusting others with your data, even others that you are not aware of. In many cases, hosting providers can be (and often are) forced by governments and other agencies to give up your secrets. This is commonplace now and you may not even know it’s happened.

Your provider may go out of business.

There is an inherent lack of security in all the cloud providers I’ve looked at and worked with. They will tell you they take security seriously, but when someone who understands security inspects how they do things, the situation often looks like Swiss cheese.

In-House Cloud

In-House Cloud

Pros

You are in control of your data and your application, providing you or “your” staff:

  • and/or external consultants are competent and haven’t made mistakes in setting up your infrastructure
  • Are patching all software/firmware involved
  • Are fastidiously hardening your server(s) (this is continuous; it doesn’t stop at the initial set-up)
  • Have set-up the routes and firewall rules correctly
  • Have the correct alerts set-up
  • Have implemented Intrusion Detection and Prevention Systems (IDS’s/IPS’s)
  • Have penetration tested the set-up and not just from a technical perspective. It’s often best to get pairs to do the reviews.

The list goes on. If you are at all in doubt, that’s where you consider the alternatives. In saying that, most hosting and cloud providers perform abysmally, despite their claims that your applications and data are safe with them.

It “can” cost less than entrusting your system and data to someone (or many someones) on the other side of the planet. Weigh up the costs; they will not always be what they appear at face value.

Hardware is very cheap.

Cons

Potential lack of in-house skills.

People with the right skills and attitudes are not cheap.

It may not be core business. You may not have the necessary capability in-house to scope, architect, cost, set-up and administer it. Potentially you could hire someone to do the initial work and the on-going administration. The amount of on-going administration will be partly determined by what you’re hosting. Generally speaking, hosting company web sites, blogs, etc will require less work than systems with distributed components and redundancy.

Spinning up an instance to develop or prototype on doesn’t have to be hard. In fact, if you have some hardware, provisioning VM images is usually quick and easy. There is actually a pro in this too: you decide how much security you want baked into these images and into the processes used to configure them.

Consider download latencies for the people you want to reach, who are possibly in other countries.

In some cases it can be more expensive, but you get what you pay for.

Outcome

The decision for this client was made to self host. There will be a follow up post detailing some of the hardening process I took for one of their Debian web servers.

Node.js Asynchronicity and Callback Nesting

July 26, 2014

Just a heads up before we get started on this… Chrome DevTools (v35) now has the ability to show the full call stack of asynchronous JavaScript callbacks. At the time of writing, if you develop on Linux you’ll want the dev channel; my Chrome on Linux Mint 13 was three versions behind, so I had to update to the dev channel until I upgraded to the LTS 17 (Qiana).

All code samples can be found at GitHub.

Deep Callback Nesting

AKA callback hell or the temple of doom. Often the functions that are nested are anonymous, and often they are implicit closures. When it comes to asynchronicity in JavaScript, callbacks are our bread and butter. In saying that, often the best way to use them is by abstracting them behind more elegant APIs.

Being aware of when new functions are created, and when you need to make sure the memory being held by a closure is released (dropped out of scope), can be important for code that’s hot; otherwise you’re in danger of introducing subtle memory leaks.

What is it?

Passing functions as arguments to functions which return immediately. The function (callback) that’s passed as an argument will be run at some time in the future, when the potentially time-expensive operation is done. By convention this callback has the error as its first parameter on error, or null on success of the expensive operation. In JavaScript we should never block on potentially time-expensive operations such as I/O or network operations. We only have one thread in JavaScript, so we allow the JavaScript implementation to place our discrete operations on the event queue.

One other point worth mentioning is that we should never call asynchronous callbacks synchronously, unless of course we’re unit testing them, in which case we should rarely be calling them asynchronously. Always allow the JavaScript engine to put the callback onto the event queue rather than calling it immediately, even if you already have the result to pass to the callback. By ensuring the callback executes on a subsequent turn of the event loop, you provide strict separation between the callback being allowed to change data that’s shared between itself (usually via closure) and the currently executing function. There are many ways to ensure the callback is run on a subsequent turn of the event loop; asynchronous APIs like setTimeout and setImmediate allow you to schedule your callback to run on a subsequent turn. The Promises/A+ specification (discussed below), for example, specifies this.
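As a small illustration of that last point, here’s a hypothetical getConfig function that sometimes has a cached result available immediately, but still defers the callback:

var fs = require('fs');
var cachedConfig = null;

function getConfig(callback) {
   if (cachedConfig) {
      // Defer, even though we already have the result, so the callback
      // always runs on a subsequent turn of the event loop.
      setImmediate(function () {
         callback(null, cachedConfig);
      });
      return;
   }
   // Otherwise do the expensive asynchronous work, caching the result
   // before invoking the error-first callback.
   fs.readFile('config.json', 'utf8', function (err, data) {
      if (err) {
         callback(err);
         return;
      }
      cachedConfig = JSON.parse(data);
      callback(null, cachedConfig);
   });
}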

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var nestedCoffee = sUTDirectory('nestedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function (done) {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the ugly nested callback coffee machine', function (done) {

      var result = function (error, state) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: ' + state.description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(error, expectedErrorOutCome, 'brew encountered an error. The following are the error details: ' + error.message);
         }
         done();
      };

      nestedCoffee().brew(result);
   });
});

The System Under Test

'use strict';

module.exports = function nestedCoffee() {

   // We don't do instant coffee ####################################

   var boilJug = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
   };
   var addInstantCoffeePowder = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Crappy instant coffee powder is being added.');
   };
   var addSugar = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Sugar is being added.');
   };
   var addBoilingWater = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Boiling water is being added.');
   };
   var stir = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Coffee is being stirred. Hmm...');
   };

   // We only do real coffee ########################################

   var heatEspressoMachine = function (state, callback) {
      var error = undefined;
      var wrappedCallback = function () {
         console.log('Espresso machine heating cycle is done.');
         if(!error) {
            callback(error, state);
         } else
            console.log('wrappedCallback encountered an error. The following are the error details: ' + error);
      };
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      // If there is an error, wrap callback with our own error function

      // Even if you call setTimeout with a time of 0 ms, the callback you pass is placed on the event queue to be called on a subsequent turn of the event loop.
      // Also be aware that setTimeout has a minimum granularity of 4ms for timers nested more than 5 deep. For several reasons we prefer to use setImmediate if we don't want a 4ms minimum wait.
      // setImmediate will schedule your callbacks on the next turn of the event loop, but it goes about it in a smarter way. Read more about it here: https://developer.mozilla.org/en-US/docs/Web/API/Window.setImmediate
      // If you are using setImmediate and it's not available in the browser, use the polyfill: https://github.com/YuzuJS/setImmediate
      // For this, we need to wait for our huge hunk of copper to heat up, which takes a lot longer than a few milliseconds.
      setTimeout(
         // Once espresso machine is hot callback will be invoked on the next turn of the event loop...
         wrappedCallback, espressoMachineHeatTime.milliseconds
      );
   };
   var grindDoseTampBeans = function (state, callback) {
      // Perform long running action.
      console.log('We are now grinding, dosing, then tamping our dose.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var mountPortaFilter = function (state, callback) {
      // Perform long running action.
      console.log('Porta filter is now being mounted.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var positionCup = function (state, callback) {
      // Perform long running action.
      console.log('Placing cup under portafilter.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var preInfuse = function (state, callback) {
      // Perform long running action.
      console.log('10 second preinfuse now taking place.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var extract = function (state, callback) {
      // Perform long running action.
      console.log('Cranking leaver down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.

      // Uncomment the below to test the error.
      //callback({message: 'Oh no, something has gone wrong!'})
      callback(null, state);
   };
   var espressoMachineHeatTime = {
      // if you don't want to wait for the machine to heat up assign minutes: 0.2.
      minutes: 30,
      get milliseconds() {
         return this.minutes * 60000;
      }
   };
   var state = {
      description: ''
      // Other properties
   };
   var brew = function (onCompletion) {
      // Some prep work here possibly.
      heatEspressoMachine(state, function (err, resultFromHeatEspressoMachine) {
         if(!err) {
            grindDoseTampBeans(state, function (err, resultFromGrindDoseTampBeans) {
               if(!err) {
                  mountPortaFilter(state, function (err, resultFromMountPortaFilter) {
                     if(!err) {
                        positionCup(state, function (err, resultFromPositionCup) {
                           if(!err) {
                              preInfuse(state, function (err, resultFromPreInfuse) {
                                 if(!err) {
                                    extract(state, function (err, resultFromExtract) {
                                       if(!err)
                                          onCompletion(null, state);
                                       else
                                          onCompletion(err, null);
                                    });
                                 } else
                                    onCompletion(err, null);
                              });
                           } else
                              onCompletion(err, null);
                        });
                     } else
                        onCompletion(err, null);
                  });
               } else
                  onCompletion(err, null);
            });
         } else
            onCompletion(err, null);
      });
   };
   return {
      // Publicise brew.
      brew: brew
   };
};

What’s wrong with it?

  1. It’s hard to read, reason about and maintain
  2. The debugging experience isn’t very informative
  3. It creates more garbage than adding your functions to a prototype
  4. Dangers of leaking memory due to retaining closure references
  5. Many more…

What’s right with it?

  • It’s asynchronous

Closures are one of the language features in JavaScript that they got right; there are often issues in how we use them though. Be very careful of what you’re doing with closures: if you’ve got hot code, don’t create a new function every time you want to execute it.

Resources

  • Chapter 7 Concurrency of the Effective JavaScript book by David Herman

 

Alternative Approaches

Ranging from marginally good approaches to better approaches. Keep in mind that all these techniques add value, and some make more sense in some situations than others. They are all approaches for making callback hell more manageable, and often encapsulating it completely, so much so that the underlying workings are no longer just a bunch of callbacks but rather well thought out implementations offering up a consistent, well recognised API. Try them all, get used to them all, then pick the one that suits your particular situation. The first two examples from here are blocking though, so I wouldn’t use them as they are; they are just an example of how to make some improvements.

Name your anonymous functions

  1. They’ll be easier to read and understand
  2. You’ll get a much better debugging experience, as stack traces will reference named functions rather than “anonymous function”
  3. If you want to know where the source of an exception was
  4. Reveals your intent without adding comments
  5. In itself will allow you to keep your nesting shallow
  6. A first step to creating more extensible code

We’ve made some improvements in the next two examples, but introduced blocking in the Array.prototype.forEach loop, which we really don’t want to do.

Example of Anonymous Functions

var boilJug = function () {
   // Perform long running action
};
var addInstantCoffeePowder = function () {
   // Perform long running action
   console.log('Crappy instant coffee powder is being added.');
};
var addSugar = function () {
   // Perform long running action
   console.log('Sugar is being added.');
};
var addBoilingWater = function () {
   // Perform long running action
   console.log('Boiling water is being added.');
};
var stir = function () {
   // Perform long running action
   console.log('Coffee is being stirred. Hmm...');
};
var heatEspressoMachine = function () {
   // Flick switch, check water.
   console.log('Espresso machine is being turned on and is now heating.');
};
var grindDoseTampBeans = function () {
   // Perform long running action
   console.log('We are now grinding, dosing, then tamping our dose.');
};
var mountPortaFilter = function () {
   // Perform long running action
   console.log('Portafilter is now being mounted.');
};
var positionCup = function () {
   // Perform long running action
   console.log('Placing cup under portafilter.');
};
var preInfuse = function () {
   // Perform long running action
   console.log('10 second preinfuse now taking place.');
};
var extract = function () {
   // Perform long running action
   console.log('Cranking leaver down and extracting pure goodness.');
};

(function () {
   // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
   return [
      'heatEspressoMachine',
      'grindDoseTampBeans',
      'mountPortaFilter',
      'positionCup',
      'preInfuse',
      'extract',
   ].forEach(
      function (brewStep) {
         this[brewStep]();
      }
   );
}());


Example of Named Functions

This now satisfies all the points above, providing the same output. Hopefully you’ll also be able to see a few other issues I’ve addressed with this example: we’re no longer clobbering the global scope, and we can now make any of the other types of coffee with a single additional one-line function call, so we’re removing duplication.

var BINARYMIST = (function (binMist) {
   binMist.coffee = {
      action: function (step) {

         return {
            boilJug: function () {
               // Perform long running action
            },
            addInstantCoffeePowder: function () {
               // Perform long running action
               console.log('Crappy instant coffee powder is being added.');
            },
            addSugar: function () {
               // Perform long running action
               console.log('Sugar is being added.');
            },
            addBoilingWater: function () {
               // Perform long running action
               console.log('Boiling water is being added.');
            },
            stir: function () {
               // Perform long running action
               console.log('Coffee is being stirred. Hmm...');
            },
            heatEspressoMachine: function () {
               // Flick switch, check water.
               console.log('Espresso machine is being turned on and is now heating.');
            },
            grindDoseTampBeans: function () {
               // Perform long running action
               console.log('We are now grinding, dosing, then tamping our dose.');
            },
            mountPortaFilter: function () {
               // Perform long running action
               console.log('Portafilter is now being mounted.');
            },
            positionCup: function () {
               // Perform long running action
               console.log('Placing cup under portafilter.');
            },
            preInfuse: function () {
               // Perform long running action
               console.log('10 second preinfuse now taking place.');
            },
            extract: function () {
               // Perform long running action
               console.log('Cranking lever down and extracting pure goodness.');
            }
         }[step]();
      },
      coffeeType: function (type) {
         return {
            'cappuccino': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'instant': {
               brewSteps: function () {
                  return [
                     'addInstantCoffeePowder',
                     'addSugar',
                     'addBoilingWater',
                     'stir'
                  ];
               }
            },
            'macchiato': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'mocha': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'short black': {
               brewSteps: function () {
                  return [
                     'heatEspressoMachine',
                     'grindDoseTampBeans',
                     'mountPortaFilter',
                     'positionCup',
                     'preInfuse',
                     'extract',
                  ];
               }
            }
         }[type];
      },
      'brew': function (requestedCoffeeType) {
         var that = this;
         var brewSteps = this.coffeeType(requestedCoffeeType).brewSteps();
         // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
         brewSteps.forEach(function runCoffeeMakingStep(brewStep) {
            that.action(brewStep);
         });
      }
   };
   return binMist;

} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.coffee.brew('short black');

named functions


Web Workers

I’ll address these in another post.

Create Modules

Everywhere.

Legacy Modules (Server or Client side)

AMD Modules using RequireJS

CommonJS type Modules in Node.js

In most of the examples I’ve created in this post I’ve exported the system under test (SUT) modules and then required them into the test. Node modules are very easy to create and consume. requireFrom is a great way to require your local modules without explicit directory traversal, thus removing the need to change your require statements when you move your files that are requiring your modules.

NPM Packages

Browserify

Here we get to consume npm packages in the browser.

Universal Module Definition (UMD)

ES6 Modules

That’s right, we’re getting modules as part of the specification (15.2). Check out this post by Axel Rauschmayer to get you started.


Recursion

I’m not going to go into this here, but recursion can be used as a lightweight solution to provide some logic to determine when to run the next asynchronous piece of work. Item 64 “Use Recursion for Asynchronous Loops” of the Effective JavaScript book provides some great examples. Do yourself a favour and get a copy of David Herman’s book. Oh, and we’re also getting tail-call optimisation in ES6.
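As a minimal sketch of the idea (not the book’s example), where performStepAsync just simulates a long running step with setTimeout:

function performStepAsync(brewStep, callback) {
   setTimeout(function () {
      console.log(brewStep + ' is done.');
      callback(null);
   }, 10);
}

function runStepsInSeries(brewSteps, onCompletion) {
   (function next(i) {
      if (i >= brewSteps.length) {
         return onCompletion(null);
      }
      performStepAsync(brewSteps[i], function (error) {
         if (error) {
            return onCompletion(error);
         }
         // Recurse rather than loop, so nothing blocks while a step is in flight.
         next(i + 1);
      });
   }(0));
}

runStepsInSeries(['grindDoseTampBeans', 'mountPortaFilter', 'extract'], function (error) {
   console.log(error ? error.message : 'All steps are done.');
});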

 


EventEmitter

Still creates more garbage unless your functions are on the prototype, but does provide asynchronicity. We could put our functions on the prototype, but then they’d all be public, and if they’re part of a process then we don’t want our coffee process spilling all its secrets about how it makes perfect coffee. In saying that, if our code is hot and profiling shows it’s a stand-out for using too much memory, we could refactor EventEmittedCoffee to have its function declarations added to EventEmittedCoffee.prototype and perhaps hidden another way, but I wouldn’t worry about it until it’s been proven to be using too much memory.

Events are used in the well known Gang of Four Observer (behavioural) pattern (which I discussed the C# implementation of here) and, at a higher level, the Enterprise Integration Publish/Subscribe pattern. The Observer pattern is used in quite a few other patterns also; the ones that spring to mind are Model View Presenter and Model View Controller. The pub/sub pattern is slightly different to the Observer in that it has a topic/event channel that sits between the publisher and the subscriber, and it uses contractual messages to encapsulate and transmit its events.

Here’s an example of the EventEmitter …

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var eventEmittedCoffee = sUTDirectory('eventEmittedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the event emitted coffee machine', function (done) {

      function handleSuccess(state) {
         var stateOutCome = 'The state of the ordered coffee is: ' + state.description;
         stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         done();
      }

      function handleFailure(error) {
         assert.fail(error, 'brew encountered an error. The following are the error details: ' + error.message);
         done();
      }

      // We could even assign multiple event handlers to the same event. We're not here, but we could.
      eventEmittedCoffee.on('successfulOrder', handleSuccess).on('failedOrder', handleFailure);

      eventEmittedCoffee.brew();
   });
});

The System Under Test

'use strict';

var events = require('events'); // Core node module.
var util = require('util'); // Core node module.

var eventEmittedCoffee;
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: ''
};

function EventEmittedCoffee() {

   var eventEmittedCoffee = this;

   function heatEspressoMachine(state) {
      // No need for callbacks. We can emit a failedOrder event at any stage and any subscribers will be notified.

      function emitEspressoMachineHeated() {
         console.log('Espresso machine heating cycle is done.');
         eventEmittedCoffee.emit('espressoMachineHeated', state);
      }
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      setTimeout(
         // Once espresso machine is hot event will be emitted on the next turn of the event loop...
         emitEspressoMachineHeated, espressoMachineHeatTime.milliseconds
      );
   }

   function grindDoseTampBeans(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('We are now grinding, dosing, then tamping our dose.');
      eventEmittedCoffee.emit('groundDosedTampedBeans', state);
   }

   function mountPortaFilter(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Porta filter is now being mounted.');
      eventEmittedCoffee.emit('portaFilterMounted', state);
   }

   function positionCup(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Placing cup under portafilter.');
      eventEmittedCoffee.emit('cupPositioned', state);
   }

   function preInfuse(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('10 second preinfuse now taking place.');
      eventEmittedCoffee.emit('preInfused', state);
   }

   function extract(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Cranking lever down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      eventEmittedCoffee.emit('successfulOrder', state);
      // If you want to fail the order, replace the above two lines with the below two lines.
      // state.error = 'Oh no! That extraction came out far too fast.'
      // this.emit('failedOrder', state);
   }

   eventEmittedCoffee.on('timeToHeatEspressoMachine', heatEspressoMachine).
   on('espressoMachineHeated', grindDoseTampBeans).
   on('groundDosedTampedBeans', mountPortaFilter).
   on('portaFilterMounted', positionCup).
   on('cupPositioned', preInfuse).
   on('preInfused', extract);
}

// Make sure util.inherits is before any prototype augmentations, as it seems it clobbers the prototype if it's the other way around.
util.inherits(EventEmittedCoffee, events.EventEmitter);

// Only public method.
EventEmittedCoffee.prototype.brew = function () {
   this.emit('timeToHeatEspressoMachine', state);
};

eventEmittedCoffee = new EventEmittedCoffee();

module.exports = eventEmittedCoffee;

With raw callbacks, we have to pass them (functions) around. With events, we can have many interested parties request (subscribe) to be notified when something they are interested in happens (the event). The Observer pattern promotes loose coupling, as the thing (publisher) wanting to inform interested parties of specific events has no knowledge of its subscribers; this is essentially what a service is.
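As a minimal sketch of that loose coupling (separate from the EventEmittedCoffee module above), where the publisher has no idea how many subscribers there are or what they do:

var EventEmitter = require('events').EventEmitter;
var orders = new EventEmitter();

// Any number of interested parties can subscribe without the publisher knowing about them.
orders.on('successfulOrder', function logOrder(state) {
   console.log('Logging: ' + state.description);
});
orders.on('successfulOrder', function notifyBarista(state) {
   console.log('Barista notified of: ' + state.description);
});

orders.emit('successfulOrder', { description: 'beautiful shot!' });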

Resources


Async.js

Provides a collection of methods on the async object that:

  1. take an array and perform certain actions on each element asynchronously
  2. take a collection of functions to execute in specific orders asynchronously, some based on different criteria. The likes of async.waterfall allow you to pass the results of the previous function to the next (there’s a small sketch just below this list). Don’t underestimate these; there are a bunch of very useful routines.
  3. are asynchronous utilities
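For instance, here’s a minimal async.waterfall sketch (assuming the async package is installed), where each function hands its results to the next:

var async = require('async');

async.waterfall([
   function grindBeans(callback) {
      callback(null, 'ground beans');
   },
   function tampDose(groundBeans, callback) {
      // The result of the previous function arrives as the leading argument(s).
      callback(null, groundBeans + ', tamped');
   }
], function onCompletion(error, result) {
   if (error) { return console.error(error); }
   console.log(result); // 'ground beans, tamped'
});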

Here’s a fuller example using async.series…

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var asyncCoffee = sUTDirectory('asyncCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the async coffee machine', function (done) {

      var result = function (error, resultsFromAllAsyncSeriesFunctions) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: '
               + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(
               error,
               expectedErrorOutCome,
               'brew encountered an error. The following are the error details. message: '
                  + error.message
                  + '. The finished state of the ordered coffee is: '
                  + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description
            );
         }
         done();
      };

      asyncCoffee().brew(result)
   });
});

The System Under Test

'use strict';

var async = require('async');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: null
};

module.exports = function asyncCoffee() {

   var brew = function (onCompletion) {
      async.series([
         function heatEspressoMachine(heatEspressoMachineDone) {
            // No need to hand-roll a callback chain. We can just pass an error to the async supplied callback at any stage and the onCompletion callback will be invoked with the error and the results immediately.

            function espressoMachineHeated() {
               console.log('Espresso machine heating cycle is done.');
               heatEspressoMachineDone(state.error);
            }
            // Flick switch, check water.
            console.log('Espresso machine has been turned on and is now heating.');
            // Mutate state.
            setTimeout(
               // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
               espressoMachineHeated, espressoMachineHeatTime.milliseconds
            );
         },
         function grindDoseTampBeans(grindDoseTampBeansDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('We are now grinding, dosing, then tamping our dose.');
            grindDoseTampBeansDone(state.error);
         },
         function mountPortaFilter(mountPortaFilterDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Porta filter is now being mounted.');
            mountPortaFilterDone(state.error);
         },
         function positionCup(positionCupDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Placing cup under portafilter.');
            positionCupDone(state.error);
         },
         function preInfuse(preInfuseDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('10 second preinfuse now taking place.');
            preInfuseDone(state.error);
         },
         function extract(extractDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Cranking lever down and extracting pure goodness.');
            // If you want to fail the order, uncomment the below line. May as well change the description too.
            // state.error = {message: 'Oh no! That extraction came out far too fast.'};
            state.description = 'beautiful shot!';
            extractDone(state.error, state);

         }
      ],
      onCompletion);
   };

   return {
      // Publicise brew.
      brew: brew
   };
};

Other Similar Useful libraries


Adding to Prototype

Check out my post on prototypes. If profiling reveals you’re spending too much memory or processing time creating the objects that contain the functions that are going to be used asynchronously, you could add the functions to the object’s prototype, like we did with the public brew method of the EventEmitter example above.
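As a minimal sketch of the idea (not the EventEmittedCoffee refactor itself): a function assigned to the prototype is created once and shared by every instance, rather than once per instance.

function Coffee(type) {
   this.type = type;
}
// Created once; every instance reaches describe via its prototype chain.
Coffee.prototype.describe = function () {
   return 'A ' + this.type + ' coffee.';
};

console.log(new Coffee('short black').describe()); // 'A short black coffee.'
console.log(new Coffee('short black').describe === new Coffee('instant').describe); // true - shared, not duplicated.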


Promises

The concepts of promises and futures, which are quite similar, have been around a long time; their roots go back to 1976 and 1977 respectively. Often the terms are used interchangeably, but they are not the same thing. You can think of the language agnostic promise as a proxy for a value provided by an asynchronous action’s eventual success or failure. A promise is something tangible, something you can pass around and interact with… all before or after it’s resolved or failed. The abstract concept of the future (discussed below) has a value that can be mutated once, from pending to either fulfilled or rejected, on fulfilment or rejection of the promise.

Promises provide a pattern that abstracts asynchronous operations in code, thus making them easier to reason about. A promise, which abstracts the callbacks, can be passed around and the methods on it chained (AKA promise pipelining). Removing the temporary variables makes the code more concise and makes it clearer to readers that the extra assignments were an unnecessary step.
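As a minimal sketch of that chaining, assuming an environment with a native ES6 Promise (or a Promises/A+ library exposing the same then/catch API):

var order = Promise.resolve('double ristretto');

order
   .then(function grind(coffeeType) {
      return coffeeType + ': beans ground';
   })
   .then(function extract(progress) {
      return progress + ', shot pulled';
   })
   .then(function serve(progress) {
      console.log('Serving ' + progress);
   })
   .catch(function (error) {
      // Any rejection or thrown error earlier in the chain lands here.
      console.error(error);
   });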

JavaScript Promises

A promise (a Promises/A+ thenable) is an object, or sometimes more specifically a function, with a then (JavaScript specific) method.
A promise must only change its state once, and can only change from pending to fulfilled or from pending to rejected.

Semantically a future is a read-only property.
A future can only have its value set (sometimes called resolved, fulfilled or bound) once, by one or more (via promise pipelining) associated promises.
Futures are not discussed explicitly in Promises/A+, although they are discussed implicitly in the promise resolution procedure, which takes a promise as the first argument and a value as the second argument.
The idea is that the promise (first argument) adopts the state of the second argument if the second argument is a thenable (a promise object with a then method). This procedure facilitates the concept of the “future”.
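To make that resolution procedure concrete, here’s a minimal sketch of a promise adopting the state of a thenable it’s resolved with, again assuming a native ES6 Promise or an equivalent Promises/A+ library:

var inner = new Promise(function (resolve) {
   // Fulfilled on a later turn of the event loop.
   setTimeout(function () { resolve('espresso machine heated'); }, 10);
});

var outer = new Promise(function (resolve) {
   // outer is resolved with a thenable, so it adopts inner's eventual state rather than fulfilling with the promise object itself.
   resolve(inner);
});

outer.then(function (value) {
   console.log(value); // 'espresso machine heated'
});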

We’re getting promises in ES6. That means JavaScript implementers are starting to include them as part of the language. Until we get there, we can use the likes of these libraries.

One of the first JavaScript promise drafts was the Promises/A (1) proposal. The next stage in defining a standardised form of promises for JavaScript was the Promises/A+ (2) specification which also has some good resources for those looking to use and implement promises based on the new spec. Just keep in mind though, that this has nothing to do with the EcmaScript specification, although it is/was a precursor.

Then we have Dominic Denicola’s promises-unwrapping repository (3) for those that want to stay ahead of the solidified ES6 draft spec (4). Dominic’s repo is slightly less specky and may be a little more convenient to read, but if you want gospel, just go for the ES6 draft spec sections 25.4 Promise Objects and 7.5 Operations on Promise Objects which is looking fairly solid now.

The 1, 2, 3 and 4 are the evolutionary path of promises in JavaScript.

Node Support

Although it was decided to drop promises from Node core, we’re getting them in ES6 anyway. V8 already supports the spec and we also have plenty of libraries to choose from.

Node on the other hand is lagging. Node stable 0.10.29 still looks to be using version 3.14.5.9 of V8, which looks to be about 17 months behind the first signs of native ES6 promises, according to how I’m reading the V8 change log and the Node release notes.

So to get started using promises in your projects, whether you’re programming server or client side, you can:

  1. Use one of the excellent Promises/A+ conformant libraries, which will give you the flexibility of lots of features if that’s what you need, or
  2. Use the native browser Promise API, of which all ES6 methods on Promise work in Chrome (V8, so soon in Node), Firefox and Opera. Then polyfill using the likes of yepnope (there’s a small sketch just below this list), or just check the existence of the methods you require and load them on an as-needed basis. The cujojs or jakearchibald shims would be good starting points.
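As a rough sketch of that conditional loading (this assumes yepnope is already on the page, and the polyfill path is hypothetical):

yepnope({
   test: typeof window.Promise === 'function',
   nope: ['/scripts/promise-polyfill.js'], // Hypothetical path to whichever Promise polyfill you choose.
   complete: function () {
      // Either way, Promise is now safe to use.
      console.log('Promise support is ready.');
   }
});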

For my examples I’ve decided to use when.js for several reasons.

  • Currently in Node we have no native support. As stated above, this will be changing soon, so we’d be polyfilling everything.
  • Its performance is the least worst of the Promises/A+ compliant libraries at this stage. Although don’t get too hung up on perf stats; in most cases they won’t matter in the context of your module. If you’re concerned, profile your running code.
  • It wraps non Promises/A+ compliant promise look-a-likes like jQuery’s Deferred which will forever remain broken.
  • Is compliant with spec version 1.1

The following example continues with the coffee making procedure concept. We’ve now taken this from raw callbacks, to the EventEmitter, to the Async library, and finally to what I think is the best option for most of our asynchronous work, not only in Node but in JavaScript anywhere: promises. This is just one way to implement the example; there are many others, and probably some that are even more elegant. Go forth, explore and experiment.

The Test

var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the coffee machine of promises', function (done) {

      var numberOfSteps = 7;
      // We could use a then just as we've used the promise's done method, but done is semantically the better choice. It makes a bigger noise about handling errors. Read the docs for more info.
      promisedCoffee().brew().done(
         function handleValue(valueOrErrorFromPromiseChain) {
            console.log(valueOrErrorFromPromiseChain);
            valueOrErrorFromPromiseChain.errors.should.have.length(0);
            valueOrErrorFromPromiseChain.stepResults.should.have.length(numberOfSteps);
            done();
         }
      );
   });

});

The System Under Test

'use strict';

var when = require('when');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   errors: [],
   stepResults: []
};
function CustomError(message) {
   this.message = message;
   // The return value is ignored when CustomError is invoked with new, so this is effectively a no-op.
   return false;
}

function heatEspressoMachine(resolve, reject) {
   state.stepResults.push('Espresso machine has been turned on and is now heating.');
   function espressoMachineHeated() {
      var result;
      // result will be wrapped in a new promise and provided as the parameter in the promises then methods first argument.
      result = 'Espresso machine heating cycle is done.';
      // result could also be assigned another promise
      resolve(result);
      // Or call the reject
      //reject(new Error('Something screwed up here')); // You'll know where it originated from. You'll get full stack trace.
   }
   // Flick switch, check water.
   console.log('Espresso machine has been turned on and is now heating.');
   // Mutate state.
   setTimeout(
      // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
      espressoMachineHeated, espressoMachineHeatTime.milliseconds
   );
}

// The promise takes care of all the asynchronous stuff without a lot of thought required.
var promisedCoffee = when.promise(heatEspressoMachine).then(
   function fulfillGrindDoseTampBeans(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'We are now grinding, dosing, then tamping our dose.';
      // Or if something goes wrong:
      // throw new Error('Something screwed up here'); // You'll know where it originated from. You'll get full stack trace.
   },
   function rejectGrindDoseTampBeans(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillMountPortaFilter(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Porta filter is now being mounted.';
   },
   function rejectMountPortaFilter(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new Error(error.message);
   }
).then(
   function fulfillPositionCup(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Placing cup under portafilter.';
   },
   function rejectPositionCup(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillPreInfuse(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return '10 second preinfuse now taking place.';
   },
   function rejectPreInfuse(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillExtract(result) {
      state.stepResults.push(result);
      state.description = 'beautiful shot!';
      state.stepResults.push('Cranking lever down and extracting pure goodness.');
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return state;
   },
   function rejectExtract(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);

   }
).catch(CustomError, function (e) {
      // Only deal with the error type that we know about.
      // All other errors will propagate to the next catch. whenjs also has a finally if you need it.
      // Todo: KimC. Do the dealing with e.
      e.newCustomErrorInformation = 'Ok, so we have now dealt with the error in our custom error handler.';
      return e;
   }
).catch(function (e) {
      // Handle other errors
      e.newUnknownErrorInformation = 'Hmm, we have an unknown error.';
      return e;
   }
);

function brew() {
   return promisedCoffee;
}

// when's promise.catch with a predicate is only supposed to catch errors derived from the native Error (etc) functions.
// Although in my tests, my catch(CustomError, func) wouldn't catch it; it kept giving me a TypeError instead, which looked like it came from within the library. So this was a little disappointing.
// Assigning the Error constructor itself below (rather than something like Object.create(Error.prototype)) may well be the real culprit rather than a library bug.
CustomError.prototype = Error;

module.exports = function promisedCoffee() {
   return {
      // Publicise brew.
      brew: brew
   };
};

Resources


Testing Asynchronous Code

All of the tests I demonstrated above have been integration tests. Usually I’d unit test the functions individually, not worrying about the intrinsically asynchronous code, as most of it isn’t mine anyway; it’s courtesy of the EventEmitter, Async and other libraries, and there is often no point in testing what the library maintainer already tests.

When you’re driving your development with tests, there should be little code testing the asynchronicity; most of your code should be testable synchronously. This is a big part of the reason why we drive our development with tests: to make sure our code is easy to test. Testing asynchronous code is a pain, so don’t do much of it. Test your asynchronous code, yes, but most of your business logic should be just functions that you join together asynchronously. When you’re unit testing, you should be testing units, not asynchronous code. Testing your asynchronicity is integration testing, of which you should have a lot less. I discuss the ratios here.

As of 1.18.0 Mocha now has baked in support for promises. For fluent style of testing promises we have Chai as Promised.
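As a minimal sketch, reusing the promisedCoffee module required in the test above: return the promise from the it callback (and drop done) and Mocha 1.18.0+ will wait on it.

it('Test the coffee machine of promises by returning the promise', function () {
   return promisedCoffee().brew().then(function handleValue(valueOrErrorFromPromiseChain) {
      valueOrErrorFromPromiseChain.errors.should.have.length(0);
   });
});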

Resources


There are plenty of other resources around working with promises in JavaScript. For myself, I found that I needed to actually work with them to solidify my understanding. With the Chrome DevTools async call stack option, we’ll soon have support for following promise chains in the debugger.

Other Excellent Resources

And again all of the code samples can be found at GitHub.

Exploring JavaScript Prototypes

June 28, 2014

Not to be confused with the GoF Prototype pattern that defines a lot more than the simple JavaScript prototype. Although the abstract concept of the prototype is the same.

My intention with this post is to arm our developers with enough information around JavaScript prototypes to know when they are the right tool for the job, as opposed to other constructs, when considering how to create polymorphic JavaScript that’s performant and easy to maintain. Often performant code and easy to maintain code are in conflict with each other, i.e. if you want code that’s fast, it’s often hard to read, and if you want code that’s really easy to read, it “may” not be as fast as it could/should be. So we make trade-offs.

Make your code as readable as possible in as many places as possible. The more eyes that are going to be on it, generally the more readable it needs to be. Where performance really matters, we “may” have to carefully sacrifice some precious readability to achieve the essential performance required. This really needs measuring though, because often we think we’re writing fast code that either doesn’t matter or just isn’t fast. So we should always favour readability, then profile our running application in an environment as close to production as possible. This removes the guess work, which we usually get wrong anyway. I’m currently working on a Node.js performance blog post in which I’ll attempt to address many things to do with performance. What I’m finding a lot of the time is that techniques I’ve been told are essential for fast code are all too often incorrect. We must measure.

Some background

Before we do the deep dive thing, let’s step back for a bit. Why do prototypes matter in JavaScript? What do prototypes do for us? Where do prototypes fit into the design philosophy of JavaScript?

What do JavaScript Prototypes do for us?

Removal of Code Duplication (DRY)

Excellent for reducing unnecessary duplication of members that will need garbage collecting

Performance

Prototypes also allow us to maximise economy of memory, thus reducing Garbage Collection (GC) activity and thus increasing performance. There are other ways to get this performance though. Prototypes, which obtain re-use of the parent object, are not always the best way to get the performance benefits we crave. You can see here, under the “Cached Functions in the Module Pattern” section, that using closure (although not mentioned by name), which is what modules leverage, also gives us the benefit of re-use, as the free variable in the outer scope is baked into the closure. Just check the jsperf for proof.
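As a rough sketch of that closure-based re-use (the hobbit-flavoured names are just for illustration): greeting is created once and baked into the closure, so every object returned by createHobbit shares it rather than allocating its own copy.

var createHobbit = (function () {
   // Created once, shared by every hobbit via closure.
   function greeting() {
      return 'Hi, I\'m a ' + this.typeOfPerson + ' type of person.';
   }
   return function (typeOfPerson) {
      return {
         typeOfPerson: typeOfPerson,
         greeting: greeting
      };
   };
}());

console.log(createHobbit('short and hairy').greeting()); // 'Hi, I'm a short and hairy type of person.'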

The Design Philosophy of JavaScript and Prototypes

Prototypal inheritance was implemented in JavaScript as a key technique to support the object oriented principle of polymorphism. Prototypal inheritance provides the flexibility of being able to choose what the more specific object is going to inherit, rather than in the classical paradigm where you’re forced to inherit all the base class’s baggage whether you want it or not.

Four obvious ways to achieve polymorphism:

  1. Composition (creating an object that composes a contract to another object)(has-a relationship). Learn the pros and cons. Use when it makes sense
  2. Prototypal inheritance (is-a relationship). Learn the pros and cons. Use when it makes sense
  3. Monkey Patching courtesy of call, apply and bind
  4. Classical inheritance (is-a relationship). Why would you? Please don’t try this at home in production 😉

Of course there are other ways, and some languages have unique techniques to achieve polymorphism, like templates in C++, generics in C#, first-class polymorphism in Haskell, multimethods in Clojure, etc.

Diving into the Implementation Details

Before we dive into Prototypes…

What does Composition look like?

There are many great examples of how composing our objects from other objects’ interfaces, whether they’re owned by the composing object (composition) or aggregated from independent objects (aggregation), provides us with the building blocks to create complex objects that look and behave the way we want them to. This generally gives us plenty of flexibility to swap implementations at will, thus overcoming the tight coupling of classical inheritance.

Many of the Gang of Four (GoF) design patterns we know and love leverage composition and/or aggregation to help create polymorphic objects. There is a difference between aggregation and composition, but both concepts are often used loosely to just mean creating objects that contain other objects. Composition implies ownership, aggregation doesn’t have to. With composition, when the owning object is destroyed, so are the objects that are contained within the owner. This is not necessarily the case for aggregation.

An example: each coffee shop is composed of its own unique culture. Each coffee shop fosters a different type of culture, and that unique culture is an aggregation of its people and their attributes. Now the people that aggregate a specific coffee shop’s culture can also be part of other cultures that are completely separate to the coffee shop’s culture; they could even leave the current culture without destroying it. But the culture of one specific coffee shop can not be the same culture as another coffee shop’s. Every coffee shop’s culture is unique, even if only slightly.

Programmer Show Pony

Following, we have a coffeeShop that composes a culture. We use the Strategy pattern within the culture to aggregate the customers. The Visit function provides an interface that encapsulates the Concrete Strategy: the description is passed as an argument to the Visit constructor and closed over by the describe method.

// Context component of Strategy pattern.
var Programmer = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Context component of Strategy pattern.
var ShowPony = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Add more persons to make a unique culture.

var customer = {
   setCasualVisitStrategy: function (casualVisit) {
      this.casualVisit = casualVisit;
   },
   setBusinessVisitStrategy: function (businessVisit) {
      this.businessVisit = businessVisit;
   },
   doCasualVisit: function () {
      console.log(this.casualVisit.describe());
   },
   doBusinessVisit: function () {
      console.log(this.businessVisit.describe());
   }
};

// Strategy component of Strategy pattern.
var Visit = function (description) {
   // description is closed over, so it's private. Check my last post on closures for more detail
   this.describe = function () {
      return description;
   };
};

var coffeeShop;

Programmer.prototype = customer;
ShowPony.prototype = customer;

coffeeShop = (function () {
   var culture = {};
   var flavourOfCulture = '';
   // Composes culture. The specific type of culture exists to this coffee shop alone.
   var whatWeWantExposed = {
      culture: {
         looksLike: function () {
            console.log(flavourOfCulture);

         }
      }
   };

   // Other properties ...
   (function createCulture() {
      var programmer = new Programmer();
      var showPony = new ShowPony();
      var i = 0;
      var propertyName;

      programmer.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.')
      );
      programmer.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.')
      );
      showPony.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony cycles to coffee shop in lycra pretending he\'s just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.')
      );
      showPony.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony meets business friends in suits. Pretends to work on his macbook pro. Drinks latte.')
      );

      culture.members = [programmer, showPony, /*lots more*/];

      for (i = 0; i < culture.members.length; i++) {
         for (propertyName in culture.members[i]) {
            if (culture.members[i].hasOwnProperty(propertyName)) {
               flavourOfCulture += culture.members[i][propertyName].describe() + '\n';
            }
         }
      }

   }());
   return whatWeWantExposed;
}());

coffeeShop.culture.looksLike();
// Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.
// Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.
// Show pony cycles to coffee shop in lycra pretending he's just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.
// Show pony meets business friends in suits. Pretends to work on his macbook pro. Drinks latte.

Now for Prototype

EcmaScript 5

In ES5 we’re a bit spoilt as we have a selection of methods on Object that help with prototypal inheritance.

Object.create takes an argument that’s an object, plus an optional properties object which is an EcmaScript 5 property descriptor object (like the second parameter of Object.defineProperties), and returns a new object with the first argument as its prototype and the properties described in the property descriptor (if present) added to the returned object.

prototypal inheritance
// The object we use as the prototype for hobbit.
var person = {
   personType: 'Unknown',
   backingOccupation: 'Unknown occupation',
   age: 'Unknown'
};

var hobbit = Object.create(person);

Object.defineProperties(person, {
   'typeOfPerson': {
      enumerable: true,
      value: function () {
         if(arguments.length === 0)
            return this.personType;
         else if(arguments.length === 1 && typeof arguments[0] === 'string')
            this.personType = arguments[0];
         else
            throw 'Number of arguments not supported. Pass 0 arguments to get. Pass 1 string argument to set.';
      }
   },
   'greeting': {
      enumerable: true,
      value: function () {
         console.log('Hi, I\'m a ' + this.typeOfPerson() + ' type of person.');
      }
   },
   'occupation': {
      enumerable: true,
      get: function () {return this.backingOccupation;},
      // Would need to add some parameter checking on the setter.
      set: function (value) {this.backingOccupation = value;}
   }
});

// Add another property to hobbit.
hobbit.fatAndHairyFeet = 'Yes indeed!';
console.log(hobbit.fatAndHairyFeet); // 'Yes indeed!'
// prototype is unaffected
console.log(person.fatAndHairyFeet); // undefined

console.log(hobbit.typeOfPerson()); // 'Unknown'
hobbit.typeOfPerson('short and hairy');
console.log(hobbit.typeOfPerson()); // 'short and hairy'
console.log(person.typeOfPerson()); // 'Unknown'

hobbit.greeting(); // 'Hi, I'm a short and hairy type of person.'

person.greeting(); // 'Hi, I'm a Unknown type of person.'

console.log(hobbit.age); // 'Unknown'
hobbit.age = 'young';
console.log(hobbit.age); // 'young'
console.log(person.age); // 'Unknown'

console.log(hobbit.occupation); // 'Unknown occupation'
hobbit.occupation = 'mushroom hunter';
console.log(hobbit.occupation); // 'mushroom hunter'
console.log(person.occupation); // 'Unknown occupation'

Object.getPrototypeOf

console.log(Object.getPrototypeOf(hobbit));
// Returns the following:
// { personType: 'Unknown',
//   backingOccupation: 'Unknown occupation',
//   age: 'Unknown',
//   typeOfPerson: [Function],
//   greeting: [Function],
//   occupation: [Getter/Setter] }

 

EcmaScript 3

One of the benefits of programming in ES3 is that we have to do more work ourselves; thus we learn how some of the lower level language constructs actually work, rather than just playing with syntactic sugar. Syntactic sugar is generally great for productivity, but I still think there is danger of running into problems when you don’t really understand what’s happening under the covers.

So let’s check out what really goes on with…

Prototypal Inheritance

What is a Prototype?

All objects have a prototype, but not all objects reveal their prototype directly by a property called prototype. All prototypes are objects.

So, if all objects have a prototype and all prototypes are objects, we have an inheritance chain right? That’s right. See the debug image below.

All properties that you may want to add to an object’s prototype are shared through inheritance by all objects sharing that prototype.

So, if all objects have a prototype, where is it stored? All objects in JavaScript have an internal property called [[Prototype]]. You won’t see this internal property; all prototypes are stored in it. How this internal property is accessed depends on whether its object is an object (an object literal or an object returned from a constructor) or a function. I discuss how this works below. When you dereference an object in order to find a property, the engine will first look on the current object, then the prototype of the current object, then the prototype of the prototype object, and so on up the prototype chain. It’s a good idea to try and keep your inheritance hierarchies as shallow as possible for performance reasons.
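A minimal sketch of that lookup walking the chain:

function Person() {}
Person.prototype.surname = 'Baggins';

var hobbit = new Person();
hobbit.occupation = 'mushroom hunter';

console.log(hobbit.occupation);                // 'mushroom hunter' - an own property, found first.
console.log(hobbit.surname);                   // 'Baggins' - not on hobbit, found on Person.prototype.
console.log(hobbit.hasOwnProperty('surname')); // false - it's inherited via the chain.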

Prototypes in Functions

Every function object is created with a prototype property, whether it’s a constructor or not. The prototype property’s value is an object with a constructor property, whose value is the function itself. See the below example to help clear it up. The ES3 and ES5 specs (13.2) say pretty much the same thing.

var MyConstructor = function () {};
console.log(MyConstructor.prototype.constructor === MyConstructor); // true

And to help with visualising, see the below example and debug output. myObj and myObjLiteral are for the two code examples below the debug image.

var MyConstructor = function () {};
var myObj = new MyConstructor();
var myObjLiteral = {};

Accessing JavaScript Prototypes

 

Up above in the composition example, on lines 40 and 41, you can see how we access the prototype of the constructor. We can also access the prototype of the object returned from the constructor like this:

var MyConstructor = function () {};
var myObj = new MyConstructor();
console.log(myObj.constructor.prototype === MyConstructor.prototype); // true

We can also do similar with an object literal. See below.

Prototypes in Objects that are Not Functions

Objects that are not functions are not created with a prototype property (all objects do have the hidden internal [[Prototype]] property though). Now sometimes you’ll see Object.prototype talked about. Even MDN make the matter a little confusing IMHO. In this case, Object is the Object constructor function and, as discussed above, all functions have the prototype property.

When we create object literals, the object we get is the same as if we ran the expression new Object(); (see ES3 and ES5 11.1.5).
So although we can access the prototype property of functions (which may or may not be constructors), there is no such exposed prototype property directly on objects returned by constructors or on object literals.
There is, however, conveniently a constructor property directly on all objects returned by constructors and on object literals (as you can think of their construction procedures as producing the same result). This looks similar to the above debug image:

var myObjLiteral = {};
            // ES3 ->                              // ES5 ->
console.log(myObjLiteral.constructor.prototype === Object.getPrototypeOf(myObjLiteral)); // true

I’ve purposely avoided discussing the likes of __proto__ as it’s not defined in EcmaScript and there’s no valid reason to use something that’s not standard.

Polyfilling to ES5

Now to get a couple of terms used in web development well defined before we start talking about them:

  • A shim is a library that brings a new API to an environment that doesn’t support it, using only what that older environment already supports.
  • A polyfill is some code in the form of a function, module, plugin, etc. that provides the functionality of a later environment (ES5 for example) if it doesn’t exist in an older environment (ES3 for example). The polyfill often acts as a fallback: the programmer writes code targeting the newer environment as though the older environment doesn’t exist, but when the code is pulled into the older environment the polyfill kicks into action because the new language feature isn’t implemented natively.

If you’re supporting older browsers that don’t have full support for ES5, you can still use the ES5 additions so long as you provide ES5 polyfills. es5-shim is a good choice for this. Check out the html5please ECMAScript 5 section for a little more detail. Also check out Kangax’s ECMAScript 5 compatibility table to see which browsers currently support which ES5 language features. A good approach, and one I like to take, is to use a custom build of a library such as Lo-Dash to provide a layer of abstraction so I don’t need to care whether it’ll be in an ES5 or ES3 environment. Then for anything that the abstraction library doesn’t provide, I’ll use a customised polyfill library such as es5-shim to fall back on. I prefer Lo-Dash over Underscore too, as I think Lo-Dash is starting to leave Underscore behind in terms of performance and features. I also like to use the likes of yepnope.js to conditionally load my polyfills based on whether they’re actually needed in the user’s browser, as there’s no point in loading them if we already have browser support.

Polyfilling Object.create as discussed above, to ES5

You could use something like the following, which doesn’t accommodate an object of property descriptors, or just go with one of the next two choices, which is what I do:

  1. Use an abstraction like the lodash create method which takes an optional second argument object of properties and treats them the same way
  2. Use a polyfill like this one.
if (typeof Object.create !== 'function') {
   (function () {
      var F = function () {};
      Object.create = function (proto) {
         if (arguments.length > 1) {
            throw Error('Second argument not supported');
         }
         if (proto === null) {
            throw Error('Cannot set a null [[Prototype]]');
         }
         if (typeof proto !== 'object') {
            throw TypeError('Argument must be an object');
         }
         F.prototype = proto;
         return new F();
      };
   })();
}

Polyfilling Object.getPrototypeOf as discussed above, to ES5

  1. Use an abstraction like the lodash isPlainObject method (source here), or…
  2. Use a polyfill like this one. Just keep in mind the gotcha.

 

EcmaScript 6

I got a bit excited when I saw an earlier proposed prototype-for (also seen with the name prototype-of) operator: <| . Additional example here. This would have provided a terse syntax for providing an object literal with an object to use as its prototype. It looks like it must have lost traction though as it was removed in the June 15, 2012 Draft.

There are a few extra methods in ES6 that deal with prototypes, but on trawling the EcmaScript 6 draft spec, nothing at this stage really stands out as revolutionising the way I write JavaScript or being a mental effort/time saver for me. Of course I may have missed something; I’d like to hear from anyone that has seen something interesting to the contrary.

Yes, we’re getting classes in ES6, but they are just an abstraction giving us a terse and declarative mechanism for doing what we already do with functions that we use as constructors, prototypes, and the objects (or instances if you will) that are returned from the functions we’ve chosen to act as constructors.
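As a minimal sketch of that equivalence (the class form assumes an ES6-capable environment, and there are small differences, such as class methods being non-enumerable):

class Es6Hobbit {
   constructor(name) { this.name = name; }
   greet() { return 'Hi, I\'m ' + this.name + '.'; }
}

// Which is little more than sugar over the familiar constructor/prototype pair:
function Es5Hobbit(name) { this.name = name; }
Es5Hobbit.prototype.greet = function () { return 'Hi, I\'m ' + this.name + '.'; };

console.log(new Es6Hobbit('Bilbo').greet()); // 'Hi, I'm Bilbo.'
console.log(new Es5Hobbit('Bilbo').greet()); // 'Hi, I'm Bilbo.'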

Architectural Ideas that Prototypes Help With

This is a common example that I often use for domain objects that are fairly hot, with one set of accessor properties added to the business object’s prototype, as you can see on line 13 of my Hobbit module (Hobbit.js) below.

First a quick look at the tests/spec to drive the development. This is being run using mocha with the help of a Makefile in the root directory of my module under test.

  • Makefile
# The relevant section.
unit-test:
	@NODE_ENV=test ./node_modules/.bin/mocha \
		test/unit/*test.js test/unit/**/*test.js
  • Hobbit-test.js
var requireFrom = require('requirefrom');
var assert = require('assert');
var should = require('should');
var shire = requireFrom('shire/');

// Hardcode $NODE_ENV=test for debugging.
process.env.NODE_ENV='test';

describe('shire/Hobbit business object unit suite', function () {
   it('Should be able to instantiate a shire/Hobbit business object.', function (done) {
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();

      // Properties should be declared but not initialised.
      // No good checking for undefined alone, as that would be true whether it was declared or not.

      hobbit.should.have.property('id');
      (hobbit.id === undefined).should.be.true;
      hobbit.should.have.property('typeOfPerson');
      (hobbit.typeOfPerson === undefined).should.be.true;
      hobbit.should.have.property('greeting');
      (hobbit.greeting === undefined).should.be.true;
      hobbit.should.have.property('occupation');
      (hobbit.occupation === undefined).should.be.true;
      hobbit.should.have.property('emailFrom');
      (hobbit.emailFrom === undefined).should.be.true;
      hobbit.should.have.property('name');
      (hobbit.name === undefined).should.be.true;      

      done();
   });

   it('Should be able to set and get all properties of a shire/Hobbit business object.', function (done){
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      // Arrange
      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();      

      // Act
      hobbit.id = '32f4d01e-74dc-45e8-b3a8-9aa24840bc6a';
      hobbit.typeOfPerson = 'short and hairy';
      hobbit.greeting = {
         intro: 'Hi, I\'m a ',
         outro: ' type of person.'};
      hobbit.occupation = 'mushroom hunter';
      hobbit.emailFrom = 'Bilbo.Baggins@theshire.arn';
      hobbit.name = 'Bilbo Baggins';

      // Assert
      hobbit.id.should.equal('32f4d01e-74dc-45e8-b3a8-9aa24840bc6a');
      hobbit.typeOfPerson.should.equal('short and hairy');
      hobbit.greeting.should.equal('Hi, I\'m a short and hairy type of person.');
      hobbit.occupation.should.equal('mushroom hunter');
      hobbit.emailFrom.should.equal('Bilbo.Baggins@theshire.arn');
      hobbit.name.should.eql('Bilbo Baggins');

      done();
   });
});
  • Now the business object itself Hobbit.js

    Now what’s happening here is that on creation of a new Hobbit instance, the empty members object you see created on line 9 is the only instance data. All of the Hobbit’s accessor properties are defined once per export of the Hobbit module, which is assigned the constructor function object. So what we store on each instance are the values assigned in Hobbit-test.js from lines 47 through 54; that’s just the strings. So very little space is used for each instance returned by invoking the Hobbit constructor that the Hobbit module exports.
// Could achieve a cleaner syntax with Object.create, but constructor functions are a little faster.
// As this will be hot code, it makes sense to favour performance in this case.
// Of course profiling may say it's not worth it, in which case this could be rewritten.
var Hobbit = (function () {
   function Hobbit (/*Optionally Construct with DTO and serializer*/) {
      // Todo: Implement pattern for enforcing new.
      Object.defineProperty (this, 'members', {
         value: {}
      });
   }

   (function definePublicAccessors (){
      Object.defineProperties(Hobbit.prototype, {
         id: {
            get: function () {return this.members.id;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.id = newValue;
            },
            configurable: false, enumerable: true
         },
         typeOfPerson: {
            get: function () {return this.members.typeOfPerson;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.typeOfPerson = newValue;
            },
            configurable: false, enumerable: true
         },
         greeting: {
            get: function () {
               return this.members.greeting === undefined ?
                  undefined :
               this.members.greeting.intro +
                  this.typeOfPerson +
                  this.members.greeting.outro;
            },
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.greeting = newValue;
            },
            configurable: false, enumerable: true
         },
         occupation: {
            get: function () {return this.members.occupation;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.occupation = newValue;
            },
            configurable: false, enumerable: true
         },
         emailFrom: {
            get: function () {return this.members.emailFrom;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.emailFrom = newValue;
            },
            configurable: false, enumerable: true
         },
         name: {
            get: function () {return this.members.name;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.name = newValue;
            },
            configurable: false, enumerable: true
         }
      });

   })();
   return Hobbit;
})();

// JSON.parse provides a hydrated hobbit from the DTO.
//    So you would call this to populate this DO from a DTO
// JSON.stringify provides the DTO from a hydrated hobbit

module.exports = Hobbit;
  • Now running the test
lets test

 

Flyweights using Prototypes

A couple of interesting examples of the Flyweight pattern implemented in JavaScript are by the GoF and Addy Osmani.

The GoF’s implementation of the FlyweightFactory makes extensive use of closure to store its flyweights and uses aggregation in order to create its ConcreteFlyweight from the Flyweight. It doesn’t use prototypes.

Addy Osmani has a free book “JavaScript Design Patterns” containing an example of the Flyweight pattern, which IMO is considerably simpler and more elegant. In saying that, the GoF want you to buy their product, so maybe they do a better job when you give them money. In this example closure is also used extensively, but it’s a good example of how to leverage prototypes to share your less specific behaviour.

Mixins using Prototypes

Again if you check out the last example of Mixins in Addy Osmani’s book, there is quite an elegant example.

We can even do multiple inheritance using mixins, by adding whichever properties we want from whatever objects we want to the target object’s prototype.
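As a rough sketch of the idea (the mixin helper and the canWalk/canSing objects below are purely illustrative, not from any particular library):

// A minimal mixin sketch. The objects and property names are illustrative only.
var canWalk = {
   walk: function () { return this.name + ' is walking'; }
};
var canSing = {
   sing: function () { return this.name + ' is singing'; }
};

function Person(name) {
   this.name = name;
}

// Copy only the behaviour we want onto the target object's prototype.
function mixin(target /*, source1, source2, ... */) {
   Array.prototype.slice.call(arguments, 1).forEach(function (source) {
      Object.keys(source).forEach(function (key) {
         target[key] = source[key];
      });
   });
   return target;
}

mixin(Person.prototype, canWalk, canSing);

var frodo = new Person('Frodo');
console.log(frodo.walk()); // Frodo is walking
console.log(frodo.sing()); // Frodo is singing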

This is a similar concept to the post I wrote on Monkey Patching.

Mixins support the Open/Closed principle: objects should be open to having their behaviour extended without their source code being altered.

Keep in mind though that you shouldn’t just expect all consumers to know you’ve added additional behaviour, so think this through before using it.

Factory functions using Prototypes

Again a decent example of the Factory function pattern is implemented in the “JavaScript Design Patterns” book here.

There are many other areas you can get benefits from using prototypes in your code.

Prototypal Inheritance: Not Right for Every Job

Prototypes give us the power to share only the secrets of others that need to be shared. We have fine grained control. If you’re thinking of using inheritance, be it classical or prototypal, ask yourself “Is the class/object I’m wanting to provide a parent for truly a more specific version of the proposed parent?”. This is the idea behind the Liskov Substitution Principle (LSP) and Design by Contract (DbC) which I posted on here. Don’t just inherit because it’s convenient. In my “javascript object creation patterns” post I also discussed inheritance.

The general consensus is that composition should be favoured over inheritance. If it makes sense to compose once you’ve considered all options, then go for it; if not, look at inheritance. Why should composition be favoured over inheritance? Because when you compose your object against another object’s contract, your sub object (the object doing the composing) doesn’t inherit anything or need to know anything about the composed object’s secrets. The object being composed has complete freedom as to how it minds its own business, so long as it provides a consistent contract for consumers. This gives us the much loved polymorphism we crave without the crazy tight coupling of classical inheritance (inherit everything, even your father’s drinking problem :-s).

I’m pretty much in agreement with this when we’re talking about classical inheritance. When it comes to prototypal inheritance, we have a lot more flexibility and control around how we use the object that we’re deriving from and exactly what we inherit from it. So we don’t suffer the same “all or nothing” buy in and tight coupling as we do with classical inheritance. We get to pick just the good parts from an object that we decide we want as our parent. The other thing to consider is the memory savings of inheriting from a prototype rather than achieving your polymorphic behaviour by way of composition, which has us creating the composed object each time we want another specific object.
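A small sketch of that fine grained control (the names are illustrative only): rather than inheriting everything, we build a prototype containing only the parts of the parent we actually want, and the shared function lives in one place in memory:

// Illustrative only: picking just the good parts of a parent via prototypes.
var parent = {
   goodHabit: function () { return 'doing something useful'; },
   badHabit: function () { return 'something we do not want to inherit'; }
};

// Create a child whose prototype only exposes the behaviour we chose.
var child = Object.create({ goodHabit: parent.goodHabit });
child.ownBehaviour = function () { return 'something more specific'; };

console.log(child.goodHabit());     // doing something useful
console.log(typeof child.badHabit); // undefined
// Every child created this way shares the single goodHabit function object.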

So in JavaScript, we really are spoilt for choice when it comes to how we go about getting our fix of polymorphism.

When surveys are carried out on..

Why Software Projects Fail

the following are the most common causes:

  • Ambiguous Requirements
  • Poor Stakeholder Involvement
  • Unrealistic Expectations
  • Poor Management
  • Poor Staffing (not enough of the right skills)
  • Poor Teamwork
  • Forever Changing Requirements
  • Poor Leadership
  • Cultural & Ethical Misalignment
  • Inadequate Communication

You’ll notice that technical reasons are very low on the list of why projects fail. You can see the same point mentioned by many of our software greats, but when a project does fail due to technical reasons, it’s usually because the complexity got out of hand. So as developers, when focusing on the art of creating good code, our primary concern should be to reduce complexity, thus enhancing our ability to maintain the code going forward.

I think one of Edsger W. Dijkstra’s phrases sums it up nicely. “Simplicity is prerequisite for reliability”.

Stratification is a design principle that focuses on keeping the different layers in code autonomous, i.e. you should be able to work in one layer without having to go up or down adjacent layers in order to fully understand the layer you’re working in. Its internals should be able to move independently of the adjacent layers without affecting them, or being concerned that a change in its own implementation will affect other layers. Modules are an excellent design pattern used heavily to build medium to large JavaScript applications.

With composition, if you’re composing with contracts, this is exactly what you get.
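A small sketch of composing against a contract (illustrative names; the “contract” here is simply anything with a write function): the composed object can change its internals freely so long as it honours what its consumers rely on:

// Illustrative only. The contract is: anything with a write(message) function.
var consoleLogger = {
   write: function (message) { console.log(message); }
};

function createGreeter(logger) { // Compose with anything honouring the contract.
   return {
      greet: function (name) {
         logger.write('Hello ' + name);
      }
   };
}

var greeter = createGreeter(consoleLogger);
greeter.greet('world'); // Hello world
// Swap in any other logger (file, network, test double) without touching createGreeter.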


Exploring JavaScript Closures

May 31, 2014

Just before we get started, we’ll be using the terms lexical scope and dynamic scope a bit. In computer science the term lexical scope is synonymous with static scope.

  • lexical or static scope is where name resolution of “part of a program” depends on the location in the source code
  • dynamic scope is where name resolution depends on the program state (dependent on execution context or calling context) when the name is encountered.

What are Closures?

Now establishing the formal definition has been quite an interesting journey, with quite a few sources not quite getting it right. Although the ES3 spec talks about closure, there is no formal definition of what it actually is. The ES5 spec on the other hand does discuss what closure is in two distinct locations.

  1. “11.1.5 Object Initialiser” section, under the part that talks about accessor properties. This is the relevant text (in relation to getters): “Let closure be the result of creating a new Function object as specified in 13.2 with an empty parameter list (that’s getter specific) and body specified by FunctionBody. Pass in the LexicalEnvironment of the running execution context as the Scope.”
  2. “13 Function Definition” section. This is the relevant text: “Let closure be the result of creating a new Function object as specified in 13.2 with parameters specified by FormalParameterList (which are optional) and body specified by FunctionBody. Pass in funcEnv as the Scope.”

Now what are the differences here that stand out?

  1. We see that 1 specifies a function object with no parameters, and 2 specifies some parameters (optional). So from this we can establish that it’s irrelevant whether arguments are passed or not to create closure.
  2. 1 also mentions passing in the LexicalEnvironment, whereas 2 passes in funcEnv. funcEnv is the result of “calling NewDeclarativeEnvironment passing the running execution context’s LexicalEnvironment as the argument”. So basically there is no difference.

Now 13.2 just specifies how functions are created, given an optional parameter list, a body, a LexicalEnvironment specified by Scope, and a Boolean flag (for strict mode; ignore this for the purposes of establishing a formal definition). Now the Scope mentioned above is the lexical environment of the running execution context (discussed here in depth) at creation time. The Scope is actually [[Scope]] (an internal property).

The ES6 spec draft runs along the same vein.

Lets get abstract

Every problem in computer science is just a more specific problem of a problem we’re familiar with in the natural world. So often it helps to find the abstract problem that we are already familiar with in order to help us understand the more specific problem we are dealing with. Patterns are an example of this. Before I was programming as a profession I was a carpenter. I find just about every problem I deal with in programming I’ve already dealt with in physical carpentry and at a higher level still with physical architecture.

In search of the true formal definition I also looked outside of JavaScript at the language agnostic term, which should just be an abstraction of the JavaScript closure anyway. Yip… Wikipedia’s definition: “In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. A closure—unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside its immediate lexical scope.”

My abstract formal definition

A closure is a function containing a reference to the lexical (static) environment, via the function object’s internal [[Scope]] property (ES5 spec 13.2.9), that it is defined within at creation time, not call time (ES5 spec 13.2.1). The closure is closed over its parent lexical environment and all of its properties. You can access these properties as variables, but not as properties, because you don’t have access to the internal [[Scope]] property directly in order to reference its properties. So the following example fails. More correctly (ES5 spec 8.6.2): “Of the standard built-in ECMAScript objects, only Function objects implement [[Scope]].”

var outerObjectLiteral = {

   x: 10,

   foo: function () {
      console.log(x); // ReferenceError: x is not defined obviously
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

See here for an explanation on the differences between properties and variables. That’s basically it. Of course there are many ways we can use a closure and that’s often where confusion creeps in about what a closure actually is and is not. Feel free to bring your perspective on this in the comments section below.

When is a closure born?

So let’s get this closure closing over something. JavaScript addresses the funarg problem with closure.

var x = 10;

var outerObjectLiteral = {   

   foo: function () {
      // Because our internal [[Scope]] property now has a property (more specifically a free variable) x, we can access it directly.
      console.log(x); // Writes 10 to the console.
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

The closures are created when the object literal is evaluated and the function objects for foo and invokeMe are created. Once foo is invoked (via funArg() inside invokeMe) we have access to the closed over lexical environment, so when we print x we get 10, which is the value of x on the [[Scope]] that our closure was statically bound to at function object creation time (not the dynamically scoped x = 20). Now of course you can change the value of the free variable x and it’ll be reflected wherever you use the closed over variable, because the closure was bound to the free variable x, not the value of the free variable x.

This is what you’ll see in Chrome Dev Tools when execution is paused inside invokeMe (on the var x = 20; statement). Bear in mind though that both the foo and invokeMe closures were created when the object literal was evaluated.

(Screenshot: Closure, as shown in Chrome Dev Tools.)

Now I’m going to attempt to explain what the structure looks like in a simplified form with a simple hash. I don’t know how it’s actually implemented in the various ECMAScript implementations, but I do know what the specification (the single source of truth) tells us; it should look something like the following:

////////////////
// pseudocode //
////////////////
foo = closure {
   FormalParameterList: {}, // Optional
   FunctionBody: <...>,
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10
   }
}

The closure is born when the function is created (“the result of creating a new Function object” as stated above), not when it’s returned by the outer function (i.e. the upwards funarg problem), and not when it’s invoked, as Angus Croll suggested here under the “The [[Scope]] property” section.

Angus quotes the ES5 spec 10.4.3.5-7. On studying this section I’m pretty sure it is meant for the context of actually creating the function object rather than invoking an existing function object. The clauses I’ve detailed above (11.1.5 Object Initialiser and 13 Function Definition), confirm this.

The ES6 spec draft “14.1.22 Runtime Semantics: Evaluation” also confirms this theory. Although it’s titled Runtime Semantics, it has several points that confirm my theory… The so called runtime semantics are the runtime semantics of function object creation rather than function object invocation. As some of the steps specified are FunctionCreate, MakeMethod and MakeConstructor (not FunctionInvoke, InvokeMethod or InvokeConstructor). The ES6 spec draft “14.2.17 Runtime Semantics: Evaluation” and also 14.3.8 are similar.

Why do we care about Closure?

Without closures, we wouldn’t have the concept of modules which I’ve discussed in depth here.

Modules are used very heavily in JavaScript, both client and server side (think NPM), and for good reason. Until ES6 there is no baked in module system; in ES6 modules become part of the language. Node.js builds its module system on the CommonJS initiative, and its entire ecosystem revolves around installing and sharing modules. Modules on the client side most often use RequireJS, an implementation of the Asynchronous Module Definition (AMD), to load modules, but can also use the likes of CommonJS via Browserify, which allows us to load Node.js packages in the browser.
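For a feel of the two styles, here is the same trivial module expressed in CommonJS and AMD form (greeter is an illustrative name, not a real package):

// CommonJS style (Node.js). In greeter.js:
module.exports = {
   greet: function (name) { return 'Hello ' + name; }
};

// In a consumer:
var greeter = require('./greeter');
console.log(greeter.greet('world')); // Hello world

// The AMD equivalent, loaded with RequireJS:
define(['greeter'], function (greeter) {
   console.log(greeter.greet('world'));
});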

As of writing this, the TC39 committee have looked at both the AMD and CommonJS approaches and come up with something completely different for the ES6 module draft spec. Modules provide another mechanism for not allowing secrets to leak into the global object.

Modules are not new. David Parnas wrote a paper titled “On the Criteria To Be Used in Decomposing Systems into Modules” in 1972. This explores the idea of secrets: design and implementation decisions that should be hidden from the rest of the programme.

Here is an example of the Module pattern that includes both private and public methods. my.moduleMethod has access to private variables outside of its VariableEnvironment (the current scope) via the Environment record, which references the outer LexicalEnvironment via its internal [[Scope]] property.
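A minimal sketch of that shape (not the original linked example) looks something like this:

// A minimal Module pattern sketch, not the original linked example.
var my = (function () {
   // Private: only reachable via closure.
   var privateCounter = 0;
   function privateHelper() {
      privateCounter += 1;
      return privateCounter;
   }

   // Public API.
   return {
      moduleMethod: function () {
         // Reaches privateCounter and privateHelper via the outer LexicalEnvironment ([[Scope]]).
         return 'called ' + privateHelper() + ' time(s)';
      }
   };
}());

console.log(my.moduleMethod()); // called 1 time(s)
console.log(my.privateCounter); // undefined. The secret stays hidden.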

Information hiding: state and implementation. In JavaScript we don’t have access modifiers, but we don’t need them either. We can hide our secrets with various patterns. Closure is a key concept for many of these patterns. Closure is a key building block for helping us to programme against contract rather than implementation, helping us to form consistent abstractions, and giving us the ability to engage with a concept while safely ignoring some of its details, thus hiding unnecessary complexity from consumers.

I think Steve McConnell explains this very well in his classic “Code Complete” book. Steve uses the house abstraction as his metaphor. “People use abstraction continuously. If you had to deal with individual wood fibers, varnish molecules, and steel molecules every time you used your front door, you’d hardly make it in or out of your house each day. Abstraction is a big part of how we deal with complexity in the real world. Software developers sometimes build systems at the wood-fiber, varnish-molecule, and steel-molecule level. This makes the systems overly complex and intellectually hard to manage. When programmers fail to provide larger programming abstractions, the system itself sometimes fails to make it through the front door. Good programmers create abstractions at the routine-interface level, class-interface level, and package-interface level-in other words, the doorknob level, door level, and house level-and that supports faster and safer programming.”

Encapsulation: you can not look at the details (the internal implementation, the secrets).

Partial function application and Currying: I have a set of posts on this topic. Closure is an integral building block of these constructs. Part 1, Part 2 and Part 3.
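A tiny sketch of partial application leaning on closure (illustrative only; the posts above go into far more depth):

// Partial application via closure. Illustrative only.
function partial(fn) {
   var presetArgs = Array.prototype.slice.call(arguments, 1);
   return function () {
      var laterArgs = Array.prototype.slice.call(arguments);
      // presetArgs lives on via the closure created when partial was invoked.
      return fn.apply(null, presetArgs.concat(laterArgs));
   };
}

function add(x, y) { return x + y; }
var addTen = partial(add, 10);
console.log(addTen(5)); // 15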

Functional JavaScript relies heavily on closure.

Are there any Costs or Gotchas of using Closures?

Of course. You didn’t think you’d get all this expressive power without having to think about how you’re going to use it, did you? As we’ve discussed, closures were created to address the funarg problem. In doing that, the closure references the lexical (static) scope of the outer scope. So even once the free variables are out of scope, the closure will still reference them if they were saved at function creation time. They can not be garbage collected until the function that is closed over the outer scope has itself fallen out of scope, i.e. its reference count is 0.

var x = 10;
var noOneLikesMe = 20;
var globalyAccessiblePrivilegedFunction;

function globalyScopedFunction(z) {

  var noOneLikesMeInner = 40;

  function privilegedFunction() {
    return x + z;
  }

  return privilegedFunction;

}

// This is where privilegedFunction is created.
globalyAccessiblePrivilegedFunction = globalyScopedFunction(30);

// This is where privilegedFunction is applied.
globalyAccessiblePrivilegedFunction();

Now only the free variables that are needed are saved at function creation time. We see that when execution arrives at the var noOneLikesMeInner = 40; statement inside globalyScopedFunction, the currently scoped closure has the x free variable saved to it, but not z, noOneLikesMe, or noOneLikesMeInner.

(Screenshot: noOneLikesMe is not captured in the closure.)

When we enter privilegedFunction (at the return x + z; statement), we see the hidden [[Scope]] property has both the outer scope and the global scope saved to it.

(Screenshot: TwoClosures.)

Say for example execution has passed the above code snippet. If the closed over variables can still be referenced by calling globalyAccessiblePrivilegedFunction again, then they can not be garbage collected. This is a frequently abused aspect of the upwards funarg problem. If you’ve got hot code that is creating many functions, make sure the functions that are closed over free variables are dropped out of scope as soon as you no longer have a need for them. This way garbage collection can deallocate the memory used by the free variables.
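A small sketch of what that looks like in practice (illustrative names): once the only reference to the returned function is dropped, the environment it closed over becomes eligible for collection:

// Illustrative only.
function makeExpensiveClosure() {
   var bigBuffer = new Array(1000000).join('x'); // Captured by the returned function.
   return function () {
      return bigBuffer.length;
   };
}

var holder = makeExpensiveClosure();
holder(); // Use it while we need it.

// Drop the only reference. The closure, and with it bigBuffer, can now be collected.
holder = null;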

Looking at how the specification would look simplified, we can see that each Environment record inherits what it knows it’s going to need from the Environment record of its lexical parent. This chaining inheritance goes all the way up the lexical hierarchy to the global function object, as seen below. In this case the family tree is quite short. Remember this structure is formed at function creation time, not invocation time. The free variables (not their values) are statically baked in.

////////////////
// pseudocode //
////////////////
globalyScopedFunction = closure {
   FormalParameterList: { // Optional
      z: 30 // Values updated at invocation time.
   },
   FunctionBody: {
      var noOneLikesMeInner = 40;

      function privilegedFunction() {
         return x + z;
      }

      return privilegedFunction;
   },
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10 // Free variable saved because we know it's going to be used in privilegedFunction.
   },
   privilegedFunction: closure {
      FormalParameterList: {}, // Optional
      FunctionBody: {
         return x + z;
      },
      Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
         x: 10, // Free variable inherited from the outer Environment.
         z: 30 // Formal parameter saved from outer lexical environment.
      }
   }
}

Scope

I discuss closure here very briefly, and how it can be used to create block scoped variables prior to block scoping with the let keyword in ES6 (supposed to be officially approved by December 2014). I discuss scoping here in a little more depth.
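For example, prior to let, an immediately invoked function expression gives you the block scoping:

// Emulating a block scoped variable with an IIFE, prior to ES6 let.
(function () {
   var onlyInHere = 'scoped to this function, not the enclosing scope';
   console.log(onlyInHere);
}());

// console.log(onlyInHere); // ReferenceError: onlyInHere is not defined
// The ES6 equivalent: { let onlyInHere = '...'; }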

Closure misunderstandings

Closures are created when a function is returned

“A closure is formed when one of those inner functions is made accessible outside of the function in which it was contained”, found here, is simply incorrect. There are also a lot of other misconceptions found at that link. I’d advise reading it with a bag of salt.

Now we’ve already addressed this one above, but here is an example that confirms that the closure is in fact created at function creation time, not when the inner function is returned. Yes, it does what it looks like it does. Fiddle with it?

(function () {

   var lexicallyScopedFunction = function () {
      console.log('We\'re in the lexicallyScopedFunction');
   };

   (function innerClosure() {
      lexicallyScopedFunction();
   }());

}());

When execution is paused on the lexicallyScopedFunction(); call inside innerClosure, we get to see the closure that was created when the outer self-invoking function was executed (and the lexicallyScopedFunction function expression was evaluated).

(Screenshot: lexicallyScopedFunction.)

Closures can create memory leaks

Yes they can, but not if you let the closure go out of scope. Discussed above.

Values of free variables are baked into the Closure

Also untrue. Now I’ve put in-line comments to explain what’s happening here. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      loopCountingFunctions[i] = (function printLoopCount() {
         // What you see here is that each time this code is run, it prints the last value of the loop counter i.
         // Clearly showing that for each new printLoopCount function created and saved to the loopCountingFunctions array,
         // the variable i is saved to the Environment record, not the value of the variable i.
         console.log(i);
      });
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 3
runLoopPrinter[1](); // 3
runLoopPrinter[2](); // 3

An aside… getLoopPrinter is a global function. Once execution is inside getLoopPrinter (at the var loopCountingFunctions line) you get to see that global functions also have closure… supporting my comments above.

(Screenshot: global functions have closure too.)

Now in the above example, this is probably not what you want to happen, so how do we give each printLoopCount function its own value? Well, by creating a parameter for each iteration of the loop, each with the new value. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      (function (i) {
         // Now what happens here is each time the above loop runs this code,
         // inside this scope (the scope of this comment) i is a new formal parameter which of course
         // gets statically saved to each printLoopCount function's Environment record (or more simply each closure of printLoopCount).
         loopCountingFunctions[i] = (function printLoopCount() {
            console.log(i);
         });
      })(i)
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 0
runLoopPrinter[1](); // 1
runLoopPrinter[2](); // 2

As always, let me know your thoughts on this post, any thing you think I may have the wrong handle on, or anything that otherwise stood out.


Up and Running with Kali Linux and Friends

March 29, 2014

When it comes to measuring the security posture of an application or network, the best defence against an attacker is offence. What does that mean? It means your best defence is to have someone with your best interests at heart (generally someone you employ) assess the vulnerabilities of your asset and attempt to exploit them.

In the words of Offensive Security (Creators of Kali Linux), Kali Linux is an advanced Penetration Testing and Security Auditing Linux distribution. For those that are familiar with BackTrack, basically Kali is a new creation based on Debian rather than Ubuntu, with significant improvements over BackTrack.

When it comes to actually getting Kali on some hardware, there is a multitude of options available.

All externally listening services by default are disabled, but very easy to turn on if/when required. The idea being to reduce chances of detecting the presence of Kali.

I’ve found the Kali Linux documentation to be of a high standard and plentiful.

In this article I’ll go over getting Kali Linux installed and set up. I’ll go over a few of the packages that come out of the box, in a low level of detail (due to the sheer number of them). On top of that I’ll also go over a few programmes I like to install separately. In a subsequent article I’d like to continue with additional programmes that come with Kali Linux, as there are just too many to cover in one go.

System Requirements

  1. Minimum of 8 GB disk space is required for the Kali install
  2. Minimum RAM 512 MB
  3. CD/DVD Drive or USB boot support

Supported Hardware

Officially supported architectures

i386, amd64, ARM (armel and armhf)

Unofficial (but maintained) images

You can download Kali Linux images for the following; these are maintained on a best effort basis by Offensive Security.

  • VMware (pre-made vm with VMware tools installed)

ARM images

  • rk3306 mk/ss808. CPU: dual-core 1.6 GHz A9

    RAM: 1 GB

  • Raspberry Pi
  • ODROID U2. CPU: quad-core 1.7 GHz

    RAM: 2GB

    Ethernet: 10/100Mbps

  • ODROID X2. CPU: quad-core Cortex-A9 MPCore

    RAM: 2GB

    USB 2: 6 ports

    Ethernet: 10/100Mbps

  • MK802/MK802 II
  • Samsung Chromebook
  • Galaxy Note 10.1
  • CuBox
  • Efika MX
  • BeagleBone Black

Create a Customised Kali Image

Kali also provides a simple way to create your own ISO image from the latest source. You can include the packages you want and exclude the ones you don’t. You can customise the kernel. The options are virtually limitless.

The default desktop environment is Gnome, but Kali also provides an easy way to configure which desktop environment you use before building your custom ISO image.

The alternative options provided are: KDE, LXDE, XFCE, I3WM and MATE.

Kali has really embraced the Debian ethos of being able to be run on pretty well any hardware with extreme flexibility. This is great to see.

Installation

You should find most if not all of what you need here. Just follow the links specific to your requirements.

As with BackTrack, the default user is “root” without the quotes. If you’re installing, make sure you use a decent password: not a dictionary word or similar. It’s generally a good idea to use a mix of upper case and lower case characters, numbers and special characters, and to make it a decent length.

I’m not going to repeat what’s already documented on the Kali site, as I think they’ve done a pretty good job of it already, but I will go over some things that I think may not be 100% clear at first attempt. Also just to be clear, I’ve done this on a Linux box.

Now once you have downloaded the image that suits your target platform,

you’re going to want to check its validity by verifying the SHA1 checksums. Now this is where the instructions can be a little confusing. You’ll need to make sure that the SHA1SUMS file containing the specific checksum you’re going to use to verify your downloaded image is in fact the authentic SHA1SUMS file. The instructions say “When you download an image, be sure to download the SHA1SUMS and SHA1SUMS.gpg files that are next to the downloaded image (i.e. in the same directory on the server).”. You’ve got to read between the lines a bit here. A little further down, the page has the key to where these files are; it’s buried in a wget command, plus you have to add another directory to find them. The location was here. Now that you’ve got these two files downloaded in the same directory, verify the SHA1SUMS.gpg signature as follows:

$ gpg --verify SHA1SUMS.gpg SHA1SUMS
gpg: Signature made Thu 25 Jul 2013 08:05:16 NZST using RSA key ID 7D8D0BF6
gpg: Good signature from "Kali Linux Repository <devel@kali.org>"

You’ll also get a warning about the key not being certified with a trusted signature.

Now verify the checksum of the image you downloaded with the checksum within the (authentic) SHA1SUMS file

Compare the output of the following two commands. They should be the same.

# Calculate the checksum of your downloaded image file.
$ sha1sum [name of your downloaded image file]
# Print the checksum from the SHA1SUMS file for your specific downloaded image file name.
$ grep [name of your downloaded image file] SHA1SUMS

Kali also has a live USB Install including persistence to your USB drive.

Community

IRC: #kali-linux on FreeNode. Stick to the rules.

What’s Included

> 300 security programmes packaged with the operating system:

Before installation you can view the tools included in the Kali repository.

Or once installed by issuing the following command:

# prints complete list of installed packages.
dpkg --get-selections | less

To find out a little more about the application:

dpkg-query -l '*[some text you think may exist in the package name]*'

Or if you know the package name you’re after:

dpkg -l [package name]

Want more info still?

man [package name]

Some of the notable applications installed by default

Metasploit

Framework that provides the infrastructure to create, re-use and automate a wide variety of exploitation tasks.

If you require database support for Metasploit, start the postgresql service.

# I like to see the ports that get opened, so I run ss -ant before and after starting the services.
ss -ant
service postgresql start
ss -ant

ss, or “socket statistics”, is a newer replacement for the old netstat command. ss gets its information from kernel space via Netlink.

Start the Metasploit service:

ss -ant
service metasploit start
ss -ant

When you start the metasploit service, it will create a database and a user, both with the name msf3, providing you have your database service started. Now you can run msfconsole.

Start msfconsole:

msfconsole

The following is an image of terminator, where I use the top pane for stopping/starting services, the middle pane for checking which ports are opened/closed, and the bottom pane for running msfconsole. terminator is not installed by default; it’s as simple as apt-get install terminator.

(Screenshot: metasploit running in terminator.)

You can find full details of setting up Metasploit’s database and starting/stopping the services here.

You can also find the Metasploit frameworks database commands simply by typing help database at the msf prompt.

# Print the switches that you can run msfconsole with.
msfconsole -h

Once you’re in msf, type help at the prompt to get yourself started.

There is also a really easy to navigate all encompassing set of documentation provided for msfconsole here.

You can also set-up PostgreSQL and Metasploit to launch on start-up like this:

update-rc.d postgresql enable
update-rc.d metasploit enable

Offensive Security also has a Metasploit online course here.

Armitage

Just as it was included in BackTrack (which is no longer supporting Armitage), you’ll find Armitage comes installed out of the box in version 1.0.4 of Kali Linux. Armitage is a GUI to assist with Metasploit visualisation. You can find the official documentation here. Offensive Security has also done a good job of providing their own documentation for Armitage over here. To get started with Armitage, just make sure you’ve got the postgresql service running; Armitage will start the metasploit service for you if it’s not already running. Armitage allows your red team to collaborate by using a single instance of Metasploit. There is also a commercial offering called Cobalt Strike, developed by Raphael Mudge’s company “Strategic Cyber LLC”, which also created Armitage. Cobalt Strike currently costs $2500 per user per year, though there is a 21 day trial. Cobalt Strike offers a bunch of great features; check them out here. Armitage can also connect to an existing instance of Metasploit on another host.

NMap

Target use is network discovery and auditing. Provides host information for anything it can access from a network. Also now has a scripting engine that can execute arbitrary custom tasks.

I’m guessing we’ve probably all used NMap? ZenMap, which Kali Linux also provides out of the box, is a GUI for NMap. This was also included in BackTrack.

Intercepting Web Proxies

Burp Suite

I use burp quite regularly and have a few blog posts where I’ve detailed some of its use. In fact I’ve used it to reverse engineer the comms between VMware vSphere and ESXi to create a UPS solution that deals with not only the virtual hosts but also the clients.

WebScarab

I haven’t really found out what webscarab’s sweet spot is, if it has one. I’d love to know what it does better than burp, zap and w3af combined. There is also a next generation version which, according to the google code repository, hasn’t had any work done on it since March 2011, whereas the classic version is still receiving fixes. The documentation has always seemed fairly minimalistic also.

In terms of web proxy/interceptors I’ve also used fiddler, which relies on the .NET framework; as mono is not installed out of the box on Kali, neither is fiddler.

OWASP Zed Attack Proxy (ZAP)

Which is an OWASP flagship project, so it’s free and open source. Cross platform. It was forked from the Paros Proxy project, which is no longer supported. Includes automated, passive, brute force and port scanners. Traditional and AJAX spiders. Can even find unlinked files. Provides fuzzing and port scanning. Can be run without the UI in headless mode and can be accessed via a REST API. Supports Anti CSRF tokens. The Script Console, which is one of the add-ons, supports any language that JSR (Java Specification Requests) 223 supports: languages such as JavaScript, Groovy, Python, Ruby and many more. There is plenty of info on the add-ons here. OWASP also provide directions on how to write your own extensions, and they provide some sample templates. Following is the list of current extensions, which can also be managed from within Zap: “Manage Add-ons” menu → Marketplace tab. Select and click “Install Selected”.

(Screenshot: OWASP Zap.)

The idea is to first set Zap up as a proxy for your browser. Fetch some web pages (build history); Zap will create a history of URLs. You then right click the item of interest, click Attack → [one of the spider options], then click the play button and watch the progress bar, which will crawl all the pages you have access to according to your permissions. Then, under the Analyse menu → Scan Policy…, set up your scan policy so you’re only scanning what you want to scan. Then hit Scan to assess your target application. Out of the box, you’ve got many scan options. Zap does a lot for you. I’m really loving this tool OWASP!

As usual with OWASP, zap has a wealth of documentation. If zap doesn’t provide enough out of the box, extend it. OWASP also provide an API for zap.

You can find the user group here (also accessible from the ZAP ‘Online’ menu), which is good for getting help if the help file (which can also be found via ZAP itself) fails to yield. There is also a getting started guide, which is a work in progress, and the ZAP Blog.

FoxyProxy

Although it has nothing to do with Kali Linux, and could possibly sit in the IceWeasel add-ons section below, I’ve added FoxyProxy here instead as it really reduces friction with web proxy interception. FoxyProxy is a very handy add-on for both firefox and chromium, although it seems to have more options for firefox, or at least they are more easily accessible. It allows you to set up a list of proxies and then switch between them as you need. When I run chromium as a non root user I can’t change the proxy settings once the browser is running, so I have to run the following command in order to set the proxy to my intermediary before run time:

chromium-browser --temp-profile --proxy-server=localhost:3001

Firefox is a little easier, but neither browser allows you to build up lists of proxies and then switch them in mid flight. FoxyProxy provides a menu button, so with two clicks you can disable the add-on completely to revert to your previous settings, or select any of your predefined proxies. This is a real time saver.

Vulnerability Scanners

Open Vulnerability Assessment System (OpenVAS)

Forked from the last free version of Nessus (which went closed source in 2005). OpenVAS plugins are written in the same language that Nessus uses. OpenVAS looks for known misconfigurations and vulnerabilities common in out of date software. In fact it covers the following OWASP Top 10 items:

  • No.5 Security Misconfiguration
  • No.7 Missing Function Level Access Control (formerly known as “failure to restrict URL access”)
  • No.9 Using Components with Known Vulnerabilities.

OpenVAS also has some SQLi and other probes to test application input, but its primary purpose is to scan networks of machines with out of date software and bad configurations.

Tests continue to be added. It is currently at 32413 Network Vulnerability Tests (NVTs); details here.

(Screenshot: OpenVAS.)

The Greenbone Security Desktop (gsd) package is a GUI that uses the Greenbone Security Manager, OpenVAS Manager or any other service that offers the OpenVAS Management Protocol (omp). It is currently at version 1.2.2 and licensed under the GPLv2. The Greenbone Security Assistant (gsad) is currently at version 4.0.0. The German government also sponsors OpenVAS.

From the menu: Kali Linux → Vulnerability Analysis → OpenVAS, we have a couple of short-cuts visible. openvas-gsd is actually just the gsd package, and openvas-setup is the set-up script.

Before you run openvas-gsd, you can either:

  1. Run openvas-setup, which will do all the setup (which I think is already done on Kali). At the end of this you will be prompted to add a password for a user to the Admin role. The password you add here is for a new user called “admin” (of course it doesn’t say that, so it can be a little confusing as to what the password is for).
  2. Or you can just run the following command, which is much quicker because you don’t run the set-up procedure:
openvasad -c 'add_user' -n [a new administrative username of your choosing] -r Admin

You’ll be prompted to add a new password. Make sure you remember it.

Check out the man page for further options. For example the -c switch is a shortened --command and it lists a selection of commands you can use.

I think -n is for --name, although it is not listed in the man page. The -r switch is --role: either User or Admin.

The user you’ve just added is used to connect the gsd to the:

  1. openvasmd (OpenVAS Manager daemon) which listens on port 9390
  2. openvassd (OpenVAS Scanner daemon) which listens on port 9391
  3. gsad (Greenbone Security Assistant daemon) which listens on port 9392. This is a web app, which also listens on port 443
  4. openvasad (OpenVAS Administrator daemon) which listens on 9393

The core functionality is provided by the scanner and the manager. The manager handles and organises scan results. The gsad or assistant connects to the manager and administrator to provide a fully featured user interface. There is also a CLI (omp) but I haven’t been able to get this going on Kali Linux yet. You’ll also find that the previous link has links to all the man pages for OpenVAS. You can read more about the architecture and how the different components fit together.

I’ve also found that sometimes the daemons don’t automatically start when gsd starts, so you have to start them manually:

openvasmd && openvassd && gsad && openvasad

You can also use the web app https://127.0.0.1/omp

Then try logging in to the openvasmd. When you’re finished with gsd you can kill the running daemons if you like. I like to keep an eye on the listening ports when I’m done, to keep things as quiet as possible.

Check the ports.

ss -anp

Optional to see the processes running, but not necessary.

ps -e
kill -9 <PID of openvasad> <PID of gsad> <PID of openvassd> <PID of openvasmd>

There are also plenty of options when it comes to the report. This can be output in HTML, PDF, XML, emailed and quite a few others. The reports are colour coded and you can choose what to have put in them. The vulnerabilities are classified by risk: High, Medium and Low. OpenVAS can take quite a while to scan as it runs so many tests.

This is how to get started with gsd.

Web Vulnerability Scanners

This is the generally accepted criteria of a tool to be considered a Web Application Security Scanner.

SkipFish

A high performance active reconnaissance tool written in C. From the documentation “Multiplexing single-thread, fully asynchronous network I/O and data processing model that eliminates memory management, scheduling, and IPC inefficiencies present in some multi-threaded clients.”. OK. So it’s fast.

SkipFish prepares an interactive sitemap by carrying out a recursive crawl and probes based on existing dictionaries or ones you build up yourself. Further details are in the documentation linked below.

It doesn’t conform to most of the criteria outlined in the Web Application Security Scanner page linked above.

SkipFish v2.05 is the current version packaged with Kali Linux.

SkipFish v2.10b (released Dec 2012)

Free and you can view the source code. Apache license 2.0

Performs a similar role to w3af.

Project details can be found here.

You can find the tests here.

How do you use it though? This is a good place to start. Instead of reading through the non-existent doc/dictionaries.txt, I think you can do as well by reading through /usr/share/skipfish/dictionaries/README-FIRST.

The other two documentation sources are the man page and skipfish with the -h option.

Web Application Attack and Audit Framework (w3af)

Andres Riancho has created a masterpiece. The main behaviour of this application is to assess and identify vulnerabilities in a web application by sending customised HTTP requests. Results can be output in quite a few formats, including email. It can also proxy, but burp suite is more focused on this role and does it well.

Can be run with a GUI (w3af_gui) or from the terminal (w3af_console). Written in Python and runs on Linux, BSD or Mac. Older versions used to work on Windows, but it’s not currently being tested on Windows. Open source on GitHub and released under the GPLv2 license.

You can write your own plug-ins, but check first to make sure it doesn’t already exist. The plugins are listed within the application and on the w3af.org web site along with links to their source code, unit tests and descriptions. If it doesn’t appear that the plug-in you want exists, contact Andres Riancho to make sure, write it and submit a pull request. Also looks like Andres Riancho is driving the development TDD style, which means he’s obviously serious about creating quality software. Well done Andres!

w3af provides the ability to inject your payloads into almost every part of the HTTP request by way of its fuzzing engine, including: query string, POST data, headers, cookie values, content of form files, URL file-names and paths.

There’s a good set of documentation found here and you can watch the training videos. I’m really looking forward to using this in anger.

(Screenshot: w3af.)

Nikto

Is a web server scanner that’s not overly stealthy. It’s built on “Rain Forest Puppy’s” LibWhisker2, which has a BSD license.

Nikto is free and open source with a GPLv3 license. It can be run on any platform that runs a perl interpreter. Its source can be found here. The first release of Nikto was in December of 2001 and it is still under active development. Pull requests are encouraged.

Supports SSL. Supports HTTP proxies, so you can see what Nikto is actually sending. Host authentication. Attack encoding. Update local databases and plugins via the -update argument. Checks for server configuration items like multiple index files and HTTP server options. Attempts to identify installed web servers and software.

Looks like the LibWhisker web site no longer exists. Last release of LibWhisker was at the beginning of 2010.

Nikto v2.1.4 (Released Feb 20 2011) is the current version packaged with Kali Linux. Tests for multiple items, including > 6400 potentially dangerous files/CGIs. Outdated versions of > 1200 servers. Insecurities of specific versions of > 270 servers.

Nikto v2.1.5 (released Sep 16 2012) is the latest version. Tests for multiple items, including > 6500 potentially dangerous files/CGIs. Outdated versions of > 1250 servers. Insecurities of specific versions of > 270 servers.

Just spoke with the Kali developers about the old version. They are now building a package of 2.1.5 as I write this. So should be an apt-get update && apt-get upgrade away by the time you read this all going well. Actually I can see it in the repo now. Man those guys are responsive!

Most of the info you will need can be found here.

SQLNinja

sqlninja: Targets Microsoft SQL Servers. Uses SQL injection vulnerabilities on a web app. Focuses on popping remote shells on the target database server and uses them to gain a foothold over the target network. You can set-up graphical access via a VNC server injection. Can upload executables by using HTTP requests via vbscript or debug.exe. Supports direct and reverse bindshell. Quite a few other methods of obtaining access. Documentation here.

Text Editors

  1. Vim. Shouldn’t need much explanation.
  2. Leafpad. This is a very basic graphical text editor. A bit like Windows Notepad.
  3. Gvim. This is the Graphical version of Vim. I’ve mostly used sublime text 2 & 3, gedit on Linux, but Gvim is really quite powerful too.

Note Keeping

  1. KeepNote. Supported on Linux, Windows and MacOS X. Easy to transport notes by zipping or copying a folder. Notes stored in HTML and XML.
  2. Zim Desktop Wiki.

Other Notable Features

  • Offensive Security’s Kali Linux is free and always will be. It’s also completely open (as it’s based on debian) to modification of its OS or programmes.
  • FHS compliant. That means the file system complies with the Linux Filesystem Hierarchy Standard
  • Wireless device support is vast. Including USB devices.
  • Forensics Mode. As with BackTrack 5, the Kali ISO also has an option to boot into the forensic mode. No drives are written to (including swap). No drives will be auto mounted upon insertion.

Customising installed Kali

Wireless Card

I had a little trouble with my laptop wireless card not being activated. It turned out to be me just not realising that an external wi-fi switch had to be turned on; I had wireless enabled in the BIOS. The following were the steps I took to resolve it:

Read the Kali Linux documentation on Troubleshooting Wireless Drivers and found the card listed with lspci. Opened /var/log/dmesg with vi. Searched for the name of the card:

#From command mode to make search case insensitive:
:set ic
#From command mode to search
/[name of my wireless card]

There were no errors, so I ran iwconfig (similar to ifconfig but dedicated to wireless interfaces). I noticed that the card was definitely present and the Tx-Power was off. I then thought I’d give rfkill a spin, and its output made me realise I must have missed a hardware switch somewhere.
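If you want to perform the same check, listing the kill switch states shows whether a soft or hard (physical) block is in place:

# List all wireless devices along with their soft and hard block status.
# "Hard blocked: yes" points at a physical switch or BIOS setting.
rfkill list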

(Screenshot: rfkill output.)

Found the hard switch and turned it on and we now have wireless.

Adding Shortcuts to your Panel

[Alt]+[right click]->[Add to Panel…]

Or if your Kali install is on VirtualBox:

[Windows]+[Alt]+[right click]->[Add to Panel…]

Caching Debian Packages

If you want to:

  1. save on bandwidth
  2. have a large number of your packages delivered at your network speed rather than your internet speed
  3. have several debian based machines on your network

I’d recommend using apt-cacher-ng. If you haven’t already, you’ll have to set this up on a server and add the following file to each of your debian based machines:

/etc/apt/apt.conf with the following contents, and set its permissions to be the same as your sources.list:

Acquire::http::Proxy "http://[ip address of your apt-cacher server]:3142";

IceWeasel add-ons

  • Firebug
  • NoScript
  • Web Developer
  • FoxyProxy (more details mentioned above)
  • HackBar. Somewhat useful for (en/de)coding (Base64, Hex, MD5, SHA-(1/256), etc), manipulating and splitting URLs

SQL Inject Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. SQL Inject Me is a component of the Exploit-Me suite. It allows you to test all or any number of input fields on all or any of a page’s forms. You just fill in the fields with valid data, then test with all of the tool’s attacks, or with the top attacks that you’ve defined in the options menu. It then looks for database errors rendered into the returned HTML as a result of sending escape strings, so it doesn’t cater for blind injection. You can also add or remove the escape strings, and the resulting error strings that SQL Inject Me should look for in the response. The order in which each escape string is tried can also be changed. All you need to know can be found here.

XSS Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. XSS Me is also a component of the Exploit-Me suite. This tool’s behaviour is very similar to SQL Inject Me (it follows the POLA), which makes using the tools very easy. Both these add-ons have next to no learning curve. The level of entry is very low, and I think they are exactly what web developers who make excuses for not testing their own security need. The other thing is that it helps developers understand how these attacks can be carried out. XSS Me currently only tests for reflected XSS. It doesn’t attempt to compromise the security of the target system. Both XSS Me and SQL Inject Me are reconnaissance tools, where the information is the vulnerabilities found. XSS Me doesn’t support stored XSS or user supplied data from sources such as cookies, links, or HTTP headers. How effective XSS Me is in finding vulnerabilities is also determined by the list of attack strings the tool has available. Out of the box the list of XSS attack strings is derived from RSnake’s collection, which was donated to OWASP, who now maintain it as one of their cheat sheets. Multiple encodings are not yet supported, but are planned for the future. You can help to keep the collection up to date by submitting new attack strings.

Chromium

Because it’s got great developer tools that I’m used to using. In order to run this under the root account, you’ll need to add the following parameter to /etc/chromium/default between the quotes for CHROMIUM_FLAGS=””

--user-data-dir

I like to install the following extensions: Cookies, ScriptSafe

Terminator

Because I like a more powerful console than the default. Terminator adds split screen on top of multi tabs. If you live at the command line, you owe it to yourself to get the best console you can find. So far terminator still fits this bill for me.

KeePass

The password database app. Because I like passwords to be long, complex, unique for everything and as secure as possible.

Exploits

I was going to go over a few exploits we could carry out with the Kali Linux set-up, but I ran out of time and page space. In fact there are still many tools I wanted to review, but there just isn’t enough time or room in this article. Feel free to subscribe to my blog and you’ll get an update when I make posts. I’d like to extend on this by reviewing more of the tools offered in Kali Linux.

Input Sanitisation

This has been one of my pet topics for a while. Why? Because the lack of it is so often abused. In fact this is one of the primary techniques behind No.1 (Injection) and No.3 (XSS) of this year’s OWASP Top 10 list (unchanged from 2010). I’d encourage any serious web developers to look at my “Sanitising User Input From Browser” Part 1 and Part 2 posts.

Part 1 deals with the client side (untrusted) code.

Part 2 deals with the server side (trusted) code.

I provide source code, sources and discuss the following topics:

  1. Minimising the attack surface
  2. Defining maximum field lengths (validation)
  3. Determining a white list of allowable characters (validation)
  4. Escaping untrusted data
  5. External libraries, cheat sheets, useful code and sites, I used. Also discuss the less useful resources and why.
  6. The point of validating client side when the server side is going to do it again anyway
  7. Full set of server side tests to test the sanitisation is doing what is expected