Archive for the ‘Scripting’ Category

Captcha Considerations

December 31, 2015

Risks

Exploiting Captcha

A lack of captchas is a risk, but so are captchas themselves…

Let’s look at the problem here. What are we trying to stop with captchas?

Bots submitting. Whatever it is, whether it is:

  • Advertising
  • Creating an unfair advantage over real humans
  • Link creation in attempt to increase SEO
  • Malicious code insertion

You are more than likely not interested in accepting it.

What do we not want to block?

People submitting genuinely innocent input. If a person is prepared to fill out a form manually, even if it is spam, then a person can view the submission and very quickly delete the validated, filtered and possibly sanitised message.

Countermeasures

Prevention: VERY EASY

Types

Text Recognition

recaptcha uses this technique. See below for details.

Image Recognition

Uses images which users have to perform certain operations on, like dragging them to another image. For example: “Please drag all cat images to the cat mat.”, or “Please select all images of things that dogs eat.” sweetcaptcha is an example of this type of captcha. This type completely rules out visually impaired users.

Friend Recognition

Pioneered by… you guessed it: Facebook. This type of captcha focuses on human hackers, the idea being that they will not know who your friends are.

Instead of showing you a traditional captcha on Facebook, one of the ways we may help verify your identity is through social authentication. We will show you a few pictures of your friends and ask you to name the person in those photos. Hackers halfway across the world might know your password, but they don’t know who your friends are.

I disagree with that statement. A determined hacker will usually be able to find out who your friends are. There is another problem: do you know who all of your friends are? Every acquaintance? I am terrible with names, and so are many people. This is supposed to be used to authenticate you, so you have to be able to answer the questions before you can log in.

Logic Questions

This is what textcaptcha uses: simple logic questions designed for the intelligence of a seven year old child. These are more accessible than image and textual image recognition, but they can take longer than image recognition to answer, unless the user is visually impaired. The questions are also usually language specific, typically targeting English.

User Interaction

This is a little like image recognition. Users have to perform actions that artificial intelligence can not work out… yet, like dragging a slider a certain number of notches.
If an offering gets popular, creating some code to perform the action may not be that hard, and would definitely be worth the effort for bot creators.
This is obviously not going to work for the visually impaired or for people with impaired motor skills.

 

In NPM land, as usual, there are many options to choose from. The following were the offerings I evaluated, none of which really felt like a good fit:

Offerings

  • total-captcha. Depends on node-canvas. Have to install cairo first, but why? No explanation. Very little of anything here. Move on. How does this work? Do not know. What type is it? Presume text recognition.
  • easy-captcha is a text recognition offering generating images
  • simple-captcha looks like another text recognition offering. I really do not want to be writing image files to my server.
  • node-captcha Depends on canvas. By the look of the package this is another text recognition in a generated image.
  • re-captcha was one of the first captcha offerings, created at Carnegie Mellon University by Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum, who coined the term captcha. Google acquired it in September 2009. recaptcha is a text recognition captcha that uses scanned text that optical character recognition (OCR) technology has failed to interpret, which has the added benefit of helping to digitise text for The New York Times and Google Books.
  • sweetcaptcha uses the sweetcaptcha cloud service, whose terms and conditions you must abide by; it requires another node package and some integration work. sweetcaptcha is an image recognition type of captcha.
  • textcaptcha is a logic question captcha, relying on an external service for the questions and the MD5 hashes of the correct lower-cased answers. This looks pretty simple to set up, but again expects your users to use their brains on things they should not have to.

 

After some additional research I worked out why the above types and offerings didn’t feel like a good fit. It pretty much came down to user experience.

Why should genuine users/customers of your web application be disadvantaged by having to jump through hoops because you have decided you want to stop bots spamming you? Would it not make more sense to make life harder for the bots rather than for your genuine users?

Some other considerations I had: ideally I wanted a simple solution requiring few, or ideally no, external dependencies, no JavaScript required, no reliance on the browser or anything else out of my control, no images, and it definitely should not cost any money.

Alternative Approaches

  • Services like Disqus can be good for commenting. Obviously the comments are all stored somewhere in the cloud out of your control, and this is an external dependency. For simple text input, this is probably not what you want. Similar services, such as all the social media authentication services, can take things a bit too far I think; they remove freedoms from your users. Why should your users be disadvantaged by leaving a comment or posting a message on your web application? Disqus tracks users’ activities from hosting website to website, whether you have an account, are logged in or not. Any information it collects, such as IP address, web browser details, installed add-ons, referring pages and exit links, may be disclosed to any third party. When this data is aggregated, it is useful for de-anonymising users. If users choose to block the Disqus script, the comments are not visible. Disqus has also published its registered users’ entire commenting histories, along with a list of connected blogs and services, on publicly viewable user profile pages. Disqus also engages in ad targeting and black-hat SEO techniques from the websites on which its script is installed.
  • Services like Akismet and Mollom take user input and analyse it for spam signatures. Mollom sometimes presents a captcha if it is unsure. These two services learn from their mistakes if they mark something as spam and you unmark it, but of course you are going to have to be watching for that. Matt Mullenweg created Akismet so that his mother could blog in safety. “His first attempt was a JavaScript plugin which modified the comment form and hid fields, but within hours of launching it, spammers downloaded it, figured out how it worked, and bypassed it. This is a common pitfall for anti-spam plugins: once they get traction”. My advice is not to use a common plugin, but to create something custom. I discuss this soon.

The above solutions are excellent targets for exploit development with a large pay-off, because so many websites use them. Exploits are discovered for these services regularly.

Still not cutting it

Given the fact that many clients count on conversions to make money, not receiving 3.2% of those conversions could put a dent in sales. Personally, I would rather sort through a few SPAM conversions instead of losing out on possible income.

Casey Henry: Captchas’ Effect on Conversion Rates

Spam is not the user’s problem; it is the problem of the business that is providing the website. It is arrogant and lazy to try and push the problem onto a website’s visitors.

Tim Kadlec: Death to Captchas

User Time Expenditure

Record how long it takes from fetch to submit: measure the time from when the form is fetched to when it is submitted. If, for example, the time span is under five seconds, it is more than likely a bot, so handle the message accordingly.
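
A minimal sketch of this technique for an Express application might look like the following, assuming express-session is already wired up; the names and the five second threshold are illustrative.

var MINIMUM_HUMAN_MILLISECONDS = 5000;

function index(req, res) {
   // Record when the form was fetched.
   req.session.formFetchedAt = Date.now();
   res.render('home', { title: 'Home', id: 'home' });
}

function contact(req, res) {
   var elapsed = Date.now() - (req.session.formFetchedAt || 0);
   if (elapsed < MINIMUM_HUMAN_MILLISECONDS) {
      // More than likely a bot. Handle the message accordingly.
      return res.redirect('/');
   }
   // Process the genuine submission...
}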

Bot Pot

Spamming bots operating on custom mechanisms will in most cases just try, then move on. If you decide to use one of the common offerings from above, exploits will be more common, depending on how widespread the offering is. This is one of the cases where going custom is a better option. Worst case, you get some spam and you have to modify your technique, but you get to keep things simple and tailored to your web application and your users’ needs, with no external dependencies and no monthly fees. This is also the simplest technique and requires very little work to implement.

Spam bots:

  • Love to populate form fields
  • Usually ignore CSS. For example, if you have some CSS that hides a form field and especially if the CSS is not inline on the same page, they will usually fail at realising that the field is not supposed to be visible.

So what we do is create a field that is not visible to humans and is supposed to be kept empty. On the server once the form is submitted, we check that it is still empty. If it is not, then we assume a bot has been at it.

This is so simple and does not get in the way of your users, yet it is very effective at filtering bot spam.

Client side:

form .bot-pot {
   display: none;
}
<form>
   <!--...-->
   <div>
      <input type="text" name="bot-pot" class="bot-pot">
   </div>
   <!--...-->
</form>

Server side:

I show the validation middleware for the route below; it is wired up in the app.post('/contact', validate(), contact) line. The validation itself is performed by the maxLength(0) check on the bot-pot field in the validate function.

var form = require('express-form');
var fieldToValidate = form.field;
//...

function home(req, res) {
   res.redirect('/');
}

function index(req, res) {
   res.render('home', { title: 'Home', id: 'home', brand: 'your brand' });
}

function validate() {
   return form(
      // Bots love to populate everything.
      fieldToValidate('bot-pot').maxLength(0)
   );
}

function contact(req, res) {

   if (req.form.isValid) {
      // We know the bot-pot is of zero length, so no bots.
      // Process the message...
   }
   //...
}

module.exports = function (app) {
   app.get('/', index);
   app.get('/home', home);
   app.post('/contact', validate(), contact);
};

So as you can see, a very simple solution. You could even consider combining the above two techniques.

Lack of Visibility in Web Applications

November 26, 2015

Risks

I see this as an indirect risk to the asset of web application ownership (That’s the assumption that you will always own your web application).

Not being able to introspect your application at any given time, or not knowing its health status, is not a comfortable place to be, and there is no reason you should be there.

Insufficient Logging and Monitoring

Exploitability: AVERAGE | Prevalence: WIDESPREAD | Detectability: VERY EASY | Impact: MODERATE

Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy. For example circumventing client side input sanitisation.

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?

Countermeasures

As Bruce Schneier said: “Detection works where prevention fails, and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging

Prevention: AVERAGE

When it comes to logging in NodeJS, you can’t really go past winston. It has a lot of functionality, and what it does not have is either provided by extensions, or you can create your own. It is fully featured, reliable and easy to configure, like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

{
   ...
   "dependencies": {
      ...,
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",
      ...
   }
}

winston-email also depends on nodemailer.

Opening UDP port

Using winston-syslog for this seems to be what a lot of people are doing. I think it may be because winston-syslog was the first package that worked well with winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

$ModLoad imudp
# Listen on all network addresses. This is the default.
$UDPServerAddress 0.0.0.0
# Listen on localhost.
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Or the new style configuration.
module(load="imudp")
input(type="imudp" address="127.0.0.1" port="514")
# Logging for your app.
local0.* /var/log/yourapp.log

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use posix syscalls, which are faster and safer. The nmap output below shows the open syslog port (the 514/udp line):

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
PORT STATE SERVICE REASON VERSION
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the output below (the 514/udp line) that the syslog port is no longer open:

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
PORT STATE SERVICE REASON VERSION
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslog transport (syslogPosixTransportOptions) in the default.js configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog protocol severity names. winston 1.x treats higher numbers as
      // more severe, the reverse of the RFC 5424 numbering.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, //5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress 127.0.0.1 // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
}

In development I have chosen not to use syslog. You can see this in the devbox1-development.js file below, where syslogPosixTransportOptions is set to null. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file, or modify it to be similar to the default.js above, and then add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comment inside syslogPosixTransportOptions in the default.js config file above.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
}

In production we log to syslog, and because of that we do not need the file transport (fileTransportOptions) you can see configured in the default.js configuration file above, so we set it to null in the prodbox-production.js file, as seen below.

I have gone into more depth about how we handle syslogs here, where all of our logs, including these ones, get streamed to an off-site syslog server. This provides easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real-time, along with the ability to easily go back in time to visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.

// prodbox-production.js
module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
}
// local.js. Build creates this file.
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
}

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;
require('winston-syslog-posix').SyslogPosix;
require('winston-email').Email;

winston.emitErrs = true;

var logger = new winston.Logger({
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); there are many ways
   // to do the same thing with winston.
   colors: loggerConfig.colours,
   // Alternatively: set levels to winston.config.syslog.levels.
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function(config) {
   logger.add (winston.transports.Console, config);
   return this;
};

logger.addFile = function(config) {
   logger.add (winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function(config) {
   logger.add (winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function(config) {
   logger.add (winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      JSON.stringify(
         'error-code:' + err.code + '. '
         + 'error-message:' + err.message + '. '
         + 'error-response:' + err.response + '. logger-level:'
         + err.transport.level + '. transport:' + err.transport.name
      )
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile( loggerConfig.fileTransportOptions );
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole( loggerConfig.consoleTransportOptions );
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog( loggerConfig.syslogPosixTransportOptions );
   if (loggerConfig.emailTransportOptions)
      logger.addEmail( loggerConfig.emailTransportOptions );
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger by calling logger.init(), as you can see below.

//...
var express = require('express');
var morganLogger = require('morgan');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var app = express();
//...
logger.init();
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
//...
// In order to utilise connect/express logger module in our third party logger,
// Pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
//...
app.use(express.static(path.join(__dirname, 'public')));
//...
require('./routes')(app);

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
   //...
}
if ('production' == app.get('env')) {
   app.use(errorHandler());
   //...
}

http.createServer(app).listen(app.get('port'), function(){
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

  • You can also optionally log JSON metadata
  • You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message.

Here are some examples of how you can use the logger. Calls of the form logger.log('<level>', ...) can be replaced with logger.<level>(...), where <level> is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross cutting concerns like logging using Aspect Oriented Programming (AOP).
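
As a minimal sketch of the idea, using a plain function wrapper rather than a full AOP library (the names are illustrative):

var logger = require('./util/logger');

// Wrap any function so that entry and exit are logged, without the
// function itself containing any logging statements.
function withLogging(name, fn) {
   return function () {
      logger.debug('entering: ' + name);
      var result = fn.apply(this, arguments);
      logger.debug('leaving: ' + name);
      return result;
   };
}

// Usage: the contact route handler no longer needs its own logging.
// app.post('/contact', withLogging('contact', contact));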

Insufficient Monitoring

Prevention: EASY

There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It is easy to configure, has excellent short documentation that is easy to understand, and its configuration file has lots of commented-out examples ready for you to take as-is and modify to suit your environment. I’ve personally had excellent success with Monit.
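
As a minimal sketch, a Monit check for a NodeJS web application could look like the following in your monitrc; the process name, pid file and port are illustrative assumptions.

# Restart yourapp if it stops responding on its port.
check process yourapp with pidfile /var/run/yourapp.pid
   start program = "/etc/init.d/yourapp start"
   stop program = "/etc/init.d/yourapp stop"
   if failed host 127.0.0.1 port 3000 protocol http then restart
   if 5 restarts within 5 cycles then timeout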

 

Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the newfound information you now have. There will be no more blissful ignorance, if there was any before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging, such as wrapping libraries, initialising them, and adding logging statements or hiding them using AOP.

 

Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days’ work in order to have a really good logging system throughout the code base, one that shows you errors fast in development and then shows different errors in the places your DevOps need to see them in production.

The same goes for monitoring. Find a tool that is a pleasure to work with. There are just about always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, the free and open source tools can be excellent for keeping ongoing costs down, especially mature tools that are also well maintained, like Monit.


Consuming Free and Open Source

October 29, 2015

Risks

Exploitability: AVERAGE | Prevalence: WIDESPREAD | Detectability: DIFFICULT | Impact: MODERATE

This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than ever before. Much of the code we are pulling into our projects is never intentionally used, but still adds surface area for attack. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying on developers we do not know a lot about not to have introduced defects. Most developers are more focused on building than breaking; they do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right, many of the packages we are consuming are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs who could and do include vulnerabilities. Anyone can write code and publish it to an open source repository. Much of this code ends up in the package management repositories we consume from.
  • Does not undergo the same requirement analysis, scope definition, acceptance criteria, test conditions and sign-off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more, it provides the opportunity for many vulnerabilities to be sitting waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls talk on consuming all the open source.

Running install scripts, or any scripts, from non-local sources without first downloading and inspecting them can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or perform any number of other criminal activities.

Countermeasures

Prevention: EASY

Process

Dibbe Edwards discusses some excellent initiatives on how they do it at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, plus manual and automated code review
  • Maintain a list containing all the libraries that have been approved for use. If a library is not on the list, make a request, and it should go through the same process.
  • Once the libraries are in your product, they should be treated as part of your own code, so that they get the same rigour as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing that is not on the approved list is included (a minimal sketch of such a check follows this list)
  • Consider automating some of the suggested tooling options below
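
As a minimal sketch of that automated check, the following script compares the project’s declared dependencies against an approved list and fails the build if anything unapproved is found; the approved-libraries.json file name is an illustrative assumption.

// check-approved.js: fail the build if package.json contains a dependency
// that is not on the approved list.
var dependencies = Object.keys(require('./package.json').dependencies || {});
var approved = require('./approved-libraries.json'); // e.g. ["express", "config"]

var unapproved = dependencies.filter(function (dependency) {
   return approved.indexOf(dependency) === -1;
});

if (unapproved.length > 0) {
   console.error('Libraries not on the approved list: ' + unapproved.join(', '));
   process.exit(1);
}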

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing in regards to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this; wget or curl first, then make sure what you have just downloaded is not malicious before you run it.

Inspect the code before you run it.

1. The repository could have been tampered with
2. The transmission from the repository to you could have been intercepted and modified.

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl, or fetch in any way and pipe what you think is an installer or other script to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option is to:

  1. Verify the source that you are about to download, if all good
  2. Download it
  3. Check it again, if all good
  4. Only then should you run it

npm install

As part of an npm install, package creators, maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check to see if any package has hooks (before installation) that will run scripts by issuing the following command:
npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download, if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it, and inspect it. You can download it from:
    http://registry.npmjs.org/[module-you-want-to-install]/-/[module-you-want-to-install]-VERSION.tgz
    

The most important step here is downloading and inspecting before you run.
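
For example, a hypothetical inspection of a module named some-module (the name and version are illustrative) might look like the following:

# Check for install-time script hooks first.
npm show some-module scripts
# Download the tarball without installing it.
wget http://registry.npmjs.org/some-module/-/some-module-1.0.0.tgz
# Unpack and inspect, paying particular attention to any scripts in package.json.
tar -xzf some-module-1.0.0.tgz
less package/package.json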

Doppelganger Packages

Similarly to Doppelganger Domains, people often mistype what they want to install. If you were someone that wanted to do something malicious, like have consumers of your package destroy or modify their systems, send sensitive information to you, or perform any number of other criminal activities (ideally identified in the Identify Risks section; if not already there, add it), doppelganger packages are an excellent avenue for raising the likelihood that someone will install your malicious package by mistyping its name, when it is very similar to the name of another package. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.

Tooling

For NodeJS developers: keep your eye on the nodesecurity advisories. Identified security issues can be posted via the NodeSecurity report page.

RetireJS is useful to help you find JavaScript libraries with known vulnerabilities. RetireJS has the following:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        public/plugins/jquery/jquery-1.4.4.min.js
        ↳ jquery 1.4.4.min has known vulnerabilities:
        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
        http://research.insecurelabs.org/jquery/test/
        http://bugs.jquery.com/ticket/11290
        
    • To install RetireJS locally to your project and run as a git precommit-hook.
      There is an NPM package that can help us with this, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project’s repository. This will allow us to run any scripts immediately before a commit is issued.
      Install both packages locally and save to devDependencies of your package.json. This will make sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev
      

      If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the precommit-hook documentation for options.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         }
      }
      

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run; validate runs our retire command.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         },
         "pre-commit": ["lint", "validate"]
      }
      

      Keep in mind that pre-commit hooks can be very useful for all sorts of checking of things immediately before your code is committed. For example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. On-line tool: you can simply enter your web application’s URL and the resource will be analysed

requireSafe

It provides “intentful auditing as a stream of intel for bithound”. I guess watch this space; in speaking with Adam Baldwin, there does not appear to be much happening here yet.

bithound

In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages. Some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions
  3. Many packages are still consuming the vulnerable unpatched versions of the packages that have published patched versions
    • So although we could have removed a much larger number of vulnerable packages due to their persistence in depending on unpatched packages, we have not. I think this mostly comes down to lack of visibility, awareness and education. This is exactly what I’m trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured not to analyse some files. Very large repositories are prevented from being analysed, due to large-scale performance issues.

bithound analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure. Assuming this is based on the known vulnerabilities (41 node advisories at the time of writing this)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your project and global packages and check that none of them are in the advisories, but that would be more work, and who is going to remember to do it all the time?
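
For example, the top level globally installed packages can be listed with the following command, and then checked against the advisories by hand:

# Top level globally installed packages.
npm ls -g --depth=0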

For .Net developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are they testing that something can not happen? That is where the likes of RetireJS come in.

Process

There is a danger of implementing too much manual process, thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so that developers do not have to think about it, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), having to context switch while a legal review and/or manual code review takes place will cause friction and reduce the team’s performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member doing a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside of the team who does not have skin in the game or the localised understanding that the people working on the project have.

Maintaining a list of the approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.

Tooling

Using the likes of pre-commit hooks, the other tooling options detailed in the Countermeasures section and creating scripts to do most of the work for us is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers’ way. A good way to do this is to ask the developers how it should be done; they know what will get in their way. In order for the process to be a success, the person(s) mandating it will need to get solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, product owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone’s life harder than it already is. Everyone has their own agendas; rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.


 

Risks and Countermeasures to the Management of Application Secrets

September 17, 2015

Risks

  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise

Exploitability: MODERATE

The reason I’ve tagged this as moderate is that if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison’s entire business relied on its commitment to keep its clients’ data (37 million of them) secret, and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security, but ironically still proudly displays a graphic with the phrase “trusted security award” on its homepage.

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users’ passwords cracked in days (why so fast? because they used simple hashing, specifically SHA-1), and eBay, with 145 million active buyers affected. Many others come to light regularly.

Are you using well salted and quality strong key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords collected.
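
As a minimal sketch of that last check, assuming you keep such a list as a lower-cased JSON array (the file name is an illustrative assumption):

// common-passwords.json is assumed to be a JSON array of known weak
// passwords, lower-cased, e.g. ["password", "123456", "qwerty"].
var commonPasswords = require('./common-passwords.json');

function isPasswordAcceptable(candidate) {
   return commonPasswords.indexOf(candidate.toLowerCase()) === -1;
}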

Countermeasures

Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make compromise harder, like many of the layers already discussed in my book.

Find out how secret the supposedly secret data being sent over the network actually is, and consider your internal network just as malicious as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.


You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. Then, when it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it; essentially just making discovery a little more inconvenient, like SSHing on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files

Prevention

Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files and use different configuration files for different deployments and consider keeping them out of source control.

Here are some examples using the node-config module.

node-config

node-config is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install (note that the NPM package for node-config is named config): from the command line within the root directory of your NodeJS application, run:

npm install config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

var path = require('path');

// Due to a bug in node-config, the if statement is required before config is required.
// https://github.com/lorenwest/node-config/issues/202
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Wherever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order which you specify by file naming convention, which provides a lot of flexibility and which caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. These files are to be managed by external configuration management tools, build scripts, etc. Thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

OurApp/
|
+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
|
+-- routes
| |
| +-- home.js
| |
| +-- ...
|
+-- app.js (entry point)
|
+-- ...

The contents of the above example configuration files may look like the following:

// default.js
module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // supported-services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
}
// devbox1-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
}
// devbox2-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
}
// stagingbox-staging.js
{
}
// prodbox-production.js
{
}
// local.js. Build creates this file.
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
}

node-config also:

  • Provides command line overrides, thus allowing you to override configuration values at application start from the command line
  • Allows for the overriding of environment variables with custom environment variables from a custom-environment-variables.json file (see the example below)
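
As a minimal sketch of the second point, a custom-environment-variables.json like the following would source the enquiries password from an environment variable; the variable name is an illustrative assumption.

{
   "enquiries": {
      "pass": "ENQUIRIES_PASS"
   }
}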

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that.
There are different answers for different platforms, none of which provide complete security, if there is such a thing; instead they focus on different levels of obscurity.

Windows

Store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential. Use a SqlServer connection string with Trusted_Connection=yes.
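
For example, a connection string using the trusted connection (so no password appears in configuration) might look like the following; the server and database names are illustrative:

Server=MyDbServer;Database=MyAppDb;Trusted_Connection=yes;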

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut down. The hashes can be sniffed from the wire in transit. The hashes can be pulled from a running machine’s memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash; trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

Encrypt sections of a web, executable, machine-level or application-level configuration file with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (uses DPAPI) or RSAProtectedConfigurationProvider (uses RSA). The -pd switch is used to decrypt. Or programmatically:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ConnectionString;

Of course there is a problem with this also. DPAPI uses LSASS, from whose memory an attacker can again extract the hash. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways using PSCredential and SecureString. They all seem to rely on DPAPI which as mentioned uses LSASS which is open for exploitation.

Credential Guard and Device Guard leverage virtualisation-based security. By the look of it, they still use LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better: tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run directly on physical hardware; it can not run on virtual machines. This alone rules out many deployments.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.

Bromium

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

These seem like bold claims.

Also worth considering is that Microsoft’s new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.

Linux

Containers also help to provide some form of isolation, allowing you to have only the user accounts needed to do what is necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web applications files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run and a very limited set of users have any sort of access to the machine. If you are using Linux Containers then you can reduce this even more if it is not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like all of the closed source OS vendors trying to pimp their latest money spinning products. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time, if it has not happened already.

Least Privilege

Prevention

An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data-store, create that connection with a user’s credentials that are only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions to the application to run stored queries. This way, if/when an attacker is able to compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot anyway.

Location

Prevention

Put your services like data-stores on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible and with the least privileges as possible.

Data-store Compromise

Prevention

As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • All following logins should be instructed to change passwords (a minimal sketch of this follows)
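
Below is a minimal sketch of the idea as Express middleware; the breachDetected flag and the user properties are illustrative assumptions.

var config = require('config');

function enforcePasswordChangeOnBreach(req, res, next) {
   // breachDetected would be set by your intrusion detection mechanisms;
   // passwordChangedSinceBreach is assumed to be tracked per user.
   if (config.breachDetected && req.user && !req.user.passwordChangedSinceBreach) {
      return res.redirect('/change-password');
   }
   next();
}

// Wired up before the routes:
// app.use(enforcePasswordChangeOnBreach);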

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the following key derivation functions (KDFs) for all sensitive data, not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths; encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow. Used in a process commonly known as key stretching. The process of key stretching in terms of how long it takes can be tuned by increasing or decreasing the number of cycles used. Often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users’ credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats’ hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received, along with a unique salt. The salt should be stored with the hashed data.

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged with one of the mentioned KDFs. Why? Because the hashing speed can not be slowed as hardware continues to get faster. Many organisations that have had their data-stores stolen, and they continue to be stolen on a weekly basis, could avoid their secrets being compromised simply by using a decent KDF with salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 and the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user-replaceable parts like PBKDF2; its PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain an understanding of at least all three KDFs. PBKDF2 is the oldest, so it is the most battle tested, but lessons learnt from it have been taken into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt from the Blowfish cipher, to be very slow to initiate, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts, often found in the GPUs of the day (1999).
The hashing functions that PBKDF2 uses were a lot easier to speed up, due to ease of parallelisation, as opposed to the Eksblowfish cipher attributes: far greater memory required for each hash, and small, frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware.

The sensitive data stored within a data-store should be the output of one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs.
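To make that concrete, here is a minimal sketch of key stretching in Node using the built-in crypto module’s PBKDF2 implementation. The iteration count, key length and digest are illustrative only; tune the iterations until verification takes as long as your peak-load response time can tolerate. (The digest argument requires a reasonably recent Node.)

var crypto = require('crypto');

function protectSecret(secret, onProtected) {
   // A unique, random salt per secret, stored alongside the derived key.
   var salt = crypto.randomBytes(16);
   var iterations = 100000; // Illustrative only. Tune for your hardware.
   var keyLength = 64;
   crypto.pbkdf2(secret, salt, iterations, keyLength, 'sha256', function (error, derivedKey) {
      if (error) return onProtected(error);
      // Persist all three: verification re-runs the same derivation and compares.
      onProtected(null, {
         salt: salt.toString('hex'),
         iterations: iterations,
         derivedKey: derivedKey.toString('hex')
      });
   });
}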

bcrypt brute-forcing

With hardware sporting high FPGA counts, brute-forcing bcrypt is now becoming feasible.

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

When moving any secrets from source code to configuration files, there is a risk that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the secrets are still in source control history.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, and so on. There is a risk that you do not use this power and flexibility to your best advantage. Learn the ins and outs of whatever system you are using, and leverage its features to do your best at obscuring your secrets and, if possible, securing them.

node-config

Is an excellent configuration package with lots of great features. There is no security provided by node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure surrounding layers have beefed-up security.
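As a minimal sketch of the kind of splitting node-config gives you (the file contents and property names here are made up for illustration): defaults live in config/default.json, overrides in config/production.json which is kept out of source control, and node-config merges them based on NODE_ENV.

// config/default.json
// { "db": { "host": "localhost", "apiKey": "dev-only-key" } }

// config/production.json (deployed outside of source control)
// { "db": { "host": "db.internal", "apiKey": "provided-at-deployment" } }

var config = require('config');

// With NODE_ENV=production, node-config merges production.json over default.json.
var dbHost = config.get('db.host');
var dbApiKey = config.get('db.apiKey');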

Windows

As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around the so-called Microsoft security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

Bromium

Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

There is a risk that people will believe this.

Linux

As with Microsoft’s “virtualisation-based security”, Linux containers may slow system compromise down, but a determined attacker will find ways around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker.
Even when technical security is very good, an experienced attacker will use other mediums to gain what they want, like social engineering, physical compromise, both, or some other attack vectors. Defence in depth is crucial to achieving good security: concentrate on the lowest hanging fruit first and work your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. It is probably not that hard to do, but it does require breadth of thought and time. Some of the areas discussed could be missed. Having more than one person working on the task is often effective, as people can bounce ideas off each other, and each is likely to notice areas the other has missed, and vice versa.

Location

Segmentation is useful, and a common technique for helping to build resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but once implemented it could increase complacency/over-confidence. Always be on your guard. Always expect that although you have done a lot to increase your security stance, a determined and experienced attacker is going to push buttons you may never have realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post-compromise. Create processes (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways to obtain the missing pieces of information they need. Think about the paper shredders and the associated competitions. With patience, most puzzles can be cracked. If the compromise is an opportunistic attack, they will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs here that force us to focus on the adjacent layers. This is never a bad thing though. It helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.

Windows

You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developer’s job more cumbersome. Some of the options provided may be worthwhile doing.

Linux

Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Is something you should be at least considering and probably doing in every case. It is one of those considerations that is worthwhile applying to most layers.

Location

Segmenting resources is a common and effective measure for at least slowing down attacks, and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here go without saying, although many organisations do not do them well, if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any sensitive data in your data-store(s).

Node.js Asynchronicity and Callback Nesting

July 26, 2014

Just a heads up before we get started on this… Chrome DevTools (v35) now has the ability to show the full call stack of asynchronous JavaScript callbacks. As of writing this, if you develop on Linux you’ll want the dev channel. Currently my Linux Mint 13 is three versions behind, so I had to use the dev channel until I upgraded to the LTS release 17 (Qiana).

All code samples can be found at GitHub.

Deep Callback Nesting

AKA callback hell, or the temple of doom. Often the functions that are nested are anonymous, and often they are implicit closures. When it comes to asynchronicity in JavaScript, callbacks are our bread and butter. In saying that, often the best way to use them is by abstracting them behind more elegant APIs.

Being aware of when new functions are created, and when you need to make sure the memory being held by closure is released (dropped out of scope), can be important for code that’s hot; otherwise you’re in danger of introducing subtle memory leaks.

What is it?

Passing functions as arguments to functions which return immediately. The function (callback) that’s passed as an argument will be run at some time in the future, when the potentially time-expensive operation is done. By convention, this callback takes the error as its first parameter (or null on success of the expensive operation). In JavaScript we should never block on potentially time-expensive operations such as I/O or network operations. We only have one thread in JavaScript, so we allow the JavaScript implementation to place our discrete operations on the event queue.

One other point I think is worth mentioning is that we should never call asynchronous callbacks synchronously, unless of course we’re unit testing them, in which case we will rarely be calling them asynchronously. Always allow the JavaScript engine to put the callback into the event queue rather than calling it immediately, even if you already have the result to pass to the callback. By ensuring the callback executes on a subsequent turn of the event loop, you provide strict separation between the callback being allowed to change data that’s shared between itself (usually via closure) and the currently executing function. There are many ways to ensure the callback is run on a subsequent turn of the event loop: asynchronous APIs like setTimeout and setImmediate allow you to schedule your callback to run on a subsequent turn. The Promises/A+ specification (discussed below), for example, specifies this.
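A minimal sketch of that rule, assuming a hypothetical cache and lookup function: even when the result is already at hand, the callback is scheduled with setImmediate rather than invoked directly, so callers always observe the same asynchronous behaviour.

var cache = {};

function fetchUser(id, callback) {
   if (cache[id]) {
      // The result is already known, but we still defer the error-first
      // callback to a subsequent turn of the event loop.
      setImmediate(function () {
         callback(null, cache[id]);
      });
      return;
   }
   // performExpensiveLookup is hypothetical: some async I/O, error-first like the rest of this post's examples.
   performExpensiveLookup(id, function (error, user) {
      if (error) return callback(error);
      cache[id] = user;
      callback(null, user);
   });
}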

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var nestedCoffee = sUTDirectory('nestedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the ugly nested callback coffee machine', function (done) {

      var result = function (error, state) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: ' + state.description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(error, expectedErrorOutCome, 'brew encountered an error. The following are the error details: ' + error.message);
         }
         done();
      };

      nestedCoffee().brew(result);
   });
});

The System Under Test

'use strict';

module.exports = function nestedCoffee() {

   // We don't do instant coffee ####################################

   var boilJug = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
   };
   var addInstantCoffeePowder = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Crappy instant coffee powder is being added.');
   };
   var addSugar = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Sugar is being added.');
   };
   var addBoilingWater = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Boiling water is being added.');
   };
   var stir = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Coffee is being stirred. Hmm...');
   };

   // We only do real coffee ########################################

   var heatEspressoMachine = function (state, callback) {
      var error = undefined;
      var wrappedCallback = function () {
         console.log('Espresso machine heating cycle is done.');
         if(!error) {
            callback(error, state);
         } else
            console.log('wrappedCallback encountered an error. The following are the error details: ' + error);
      };
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      // If there is an error, wrap callback with our own error function

      // Even if you call setTimeout with a time of 0 ms, the callback you pass is placed on the event queue to be called on a subsequent turn of the event loop.
      // Also be aware that setTimeout has a minimum granularity of 4ms for timers nested more than 5 deep. For several reasons we prefer to use setImmediate if we don't want a 4ms minimum wait.
      // setImmediate will schedule your callbacks on the next turn of the event loop, but it goes about it in a smarter way. Read more about it here: https://developer.mozilla.org/en-US/docs/Web/API/Window.setImmediate
      // If you are using setImmediate and it's not available in the browser, use the polyfill: https://github.com/YuzuJS/setImmediate
      // For this, we need to wait for our huge hunk of copper to heat up, which takes a lot longer than a few milliseconds.
      setTimeout(
         // Once espresso machine is hot callback will be invoked on the next turn of the event loop...
         wrappedCallback, espressoMachineHeatTime.milliseconds
      );
   };
   var grindDoseTampBeans = function (state, callback) {
      // Perform long running action.
      console.log('We are now grinding, dosing, then tamping our dose.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var mountPortaFilter = function (state, callback) {
      // Perform long running action.
      console.log('Porta filter is now being mounted.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var positionCup = function (state, callback) {
      // Perform long running action.
      console.log('Placing cup under portafilter.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var preInfuse = function (state, callback) {
      // Perform long running action.
      console.log('10 second preinfuse now taking place.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var extract = function (state, callback) {
      // Perform long running action.
      console.log('Cranking lever down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.

      // Uncomment the below to test the error.
      //callback({message: 'Oh no, something has gone wrong!'})
      callback(null, state);
   };
   var espressoMachineHeatTime = {
      // if you don't want to wait for the machine to heat up assign minutes: 0.2.
      minutes: 30,
      get milliseconds() {
         return this.minutes * 60000;
      }
   };
   var state = {
      description: ''
      // Other properties
   };
   var brew = function (onCompletion) {
      // Some prep work here possibly.
      heatEspressoMachine(state, function (err, resultFromHeatEspressoMachine) {
         if(!err) {
            grindDoseTampBeans(state, function (err, resultFromGrindDoseTampBeans) {
               if(!err) {
                  mountPortaFilter(state, function (err, resultFromMountPortaFilter) {
                     if(!err) {
                        positionCup(state, function (err, resultFromPositionCup) {
                           if(!err) {
                              preInfuse(state, function (err, resultFromPreInfuse) {
                                 if(!err) {
                                    extract(state, function (err, resultFromExtract) {
                                       if(!err)
                                          onCompletion(null, state);
                                       else
                                          onCompletion(err, null);
                                    });
                                 } else
                                    onCompletion(err, null);
                              });
                           } else
                              onCompletion(err, null);
                        });
                     } else
                        onCompletion(err, null);
                  });
               } else
                  onCompletion(err, null);
            });
         } else
            onCompletion(err, null);
      });
   };
   return {
      // Publicise brew.
      brew: brew
   };
};

What’s wrong with it?

  1. It’s hard to read, reason about and maintain
  2. The debugging experience isn’t very informative
  3. It creates more garbage than adding your functions to a prototype
  4. Dangers of leaking memory due to retaining closure references
  5. Many more…

What’s right with it?

  • It’s asynchronous

Closures are one of the language features in JavaScript that they got right. There are often issues in how we use them though. Be very careful of what you’re doing with closures. If you’ve got hot code, don’t create a new function every time you want to execute it.
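A contrived sketch of that advice (the order objects are made up for illustration): the first version creates a fresh callback function on every call, the second creates it once and reuses it.

function processOrdersWasteful(orders) {
   // A new callback function (and closure) is created on every call.
   orders.forEach(function (order) {
      console.log(order.id);
   });
}

function logOrder(order) {
   console.log(order.id);
}
function processOrders(orders) {
   // The same function object is reused on every call.
   orders.forEach(logOrder);
}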

Resources

  • Chapter 7 Concurrency of the Effective JavaScript book by David Herman

 

Alternative Approaches

These range from marginally good approaches to better approaches. Keep in mind that all these techniques add value, and some make more sense in some situations than others. They are all approaches for making callback hell more manageable, often encapsulating it completely, so much so that the underlying workings are no longer just a bunch of callbacks, but rather well thought out implementations offering up a consistent, well recognised API. Try them all, get used to them all, then pick the one that suits your particular situation. The first two examples from here are blocking though, so I wouldn’t use them as they are; they are just an example of how to make some improvements.

Name your anonymous functions

  1. They’ll be easier to read and understand
  2. You’ll get a much better debugging experience, as stack traces will reference named functions rather than “anonymous function”
  3. You’ll know where the source of an exception was
  4. Reveals your intent without adding comments
  5. In itself will allow you to keep your nesting shallow
  6. A first step to creating more extensible code

We’ve made some improvements in the next two examples, but introduced blocking in the Array.prototype.forEach loop, which we really don’t want to do.

Example of Anonymous Functions

var boilJug = function () {
   // Perform long running action
};
var addInstantCoffeePowder = function () {
   // Perform long running action
   console.log('Crappy instant coffee powder is being added.');
};
var addSugar = function () {
   // Perform long running action
   console.log('Sugar is being added.');
};
var addBoilingWater = function () {
   // Perform long running action
   console.log('Boiling water is being added.');
};
var stir = function () {
   // Perform long running action
   console.log('Coffee is being stirred. Hmm...');
};
var heatEspressoMachine = function () {
   // Flick switch, check water.
   console.log('Espresso machine is being turned on and is now heating.');
};
var grindDoseTampBeans = function () {
   // Perform long running action
   console.log('We are now grinding, dosing, then tamping our dose.');
};
var mountPortaFilter = function () {
   // Perform long running action
   console.log('Portafilter is now being mounted.');
};
var positionCup = function () {
   // Perform long running action
   console.log('Placing cup under portafilter.');
};
var preInfuse = function () {
   // Perform long running action
   console.log('10 second preinfuse now taking place.');
};
var extract = function () {
   // Perform long running action
   console.log('Cranking lever down and extracting pure goodness.');
};

(function () {
   // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
   return [
      'heatEspressoMachine',
      'grindDoseTampBeans',
      'mountPortaFilter',
      'positionCup',
      'preInfuse',
      'extract',
   ].forEach(
      function (brewStep) {
         this[brewStep]();
      }
   );
}());


Example of Named Functions

Now satisfies all the points above, providing the same output. Hopefully you’ll be able to see a few other issues I’ve addressed with this example. We’re also no longer clobbering the global scope. We can now also make any of the other types of coffee simply with an additional single line function call, so we’re removing duplication.

var BINARYMIST = (function (binMist) {
   binMist.coffee = {
      action: function (step) {

         return {
            boilJug: function () {
               // Perform long running action
            },
            addInstantCoffeePowder: function () {
               // Perform long running action
               console.log('Crappy instant coffee powder is being added.');
            },
            addSugar: function () {
               // Perform long running action
               console.log('Sugar is being added.');
            },
            addBoilingWater: function () {
               // Perform long running action
               console.log('Boiling water is being added.');
            },
            stir: function () {
               // Perform long running action
               console.log('Coffee is being stirred. Hmm...');
            },
            heatEspressoMachine: function () {
               // Flick switch, check water.
               console.log('Espresso machine is being turned on and is now heating.');
            },
            grindDoseTampBeans: function () {
               // Perform long running action
               console.log('We are now grinding, dosing, then tamping our dose.');
            },
            mountPortaFilter: function () {
               // Perform long running action
               console.log('Portafilter is now being mounted.');
            },
            positionCup: function () {
               // Perform long running action
               console.log('Placing cup under portafilter.');
            },
            preInfuse: function () {
               // Perform long running action
               console.log('10 second preinfuse now taking place.');
            },
            extract: function () {
               // Perform long running action
               console.log('Cranking lever down and extracting pure goodness.');
            }
         }[step]();
      },
      coffeeType: function (type) {
         return {
            'cappuccino': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'instant': {
               brewSteps: function () {
                  return [
                     'addInstantCoffeePowder',
                     'addSugar',
                     'addBoilingWater',
                     'stir'
                  ];
               }
            },
            'macchiato': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'mocha': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'short black': {
               brewSteps: function () {
                  return [
                     'heatEspressoMachine',
                     'grindDoseTampBeans',
                     'mountPortaFilter',
                     'positionCup',
                     'preInfuse',
                     'extract',
                  ];
               }
            }
         }[type];
      },
      'brew': function (requestedCoffeeType) {
         var that = this;
         var brewSteps = this.coffeeType(requestedCoffeeType).brewSteps();
         // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
         brewSteps.forEach(function runCoffeeMakingStep(brewStep) {
            that.action(brewStep);
         });
      }
   };
   return binMist;

} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.coffee.brew('short black');



Web Workers

I’ll address these in another post.

Create Modules

Everywhere.

Legacy Modules (Server or Client side)

AMD Modules using RequireJS

CommonJS type Modules in Node.js

In most of the examples I’ve created in this post I’ve exported the system under test (SUT) modules and then required them into the test. Node modules are very easy to create and consume. requireFrom is a great way to require your local modules without explicit directory traversal, thus removing the need to change your require statements when you move your files that are requiring your modules.

NPM Packages

Browserify

Here we get to consume npm packages in the browser.

Universal Module Definition (UMD)

ES6 Modules

That’s right, we’re getting modules as part of the specification (15.2). Check out this post by Axel Rauschmayer to get you started.


Recursion

I’m not going to go into this here, but recursion can be used as a lightweight solution to provide some logic that determines when to run the next asynchronous piece of work. Item 64 “Use Recursion for Asynchronous Loops” of the Effective JavaScript book provides some great examples. Do yourself a favour and get a copy of David Herman’s book. Oh, and we’re also getting tail-call optimisation in ES6.
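To give a flavour of the technique, here is a sketch along the lines of Item 64; downloadOneAsync is hypothetical, and error-first like the rest of the examples in this post. The recursive helper only kicks off the next asynchronous attempt once the previous one has failed, so there is no blocking loop and no deep nesting.

function downloadOneOfAsync(urls, onSuccess, onFailure) {
   function tryNext(i) {
      if (i >= urls.length) {
         onFailure(new Error('all downloads failed'));
         return;
      }
      // Attempt the current URL; recurse to the next only on failure.
      downloadOneAsync(urls[i], onSuccess, function () {
         tryNext(i + 1);
      });
   }
   tryNext(0);
}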

 


EventEmitter

Still creates more garbage unless your functions are on the prototype, but does provide asynchronicity. Now, we can put our functions on the prototype, but then they’ll all be public, and if they’re part of a process then we don’t want our coffee process spilling all its secrets about how it makes perfect coffee. In saying that, if our code is hot and we’ve profiled it and it’s a stand-out for using too much memory, we could refactor EventEmittedCoffee to have its function declarations added to EventEmittedCoffee.prototype, and perhaps hidden another way, but I wouldn’t worry about it until it’s been proven to be using too much memory.

Events are used in the well known Gang of Four Observer (behavioural) pattern (which I discussed the C# implementation of here) and, at a higher level, in the Enterprise Integration Publish/Subscribe pattern. The Observer pattern is used in quite a few other patterns also; the ones that spring to mind are Model View Presenter and Model View Controller. The pub/sub pattern is slightly different to the Observer in that it has a topic/event channel that sits between the publisher and the subscriber, and it uses contractual messages to encapsulate and transmit its events.

Here’s an example of the EventEmitter …

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var eventEmittedCoffee = sUTDirectory('eventEmittedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the event emitted coffee machine', function (done) {

      function handleSuccess(state) {
         var stateOutCome = 'The state of the ordered coffee is: ' + state.description;
         stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         done();
      }

      function handleFailure(error) {
         assert.fail(error, 'brew encountered an error. The following are the error details: ' + error.message);
         done();
      }

      // We could even assign multiple event handlers to the same event. We're not here, but we could.
      eventEmittedCoffee.on('successfulOrder', handleSuccess).on('failedOrder', handleFailure);

      eventEmittedCoffee.brew();
   });
});

The System Under Test

'use strict';

var events = require('events'); // Core node module.
var util = require('util'); // Core node module.

var eventEmittedCoffee;
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: ''
};

function EventEmittedCoffee() {

   var eventEmittedCoffee = this;

   function heatEspressoMachine(state) {
      // No need for callbacks. We can emit a failedOrder event at any stage and any subscribers will be notified.

      function emitEspressoMachineHeated() {
         console.log('Espresso machine heating cycle is done.');
         eventEmittedCoffee.emit('espressoMachineHeated', state);
      }
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      setTimeout(
         // Once espresso machine is hot event will be emitted on the next turn of the event loop...
         emitEspressoMachineHeated, espressoMachineHeatTime.milliseconds
      );
   }

   function grindDoseTampBeans(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('We are now grinding, dosing, then tamping our dose.');
      eventEmittedCoffee.emit('groundDosedTampedBeans', state);
   }

   function mountPortaFilter(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Porta filter is now being mounted.');
      eventEmittedCoffee.emit('portaFilterMounted', state);
   }

   function positionCup(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Placing cup under portafilter.');
      eventEmittedCoffee.emit('cupPositioned', state);
   }

   function preInfuse(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('10 second preinfuse now taking place.');
      eventEmittedCoffee.emit('preInfused', state);
   }

   function extract(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Cranking lever down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      eventEmittedCoffee.emit('successfulOrder', state);
      // If you want to fail the order, replace the above two lines with the below two lines.
      // state.error = 'Oh no! That extraction came out far too fast.'
      // this.emit('failedOrder', state);
   }

   eventEmittedCoffee.on('timeToHeatEspressoMachine', heatEspressoMachine).
   on('espressoMachineHeated', grindDoseTampBeans).
   on('groundDosedTampedBeans', mountPortaFilter).
   on('portaFilterMounted', positionCup).
   on('cupPositioned', preInfuse).
   on('preInfused', extract);
}

// Make sure util.inherits is before any prototype augmentations, as it seems it clobbers the prototype if it's the other way around.
util.inherits(EventEmittedCoffee, events.EventEmitter);

// Only public method.
EventEmittedCoffee.prototype.brew = function () {
   this.emit('timeToHeatEspressoMachine', state);
};

eventEmittedCoffee = new EventEmittedCoffee();

module.exports = eventEmittedCoffee;

With raw callbacks, we have to pass them (functions) around. With events, we can have many interested parties request (subscribe) to be notified when something that they are interested in happens (the event). The Observer pattern promotes loose coupling, as the thing (publisher) wanting to inform interested parties of specific events has no knowledge of its subscribers; this is essentially what a service is.

Resources


Async.js

Provides a collection of methods on the async object that:

  1. take an array and perform certain actions on each element asynchronously
  2. take a collection of functions to execute in specific orders asynchronously, some based on different criteria. The likes of async.waterfall allow you to pass the results of a previous function to the next (see the sketch after this list). Don’t underestimate these; there are a bunch of very useful routines.
  3. are asynchronous utilities
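As a minimal sketch of point 2, here is async.waterfall passing each function’s results on to the next; the coffee steps are contrived, and any error short-circuits straight to the completion callback.

var async = require('async');

async.waterfall([
   function grindBeans(callback) {
      callback(null, 'ground beans');
   },
   function tampDose(beans, callback) {
      // beans is the result handed on from grindBeans.
      callback(null, beans + ', tamped');
   },
   function extract(dose, callback) {
      callback(null, 'espresso from ' + dose);
   }
], function onCompletion(error, result) {
   if (error) return console.error(error.message);
   console.log(result); // espresso from ground beans, tamped
});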

Here’s a fuller example…

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var asyncCoffee = sUTDirectory('asyncCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the async coffee machine', function (done) {

      var result = function (error, resultsFromAllAsyncSeriesFunctions) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: '
               + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(
               error,
               expectedErrorOutCome,
               'brew encountered an error. The following are the error details. message: '
                  + error.message
                  + '. The finished state of the ordered coffee is: '
                  + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description
            );
         }
         done();
      };

      asyncCoffee().brew(result);
   });
});

The System Under Test

'use strict';

var async = require('async');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: null
};

module.exports = function asyncCoffee() {

   var brew = function (onCompletion) {
      async.series([
         function heatEspressoMachine(heatEspressoMachineDone) {
            // No need for callbacks. We can just pass an error to the async supplied callback at any stage and the onCompletion callback will be invoked with the error and the results immediately.

            function espressoMachineHeated() {
               console.log('Espresso machine heating cycle is done.');
               heatEspressoMachineDone(state.error);
            }
            // Flick switch, check water.
            console.log('Espresso machine has been turned on and is now heating.');
            // Mutate state.
            setTimeout(
               // Once espresso machine is hot, espressoMachineHeated will be invoked on the next turn of the event loop...
               espressoMachineHeated, espressoMachineHeatTime.milliseconds
            );
         },
         function grindDoseTampBeans(grindDoseTampBeansDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('We are now grinding, dosing, then tamping our dose.');
            grindDoseTampBeansDone(state.error);
         },
         function mountPortaFilter(mountPortaFilterDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Porta filter is now being mounted.');
            mountPortaFilterDone(state.error);
         },
         function positionCup(positionCupDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Placing cup under portafilter.');
            positionCupDone(state.error);
         },
         function preInfuse(preInfuseDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('10 second preinfuse now taking place.');
            preInfuseDone(state.error);
         },
         function extract(extractDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Cranking lever down and extracting pure goodness.');
            // If you want to fail the order, uncomment the below line. May as well change the description too.
            // state.error = {message: 'Oh no! That extraction came out far too fast.'};
            state.description = 'beautiful shot!';
            extractDone(state.error, state);

         }
      ],
      onCompletion);
   };

   return {
      // Publicise brew.
      brew: brew
   };
};

Other Similar Useful libraries


Adding to Prototype

Check out my post on prototypes. If profiling reveals you’re spending too much memory or processing time creating the objects that contain the functions that are going to be used asynchronously, you could add the functions to the object’s prototype, like we did with the public brew method of the EventEmitter example above.
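A minimal sketch of the idea, with contrived names: both orders share the one describe function via the prototype, rather than each instance carrying its own copy.

function CoffeeOrder(type) {
   this.type = type;
}
// One function object, shared by every instance.
CoffeeOrder.prototype.describe = function () {
   return 'One ' + this.type + ', coming up.';
};

var first = new CoffeeOrder('short black');
var second = new CoffeeOrder('macchiato');
console.log(first.describe === second.describe); // true: shared, so less garbage.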


Promises

The concepts of promises and futures, which are quite similar, have been around a long time; their roots go back to 1976 and 1977 respectively. Often the terms are used interchangeably, but they are not the same thing. You can think of the language agnostic promise as a proxy for a value provided by an asynchronous action’s eventual success or failure. A promise is something tangible, something you can pass around and interact with, all before or after it’s resolved or failed. The abstract concept of the future (discussed below) has a value that can be mutated once, from pending to either fulfilled or rejected, on fulfilment or rejection of the promise.

Promises provide a pattern that abstracts asynchronous operations in code, thus making them easier to reason about. Promises, which abstract callbacks, can be passed around and the methods on them chained (AKA promise pipelining). Removing temporary variables makes the code more concise, and clearer to readers, as the extra assignments are an unnecessary step.

JavaScript Promises

A promise (a Promises/A+ thenable) is an object, or sometimes more specifically a function, with a then (JavaScript specific) method.
A promise must only change its state once, and can only change from pending to fulfilled or from pending to rejected.

Semantically, a future is a read-only property.
A future can only have its value set (sometimes called resolved, fulfilled or bound) once, by one or more (via promise pipelining) associated promises.
Futures are not discussed explicitly in Promises/A+, although they are discussed implicitly in the promise resolution procedure, which takes a promise as the first argument and a value as the second argument.
The idea is that the promise (first argument) adopts the state of the second argument if the second argument is a thenable (a promise object with a then method). This procedure facilitates the concept of the “future”.
We’re getting promises in ES6. That means JavaScript implementers are starting to include them as part of the language. Until we get there, we can use the likes of these libraries.
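A minimal sketch of that single state change, runnable anywhere a native or polyfilled ES6 Promise is available: the promise moves from pending to fulfilled exactly once, and the second resolve is silently ignored.

var order = new Promise(function (resolve, reject) {
   setTimeout(function () {
      resolve('beautiful shot!');
      resolve('ignored: the state has already changed once');
   }, 10);
});

order.then(function (description) {
   console.log('The state of the ordered coffee is: ' + description);
});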

One of the first JavaScript promise drafts was the Promises/A (1) proposal. The next stage in defining a standardised form of promises for JavaScript was the Promises/A+ (2) specification, which also has some good resources for those looking to use and implement promises based on the new spec. Just keep in mind that this has nothing to do with the ECMAScript specification, although it is/was a precursor.

Then we have Dominic Denicola’s promises-unwrapping repository (3) for those that want to stay ahead of the solidified ES6 draft spec (4). Dominic’s repo is slightly less formal and may be a little easier to read, but if you want gospel, just go for the ES6 draft spec sections 25.4 Promise Objects and 7.5 Operations on Promise Objects, which are looking fairly solid now.

Points 1, 2, 3 and 4 trace the evolutionary path of promises in JavaScript.

Node Support

Although it was decided to drop promises from Node core, we’re getting them in ES6 anyway. V8 already supports the spec, and we also have plenty of libraries to choose from.

Node, on the other hand, is lagging. Node stable 0.10.29 still looks to be using version 3.14.5.9 of V8, which, reading the V8 change log and the Node release notes, looks to be about 17 months behind the first signs of native ES6 promises.

So to get started using promises in your projects, whether you’re programming server or client side, you can:

  1. Use one of the excellent Promises/A+ conformant libraries which will give you the flexibility of lots of features if that’s what you need, or
  2. Use the native browser promise API; all the ES6 methods on Promise work in Chrome (V8, so soon in Node), Firefox and Opera. Then polyfill using the likes of yepnope, or just check the existence of the methods you require and load them on an as-needed basis. The cujojs or jakearchibald shims would be good starting points.

For my examples I’ve decided to use when.js for several reasons.

  • Currently in Node we have no native support. As stated above, this will be changing soon, but for now we’d be polyfilling everything.
  • Its performance is the least bad of the Promises/A+ compliant libraries at this stage. Although don’t get too hung up on perf stats; in most cases they won’t matter in the context of your module. If you’re concerned, profile your running code.
  • It wraps non-Promises/A+ compliant promise look-a-likes, like jQuery’s Deferred, which will forever remain broken.
  • It’s compliant with version 1.1 of the spec.

The following example continues with the coffee making procedure concept. Now we’ve taken this from raw callbacks, to the EventEmitter, to the Async library, and finally to what I think is the best option for most of our asynchronous work, not only in Node but in JavaScript anywhere: promises. Now this is just one way to implement the example; there are many, and probably some that are even more elegant. Go forth, explore and experiment.

The Test

var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the coffee machine of promises', function (done) {

      var numberOfSteps = 7;
      // We could use a then just as we've used the promise's done method, but done is semantically the better choice. It makes a bigger noise about handling errors. Read the docs for more info.
      promisedCoffee().brew().done(
         function handleValue(valueOrErrorFromPromiseChain) {
            console.log(valueOrErrorFromPromiseChain);
            valueOrErrorFromPromiseChain.errors.should.have.length(0);
            valueOrErrorFromPromiseChain.stepResults.should.have.length(numberOfSteps);
            done();
         }
      );
   });

});

The System Under Test

'use strict';

var when = require('when');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   errors: [],
   stepResults: []
};
function CustomError(message) {
   this.message = message;
   this.name = 'CustomError';
   // Note: a constructor invoked with new ignores a primitive return value, so there is no point returning anything here.
}

function heatEspressoMachine(resolve, reject) {
   state.stepResults.push('Espresso machine has been turned on and is now heating.');
   function espressoMachineHeated() {
      var result;
      // result will be wrapped in a new promise and provided as the parameter in the promises then methods first argument.
      result = 'Espresso machine heating cycle is done.';
      // result could also be assigned another promise
      resolve(result);
      // Or call the reject
      //reject(new Error('Something screwed up here')); // You'll know where it originated from. You'll get full stack trace.
   }
   // Flick switch, check water.
   console.log('Espresso machine has been turned on and is now heating.');
   // Mutate state.
   setTimeout(
      // Once espresso machine is hot, espressoMachineHeated will be invoked on the next turn of the event loop...
      espressoMachineHeated, espressoMachineHeatTime.milliseconds
   );
}

// The promise takes care of all the asynchronous stuff without a lot of thought required.
var promisedCoffee = when.promise(heatEspressoMachine).then(
   function fulfillGrindDoseTampBeans(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'We are now grinding, dosing, then tamping our dose.';
      // Or if something goes wrong:
      // throw new Error('Something screwed up here'); // You'll know where it originated from. You'll get full stack trace.
   },
   function rejectGrindDoseTampBeans(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillMountPortaFilter(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Porta filter is now being mounted.';
   },
   function rejectMountPortaFilter(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new Error(error.message);
   }
).then(
   function fulfillPositionCup(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Placing cup under portafilter.';
   },
   function rejectPositionCup(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillPreInfuse(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return '10 second preinfuse now taking place.';
   },
   function rejectPreInfuse(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillExtract(result) {
      state.stepResults.push(result);
      state.description = 'beautiful shot!';
      state.stepResults.push('Cranking lever down and extracting pure goodness.');
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return state;
   },
   function rejectExtract(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);

   }
).catch(CustomError, function (e) {
      // Only deal with the error type that we know about.
      // All other errors will propagate to the next catch. whenjs also has a finally if you need it.
      // Todo: KimC. Do the dealing with e.
      e.newCustomErrorInformation = 'Ok, so we have now dealt with the error in our custom error handler.';
      return e;
   }
).catch(function (e) {
      // Handle other errors
      e.newUnknownErrorInformation = 'Hmm, we have an unknown error.';
      return e;
   }
);

function brew() {
   return promisedCoffee;
}

// when's promise.catch with a predicate is only supposed to catch errors derived from the native Error (etc.) functions.
// For that predicate check to work, CustomError's prototype needs to be an object inheriting from Error.prototype,
// not the Error constructor itself. Assigning CustomError.prototype = Error appears to be what produced the
// TypeError from within the library in my earlier tests.
CustomError.prototype = Object.create(Error.prototype);
CustomError.prototype.constructor = CustomError;

module.exports = function promisedCoffee() {
   return {
      // Publicise brew.
      brew: brew
   };
};

Resources


Testing Asynchronous Code

All of the tests I demonstrated above have been integration tests. Usually I’d unit test the functions individually, not worrying about the intrinsically asynchronous code, as most of it isn’t mine anyway; it’s courtesy of the EventEmitter, Async and other libraries, and there is often no point in testing what the library maintainer already tests.

When you’re driving your development with tests, there should be little code testing the asynchronicity. Most of your code should be testable synchronously. This is a big part of the reason we drive our development with tests: to make sure our code is easy to test. Testing asynchronous code is a pain, so don’t do much of it. Test your asynchronous code, yes, but most of your business logic should be just functions that you join together asynchronously. When you’re unit testing, you should be testing units, not asynchronous code. When you’re concerned about testing your asynchronicity, that’s integration testing, of which you should have a lot less. I discuss the ratios here.

As of 1.18.0, Mocha has baked-in support for promises. For a fluent style of testing promises we have Chai as Promised.
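A minimal sketch of both, assuming Mocha 1.18.0 or later with the chai and chai-as-promised packages installed; promisedCoffee is the module from the example above. Returning the promise from the test lets Mocha wait on it, with no done callback required.

var chai = require('chai');
chai.use(require('chai-as-promised'));
chai.should();

var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('promise support in Mocha', function () {
   // As with the tests above, you'd want a generous this.timeout while the machine heats.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('resolves with a finished coffee', function () {
      // Mocha treats the returned promise's rejection as a test failure.
      return promisedCoffee().brew()
         .should.eventually.have.property('description', 'beautiful shot!');
   });
});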

Resources


There are plenty of other resources around working with promises in JavaScript. For myself, I found that I needed to actually work with them to solidify my understanding. With the Chrome DevTools async option, we’ll soon have support for promise chaining.

Other Excellent Resources

And again all of the code samples can be found at GitHub.

Exploring JavaScript Prototypes

June 28, 2014

Not to be confused with the GoF Prototype pattern, which defines a lot more than the simple JavaScript prototype, although the abstract concept of the prototype is the same.

My intention with this post is to arm our developers with enough information around JavaScript prototypes to know when they are the right tool for the job, as opposed to other constructs, when considering how to create polymorphic JavaScript that’s performant and easy to maintain. Often performant code and easy to maintain code are in conflict with each other, i.e. if you want code that’s fast, it’s often hard to read, and if you want code that’s really easy to read, it “may” not be as fast as it could/should be. So we make trade-offs.

Make your code as readable as possible in as many places as possible. The more eyes that are going to be on it, generally the more readable it needs to be. Where performance really matters, we “may” have to carefully sacrifice some precious readability to achieve the essential performance required. This really needs measuring though, because often we think we’re writing fast code that either doesn’t matter or just isn’t fast. So we should always favour readability, then profile our running application in an environment as close to production as possible. This removes the guess work, which we usually get wrong anyway. I’m currently working on a Node.js performance blog post in which I’ll attempt to address many things to do with performance. What I’m finding a lot of the time is that techniques I’ve been told are essential for fast code are all too often incorrect. We must measure.

Some background

Before we do the deep dive thing, let’s step back for a bit. Why do prototypes matter in JavaScript? What do prototypes do for us? Where do prototypes fit into the design philosophy of JavaScript?

What do JavaScript Prototypes do for us?

Removal of Code Duplication (DRY)

Prototypes are excellent for reducing unnecessary duplication of members that would otherwise need garbage collecting.

Performance

Prototypes also allow us to maximise economy of memory, thus reducing Garbage Collection (GC) activity, thus increasing performance. There are other ways to get this performance though. Prototypes, which obtain re-use from the parent object, are not always the best way to get the performance benefits we crave. You can see here, under the “Cached Functions in the Module Pattern” section, that using closure (although not mentioned by name), which is what modules leverage, also gives us the benefit of re-use, as the free variable in the outer scope is baked into the closure. Just check the jsperf for proof.
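A minimal sketch of that cached-function idea, with contrived names: the helper is created once and baked into the module’s closure, so consumers get re-use without touching a prototype.

var coffeeGrinder = (function () {
   // Created once; every call to prepare below closes over the same function.
   function grind(beans) {
      return 'ground ' + beans;
   }
   return {
      prepare: function (beans) {
         return grind(beans);
      }
   };
}());

console.log(coffeeGrinder.prepare('arabica')); // ground arabica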

The Design Philosophy of JavaScript and Prototypes

Prototypal inheritance was implemented in JavaScript as a key technique to support the object oriented principle of polymorphism. Prototypal inheritance provides the flexibility of being able to choose what the more specific object is going to inherit, rather than in the classical paradigm where you’re forced to inherit all the base class’s baggage whether you want it or not.

Four obvious ways to achieve polymorphism:

  1. Composition (creating an object that composes a contract to another object)(has-a relationship). Learn the pros and cons. Use when it makes sense
  2. Prototypal inheritance (is-a relationship). Learn the pros and cons. Use when it makes sense
  3. Monkey Patching courtesy of call, apply and bind
  4. Classical inheritance (is-a relationship). Why would you? Please don’t try this at home in production 😉

Of course there are other ways, and some languages have unique techniques to achieve polymorphism, like templates in C++, generics in C#, first-class polymorphism in Haskell, multimethods in Clojure, etc.

Diving into the Implementation Details

Before we dive into Prototypes…

What does Composition look like?

There are many great examples of how composing our objects from other object interfaces, whether they’re owned by the composing object (composition) or aggregated from independent objects (aggregation), provides us with the building blocks to create complex objects that look and behave the way we want them to. This generally provides us with plenty of flexibility to swap implementations at will, thus overcoming the tight coupling of classical inheritance.

Many of the Gang of Four (GoF) design patterns we know and love leverage composition and/or aggregation to help create polymorphic objects. There is a difference between aggregation and composition, but both concepts are often used loosely to just mean creating objects that contain other objects. Composition implies ownership, aggregation doesn’t have to. With composition, when the owning object is destroyed, so are the objects that are contained within the owner. This is not necessarily the case for aggregation.

An example: each coffee shop is composed of its own unique culture. Each coffee shop fosters a different type of culture, and that unique culture is an aggregation of its people and their attributes. Now, the people that aggregate the specific coffee shop culture can also be part of other cultures that are completely separate to the coffee shop’s culture; they could even leave the current culture without destroying it. But the culture of one specific coffee shop cannot be the same culture as another coffee shop’s. Every coffee shop’s culture is unique, even if only slightly.

programmer show pony

Below we have a coffeeShop that composes a culture. We use the Strategy pattern within the culture to aggregate the customers. The Visit function provides an interface to encapsulate the Concrete Strategy, which is passed as an argument to the Visit constructor and closed over by the describe method.

// Context component of Strategy pattern.
var Programmer = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Context component of Strategy pattern.
var ShowPony = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Add more persons to make a unique culture.

var customer = {
   setCasualVisitStrategy: function (casualVisit) {
      this.casualVisit = casualVisit;
   },
   setBusinessVisitStrategy: function (businessVisit) {
      this.businessVisit = businessVisit;
   },
   doCasualVisit: function () {
      console.log(this.casualVisit.describe());
   },
   doBusinessVisit: function () {
      console.log(this.businessVisit.describe());
   }
};

// Strategy component of Strategy pattern.
var Visit = function (description) {
   // description is closed over, so it's private. Check my last post on closures for more detail
   this.describe = function () {
      return description;
   };
};

var coffeeShop;

Programmer.prototype = customer;
ShowPony.prototype = customer;

coffeeShop = (function () {
   var culture = {};
   var flavourOfCulture = '';
   // Composes culture. The specific type of culture exists to this coffee shop alone.
   var whatWeWantExposed = {
      culture: {
         looksLike: function () {
            console.log(flavourOfCulture);

         }
      }
   };

   // Other properties ...
   (function createCulture() {
      var programmer = new Programmer();
      var showPony = new ShowPony();
      var i = 0;
      var propertyName;

      programmer.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.')
      );
      programmer.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.')
      );
      showPony.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony cycles to coffee shop in lycra pretending he\'s just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.')
      );
      showPony.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony meets business friends in suits. Pretends to work on his MacBook Pro. Drinks latte.')
      );

      culture.members = [programmer, showPony, /*lots more*/];

      for (i = 0; i < culture.members.length; i++) {
         for (propertyName in culture.members[i]) {
            if (culture.members[i].hasOwnProperty(propertyName)) {
               flavourOfCulture += culture.members[i][propertyName].describe() + '\n';
            }
         }
      }

   }());
   return whatWeWantExposed;
}());

coffeeShop.culture.looksLike();
// Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.
// Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.
// Show pony cycles to coffee shop in lycra pretending he's just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.
// Show pony meets business friends in suits. Pretends to work on his MacBook Pro. Drinks latte.

Now for Prototype

EcmaScript 5

In ES5 we’re a bit spoilt as we have a selection of methods on Object that help with prototypal inheritance.

Object.create takes an argument that’s an object, plus an optional properties object, which is an EcmaScript 5 property descriptor (like the second parameter of Object.defineProperties). It returns a new object with the first argument assigned as its prototype, and the properties described in the property descriptor (if present) added to the returned object.

// The object we use as the prototype for hobbit.
var person = {
   personType: 'Unknown',
   backingOccupation: 'Unknown occupation',
   age: 'Unknown'
};

var hobbit = Object.create(person);

Object.defineProperties(person, {
   'typeOfPerson': {
      enumerable: true,
      value: function () {
         if(arguments.length === 0)
            return this.personType;
         else if(arguments.length === 1 && typeof arguments[0] === 'string')
            this.personType = arguments[0];
         else
            throw 'Number of arguments not supported. Pass 0 arguments to get. Pass 1 string argument to set.';
      }
   },
   'greeting': {
      enumerable: true,
      value: function () {
         console.log('Hi, I\'m a ' + this.typeOfPerson() + ' type of person.');
      }
   },
   'occupation': {
      enumerable: true,
      get: function () {return this.backingOccupation;},
      // Would need to add some parameter checking on the setter.
      set: function (value) {this.backingOccupation = value;}
   }
});

// Add another property to hobbit.
hobbit.fatAndHairyFeet = 'Yes indeed!';
console.log(hobbit.fatAndHairyFeet); // 'Yes indeed!'
// prototype is unaffected
console.log(person.fatAndHairyFeet); // undefined

console.log(hobbit.typeOfPerson()); // 'Unknown'
hobbit.typeOfPerson('short and hairy');
console.log(hobbit.typeOfPerson()); // 'short and hairy'
console.log(person.typeOfPerson()); // 'Unknown'

hobbit.greeting(); // 'Hi, I'm a short and hairy type of person.'

person.greeting(); // 'Hi, I'm a Unknown type of person.'

console.log(hobbit.age); // 'Unknown'
hobbit.age = 'young';
console.log(hobbit.age); // 'young'
console.log(person.age); // 'Unknown'

console.log(hobbit.occupation); // 'Unknown occupation'
hobbit.occupation = 'mushroom hunter';
console.log(hobbit.occupation); // 'mushroom hunter'
console.log(person.occupation); // 'Unknown occupation'

Object.getPrototypeOf

console.log(Object.getPrototypeOf(hobbit));
// Returns the following:
// { personType: 'Unknown',
//   backingOccupation: 'Unknown occupation',
//   age: 'Unknown',
//   typeOfPerson: [Function],
//   greeting: [Function],
//   occupation: [Getter/Setter] }

 

EcmaScript 3

One of the benefits of programming in ES3 is that we have to do more work ourselves, so we learn how some of the lower level language constructs actually work, rather than just playing with syntactic sugar. Syntactic sugar is generally great for productivity, but I still think there is danger of running into problems when you don’t really understand what’s happening under the covers.

So let’s check out what really goes on with…

Prototypal Inheritance

What is a Prototype?

All objects have a prototype, but not all objects reveal their prototype directly by a property called prototype. All prototypes are objects.

So, if all objects have a prototype and all prototypes are objects, we have an inheritance chain right? That’s right. See the debug image below.

Any property you add to an object’s prototype is shared, through inheritance, by all objects sharing that prototype.

So, if all objects have a prototype, where is it stored? All objects in JavaScript have an internal property called [[Prototype]]. You won’t see this internal property, but it’s where the prototype is stored. How the prototype is accessed depends on whether the object is a plain object (an object literal or an object returned from a constructor) or a function. I discuss how this works below. When you dereference an object in order to find a property, the engine will first look on the current object, then the prototype of the current object, then the prototype of that prototype, and so on up the prototype chain. It’s a good idea to try and keep your inheritance hierarchies as shallow as possible for performance reasons.
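A tiny sketch of that lookup walking the chain (the Person and Hobbit names are purely illustrative):

var Person = function () {};
Person.prototype.surname = 'Baggins';

var Hobbit = function () {};
Hobbit.prototype = new Person(); // Hobbit instances now inherit via a Person instance.

var bilbo = new Hobbit();

// surname isn't an own property of bilbo, nor of Hobbit.prototype,
// so the engine walks up the chain to Person.prototype to find it.
console.log(bilbo.surname); // 'Baggins'
console.log(bilbo.hasOwnProperty('surname')); // false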

Prototypes in Functions

Every function object is created with a prototype property, whether it’s used as a constructor or not. The prototype property’s value is an object with a constructor property, whose value is the function itself. See the below example to help clear it up. The ES3 and ES5 specs (section 13.2) say pretty much the same thing.

var MyConstructor = function () {};
console.log(MyConstructor.prototype.constructor === MyConstructor); // true

And to help with visualising, see the below example and debug image. myObj and myObjLiteral are used in the two code examples below the debug image.

var MyConstructor = function () {};
var myObj = new MyConstructor();
var myObjLiteral = {};

Accessing JavaScript Prototypes

 

Up above in the composition example, where we assigned customer to Programmer.prototype and ShowPony.prototype, you can see how we access the prototype property of a constructor. We can also access the prototype via the object returned from the constructor, like this:

var MyConstructor = function () {};
var myObj = new MyConstructor();
console.log(myObj.constructor.prototype === MyConstructor.prototype); // true

We can also do similar with an object literal. See below.

Prototypes in Objects that are Not Functions

Objects that are not functions are not created with a prototype property (all objects do have the hidden internal [[Prototype]] property though). Now, sometimes you’ll see Object.prototype talked about. Even MDN makes the matter a little confusing IMHO. In this case, Object is the Object constructor function, and as discussed above, all functions have the prototype property.

When we create object literals, the object we get is the same as if we ran the expression new Object(); (see ES3 and ES5 section 11.1.5). So although we can access the prototype property of functions (which may or may not be constructors), there is no such exposed prototype property directly on objects returned by constructors or on object literals.
There is, however, conveniently a constructor property directly on all objects returned by constructors and on object literals (as you can think of their construction procedures as producing the same result). This looks similar to the above debug image:

var myObjLiteral = {};
            // ES3 ->                              // ES5 ->
console.log(myObjLiteral.constructor.prototype === Object.getPrototypeOf(myObjLiteral)); // true

I’ve purposely avoided discussing the likes of __proto__ as it’s not defined in EcmaScript and there’s no valid reason to use something that’s not standard.

Polyfilling to ES5

Now to get a couple of terms used in web development well defined before we start talking about them:

  • A shim is a library that brings a new API to an environment that doesn’t support it, implemented using only what that older environment does support.
  • A polyfill is some code in the form of a function, module, plugin, etc. that provides the functionality of a later environment (ES5 for example) if it doesn’t exist in an older environment (ES3 for example). The polyfill often acts as a fallback. The programmer writes code targeting the newer environment as though the older environment doesn’t exist, and when the code is pulled into the older environment the polyfill kicks into action, as the new language feature isn’t implemented natively.

If you’re supporting older browsers that don’t have full support for ES5, you can still use the ES5 additions so long as you provide ES5 polyfills. es5-shim is a good choice for this. Check out the html5please ECMAScript 5 section for a little more detail. Also check out Kangax’s ECMAScript 5 compatibility table to see which browsers currently support which ES5 language features. A good approach, and one I like to take, is to use a custom build of a library such as Lo-Dash to provide a layer of abstraction, so I don’t need to care whether it’ll be running in an ES5 or ES3 environment. Then for anything that the abstraction library doesn’t provide, I’ll use a customised polyfill library such as es5-shim to fall back on. I prefer Lo-Dash over Underscore too, as I think Lo-Dash is starting to leave Underscore behind in terms of performance and features. I also like to use the likes of yepnope.js to conditionally load my polyfills based on whether they’re actually needed in the user’s browser, as shown in the sketch below. There’s no point in loading them if we already have browser support, now is there?
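A minimal sketch of that conditional loading with yepnope.js (the script path is hypothetical):

yepnope({
   // If the ES5 methods we rely on already exist, load nothing.
   test: typeof Object.create === 'function' && typeof Object.getPrototypeOf === 'function',
   // Otherwise pull in the polyfill library.
   nope: ['scripts/es5-shim.min.js']
});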

Polyfilling Object.create as discussed above, to ES5

You could use something like the following, which doesn’t accommodate an object of property descriptors. Or just go with one of the next two choices, which is what I do:

  1. Use an abstraction like the lodash create method which takes an optional second argument object of properties and treats them the same way
  2. Use a polyfill like this one.
if (typeof Object.create !== 'function') {
   (function () {
      var F = function () {};
      Object.create = function (proto) {
         if (arguments.length > 1) {
            throw Error('Second argument not supported');
         }
         if (proto === null) {
            throw Error('Cannot set a null [[Prototype]]');
         }
         if (typeof proto !== 'object') {
            throw TypeError('Argument must be an object');
         }
         F.prototype = proto;
         return new F();
      };
   })();
}

Polyfilling Object.getPrototypeOf as discussed above, to ES5

  1. Use an abstraction like the lodash isPlainObject method (source here), or…
  2. Use a polyfill like this one; a minimal sketch follows. Just keep in mind the gotcha.
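For reference, a common minimal sketch of such a polyfill looks like the following. The gotcha: it relies on the constructor property, so it gives a wrong answer if constructor has been overwritten (which is exactly what happens when you reassign a function’s prototype without restoring constructor):

if (typeof Object.getPrototypeOf !== 'function') {
   Object.getPrototypeOf = function (obj) {
      // Wrong answer if obj.constructor has been reassigned. That's the gotcha.
      return obj.constructor ? obj.constructor.prototype : null;
   };
}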

 

EcmaScript 6

I got a bit excited when I saw an earlier proposed prototype-for (also seen with the name prototype-of) operator: <|. Additional example here. This would have provided a terse syntax for providing an object literal with an object to use as its prototype. It looks like it lost traction though, as it was removed in the June 15, 2012 draft.

There are a few extra methods in ES6 that deal with prototypes, but on trawling the EcmaScript 6 draft spec, nothing at this stage really stands out as revolutionising the way I write JavaScript or being a mental effort/time saver for me. Of course I may have missed something. I’d like to hear from anyone that has seen something interesting to the contrary.

Yes, we’re getting classes in ES6, but they are just an abstraction giving us a terse and declarative mechanism for doing what we already do with the functions we use as constructors, their prototypes, and the objects (or instances if you will) that are returned from those functions we’ve chosen to act as constructors.
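To illustrate, using the draft class syntax as it stands at the time of writing, the two forms below are essentially equivalent:

// ES6 draft class syntax.
class Hobbit {
   constructor(name) {
      this.name = name;
   }
   greeting() {
      return 'Hi, I\'m ' + this.name;
   }
}

// Essentially sugar for the constructor function and prototype we already write.
var HobbitES5 = function (name) {
   this.name = name;
};
HobbitES5.prototype.greeting = function () {
   return 'Hi, I\'m ' + this.name;
};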

Architectural Ideas that Prototypes Help With

This is a common pattern that I often use for domain objects that are fairly hot: a single set of accessor properties added to the business object’s prototype, as you can see in the definePublicAccessors call within my Hobbit module (Hobbit.js) below.

First a quick look at the tests/spec to drive the development. This is being run using mocha with the help of a Makefile in the root directory of my module under test.

  • Makefile
# The relevant section.
unit-test:
	@NODE_ENV=test ./node_modules/.bin/mocha \
		test/unit/*test.js test/unit/**/*test.js
  • Hobbit-test.js
var requireFrom = require('requirefrom');
var assert = require('assert');
var should = require('should');
var shire = requireFrom('shire/');

// Hardcode $NODE_ENV=test for debugging.
process.env.NODE_ENV='test';

describe('shire/Hobbit business object unit suite', function () {
   it('Should be able to instantiate a shire/Hobbit business object.', function (done) {
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();

      // Properties should be declared but not initialised.
      // No good checking for undefined alone, as that would be true whether it was declared or not.

      hobbit.should.have.property('id');
      (hobbit.id === undefined).should.be.true;
      hobbit.should.have.property('typeOfPerson');
      (hobbit.typeOfPerson === undefined).should.be.true;
      hobbit.should.have.property('greeting');
      (hobbit.greeting === undefined).should.be.true;
      hobbit.should.have.property('occupation');
      (hobbit.occupation === undefined).should.be.true;
      hobbit.should.have.property('emailFrom');
      (hobbit.emailFrom === undefined).should.be.true;
      hobbit.should.have.property('name');
      (hobbit.name === undefined).should.be.true;      

      done();
   });

   it('Should be able to set and get all properties of a shire/Hobbit business object.', function (done){
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      // Arrange
      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();      

      // Act
      hobbit.id = '32f4d01e-74dc-45e8-b3a8-9aa24840bc6a';
      hobbit.typeOfPerson = 'short and hairy';
      hobbit.greeting = {
         intro: 'Hi, I\'m a ',
         outro: ' type of person.'};
      hobbit.occupation = 'mushroom hunter';
      hobbit.emailFrom = 'Bilbo.Baggins@theshire.arn';
      hobbit.name = 'Bilbo Baggins';

      // Assert
      hobbit.id.should.equal('32f4d01e-74dc-45e8-b3a8-9aa24840bc6a');
      hobbit.typeOfPerson.should.equal('short and hairy');
      hobbit.greeting.should.equal('Hi, I\'m a short and hairy type of person.');
      hobbit.occupation.should.equal('mushroom hunter');
      hobbit.emailFrom.should.equal('Bilbo.Baggins@theshire.arn');
      hobbit.name.should.eql('Bilbo Baggins');

      done();
   });
});
  • Now the business object itself Hobbit.js

    Now what’s happening here is that on instance creation of a new Hobbit, the empty members object created in the constructor (via Object.defineProperty) is the only instance data. All of the Hobbit‘s accessor properties are defined once per export of the Hobbit module, which is assigned the constructor function object. So what we store on each instance are just the values assigned in the Act section of Hobbit-test.js; that’s just the strings. So very little space is used for each instance returned by invoking the Hobbit constructor that the Hobbit module exports.
// Could achieve a cleaner syntax with Object.create, but constructor functions are a little faster.
// As this will be hot code, it makes sense to favour performance in this case.
// Of course profiling may say it's not worth it, in which case this could be rewritten.
var Hobbit = (function () {
   function Hobbit (/*Optionally Construct with DTO and serializer*/) {
      // Todo: Implement pattern for enforcing new.
      Object.defineProperty (this, 'members', {
         value: {}
      });
   }

   (function definePublicAccessors (){
      Object.defineProperties(Hobbit.prototype, {
         id: {
            get: function () {return this.members.id;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.id = newValue;
            },
            configurable: false, enumerable: true
         },
         typeOfPerson: {
            get: function () {return this.members.typeOfPerson;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.typeOfPerson = newValue;
            },
            configurable: false, enumerable: true
         },
         greeting: {
            get: function () {
               return this.members.greeting === undefined ?
                  undefined :
               this.members.greeting.intro +
                  this.typeOfPerson +
                  this.members.greeting.outro;
            },
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.greeting = newValue;
            },
            configurable: false, enumerable: true
         },
         occupation: {
            get: function () {return this.members.occupation;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.occupation = newValue;
            },
            configurable: false, enumerable: true
         },
         emailFrom: {
            get: function () {return this.members.emailFrom;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.emailFrom = newValue;
            },
            configurable: false, enumerable: true
         },
         name: {
            get: function () {return this.members.name;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.name = newValue;
            },
            configurable: false, enumerable: true
         }
      });

   })();
   return Hobbit;
})();

// JSON.parse provides a hydrated hobbit from the DTO.
//    So you would call this to populate this DO from a DTO
// JSON.stringify provides the DTO from a hydrated hobbit

module.exports = Hobbit;
  • Now running the test
lets test

 

Flyweights using Prototypes

A couple of interesting examples of the Flyweight pattern implemented in JavaScript are by the GoF and Addy Osmani.

The GoF’s implementation of the FlyweightFactory makes extensive use of closure to store its flyweights, and uses aggregation in order to create its ConcreteFlyweight from the Flyweight. It doesn’t use prototypes.

Addy Osmani has a free book “JavaScript Design Patterns” containing an example of the Flyweight pattern, which IMO is considerably simpler and more elegant. In saying that, the GoF want you to buy their product, so maybe they do a better job when you give them money. In this example closure is also used extensively, but it’s a good example of how to leverage prototypes to share your less specific behaviour.

Mixins using Prototypes

Again if you check out the last example of Mixins in Addy Osmani’s book, there is quite an elegant example.

We can even do multiple inheritance using mixins, by adding whichever properties we want from whatever objects we want to the target object’s prototype, as the sketch below shows.
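A minimal sketch of the idea, with purely illustrative mixin objects (not the book’s example):

var canEat = {
   eat: function () { return this.name + ' eats.'; }
};
var canWalk = {
   walk: function () { return this.name + ' walks.'; }
};

var Hobbit = function (name) {
   this.name = name;
};

// Copy just the properties we want onto the target prototype.
function mixInto(target) {
   for (var i = 1; i < arguments.length; i++) {
      var source = arguments[i];
      for (var key in source) {
         if (source.hasOwnProperty(key)) {
            target[key] = source[key];
         }
      }
   }
   return target;
}

mixInto(Hobbit.prototype, canEat, canWalk);

var frodo = new Hobbit('Frodo');
console.log(frodo.eat());  // 'Frodo eats.'
console.log(frodo.walk()); // 'Frodo walks.'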

This is a similar concept to the post I wrote on Monkey Patching.

Mixins support the Open/Closed principle, where objects should be able to have their behaviour modified without their source code being altered.

Keep in mind though, that you shouldn’t just expect all consumers to know you’ve added additional behaviour. So think this through before using it.

Factory functions using Prototypes

Again, a decent example of the Factory function pattern is implemented in the “JavaScript Design Patterns” book here; a stripped-down sketch follows.
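A stripped-down sketch of the idea (illustrative names, not the book’s code): a factory that hands out objects sharing behaviour via a common prototype.

var hobbitPrototype = {
   greet: function () { return 'Hi, I\'m ' + this.name; }
};

function createHobbit(name) {
   // Shared behaviour lives on the prototype; per-instance state on the object.
   var hobbit = Object.create(hobbitPrototype);
   hobbit.name = name;
   return hobbit;
}

var bilbo = createHobbit('Bilbo');
console.log(bilbo.greet()); // 'Hi, I'm Bilbo'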

There are many other areas you can get benefits from using prototypes in your code.

Prototypal Inheritance: Not Right for Every Job

Prototypes give us the power to share only the secrets of others that need to be shared. We have fine grained control. If you’re thinking of using inheritance, be it classical or prototypal, ask yourself “Is the class/object I’m wanting to provide a parent for truly a more specific version of the proposed parent?”. This is the idea behind the Liskov Substitution Principle (LSP) and Design by Contract (DbC), which I posted on here. Don’t just inherit because it’s convenient. In my “javascript object creation patterns” post I also discussed inheritance.

The general consensus is that composition should be favoured over inheritance. If it makes sense to compose once you’ve considered all options, then go for it; if not, look at inheritance. Why should composition be favoured over inheritance? Because when you compose your object from a contract of another object, your sub object (the object doing the composing) doesn’t inherit anything or need to know anything about the composed object’s secrets. The object being composed has complete freedom as to how it minds its own business, so long as it provides a consistent contract for consumers. This gives us the much loved polymorphism we crave without the crazy tight coupling of classical inheritance (inherit everything, even your father’s drinking problem :-s).

I’m pretty much in agreement with this when we’re talking about classical inheritance. When it comes to prototypal inheritance, we have a lot more flexibility and control around how we use the object that we’re deriving from and exactly what we inherit from it. So we don’t suffer the same “all or nothing” buy in and tight coupling as we do with classical inheritance. We get to pick just the good parts from an object that we decide we want as our parent. The other thing to consider is the memory savings of inheriting from a prototype rather than achieving your polymorphic behaviour by way of composition, which has us creating the composed object each time we want another specific object.

So in JavaScript, we really are spoilt for choice when it comes to how we go about getting our fix of polymorphism.

When surveys are carried out on…

Why Software Projects Fail

the following are the most common causes:

  • Ambiguous Requirements
  • Poor Stakeholder Involvement
  • Unrealistic Expectations
  • Poor Management
  • Poor Staffing (not enough of the right skills)
  • Poor Teamwork
  • Forever Changing Requirements
  • Poor Leadership
  • Cultural & Ethical Misalignment
  • Inadequate Communication

You’ll notice that technical reasons are very low on the list of why projects fail. You can see the same point mentioned by many of our software greats, but when a project does fail due to technical reasons, it’s usually because the complexity got out of hand. So as developers when focusing on the art of creating good code, our primary concern should be to reduce complexity, thus enhance the ability to maintain the code going forward.

I think one of Edsger W. Dijkstra’s phrases sums it up nicely. “Simplicity is prerequisite for reliability”.

Stratification is a design principle that focuses on keeping the different layers in code autonomous, i.e. you should be able to work in one layer without having to go up or down adjacent layers in order to fully understand the layer you’re working in. A layer’s internals should be able to move independently of the adjacent layers, without affecting them or being concerned that a change in its own implementation will affect other layers. Modules are an excellent design pattern, used heavily to build medium to large JavaScript applications.

With composition, if you’re composing against contracts, this is exactly what you get.

References and interesting reads

 

Exploring JavaScript Closures

May 31, 2014

Just before we get started, we’ll be using the terms lexical scope and dynamic scope a bit. In computer science the term lexical scope is synonymous with static scope.

  • lexical or static scope is where name resolution of “part of a program” depends on the location in the source code
  • dynamic scope is where name resolution depends on the program state (dependent on the execution context or calling context) when the name is encountered.

What are Closures?

Now establishing the formal definition has been quite an interesting journey, with quite a few sources not quite getting it right. Although the ES3 spec talks about closure, there is no formal definition of what it actually is. The ES5 spec on the other hand does discuss what closure is in two distinct locations.

  1. The “11.1.5 Object Initialiser” section, under the part that talks about accessor properties. This is the relevant text (in relation to getters): “Let closure be the result of creating a new Function object as specified in 13.2 with an empty parameter list (that’s getter specific) and body specified by FunctionBody. Pass in the LexicalEnvironment of the running execution context as the Scope.”
  2. The “13 Function Definition” section. This is the relevant text: “Let closure be the result of creating a new Function object as specified in 13.2 with parameters specified by FormalParameterList (which are optional) and body specified by FunctionBody. Pass in funcEnv as the Scope.”

Now what are the differences here that stand out?

  1. We see that point 1 specifies a function object with no parameters, and point 2 specifies some parameters (optional). So from this we can establish that it’s irrelevant to the creation of a closure whether arguments are passed or not.
  2. Point 1 also mentions passing in the LexicalEnvironment, whereas point 2 passes in funcEnv. funcEnv is the result of “calling NewDeclarativeEnvironment passing the running execution context‘s LexicalEnvironment as the argument“. So basically there is no difference.

Now 13.2 just specifies how functions are created. Given an optional parameter list, a body, a LexicalEnvironment specified by Scope, and a Boolean flag (for strict mode (ignore this for the purposes of establishing a formal definition)). Now the Scope mentioned above is the lexical environment of the running execution context (discussed here in depth) at creation time. The Scope is actually [[Scope]] (an internal property).

The ES6 spec draft runs along the same vein.

Let’s get abstract

Every problem in computer science is just a more specific problem of a problem we’re familiar with in the natural world. So often it helps to find the abstract problem that we are already familiar with in order to help us understand the more specific problem we are dealing with. Patterns are an example of this. Before I was programming as a profession I was a carpenter. I find just about every problem I deal with in programming I’ve already dealt with in physical carpentry and at a higher level still with physical architecture.

In search of the true formal definition, I also looked outside of JavaScript at the language agnostic term, which should just be an abstraction of the JavaScript closure anyway. Yip… Wikipedia’s definition: “In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. A closure—unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside its immediate lexical scope.”

My abstract formal definition

A closure is a function containing a reference to the lexical (static) environment it is defined within at creation time, not call time (ES5 spec 13.2.1), via the function object’s internal [[Scope]] property (ES5 spec 13.2.9). The closure is closed over its parent lexical environment and all of its properties. You can access these properties as variables, but not as properties, because you don’t have access to the internal [[Scope]] property directly in order to reference its properties. So this example fails. More correctly (ES5 spec 8.6.2): “Of the standard built-in ECMAScript objects, only Function objects implement [[Scope]].”

var outerObjectLiteral = {

   x: 10,

   foo: function () {
      console.log(x); // ReferenceError: x is not defined (x is a property of outerObjectLiteral, not a variable in scope).
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

See here for an explanation on the differences between properties and variables. That’s basically it. Of course there are many ways we can use a closure and that’s often where confusion creeps in about what a closure actually is and is not. Feel free to bring your perspective on this in the comments section below.

When is a closure born?

So let’s get this closure closing over something. JavaScript addresses the funarg problem with closure.

var x = 10;

var outerObjectLiteral = {   

   foo: function () {
      // Because our internal [[Scope]] property now has a property (more specifically a free variable) x, we can access it directly.
      console.log(x); // Writes 10 to the console.
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

The closures are created when the function expressions assigned to foo and invokeMe are evaluated, as the object literal is constructed. Inside foo we have access to the closed over lexical environment, so when we print x we get 10, which is the value of x on the [[Scope]] that our closure was statically bound to at function object creation time (not the dynamically scoped x = 20 inside invokeMe). Now of course you can change the value of the free variable x and it’ll be reflected wherever you use the closed over variable, because the closure was bound to the free variable x, not the value of the free variable x.

This is what you’ll see in Chrome Dev Tools when execution is paused inside invokeMe. Bear in mind though that both the foo and invokeMe closures were created when the object literal was evaluated.

Closure

Now I’m going to attempt to explain what the structure looks like in a simplified form with a simple hash. I don’t know how it’s actually implemented in the various EcmaScript implementations, but I do know what the specification (the single source of truth) tells us. It should look something like the following:

////////////////
// pseudocode //
////////////////
foo = closure {
   FormalParameterList: {}, // Optional
   FunctionBody: <...>,
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10
   }
}

The closure is born when the function is created (“the result of creating a new Function object” as stated above). Not when it’s returned by the outer function (i.e. the upwards funarg problem) and not when it’s invoked, as Angus Croll mentioned here under the “The [[Scope]] property” section.

Angus quotes the ES5 spec 10.4.3.5-7. On studying this section I’m pretty sure it is meant for the context of actually creating the function object rather than invoking an existing function object. The clauses I’ve detailed above (11.1.5 Object Initialiser and 13 Function Definition), confirm this.

The ES6 spec draft “14.1.22 Runtime Semantics: Evaluation” also confirms this theory. Although it’s titled Runtime Semantics, it has several points that confirm my theory: the so-called runtime semantics are the runtime semantics of function object creation rather than function object invocation, as some of the steps specified are FunctionCreate, MakeMethod and MakeConstructor (not FunctionInvoke, InvokeMethod or InvokeConstructor). The ES6 spec draft “14.2.17 Runtime Semantics: Evaluation” and also 14.3.8 are similar.

Why do we care about Closure?

Without closures, we wouldn’t have the concept of modules which I’ve discussed in depth here.

Modules are used very heavily in JavaScript both client and server side (think NPM), and for good reason. Until ES6 there is no baked in module system. In ES6 modules become part of the language. The entire Node.js ecosystem exists to install modules via the CommonJS initiative. Modules on the client side most often use the Asynchronous Module Definition (AMD) implementation RequireJS to load modules, but can also use the likes of CommonJS via Browserify, which allows us to load node.js packages in the browser.

As of writing this, the TC39 committee have looked at both the AMD and CommonJS approaches and come up with something completely different for the ES6 module draft spec. Modules provide another mechanism for not allowing secrets to leak into the global object.

Modules are not new. David Parnas wrote a paper titled “On the Criteria To Be Used in Decomposing Systems into Modules” in 1972. It explores the idea of secrets: design and implementation decisions that should be hidden from the rest of the program.

Here is an example of the Module pattern that includes both private and public methods. my.moduleMethod has access to private variables outside of its VariableEnvironment (the current scope) via the Environment record, which references the outer LexicalEnvironment via its internal [[Scope]] property.
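The linked example isn’t reproduced here, but a minimal sketch of the same shape (with illustrative names) looks like this:

var my = (function () {
   // Secrets: unreachable from outside the module.
   var privateCounter = 0;

   function privateMethod() {
      privateCounter += 1;
      return privateCounter;
   }

   // The public API. moduleMethod closes over privateCounter and
   // privateMethod via its internal [[Scope]] property.
   return {
      moduleMethod: function () {
         return 'called ' + privateMethod() + ' time(s)';
      }
   };
}());

console.log(my.moduleMethod()); // 'called 1 time(s)'
console.log(my.privateCounter); // undefined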

Information hiding: state and implementation. In JavaScript we don’t have access modifiers, but we don’t need them either. We can hide our secrets with various patterns. Closure is a key concept for many of these patterns. Closure is a key building block for helping us to program against contract rather than implementation, helping us to form consistent abstractions, giving us the ability to engage with a concept while safely ignoring some of its details, thus hiding unnecessary complexity from consumers.

I think Steve McConnell explains this very well in his classic “Code Complete” book. Steve uses the house abstraction as his metaphor. “People use abstraction continuously. If you had to deal with individual wood fibers, varnish molecules, and steel molecules every time you used your front door, you’d hardly make it in or out of your house each day. Abstraction is a big part of how we deal with complexity in the real world. Software developers sometimes build systems at the wood-fiber, varnish-molecule, and steel-molecule level. This makes the systems overly complex and intellectually hard to manage. When programmers fail to provide larger programming abstractions, the system itself sometimes fails to make it through the front door. Good programmers create abstractions at the routine-interface level, class-interface level, and package-interface level-in other words, the doorknob level, door level, and house level-and that supports faster and safer programming.

Encapsulation: you can not look at the details (the internal implementation, the secrets).

Partial function application and Currying: I have a set of posts on this topic. Closure is an integral building block of these constructs. Part 1, Part 2 and Part 3.

Functional JavaScript relies heavily on closure.

Are there any Costs or Gotchas of using Closures?

Of course. You didn’t think you’d get all this expressive power without having to think about how you’re going to use it, did you? As we’ve discussed, closures were created to address the funarg problem. In doing that, the closure references the lexical (static) environment of the outer scope. So even once the free variables have fallen out of scope, the closure will still reference them if they were saved at function creation time. They can not be garbage collected until the function that closed over them has itself fallen out of scope, i.e. its reference count is 0.

var x = 10;
var noOneLikesMe = 20;
var globalyAccessiblePrivilegedFunction;

function globalyScopedFunction(z) {

  var noOneLikesMeInner = 40;

  function privilegedFunction() {
    return x + z;
  }

  return privilegedFunction;

}

// This is where privilegedFunction is created.
globalyAccessiblePrivilegedFunction = globalyScopedFunction(30);

// This is where privilegedFunction is applied.
globalyAccessiblePrivilegedFunction();

Now, only the free variables that are needed are saved at function creation time. We can see that when execution is inside globalyScopedFunction, the closure currently in scope has the free variable x saved to it, but not z, noOneLikesMe, or noOneLikesMeInner.

noOneLikesMe

When we enter privilegedFunction, we see the hidden [[Scope]] property has both the outer scope and the global scope saved to it.

TwoClosures

Say, for example, execution has passed the above code snippet. If the closed over variables can still be referenced by calling globalyAccessiblePrivilegedFunction again, then they can not be garbage collected. This is where the upwards funarg problem is frequently abused. If you’ve got hot code that is creating many functions, make sure the functions that are closed over free variables are dropped out of scope as soon as you no longer have a need for them. This way garbage collection can deallocate the memory used by the free variables.

Looking at how the specification would look simplified, we can see that each Environment record inherits what it knows it’s going to need from the Environment record of its lexical parent. This chaining inheritance goes all the way up the lexical hierarchy to the global function object, as seen below. In this case the family tree is quite short. Remember, this structure is formed at function creation time, not invocation time. The free variables (not their values) are statically baked in.

////////////////
// pseudocode //
////////////////
globalyScopedFunction = closure {
   FormalParameterList: { // Optional
      z: 30 // Values updated at invocation time.
   },
   FunctionBody: {
      var noOneLikesMeInner = 40;

      function privilegedFunction() {
         return x + z;
      }

      return privilegedFunction;
   },
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10 // Free variable saved because we know it's going to be used in privilegedFunction.
   },
   privilegedFunction: closure {
      FormalParameterList: {}, // Optional
      FunctionBody: {
         return x + z;
      },
      Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
         x: 10, // Free variable inherited from the outer Environment.
         z: 30 // Formal parameter saved from outer lexical environment.
      }
   }
}

Scope

I discuss closure here very briefly and how it can be used to create block scoped variables prior to block scoping with the let keyword in ES6, which is supposed to be officially approved by December 2014. I discuss scoping here in a little more depth.

Closure misunderstandings

Closures are created when a function is returned

“A closure is formed when one of those inner functions is made accessible outside of the function in which it was contained”, found here, is simply incorrect. There are also a lot of other misconceptions found at that link. I’d advise reading it with a bag of salt.

Now we’ve already addressed this one above, but here is an example that confirms that the closure is in fact created at function creation time, not when the inner function is returned. Yes, it does what it looks like it does. Fiddle with it?

(function () {

   var lexicallyScopedFunction = function () {
      console.log('We\'re in the lexicallyScopedFunction');
   };

   (function innerClosure() {
      lexicallyScopedFunction();
   }());

}());

When execution is inside innerClosure (where lexicallyScopedFunction is invoked), we get to see the closure that was created when the surrounding anonymous function was executed and lexicallyScopedFunction was created.

lexicallyScopedFunction

Closures can create memory leaks

Yes they can, but not if you let the closure go out of scope. Discussed above.

Values of free variables are baked into the Closure

Also untrue. Now I’ve put in-line comments to explain what’s happening here. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      loopCountingFunctions[i] = (function printLoopCount() {
         // What you see here is that each time this code is run, it prints the last value of the loop counter i.
         // Clearly showing that for each new printLoopCount function created and saved to the loopCountingFunctions array,
         // the variable i is saved to the Environment record, not the value of the variable i.
         console.log(i);
      });
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 3
runLoopPrinter[1](); // 3
runLoopPrinter[2](); // 3

An aside… getLoopPrinter is a global function. Once execution is inside getLoopPrinter you get to see that global functions also have closure… supporting my comments above

global functions have closure too

Now in the above example, this is probably not what you want to happen, so how do we give each printLoopCount function its own value? Well, by creating a parameter for each iteration of the loop, each with the new value. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      (function (i) {
         // Now what happens here is each time the above loop runs this code,
         // inside this scope (the scope of this comment) i is a new formal parameter which of course
         // gets statically saved to each printLoopCount functions Environment record (or more simply each closure of printLoopCount).
         loopCountingFunctions[i] = (function printLoopCount() {
            console.log(i);
         });
      })(i);
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 0
runLoopPrinter[1](); // 1
runLoopPrinter[2](); // 2

As always, let me know your thoughts on this post, any thing you think I may have the wrong handle on, or anything that otherwise stood out.

References and interesting reads

Evaluation of AngularJS, EmberJS, BackboneJS + MarionetteJS

December 28, 2013

This post will continue to be modified for at least a month from the publish date. I just didn’t want to wait another month before publishing, so people can start to get some use out of it early. If you have resources, comments, anything you think that could be useful to others, please add a comment and if it makes sense, I may add it to the post. This will also be used as a resource for the attendees to the CHC.JS MV* Battle Royale meet-up.

Recently I’ve undertaken the task of reviewing some JavaScript MV* frameworks to help organise/structure the client side code within an application I’m currently working on. This is about the third time I’ve done this. Each time has been for a different type of application with completely different requirements, frameworks and libraries to consider.
Unlike Angular and Ember, Backbone is a small library. Marionette adds quite a lot of extra functionality and provides some nice abstractions on top. All mentioned frameworks/libraries are free and open source.

I found a useful tool for helping with the selection process about a year ago. It’s called TodoMVC and it contains a generous collection of applications all satisfying the requirements of a single specification (a small web app that allows the person using it to add todo notes etc.). So basically they all do the same thing, but use a different JavaScript framework or library to do it. It’s still being maintained. Addy Osmani’s blog post on the project is here.

The idea is that you can work through a decent size selection of applications that all do the same thing. This assists the R&D developer or architect to make informed decisions on which JavaScript framework or library will suit their purposes, if any. There are also a couple of Todo apps (vanillajs and jquery) that don’t use a framework at all. There’s a template to use as a starting point, so you can create your own.

Just bear in mind though, that the TodoMVC app doesn’t really showcase what Ember and Angular have to offer.

On Addy’s post there is a collection of good points on how to create your selection criteria, under the heading “Our Suggested Criteria For Selecting A Framework”.

I’ve heard a few times that “all you really need to do in order to make an informed decision on which framework or library to go with is just write a small app for each of the frameworks, do a bit of reading and maybe watch a few screen-casts. Shouldn’t take more than a day”. I disagree with this. I don’t think there is any way you can learn all or most of the pros and cons of each framework in a day or even two. Depending on how much time you have, my recommended approach would be to go through the following activities in the following order (give or take), spending as little or as much time as you have, ideally in a few iterations, for each of the offerings you’re investigating.

  1. Listen to a pod-cast (say, on your way to/from a clients or even in your sleep. Good time savers.)
  2. Read some of the documentation
  3. Watch a screen-cast on each one
  4. Play with some examples
  5. Evaluate on features you (definitely or may) want versus features available. Features need to be learned. If you don’t need them, you will probably be better off going with the offering that doesn’t have the features you don’t need, but has the architecture to add them (thinking Backbone) if/when you do need them.
  6. Are the features implemented in an architecture that you believe is good (I.E. are the layers muddied)?
  7. Read some blog posts, tutorials.
  8. Read some opinions and evaluate for yourself.
  9. Start testing its limits
  10. Decide whether you like its opinions imposed
  11. Does it impose enough, or too many, opinions for you and your team?

As the JavaScript MV* landscape is constantly and very quickly changing, the outcome of your evaluation will have a short use-by date.

This is my attempt to distil the attributes of the discussed offerings. I’ve attempted to come at this with an open mind. Hopefully this will help save some work for those that come after me. Lists are sorted in the order of most useful to me. I make no apologies for the abundance of links, as I’ve also used this as a resource collection point, and hope that this post will fall into the category of a “one stop shop” for what I consider to “currently” be the top three contenders in the client side MV* line-up. In saying that though, there are other strong contenders, like Meteor, not discussed here, as it’s more than just a client side MV* framework. Without further ado, here they are…

Angular.js

AngularJS

Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Igor Minár, Miško Hevery, Vojta Jína.
All work at Google.

Backed by the commercial giant Google (you decide whether that’s a good thing).

Community

Conferences

  1. ng-conf

Statistics

  • Version: 1.2.5
  • Payload Size:
    1. development version 716.7kb
    2. minified 99.8kb
    3. minified and compressed < 36kb
  • Age: Initial Github commits: January 2010

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. Angular.js

Screen-casts

  1. AngularJS on YouTube
  2. EggHead.io Lessons

Blog Posts, Tutorials, etc.

  1. Learning AngularJS in one day
  2. Angular docs Tutorial

Features

  • Directives: used via non-HTML-compliant tags, attributes, comments and class names. Although there are options to make the markup compliant: via the class (not recommended) and data attributes.
  • scope: the first half of this video shows how the scope may be confusing to those new to Angular. If I can not tell how code will work without running it, it violates the Principle of Least Astonishment (PoLA). It seems quite clunky to me.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Two-way data binding
  3. All tests run against IE8 (good for those that are locked into legacy MS)
  4. Test driven (and more vocal about it than Ember)
  5. Payload is about 1/3 smaller than Ember

Negative

  1. Steep learning curve compared to Backbone, but not as steep as Ember.
  2. Dirty checking to keep views and models in sync is costly. Ember keeps sync in a more elegant way. Possible perceived downside to this is Ember models have to inherit from DS.Model (next point addresses this as a positive though). Also discussed here under the “Performance issues” heading.
  3. Models are Plain Old JavaScript Objects (POJOs). They don’t have to be anything special. Now there’s an argument here that attempts to explain this as being a selling point of Angular, but in reality what happens is a violation of the Uniform Access Principle, thus creating tight coupling. How’s that? Well, now the view needs to know too much about the model’s members. Discussed in more detail here. For example, if one of the model’s properties is a function, the view has to know this. So you see this sort of thing in the view: {{area()}} (so we’re pulling our JavaScript into our view), whereas with Ember, because its models are well defined and you can use computed properties on them, all the view needs to specify is an identifier, so you’ll see this sort of thing in the view: {{area}}. The Ember model then creates a computed property with the same name (see the sketch below). The opposing view is that in ES5 you can just hide your functions etc. behind property getters and setters. Most developers take the path of least resistance, so I think most will be doing it the wrong way.
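A hedged sketch of what that looks like on the Ember (~1.x) side; the Rectangle model and its attributes are purely illustrative, not code from either framework’s documentation:

App.Rectangle = DS.Model.extend({
   width: DS.attr('number'),
   height: DS.attr('number'),

   // The template just binds to {{area}}; the dependencies on width and
   // height are declared here in the model, not in the view.
   area: function () {
      return this.get('width') * this.get('height');
   }.property('width', 'height')
});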

Interesting Plug-ins

  1. ?

Useful Tools

  • “AngularJS Batarang” for Chrome browser (it’s an extension)

Ember.js

EmberJS

Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Yehuda Katz, formerly of Rails and SproutCore projects.
Tom Dale, Peter Wagenet, Trek Glowacki, Erik Bryn, Kris Selden, Stefan Penner, Leah Silber, Alex Matchneer.

Backed by the JavaScript community.

Community

Conferences

Statistics

  • Version: 1.2.0
  • Payload Size: depends on Handlebars (development version 85kb)
    1. development version 1.1MB
    2. production version 1.0MB
    3. minified and GZipped 67kb
  • Age: Initial Github commits: April 2011

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. JavaScript Jabber Ember Tools
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. JavaScript Jabber Ember.js & Discourse
  4. EmberWatch

Screen-casts

  1. Building an Ember.js Application
  2. Ember101
  3. EmberWatch
  4. tutsplus

Blog Posts, Tutorials, etc.

Features

  • The ember-application class gets added to the root element (body) in the Ember JavaScript file. I was wondering how this class was magically added to the markup. I couldn’t find any documentation on it, so had to look through the JavaScript.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Aggregates model data changes and update the DOM late in the RunLoop.
  3. Well defined models and computed properties (See Angular negative point around this).
  4. Test driven

Negative

  1. Steepest learning curve out of the three. Why? Because there’s more in it. If you need it, great! Maybe you don’t. If not, is the extra learning worth using it? Part of the “more in it” may also be around the elegant way things have been designed, i.e. more constraints to push users down the right path, thus a higher chance of less friction and pain in the future of your application, that is of course if your application does things the way Ember says they must be done. I’m seeing some of these things in the likes of the well defined models and computed properties.
  2. Payload is the largest out of all three.

Interesting Plug-ins

  1. ?

Useful Tools

  • “Ember Inspector” for Chrome browser (it’s an extension)
  • ember-tools. Listen to and/or read the pod-cast linked to above. Provides file organisation, scaffolding, template pre-compilation, generators, CommonJS (that’s node.js style) modules, and other goodies. Useful for setting up your project to conform to the Ember conventions, so you don’t end up fighting them.

Angular versus Ember views

Pod-casts

  1. Angular vs Ember Cage Match NDC

Screen-casts

  1. Angular vs Ember Cage Match NDC

Blog Posts, Tutorials, etc.

  1. Evil Trout Ember versus Angular (possible bias toward Ember)
  2. Why AngularJS beat EmberJS

My Thoughts

Both Frameworks Appear to be Targeting a Similar Problem Space

Don’t believe everything you read. Test it before you buy it. I’ve come across quite a few articles that are just incorrect. Even by reputable people. Sometimes because the frameworks have changed how they do things and/or their documentation has changed. So don’t just take it all at face value. The concept of MVC has changed over the past decade. Although concepts have changed, a pattern doesn’t change; that’s why it’s a pattern. Something everybody familiar with a pattern understands. If an implementation starts to change, then it no longer conforms to the pattern and should not be named after the pattern, as this just brings confusion. Microsoft’s ASP.NET MVC framework is a perfect example of this. It does not follow the MVC pattern (documented in 1979) and so should never have been named MVC. Ask me in the comments to explain if you’re not aware of how this is. In the MVC pattern, Models are not injected into Views by Controllers. With the MVC pattern, Views listen to events from a Model (the View is actually oblivious to the Model), which the Controller has hooked up, since the Controller knows about both the View and the Model. This may not be your understanding of MVC? More than likely this is due to certain frameworks being labelled as MVC when they are not, thus bringing the confusion. The following image provided by the Gang-of-Four depicts the MVC pattern.

MVC

Angular Doesn’t Pretend JavaScript has Real Classes

Personally I find both frameworks have opinions that make me nauseous, like Angular’s scope and Ember’s class-hierarchy abstraction. Yes, Harmony will have pseudo classes for the classical programmers that struggle with JavaScript’s declarative prototypal inheritance (disclaimer: my roots are in classical OO languages). The way I feel about it: say a whole lot of JavaScript programmers started using a classical OO language and decided they didn’t like the way it does classical inheritance, so the classical object oriented language authorities decided to add syntactic sugar on top of the language to make its classical inheritance “look” more like prototypal inheritance for those that struggle with the classical paradigm. Now seriously, why would you muddy the language to cater for those that are not prepared to spend the time learning how it works?

Another, and probably the most obvious, reason why JavaScript didn’t have classes is so that object hierarchies can be built up via composition (only inheriting what is actually needed) rather than having to inherit every member needlessly from a base class (essentially knowing far more than is actually needed). Once you have had to re-factor your way out of a code base that has abused inheritance, creating very tightly coupled code by violating one of the object-oriented design principles (information hiding), the perils of overusing inheritance become very clear.

I’m open to exploring what the other client side JavaScript frameworks and libraries have to offer and I’d love to hear from everyone that’s had experience with them.

Angular and Ember do a Lot For You

With all the bells and whistles, both frameworks impose strong opinions that you must follow in order to make the magic (in a lot of cases, convention) work. Once you’ve learnt Angular and/or Ember, productivity is maximised. But… you must be building your application the way the framework creators want you to. At this stage, I’m not super comfortable with that. This is where Backbone and friends come into their own.


BackboneJS + MarionetteJS

Intro

Backbone is an unopinionated library that has Models and Views, but no Controllers out of the box. That’s right, a library rather than a framework, because your code needs to know about it, rather than it knowing about and executing your code. It does not follow the MVC, MVP or MVVM patterns. Its views and routers act similarly to a controller. Marionette brings the controller to Backbone (if you want or need it), thus you can keep your router doing what it should be doing (just routing, with no controller logic).

What I find strange is that a Backbone view contains a model. I’m not sure I’d even call this an MV* library, as doing so may introduce confusion.

Backbone’s sweet spot is providing the user with brief and casual interaction. It doesn’t provide help or guidance with deallocating memory and detaching events; the assumption is that the user isn’t going to be using the application all day without closing the browser window. In saying that, there are many applications that use Backbone for exactly this type of thing, but they must provide explicit code to release event handlers. Marionette provided some help here for older versions of Backbone, and Backbone has improved things in newer versions. You still need to keep in mind that event handlers must be released (Backbone’s view.remove takes care of this now). Marionette provides abstractions to deal with these, like the close method, which provides a place to add clean-up code and then calls Backbone’s remove. Failing to remove event handlers is the largest cause of memory leaks in Backbone. A quick sketch of the house-keeping follows.
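A minimal sketch (assuming Backbone 1.x, where view.remove calls stopListening): using listenTo rather than model.on means the view records its own bindings and can release them itself.

var MyView = Backbone.View.extend({
   initialize: function () {
      // listenTo records the binding on the view (unlike model.on),
      // so remove/stopListening can release it automatically.
      this.listenTo(this.model, 'change', this.render);
   },
   render: function () {
      this.$el.text(this.model.get('title'));
      return this;
   }
});

var view = new MyView({ model: new Backbone.Model({ title: 'hi' }) });
// Later... removes the element and calls stopListening, releasing the 'change' handler.
view.remove();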

Core Team

Backbone: Jeremy Ashkenas

Marionette: Derick Bailey

Community

IRC: #marionette on FreeNode. Little activity.

Conferences

  1. BackboneConf

Statistics

  • Version: 1.1.0
  • Payload size:
    1. Backbone development version: 59kb
    2. Backbone minified and gzipped: 6.4kb
    3. Depends on Underscore: development version 43kb, or minified and gzipped 4.9kb
  • Age: Backbone’s initial GitHub commits: September 2010

Performance

The second half of this video shows the difference between Backbone and Ember performance. From what I’ve seen to date, in terms of performance Backbone leads, Ember is second and Angular third. You need to decide how much performance matters in your situation, and whether or not it’s “good enough” in the framework/library you choose.

Documentation

Pod-casts

  1. Marionette.js
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. Backbone.js

Screen-casts

  1. How to build modular Backbone applications using MarionetteJS
  2. Tuts+ Intro to Marionette
  3. Plugging in MarionetteJS. This resource is about adding Marionette to a MongoDB document explorer. Also features source code.
  4. Github
  5. BackboneConf 2013 Talks

Blog Posts, Tutorials, etc.

  1. Github
  2. backbone and ember
  3. Marionette Wiki

Books

  1. Backbone Fundamentals

Features

  • ?

Positive

  1. Free to use any templating engine. You can use Underscore’s, as it’s Backbone’s only dependency, or any other of your choosing.
  2. A lot of excellent documentation
  3. Very flexible in how you may want to use it
  4. Minimalist library
  5. Easy to learn (not a lot of it).
  6. Payload including dependencies is the smallest out of all three. About 9 times smaller than Ember.

Negative

  1. No two way data-binding. Although if you want/need it, you could use the likes of the data binding offerings below in the Interesting Plug-ins section.
  2. No provision for handling nested views. This is where the likes of Marionette’s Backbone.BabySitter comes in.
  3. More work is required to build large scale applications than with the likes of Angular or Ember (it is just a library after all).
  4. If your large complex application is written in Backbone, chances are you have added a lot of boiler plate code. Any new developers coming onto the project will have to get up to speed on this code. If your large complex application uses Angular or Ember, and the new developers have worked with these frameworks before, they more than likely won’t have to learn the boiler plate code that they would with the likes of Backbone, because it’s part of the framework.

Interesting Plug-ins

  1. There is a similar offering, backbone.layoutmanager, which I haven’t really looked into, but according to Derick Bailey (Marionette BDFL) it is more of a framework, whereas Marionette is a library.
  2. Two way data binding with Rivets.js, Knockback.js or backbone.stickit.
    The NYTimes’ backbone.stickit “is a Backbone data binding plug-in that binds Model attributes to View elements with a myriad of options for fine-tuning a rich application experience”. What looks to be nice about this is that, unlike most model binding plug-ins I’ve seen, it doesn’t require you to add any extra tags (like Angular does) to your view. In fact your views are not contaminated at all. See the sketch after this list.
  3. The Backbone.routefilter plug-in allows you to add behaviour that will be executed immediately before and/or after a route (Backbone.Router or Marionette.AppRouter) executes.
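
A minimal sketch of stickit usage (assuming the plug-in’s documented bindings hash; the template and attribute names are hypothetical):

var FormView = Backbone.View.extend({
   // selector to model attribute; no extra attributes are needed in the markup itself.
   bindings: {
      '#title': 'title',
      '#description': 'description'
   },
   render: function () {
      this.$el.html(this.template());
      this.stickit(); // wires up the two way bindings declared above
      return this;
   }
});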

Useful Tools

  • “Backbone Debugger” for Chrome browser (it’s an extension)
  • Frameworks that leverage backbone and provide more functionality
    1. chaplinJS
    2. thoraxJS (adds handlebars integration plus other functionality)

Now a few more concepts that I think are important to know about if you’re serious about using a client side JavaScript MV* framework/library. In regards to module loading, this applies to the server side also.

Templating

Blog Posts etc.

  1. net tuts+ Best Practices When Working With JavaScript Templates
  2. net tuts+ An Introduction to Handlebars

Some Offerings

I covered some of the template engines here under “Templating Engines evaluated”, or you can just use the likes of the Template Engine Chooser.

Coupling Domain with Framework

As Boris Smus has said, and I think it’s right on the money (although I disagree with his comments around JavaScript classes, as per my comments above):

Once you bite the bullet and decide to invest in a framework, you often have no easy way to move your code out of it.
If you pick Backbone, but decide mid-cycle that it’s not for you, you are in for a world of hurt.
If you have core functionality that you want to release, release it in pure JavaScript, not as a jQuery plug-in, or some MV* module.

Because there are so many JavaScript frameworks coming and going, and we don’t want to invest too heavily in any one of them, we really need to keep our investment separate from the library/framework code.

To avoid library/framework and class-system lock-in, a good approach with JavaScript MV* libraries/frameworks is to keep the core functionality separate from the user interface code, giving us two separate layers.
This gives us the flexibility to swap user interfaces as they come and go, yet still keep the majority of our code in an API layer.
The API layer is a logical single layer, but it can be modularised and loaded as needed, AMD style.
With this separation, we can implement the two layers in the following manner.

1) Build the base layer using pure JavaScript prototypal inheritance.
This is the part you write with the intention of keeping, and possibly reusing parts of in other projects.
This base layer will implement an API that you will want to spend a bit of time getting right.
This is the code that will make the most use of unit tests.
To get the separation clear in your head, think of the user interface code as a client that uses this API as if it were a service API sitting on the server.
This way you can avoid creating leaky abstractions.

2) Use an MV* library/framework to implement the user interface, and call into the base layer directly.
This lets you move quickly and focus entirely on writing the user interface.
This architecture should facilitate building your user interface on a solid foundation and avoid investing heavily into an offering that you may want to swap out further down the track.
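
A minimal sketch of the separation (all names are hypothetical; the base layer is plain JavaScript, and here the UI layer happens to use Backbone):

// Layer 1: pure JavaScript core, no framework dependencies.
var orderApi = (function () {
   var orders = []; // private state
   return {
      add: function (order) {
         orders.push(order);
      },
      total: function () {
         var sum = 0, i;
         for (i = 0; i < orders.length; i++) {
            sum += orders[i].amount;
         }
         return sum;
      }
   };
}());

// Layer 2: a thin framework specific view that treats the core
// as if it were a service API sitting on the server.
var OrderView = Backbone.View.extend({
   events: { 'click .add': 'addOrder' },
   addOrder: function () {
      orderApi.add({ amount: 10 });
      this.render();
   },
   render: function () {
      this.$('.total').text(orderApi.total());
      return this;
   }
});

Swap Backbone out for Angular or Ember and orderApi doesn’t change.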

Modules

In most browsers, just including a script tag will cause the rest of the page to stop rendering until the script has loaded and executed.
This is why, if loading scripts synchronously, they should be concatenated, minified, compressed and included at the bottom of the page.
Loading scripts asynchronously doesn’t block, which is why you can load multiple scripts in parallel wherever you want (although any more than 2-3 concurrently and performance will degrade). Make sure to concatenate your scripts though.

What we see as our projects get larger, is that scripts start to have many dependencies in a way that may overlap and nest.

The simplest way to load asynchronously is to create a script tag and inject it into an existing DOM element on your page.
Because the DOM element already exists, the rendering is not blocked.
See the first code example here

// Create a new script element.
var script = document.createElement('script');

// Find an existing script element on the page (usually the one this code is in).
var firstScript = document.getElementsByTagName('script')[0];

// Set the location of the script.
script.src = "http://example.com/myscript.min.js";

// Inject with insertBefore to avoid appendChild errors.
firstScript.parentNode.insertBefore( script, firstScript );

If you want or need to get serious about script loading (which you’re probably going to have to do at some stage), use a best-of-breed script loader. This will also push you down the path of defining modular JavaScript (AKA modules).

Next we look at employing script loaders to load our modules…

Formats available for Writing and Using Modular JavaScript

Asynchronous Module Definition (AMD)

For writing modular JavaScript in the browser. To save re-writing what’s already been done, http://addyosmani.com/writing-modular-js/ (see the “AMD” section) explains it well. What does AMD actually give us? http://requirejs.org/docs/whyamd.html#amd lists separation of concerns (essentially placing value on interface rather than implementation), mapping of module IDs to different paths, and lots more. It allows asynchronous loading of modules and their dependencies, which is something we need on the client side, but is not generally a requirement for the server side. For getting started, see “Getting Started With Modules” under the AMD section here. Also check out the AMD specification and of course the most common AMD implementation: RequireJS. Then at some stage you’re probably going to want to concatenate and minify your modules, and that’s where the likes of r.js comes in. r.js also has a Node.js adapter which allows you to use node’s implementation of require.
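A minimal AMD sketch (assuming RequireJS; the file names are hypothetical):

// math.js - defines a module with one dependency.
define(['./logger'], function (logger) {
   return {
      add: function (a, b) {
         logger.log('adding');
         return a + b;
      }
   };
});

// main.js - loads the module (and, asynchronously, its dependencies).
require(['./math'], function (math) {
   console.log(math.add(1, 2)); // 3
});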

Tom Dale (core team member on Ember) also has some interesting ideas around why he thinks AMD is not the answer.

CommonJS API (Optimised for the server)

Although optimised for the server, we have the likes of browserify, a CommonJS module implementation that can run in the browser, and browser-build, which makes CommonJS modules available in the browser and is very fast. Ryan Florence discusses module loaders in the pod-cast listed above (“JavaScript Jabber Ember Tools”), where he decided to move to CommonJS rather than RequireJS for his Ember Tools, mostly due to speed. So it’s horses for courses: decide what your requirements are, then decide which module loader satisfies most of them. Also see “writing modular js” under the “CommonJS” section.
CommonJS also aims to provide a rich standard library. The intention is that an application developer will be able to write an application using the CommonJS APIs and then run that application across different JavaScript interpreters and host environments. With CommonJS-compliant systems, you can use JavaScript to write:

  • Server-side JavaScript applications
  • Command line tools
  • Desktop GUI-based applications
  • Hybrid applications (Titanium, Adobe AIR)

Why it doesn’t excel in the browser “out of the box”: http://requirejs.org/docs/whyamd.html#commonjs. The sketch below shows the format itself.
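
A minimal CommonJS sketch (Node.js style; the file names are hypothetical). Note that require is synchronous, which is fine on the server but is part of why the format needs tooling like browserify in the browser:

// math.js
var logger = require('./logger');

exports.add = function (a, b) {
   logger.log('adding');
   return a + b;
};

// main.js
var math = require('./math');
console.log(math.add(1, 2)); // 3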
ES Harmony

Modules implemented in the language itself. We’re not quite there yet, but the current offerings look to be a pretty good step in the right direction.

http://addyosmani.com/writing-modular-js/ (specifically “ES Harmony” section) discusses where TC39 are going in regards to implementing modules in ES.next.

So AMD and CommonJS can be used on the server side or the client side. In some cases one will work better for you than the other. You’ll need to do your homework as to which to use in which scenarios. Both have advantages and disadvantages that may work for or against you.

I’m keen to get a discussion going here on peoples experiences with the three MV* offerings mentioned. Especially those that have experience with two or more.

Up and Running with Express on Node.js … and friends

July 27, 2013

This is a result of a lot of trial and error, reading, notes taken, advice from more knowledgeable people than myself over a period of a few months in my spare time. This is the basis of a web site I’m writing for a new business endeavour.

Web Frameworks evaluated

  1. ExpressJS Version 3.1. I talked to quite a few people on the #Node.js IRC channel, and the preference in most cases was Express. I took notes around the web frameworks, but as there were not that many good contenders, and I hadn’t thought about pushing this to a blog post at the time, I’ve pretty much just got a decision here.
  2. Geddy Version 0.6

MV* Frameworks evaluated

  1. CompoundJS (old name = RailwayJS) Version 1.1.2-7
  2. Locomotive Version 0.3.6. built on Express

At this stage I worked out that I don’t really need a server side MV* framework, as Express.js routes are near enough to controllers. My mind may change on this further down the track; if and when it does, I’ll re-evaluate.

Templating Engines evaluated

  1. jade Version 0.28.2, but reasonably mature and stable. 2.5 years old. A handful of active contributors headed by TJ Holowaychuk. Plenty of support on the net. NPM: 4696 downloads in the last day, 54 739 downloads in the last week, 233 570 downloads in the last month (as of 2013-04-01). Documentation: excellent. The default view engine when running the express binary without specifying the desired view engine. Discussion on LinkedIn. Discussed in the Learning Node book. Easy to read and intuitive. Encourages you down the path of keeping your logic out of the view. The documentation is found here and you can test it out here.
  2. handlebars Version 1.0.10 A handful of active contributors. NPM: 191 downloads in the last day, 15 657 downloads in the last week, 72 174 downloads in the last month (as of 2013-04-01). Documentation: Excellent: nettuts. Also discussed in Nicholas C. Zakas’s book under Chapter 5 “Loose Coupling of UI Layers”.
  3. EJS Most of the work done by TJ Holowaychuk (BDFL). NPM: 258 downloads in the last day, 13 875 downloads in the last week, 56 962 downloads in the last month (as of 2013-04-01). Documentation: possibly a little lacking, but the ASP.NET-like syntax makes it kind of intuitive for developers from the ASP.NET world. Discussion on LinkedIn. Discussed in the “Learning Node” book by Shelley Powers. Plenty of support on the net. deoxxa from #Node.js mentioned: “if you’re generating literally anything other than all-html-all-the-time, you’re going to have a tough time getting the job done with something like jade or handlebars (though EJS can be a good contender there). For this reason, I ended up writing node-ginger a while back. I wouldn’t suggest using it in production at this stage, but it’s a good example of how you don’t need all the abstractions that some of the other libraries provide to achieve the same effects.”
  4. mu (Mustache template engine for Node.js) NPM: 0 downloads in the last day, 46 downloads in the last week, 161 downloads in the last month (as of 2013-04-01).
  5. hogan-express NPM: 1 download in the last day, 183 downloads in the last week, 692 downloads in the last month (as of 2013-04-01). Documentation: lacking

Middleware AKA filters

Connect

Details here: https://npmjs.org/package/connect. Express shows that connect().use([takes a path defaulting to ‘/’ here], andACallbackHere) (http://expressjs.com/api.html#app.use); the body of andACallbackHere will only get executed if the request path starts with the path given in the first parameter of connect().use. A sketch follows.
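
A minimal sketch (assuming Express 3.x, which uses Connect’s middleware mounting):

var express = require('express');
var app = express();

// Only runs for requests whose path starts with /admin.
app.use('/admin', function (req, res, next) {
   console.log('admin area hit');
   next(); // pass control to the next middleware in the stack
});

// Runs for everything (mounted at the default path '/').
app.use(function (req, res) {
   res.send('hello');
});

app.listen(3000);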

Styling extensions etc evaluated

  1. less (CSS3 extension and (preprocessor) compilation to CSS3) Version 1.4.0 Beta. A couple of solid committers plus many others. Runs on both the server side and the client side. NPM: 269 downloads in the last day, 16 688 downloads in the last week, 74 992 downloads in the last month (as of 2013-04-01). Documentation: excellent. Wiki. Introduction.
  2. stylus (CSS3 extension and (preprocessor) compilation to CSS3) Worked on since 2010-12. Written by TJ Holowaychuk (BDFL), who created Express, Connect, Jade and many more. NPM: 282 downloads in the last day, 16 284 downloads in the last week, 74 500 downloads in the last month (as of 2013-04-01).
  3. sass (CSS3 extension and (preprocessor) compilation to CSS3) Version 3.2.7. Worked on since 2006-06. Still active. One solid committer with lots of other help. NPM: 12 downloads in the last day, 417 downloads in the last week, 1754 downloads in the last month (as of 2013-04-01). Documentation: looks pretty good. Community looks strong: #sass on irc.freenode.net. Forum. less, stylus, sass comparison on nettuts.
  • rework (processor) Version 0.13.2. Worked on since 2012-08. Written by TJ Holowaychuk (BDFL), who created Express, Connect, Jade and many more. NPM: 77 downloads in the last week, 383 downloads in the last month (as of 2013-04-01). As explained and recommended by mikeal from #Node.js, it’s basically a library for building something like stylus or less, but you can turn on the features you need and add them easily. No new syntax to learn, just CSS syntax; it enables removal of prefixes and provides variables. Basically I think the idea is that rework is going to use the likes of less, stylus, sass, etc. as plug-ins. So by using rework you get what you need (extensibility) and nothing more.

Responsive Design (CSS grid system for Responsive Web Design (RWD))

There are a good number of offerings here to help guide the designer in creating styles that work with the medium they are displayed on (leveraging media queries).

Keeping your Node.js server running

Development

During development nodemon works a treat. Automatically restarts node when any source file is changed and notifies you of the event. I install it locally:

$ npm install nodemon

Start your node app wrapped in nodemon:

$ nodemon [your node app]

Production

There are a few modules here that will keep your node process running and restart it if it dies or gets into a faulted state. forever seems to be one of the best options (see forever usage). deoxxa’s jesus seems to be a reasonable option also; ningu from #Node.js is using it, as forever was broken for a bit due to problems with lazy.

Reverse Proxy

I’ve been looking at reverse proxies to forward requests to different processes on the same machine, based on different domain names and cname prefixes. At this stage the picks have been node-http-proxy and Nginx. node-http-proxy looks perfect for what I’m trying to do. It’s always worth chatting to the hordes of developers on #Node.js for personal experience. If using Express, you’ll need to enable the ‘trust proxy’ setting. A sketch of the routing idea follows.
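
A minimal sketch of host based routing (assuming node-http-proxy’s 0.x router table; the host names and ports are hypothetical):

var httpProxy = require('http-proxy');

// Forward requests to different local processes based on the Host header.
var options = {
   router: {
      'www.example.com': '127.0.0.1:3001',
      'blog.example.com': '127.0.0.1:3002'
   }
};

httpProxy.createServer(options).listen(80);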

Adding less-middleware

I decided to add less after I had created my project and structure with the express executable.
To do this, I needed to update my package.json in the project’s root directory by adding the following line to the dependencies object:

"less-middleware": "*"

Usually you’d specify the version, so that when you update in the future, npm will see that you want to stay on a particular version; this way npm won’t update to a newer version and potentially break your app. By using "*", npm will download the latest package. So once installed, I just copy the version of less-middleware that was pulled down and replace the "*".
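For example, pinned to the version the install below pulls down (a sketch; your version may differ):

"dependencies": {
   "less-middleware": "0.1.11"
}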

Run npm install from within your project root directory:

my-command-prompt npm install
npm WARN package.json my-apps-name@0.0.1 No README.md file found!
npm http GET https://registry.npmjs.org/less-middleware
npm http 200 https://registry.npmjs.org/less-middleware
npm http GET https://registry.npmjs.org/less-middleware/-/less-middleware-0.1.11.tgz
npm http 200 https://registry.npmjs.org/less-middleware/-/less-middleware-0.1.11.tgz
npm http GET https://registry.npmjs.org/less
npm http GET https://registry.npmjs.org/mkdirp
npm http 200 https://registry.npmjs.org/mkdirp
npm http 200 https://registry.npmjs.org/less
npm http GET https://registry.npmjs.org/less/-/less-1.3.3.tgz
npm http 200 https://registry.npmjs.org/less/-/less-1.3.3.tgz
npm http GET https://registry.npmjs.org/ycssmin
npm http 200 https://registry.npmjs.org/ycssmin
npm http GET https://registry.npmjs.org/ycssmin/-/ycssmin-1.0.1.tgz
npm http 200 https://registry.npmjs.org/ycssmin/-/ycssmin-1.0.1.tgz
less-middleware@0.1.11 node_modules/less-middleware
├── mkdirp@0.3.5
└── less@1.3.3 (ycssmin@1.0.1)

So you can see that less-middleware pulls in less as well.
Now you need to require your new middleware and tell express to use it.
Add the following to your app.js in your root directory.

var lessMiddleware = require('less-middleware');

and within your function that you pass to app.configure, add the following.

app.use(lessMiddleware({
   src : __dirname + "/public",
   // If you want a different location for your destination style sheets,
   // uncomment the next two lines. If you use a different src/dest directory,
   // you MUST include the prefix, which matches the dest public directory.
   // dest: __dirname + "/public/css",
   // prefix: "/css",
   // force: true recompiles on every request... not the best for production,
   // but fine in debug while working through changes. Uncomment to activate.
   // force: true,
   compress : true,
   // I'm also using the debug option...
   debug: true
}));

Now you can just rename your .css files to .less and less will compile them to CSS for you.
Generally you’ll want to exclude the compiled styles (.css) from your source control.

The middleware is made to watch for any requests for a .css file and check if there is a corresponding .less file. If there is a less file it checks to see if it has been modified. To prevent re-parsing when not needed, the .less file is only reprocessed when changes have been made or there isn’t a matching .css file.
less-middleware documentation

Bootstrap

Twitter’s Bootstrap is also really helpful for getting up and running, and comes with a lot of helpful components and ideas to get you kick started.
Getting started.
Docs

Bootstrap-for-jade

As I decided to use the Jade templating engine on Node, Bootstrap-for-Jade also came in useful for getting started with ideas and helping me work out how things could fit together. In saying that, I came across some problems.

ReferenceError: home.jade:23

body is not defined
    at eval (eval at <anonymous> (MySite/node_modules/jade/lib/jade.js:171:8), <anonymous>:238:64)
    at MySite/node_modules/jade/lib/jade.js:172:35
    at Object.exports.render (MySite/node_modules/jade/lib/jade.js:206:14)
    at View.exports.renderFile [as engine] (MySite/node_modules/jade/lib/jade.js:233:13)
    at View.render (MySite/node_modules/express/lib/view.js:75:8)
    at Function.app.render (MySite/node_modules/express/lib/application.js:506:10)
    at ServerResponse.res.render (MySite/node_modules/express/lib/response.js:756:7)
    at exports.home (MySite/routes/index.js:19:7)
    at callbacks (MySite/node_modules/express/lib/router/index.js:161:37)
    at param (MySite/node_modules/express/lib/router/index.js:135:11)
GET /home 500 22ms

I found a fix and submitted a pull request. Details here.

I may make a follow up post to this, titled something like “Going Steady with Express on Node.js … and friends”.

JavaScript Object Creation Patterns

July 6, 2013

What are the differences in creating an object by way of simple function invocation, vs using a constructor vs creating an object using the object literal notation vs function application?

To make sure we’re all on the same page, a quick refresher of what an object actually is in JavaScript…

What is an object in JavaScript?

  • An object is an unordered mutable keyed collection of properties. Each property is either a named data property, a named accessor property, or an internal property. I discussed JavaScript properties in depth here.
  • The ECMAScript language types are Undefined, Null, Boolean, Number, String and Object.
  • The simple types (primitives) of JavaScript are members of one of the following built-in types: Undefined, Null, Boolean (true and false), Number, and String.
  • All other values are objects. Function, String, Number, RegExp etc all indirectly inherit Object via their prototype property, which has a hidden link to Object. Try not to get confused about the fact that we can have for example a String primitive which isn’t an object and we can have a String object by calling the String constructor. See the ES3 and ES5 spec 15.5.
  • All objects have a prototype. I explain JavaScript prototypes here.
  • An object created from a function has a “prototype” property (an Object) (seen below in the red box) (whether invoked as a constructor or function). It has a property which is a constructor function (seen below in blue box) and a hidden property (a link) to the actual Object.prototype (seen below in the pink box).
  • An object created by means of an object literal inherits straight from (is linked to) Object.prototype. So the “prototype” property doesn’t exist, but there are other ways to access it. Problem is… how to access it varies from browser to browser.

[Image: internals of a function object]

Every function is also created with two additional hidden properties: the function’s context and the code that implements the function’s behaviour.

Object creation

On invocation, every function receives two hidden parameters: this and the arguments array (which provides access to all the arguments that were supplied to the function on invocation). The value assigned to this is determined by how the function was invoked. We’ll look at this in the following sections.

Object creation via function invocation

Pros

Best suited for creation of one-time on-demand objects.

Cons

If a function is not the property of an object literal, when you invoke it, this will be bound to the global object. That is often not what you’re expecting, and a mistake in the design of the language. The screenshots originally here showed a function (kimsGlobalFunction) defined and invoked in its own scope: when the local function is invoked, its this is bound to the global object. That’s a mistake in the language. When an inner function (innerFunction) is invoked, the this of that function is also bound to the global object, so alerting one of the outer function’s local variables via this yields undefined, because that variable doesn’t belong to the global object. The sketch below reconstructs the idea.

[Images: a function in its own scope; this is (incorrectly) the global object]
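A hedged reconstruction of what those screenshots demonstrated (the names follow the prose; run it in a browser, non-strict mode):

var kimsGlobalFunction = function () {
   var localVar = 'local';
   var innerFunction = function () {
      // Invoked as a plain function, so this is bound to the global object.
      alert(this === window); // true in a browser
      alert(this.localVar);   // undefined: localVar is not a property of the global object
   };
   innerFunction();
};
kimsGlobalFunction();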

Object creation via constructor

What is a constructor in JavaScript?

It’s a function, nothing more. It’s how it is invoked that determines it as a constructor.

What does it look like?

(function () {
   // Create a constructor function called MyFunc.
   var MyFunc = function (aString) {
      // The following variable is private.
      var privateString = aString;
      // Access it with a privileged method.
      this.publicString = function () {
         return privateString;
      }
   };

   // Prefixing with new means we're now using the function as a constructor.
   // So we use PascalCase rather than camelCase, so users of MyFunc don't invoke without the new prefix.
   var myFunc = new MyFunc('sponge bob');
   alert(myFunc.publicString());
}());

Pros

Great for re-use. Creating a constructor and assigning members to its prototype means that every time you create an object from the constructor using the new prefix, the new object uses the same prototype’s members. This can save big time on memory if you are creating many objects with the constructor. A quick sketch follows.
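
A minimal sketch of that sharing (the names are hypothetical):

var Counter = function () {
   this.count = 0; // per instance state
};

// increment lives on the prototype, so every instance shares the one function object.
Counter.prototype.increment = function () {
   this.count += 1;
   return this.count;
};

var a = new Counter();
var b = new Counter();
alert(a.increment === b.increment); // true: one function in memory, not one per instance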

Cons

Con 1

What happens when someone invokes a constructor function directly without the new prefix?
The this of the function will not be bound to a new object, but rather to the global object.
So instead of augmenting your new myFunc object, you will be clobbering the global object; myFunc‘s this would refer to the global object.

What counter measures do we have at our disposal to make sure this doesn’t happen?
Naming conventions (hence the PascalCase for constructors). Enforcing that new is used with JavaScript constructor functions? Pages 45-46 of Stoyan Stefanov’s JavaScript Patterns book address this. Problem is, those patterns all have significant flaws, so you really need to weigh up the pros and cons. The maturity of your development team should also play a role in your decision here. It may be worth taking the safer route and employing an object literal if developers are likely to omit the new prefix on an intended constructor invocation.
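
One of the patterns discussed there is the self-invoking constructor, sketched here; it repairs a missing new at the cost of hiding the caller’s mistake:

var MyFunc = function (aString) {
   // If invoked without new, this won't be an instance of MyFunc,
   // so re-invoke correctly on the caller's behalf.
   if (!(this instanceof MyFunc)) {
      return new MyFunc(aString);
   }
   this.publicString = aString;
};

var withNew = new MyFunc('sponge bob');
var withoutNew = MyFunc('sponge bob'); // repaired, rather than clobbering the global object
alert(withoutNew.publicString); // 'sponge bob'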

Object creation via object literal

What does it look like?

In the following code, the immediately invoked function creates and returns an object with a property called publicString.

// In its simplest form:
var myObject = {};

(function () {
   var myFunc = (function () {
      // private members
      var privateString = 'Sponge Bob';
      // implement the public part
      return {
         publicString: function () {
            return privateString;
         }
      };
   }());

   alert(myFunc.publicString());
}());

Pros

Pro 1

The this is bound to where you’d expect it to be (adherence to the Principle Of Least Astonishment (POLA)).
When a function is stored as a property of an object literal, it’s a method. When a method is invoked with this pattern, this is bound to the object. In the below section of code, this is what happens when myObjectLiteral.myFunc() is invoked.

(function kims() {

   var myObjectLiteral = {
      myProp: 'a property value',
      myFunc: function () {
         alert(this.myProp);
      }
   };
   myObjectLiteral.myFunc();
}());

Here you can see that the value of myFunc‘s this argument is in fact the myObjectLiteral.

[Image: this value]

So, because a function is an object and a variable is a property, must the same apply when invoking a function of a function? No. The vital word here is “literal”. As above: “When a function is stored as a property of an object literal, it’s a method”.

Pro 2

Best suited for creation of one-time on-demand objects.

Cons

As with storing members in a function, be it a function you intend to use new with (a constructor) or one you just invoke, members will not be shared between instances via the prototype. This means that if you create 200’000 myObj object literals, as I did in the speed tests below, you will have 200’000 separate add functions in memory. The same goes for adding the add function to the constructor without adding it to the constructor’s prototype.

Object creation via function application

There is one more object creation pattern. Function application. I’ve discussed this in depth in the following three posts. Check them out.

https://blog.binarymist.net/2012/04/29/extending-currying-and-monkey-patching-part-1/

https://blog.binarymist.net/2012/05/14/extending-currying-and-monkey-patching-part-2/

https://blog.binarymist.net/2012/05/27/extending-currying-and-monkey-patching-part-3/

Speed Testing

Object creation is significantly slower using constructors with no prototype than it is using object literals. I did some more testing around this and got some surprising results. In Chromium, V8 does some severe optimisation of object creation using constructors; the more members I added to my test constructor and object literal, the more noticeable it became.

var runTestTimes = function (iterations) {
   var test = function () {
      var constructorIterations = 1000000;
      var objLiteralIterations = 1000000;
      var constructorStart;
      var objLiteralStart;
      var constructorTime;
      var objLiteralTime;
      var MyFunc = function () {
         var simpleString = 'simple string';
         var add = function () {
            return 1+1;
         };
         add();
      };

      constructorStart = new Date();
      while (constructorIterations--) {
         var myFunc = new MyFunc();
      }
      constructorTime = new Date - constructorStart;

      objLiteralStart = new Date();
      while (objLiteralIterations--) {
         var myObj = {
            simpleString: 'simple string',
            add: function () {
               return 1+1;
            }
         };
         myObj.add();
      }
      objLiteralTime = new Date - objLiteralStart;

      console.log('constructor: ' + constructorTime + ' object literal: ' + objLiteralTime);
   };
   while (iterations--) {
      test();
   }
};
runTestTimes(10);

Yields the following results:

constructor: 32 object literal: 19
constructor: 25 object literal: 18
constructor: 25 object literal: 17
constructor: 25 object literal: 18
constructor: 26 object literal: 17
constructor: 25 object literal: 18
constructor: 25 object literal: 17
constructor: 25 object literal: 17
constructor: 25 object literal: 18
constructor: 25 object literal: 17

By simply adding another function to both the constructor and the object literal, the results in Chromium swung to favour the constructor.

// add this function to the constructor.
var subtract = function () {
   return 1-1;
};

// add this function to the object literal.
subtract: function () {
   return 1-1;
}

Reducing the number of iterations from 1’000’000 to 200’000 (because the same code run in Firefox crashed it) yielded the following in Chromium:

constructor: 5 object literal: 6
constructor: 3 object literal: 5
constructor: 3 object literal: 5
constructor: 2 object literal: 5
constructor: 3 object literal: 4
constructor: 2 object literal: 5
constructor: 2 object literal: 5
constructor: 3 object literal: 4
constructor: 3 object literal: 5
constructor: 2 object literal: 5

and yielded the following using the Firefox JavaScript engine SpiderMonkey:

constructor: 701 object literal: 21
constructor: 729 object literal: 17
constructor: 705 object literal: 17
constructor: 721 object literal: 18
constructor: 727 object literal: 20
constructor: 723 object literal: 18
constructor: 726 object literal: 18
constructor: 727 object literal: 20
constructor: 728 object literal: 17
constructor: 736 object literal: 18

When I moved the add and subtract functions from the constructor to the constructor’s prototype, the speed results in Chromium didn’t show any noticeable difference; in Firefox the average went from 854ms to 836ms. The change looked like the following:

var MyFunc = function () {
   var simpleString = 'simple string';
};

MyFunc.prototype.add = function () {
   return 1+1;
};

MyFunc.prototype.subtract = function () {
   return 1-1;
};

I decided to create some tests on jsPerf to provide repeatable results. What’s interesting is that you don’t have to change a lot to get completely different results, so if you’re concerned about performance, it really pays to test it. I think the tests on jsPerf are probably a bit more truthful; here they are.

Summary

There are many more pros and cons to each invocation pattern; I’ve listed the ones that I think are the most important to understand. There is no right or wrong pattern to use for everything. Consider your target audience and what the majority of them may be using in terms of browsers. Consider the maturity of your development team. Benchmark the different approaches, but don’t fall into the trap of micro optimisation, or of optimising for a single browser or JavaScript engine, unless that’s all your users are using. Choose the pattern that provides the most wins for the given situation. I didn’t test in IE, but as you can see, the JavaScript engines I did test with do things very differently.

The following tools will help you focus on the areas that matter most:

  1. Google Page Speed
  2. Google Speed Tracer
  3. Chromium’s Profiler

There are of course many other areas to look at when it comes to whether your app is delivering an acceptable user experience.
Take a few steps back from your situation. You only have so much time; spend it wisely.
There are many good wins to be had for little cost. Yahoo’s YSlow has a bunch.
Many books also address this in depth:

  1. Even Faster Web Sites by Steve Souders
  2. High Performance Web sites by Steve Souders
  3. High Performance JavaScript by Nicholas Zakas

There is also a good read on how V8’s full and optimising JIT compilers optimise JavaScript.
I’ve found that most of it is intuitive, and if you’re using good design and coding principles, in “most” cases you’re safe, but it’s still worth the read.

  1. As developers we try not to change classes on the fly. Deleting or adding properties to hot objects in JavaScript negatively affects the optimising compiler.
  2. We don’t use floats when we only need ints.
  3. In JavaScript, use Arrays when the property names are small sequential integers. Otherwise, use an object. JavaScript Arrays are not like arrays in most other languages; they are simply objects with some array like characteristics.
  4. Assign your array elements as early as possible.
  5. Don’t delete elements in an array, or leave elements empty. A small sketch of points 3 to 5 follows this list.
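
A minimal sketch of the array guidance above (the effect is engine dependent; V8 is the one discussed here):

var good = [];
var i;
for (i = 0; i < 5; i++) {
   good[i] = i * 2; // small sequential integer keys, assigned early, no holes
}

var bad = [];
bad[0] = 0;
bad[4] = 8;    // leaves holes, which can push the engine onto a slower, dictionary like representation
delete bad[0]; // deleting elements has a similar effect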

JavaScript is a very dynamic language. Use its dynamic nature cautiously if you want performant code. Most importantly, favour read time convenience over write time. Your code is going to be read many more times than it’s written.

At this stage, V8 is way ahead of the other JavaScript engines in terms of performance. Node.js uses the V8 engine and enjoys the same incredible performance.

Inheritance

Prototypal inheritance is more OO than classical inheritance.
With prototypal inheritance, a child object only needs to inherit the parent object’s specific properties pertinent to it.
With classical inheritance, a child object inherits all the parent object’s members, even the ones that it should have no knowledge of.

Hopefully most of us already know to favour composition (aggregation) over inheritance. Be it classical or prototypal. I’ve explained some techniques of how this can be done effectively in JavaScript in the above three posts.

Angus Croll is a master at explaining these concepts, so be sure to check out his post here.

Even the Java creator James Gosling says he’d do away with classes, or classical inheritance, if he could write the language again.
Inheritance can be an anti pattern, as it’s tight coupling: sub classes inherit everything no matter what, whereas prototypal is opt-in.
One of the Fluent Conference talks by Eric Elliott, on why we should steer away from classical inheritance, says the following:

Classical Inheritance is Obsolete
“Those who are unaware they are walking in darkness will never seek the light.” —Bruce Lee
In “Design Patterns”, the Gang of Four recommend two important principles of object oriented design:
1) Program to an interface, not an implementation.
2) Favour object composition over class inheritance.
In a sense, the second principle could follow from the first, because classical inheritance exposes the parent class to all child classes. The child classes are all programming to an implementation rather than an interface. Classical inheritance breaks the principle of encapsulation, and tightly couples the child classes to its ancestors.
Why is the seminal work on Object Oriented design so distinctly anti-inheritance? Because classical inheritance causes several problems:
Tight coupling. Classical inheritance is the tightest coupling available in OO design. Descendant classes have an intimate knowledge of their ancestor classes.
Inflexible hierarchies. Single parent hierarchies are rarely capable of describing all possible use cases. Eventually, all hierarchies are “wrong” for new uses—a problem that necessitates code duplication.
Complicated multiple inheritance. It’s often desirable to inherit from more than one parent. That process is inordinately complex and its implementation is inconsistent with the process for single inheritance, which makes it harder to read and understand.
Brittle architecture. Because of tight coupling, it’s often difficult to refactor a class with the “wrong” design, because much existing functionality depends on the existing design.
The Gorilla / Banana problem. Often there are parts of the parent that you don’t want to inherit. Subclassing allows you to override properties from the parent, but it doesn’t allow you to select which properties you want to inherit.

Additional Sources:
ECMA-262 edition 5.1
JavaScript The Good Parts.
JavaScript The Definitive Guide.
Eric Elliott’s talk at the fluent conference May 28-30, 2013.