Posts Tagged ‘Web’

Journey To Self Hosting

November 29, 2014

I was recently tasked with working out the best options for hosting web applications and their data for a client. This was their first foray into deciding whether to throw all their stuff into the cloud or to build their own infrastructure to host everything on.

Hosting Options

There are a lot of options available now, most of which are derivatives of either external cloud or internal (possibly cloud) hosting, and all of which come with features and price tags that need to be weighed up. I’ve been collecting resources on providers and their offerings (both cloud and in-house) for quite a while, so I didn’t have to go far to pull them together for comparison.

All sites and apps require a different amount of each resource type to be allocated to them. For example, many web sites are still predominantly static; they require more network bandwidth than any other resource, some memory, a little processing power and, provided they’re being cached on the server, not a lot else. These resources are very cheap.

If you’re running an e-commerce site, then you can potentially add more disk I/O (which is usually the first bottleneck), processing power and space for your data store, plus redundancy, backups and the administration of all of it.

Fast disks (or let’s just call it storage) are cheap. In fact most hardware is cheap.

Administration of redundancy, backups and staying on top of security starts to cost more, although the “staying on top of security” will need to be done whether you’re on someone else’s hardware or on your own. It’s just a lot easier on your own, because you’re in control and dictate the amount of visibility you have.

The Cloud


Pros

It’s out of your hands.
Indeed it is, in more ways than one. Your trust is going to have to be honoured here (or not). Yes, you have SLAs, but what guarantee do the SLAs give you that the people working on your system and data are not having a bad day? Maybe they’ve broken up with their girlfriend, or whatever. It takes very little to miss something that could drastically compromise your system and/or data.

VPSs can be spun up quickly, but remember, good things take time. Everything has a cost. Things that are quick and easy are quick and easy for a reason; think about what those (often hidden) costs are.

In some cases it can be cheaper, but you get what you pay for.

Cons

You are trusting others with your data, even others that you are not aware of. Hosting providers can be (and in many cases are) forced by governments and other agencies to give up your secrets. This is commonplace now and you may not even know it’s happened.

Your provider may go out of business.

There is an inherent lack of security in all the cloud providers I’ve looked at and worked with. They will tell you they take security seriously, but when someone that understands security inspects how they do things, the situation often looks like Swiss cheese.

In-House Cloud


Pros

You are in control of your data and your application, providing you, “your” staff and/or any external consultants:

  • Are competent and haven’t made mistakes in setting up your infrastructure
  • Are patching all software/firmware involved
  • Are fastidiously hardening your server/s (this is continuous; it doesn’t stop at the initial set-up)
  • Have set up the routes and firewall rules correctly
  • Have the correct alerts set up
  • Have implemented Intrusion Detection and Prevention Systems (IDSs/IPSs)
  • Have penetration tested the set-up, and not just from a technical perspective. It’s often best to get pairs to do the reviews.

The list goes on. If you are at all in doubt, that’s where you consider the alternatives. In saying that, most hosting and cloud providers perform abysmally, despite their claims that your applications and data are safe with them.

It “can” cost less than entrusting your system and data to someone (or many someones) on the other side of the planet. Weigh up the costs. They will not always be what they appear at face value.

Hardware is very cheap.

Cons

Potential lack of in-house skills.

People with the right skills and attitudes are not cheap.

It may not be core business. You may not have the necessary capabilities in-house to scope, architect, cost, set up and administer it. Potentially you could hire someone to do the initial work and the ongoing administration. The amount of ongoing administration will be partly determined by what you’re hosting. Generally speaking, hosting company web sites, blogs, etc. will require less work than systems with distributed components and redundancy.

Spinning up an instance to develop or prototype on doesn’t have to be hard. In fact, if you have some hardware, provisioning VM images is usually quick and easy. There is actually a pro in this too… you decide how much security you want baked into these images and into the processes used to configure them.

Consider download latencies for the people you want to reach, who may be in other countries.

In some cases it can be more expensive, but you get what you pay for.

Outcome

The decision for this client was made to self host. There will be a follow up post detailing some of the hardening process I took for one of their Debian web servers.

Node.js Asynchronicity and Callback Nesting

July 26, 2014

Just a heads up before we get started on this… Chrome DevTools (v35) now has the ability to show the full call stack of asynchronous JavaScript callbacks. As of writing this, if you develop on Linux you’ll want the dev channel. Currently my Linux Mint 13 is 3 versions behind, so I had to switch to the dev channel until I upgraded to the LTS 17 (Qiana).

All code samples can be found at GitHub.

Deep Callback Nesting

AKA callback hell, or the temple of doom. Often the functions that are nested are anonymous, and often they are implicit closures. When it comes to asynchronicity in JavaScript, callbacks are our bread and butter. In saying that, often the best way to use them is by abstracting them behind more elegant APIs.

Being aware of when new functions are created, and of when you need to make sure the memory being held by closure is released (dropped out of scope), can be important for code that’s hot; otherwise you’re in danger of introducing subtle memory leaks.

What is it?

Passing functions as arguments to functions which return immediately. The function (callback) that’s passed as an argument will be run at some time in the future, when the potentially time-expensive operation is done. By convention, this callback has its first parameter set to the error on failure, or to null on success of the expensive operation. In JavaScript we should never block on potentially time-expensive operations such as I/O or network operations. We only have one thread in JavaScript, so we allow the JavaScript implementation to place our discrete operations on the event queue.

One other point worth mentioning is that we should never call asynchronous callbacks synchronously, unless of course we’re unit testing them, in which case we should rarely be calling them asynchronously. Always allow the JavaScript engine to put the callback into the event queue rather than calling it immediately, even if you already have the result to pass to the callback. By ensuring the callback executes on a subsequent turn of the event loop, you provide strict separation between the currently executing function and the callback that may change data shared between the two (usually via closure). There are many ways to ensure the callback is run on a subsequent turn of the event loop: asynchronous APIs like setTimeout and setImmediate let you schedule your callback to run on a subsequent turn, and the Promises/A+ specification (discussed below), for example, mandates this.
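
As a minimal sketch of that advice (getUser and its cache are hypothetical names, not part of the coffee examples below), the already-known result is still handed to the callback on a later turn rather than synchronously:

var cache = {};

function getUser(id, callback) {
   if (cache[id]) {
      // Wrong: callback(null, cache[id]); would invoke the callback synchronously.
      // Right: schedule it for a subsequent turn of the event loop.
      setImmediate(function () {
         callback(null, cache[id]);
      });
      return;
   }
   // Otherwise the potentially expensive lookup happens here, invoking
   // callback(error) or callback(null, user) when it completes.
}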

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var nestedCoffee = sUTDirectory('nestedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the ugly nested callback coffee machine', function (done) {

      var result = function (error, state) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: ' + state.description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(error, expectedErrorOutCome, 'brew encountered an error. The following are the error details: ' + error.message);
         }
         done();
      };

      nestedCoffee().brew(result);
   });
});

The System Under Test

'use strict';

module.exports = function nestedCoffee() {

   // We don't do instant coffee ####################################

   var boilJug = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
   };
   var addInstantCoffeePowder = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Crappy instant coffee powder is being added.');
   };
   var addSugar = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Sugar is being added.');
   };
   var addBoilingWater = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Boiling water is being added.');
   };
   var stir = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Coffee is being stirred. Hmm...');
   };

   // We only do real coffee ########################################

   var heatEspressoMachine = function (state, callback) {
      var error = undefined;
      var wrappedCallback = function () {
         console.log('Espresso machine heating cycle is done.');
         if(!error) {
            callback(error, state);
         } else
            console.log('wrappedCallback encountered an error. The following are the error details: ' + error);
      };
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      // If there is an error, wrap callback with our own error function

      // Even if you call setTimeout with a time of 0 ms, the callback you pass is placed on the event queue to be called on a subsequent turn of the event loop.
      // Also be aware that setTimeout has a minimum granularity of 4ms for timers nested more than 5 deep. For several reasons we prefer to use setImmediate if we don't want a 4ms minimum wait.
      // setImmediate will schedule your callbacks on the next turn of the event loop, but it goes about it in a smarter way. Read more about it here: https://developer.mozilla.org/en-US/docs/Web/API/Window.setImmediate
      // If you are using setImmediate and it's not available in the browser, use the polyfill: https://github.com/YuzuJS/setImmediate
      // For this, we need to wait for our huge hunk of copper to heat up, which takes a lot longer than a few milliseconds.
      setTimeout(
         // Once espresso machine is hot callback will be invoked on the next turn of the event loop...
         wrappedCallback, espressoMachineHeatTime.milliseconds
      );
   };
   var grindDoseTampBeans = function (state, callback) {
      // Perform long running action.
      console.log('We are now grinding, dosing, then tamping our dose.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var mountPortaFilter = function (state, callback) {
      // Perform long running action.
      console.log('Porta filter is now being mounted.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var positionCup = function (state, callback) {
      // Perform long running action.
      console.log('Placing cup under portafilter.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var preInfuse = function (state, callback) {
      // Perform long running action.
      console.log('10 second preinfuse now taking place.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var extract = function (state, callback) {
      // Perform long running action.
      console.log('Cranking leaver down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.

      // Uncomment the below to test the error.
      //callback({message: 'Oh no, something has gone wrong!'})
      callback(null, state);
   };
   var espressoMachineHeatTime = {
      // if you don't want to wait for the machine to heat up assign minutes: 0.2.
      minutes: 30,
      get milliseconds() {
         return this.minutes * 60000;
      }
   };
   var state = {
      description: ''
      // Other properties
   };
   var brew = function (onCompletion) {
      // Some prep work here possibly.
      heatEspressoMachine(state, function (err, resultFromHeatEspressoMachine) {
         if(!err) {
            grindDoseTampBeans(state, function (err, resultFromGrindDoseTampBeans) {
               if(!err) {
                  mountPortaFilter(state, function (err, resultFromMountPortaFilter) {
                     if(!err) {
                        positionCup(state, function (err, resultFromPositionCup) {
                           if(!err) {
                              preInfuse(state, function (err, resultFromPreInfuse) {
                                 if(!err) {
                                    extract(state, function (err, resultFromExtract) {
                                       if(!err)
                                          onCompletion(null, state);
                                       else
                                          onCompletion(err, null);
                                    });
                                 } else
                                    onCompletion(err, null);
                              });
                           } else
                              onCompletion(err, null);
                        });
                     } else
                        onCompletion(err, null);
                  });
               } else
                  onCompletion(err, null);
            });
         } else
            onCompletion(err, null);
      });
   };
   return {
      // Publicise brew.
      brew: brew
   };
};

What’s wrong with it?

  1. It’s hard to read, reason about and maintain
  2. The debugging experience isn’t very informative
  3. It creates more garbage than adding your functions to a prototype
  4. Dangers of leaking memory due to retaining closure references
  5. Many more…

What’s right with it?

  • It’s asynchronous

Closures are one of the language features in JavaScript that they got right, though there are often issues in how we use them. Be very careful of what you’re doing with closures. If you’ve got hot code, don’t create a new function every time you want to execute it (a small sketch of this follows).
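
As a minimal sketch of that last point (the names are hypothetical): the single logItem function object below is created once and reused, rather than allocating a fresh anonymous function on every call of the hot function.

function logItem(item) {
   console.log('processing ' + item);
}

function processHot(items) {
   // Reuses the one logItem function object...
   items.forEach(logItem);
   // ...rather than items.forEach(function (item) {...}); which would create a
   // new function object (and more garbage) every time processHot is called.
}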

Resources

  • Chapter 7 Concurrency of the Effective JavaScript book by David Herman

 

Alternative Approaches

These range from marginally good approaches to better approaches. Keep in mind that all these techniques add value, and some make more sense in some situations than others. They are all approaches for making the callback hell more manageable, often encapsulating it completely, so much so that the underlying workings are no longer just a bunch of callbacks but rather well-thought-out implementations offering up a consistent, well-recognised API. Try them all, get used to them all, then pick the one that suits your particular situation. The first two examples from here are blocking though, so I wouldn’t use them as they are; they are just an example of how to make some improvements.

Name your anonymous functions

  1. They’ll be easier to read and understand
  2. You’ll get a much better debugging experience, as stack traces will reference named functions rather than “anonymous function”
  3. You’ll be able to tell where the source of an exception came from
  4. Reveals your intent without adding comments
  5. In itself will allow you to keep your nesting shallow
  6. A first step to creating more extensible code

We’ve made some improvements in the next two examples, but we’ve introduced blocking in Array.prototype’s forEach loop, which we really don’t want to do.

Example of Anonymous Functions

var boilJug = function () {
   // Perform long running action
};
var addInstantCoffeePowder = function () {
   // Perform long running action
   console.log('Crappy instant coffee powder is being added.');
};
var addSugar = function () {
   // Perform long running action
   console.log('Sugar is being added.');
};
var addBoilingWater = function () {
   // Perform long running action
   console.log('Boiling water is being added.');
};
var stir = function () {
   // Perform long running action
   console.log('Coffee is being stirred. Hmm...');
};
var heatEspressoMachine = function () {
   // Flick switch, check water.
   console.log('Espresso machine is being turned on and is now heating.');
};
var grindDoseTampBeans = function () {
   // Perform long running action
   console.log('We are now grinding, dosing, then tamping our dose.');
};
var mountPortaFilter = function () {
   // Perform long running action
   console.log('Portafilter is now being mounted.');
};
var positionCup = function () {
   // Perform long running action
   console.log('Placing cup under portafilter.');
};
var preInfuse = function () {
   // Perform long running action
   console.log('10 second preinfuse now taking place.');
};
var extract = function () {
   // Perform long running action
   console.log('Cranking leaver down and extracting pure goodness.');
};

(function () {
   // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
   return [
      'heatEspressoMachine',
      'grindDoseTampBeans',
      'mountPortaFilter',
      'positionCup',
      'preInfuse',
      'extract',
   ].forEach(
      function (brewStep) {
         this[brewStep]();
      }
   );
}());


Example of Named Functions

This now satisfies all the points above, providing the same output. Hopefully you’ll also be able to see a few other issues I’ve addressed with this example. We’re no longer clobbering the global scope, and we can now make any of the other types of coffee with an additional single-line function call, so we’re removing duplication.

var BINARYMIST = (function (binMist) {
   binMist.coffee = {
      action: function (step) {

         return {
            boilJug: function () {
               // Perform long running action
            },
            addInstantCoffeePowder: function () {
               // Perform long running action
               console.log('Crappy instant coffee powder is being added.');
            },
            addSugar: function () {
               // Perform long running action
               console.log('Sugar is being added.');
            },
            addBoilingWater: function () {
               // Perform long running action
               console.log('Boiling water is being added.');
            },
            stir: function () {
               // Perform long running action
               console.log('Coffee is being stirred. Hmm...');
            },
            heatEspressoMachine: function () {
               // Flick switch, check water.
               console.log('Espresso machine is being turned on and is now heating.');
            },
            grindDoseTampBeans: function () {
               // Perform long running action
               console.log('We are now grinding, dosing, then tamping our dose.');
            },
            mountPortaFilter: function () {
               // Perform long running action
               console.log('Portafilter is now being mounted.');
            },
            positionCup: function () {
               // Perform long running action
               console.log('Placing cup under portafilter.');
            },
            preInfuse: function () {
               // Perform long running action
               console.log('10 second preinfuse now taking place.');
            },
            extract: function () {
               // Perform long running action
               console.log('Cranking leaver down and extracting pure goodness.');
            }
         }[step]();
      },
      coffeeType: function (type) {
         return {
            'cappuccino': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'instant': {
               brewSteps: function () {
                  return [
                     'addInstantCoffeePowder',
                     'addSugar',
                     'addBoilingWater',
                     'stir'
                  ];
               }
            },
            'macchiato': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'mocha': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'short black': {
               brewSteps: function () {
                  return [
                     'heatEspressoMachine',
                     'grindDoseTampBeans',
                     'mountPortaFilter',
                     'positionCup',
                     'preInfuse',
                     'extract',
                  ];
               }
            }
         }[type];
      },
      'brew': function (requestedCoffeeType) {
         var that = this;
         var brewSteps = this.coffeeType(requestedCoffeeType).brewSteps();
         // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
         brewSteps.forEach(function runCoffeeMakingStep(brewStep) {
            that.action(brewStep);
         });
      }
   };
   return binMist;

} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.coffee.brew('short black');



Web Workers

I’ll address these in another post.

Create Modules

Everywhere.

Legacy Modules (Server or Client side)

AMD Modules using RequireJS

CommonJS type Modules in Node.js

In most of the examples I’ve created in this post I’ve exported the system under test (SUT) modules and then required them into the test. Node modules are very easy to create and consume. requireFrom is a great way to require your local modules without explicit directory traversal, thus removing the need to change your require statements when you move the files that require your modules.
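
As a small sketch of that workflow (lib/greeter.js and its contents are hypothetical), a CommonJS module just assigns to module.exports, and requirefrom resolves it from the project root:

// lib/greeter.js
'use strict';
module.exports = function greet(name) {
   return 'Hello ' + name;
};

// In a test, or any other consumer, no ../../.. traversal required:
var requireFrom = require('requirefrom');
var lib = requireFrom('lib');
var greet = lib('greeter');
console.log(greet('BinaryMist')); // Hello BinaryMist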

NPM Packages

Browserify

Here we get to consume npm packages in the browser.

Universal Module Definition (UMD)

ES6 Modules

That’s right, we’re getting modules as part of the specification (15.2). Check out this post by Axel Rauschmayer to get you started.
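
Here’s a tiny sketch of the draft ES6 module syntax (the file names are hypothetical, and the syntax was still subject to change at the time of writing):

// coffee.js
export function brew(type) {
   return 'brewing ' + type;
}

// consumer.js
import { brew } from './coffee';
console.log(brew('short black'));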


Recursion

I’m not going to go into this here, but recursion can be used as a lightweight solution to provide some logic to determine when to run the next asynchronous piece of work. Item 64 “Use Recursion for Asynchronous Loops” of the Effective JavaScript book provides some great examples. Do yourself a favour and get a copy of David Herman’s book. Oh, we’re also getting tail-call optimisation in ES6.
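
A minimal sketch in the spirit of that item (downloadOneAsync is a hypothetical asynchronous function): each completed step schedules the next from its callback, so the work stays asynchronous without the nesting growing.

function downloadAllAsync(urls, onSuccess, onError) {
   var results = [];

   function next(i) {
      if (i >= urls.length) {
         onSuccess(results);
         return;
      }
      downloadOneAsync(urls[i], function (content) {
         results.push(content);
         next(i + 1); // Recurse to kick off the next asynchronous download.
      }, onError);
   }
   next(0);
}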

 


EventEmitter

Still creates more garbage unless your functions are on the prototype, but it does provide asynchronicity. Now, we could put our functions on the prototype, but then they’d all be public, and if they’re part of a process we don’t want our coffee process spilling all its secrets about how it makes perfect coffee. In saying that, if our code is hot, and we’ve profiled it and it’s a stand-out for using too much memory, we could refactor EventEmittedCoffee to have its function declarations added to EventEmittedCoffee.prototype and perhaps hide them another way, but I wouldn’t worry about it until it’s been proven to be using too much memory.

Events are used in the well-known Gang of Four Observer (behavioural) pattern (which I discussed the C# implementation of here) and, at a higher level, the Enterprise Integration Publish/Subscribe pattern. The Observer pattern is used in quite a few other patterns also; the ones that spring to mind are Model View Presenter and Model View Controller. The pub/sub pattern is slightly different to the Observer in that it has a topic/event channel that sits between the publisher and the subscriber, and it uses contractual messages to encapsulate and transmit its events.

Here’s an example of the EventEmitter …

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var eventEmittedCoffee = sUTDirectory('eventEmittedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the event emitted coffee machine', function (done) {

      function handleSuccess(state) {
         var stateOutCome = 'The state of the ordered coffee is: ' + state.description;
         stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         done();
      }

      function handleFailure(error) {
         assert.fail(error, 'brew encountered an error. The following are the error details: ' + error.message);
         done();
      }

      // We could even assign multiple event handlers to the same event. We're not here, but we could.
      eventEmittedCoffee.on('successfulOrder', handleSuccess).on('failedOrder', handleFailure);

      eventEmittedCoffee.brew();
   });
});

The System Under Test

'use strict';

var events = require('events'); // Core node module.
var util = require('util'); // Core node module.

var eventEmittedCoffee;
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: ''
};

function EventEmittedCoffee() {

   var eventEmittedCoffee = this;

   function heatEspressoMachine(state) {
      // No need for callbacks. We can emit a failedOrder event at any stage and any subscribers will be notified.

      function emitEspressoMachineHeated() {
         console.log('Espresso machine heating cycle is done.');
         eventEmittedCoffee.emit('espressoMachineHeated', state);
      }
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      setTimeout(
         // Once espresso machine is hot event will be emitted on the next turn of the event loop...
         emitEspressoMachineHeated, espressoMachineHeatTime.milliseconds
      );
   }

   function grindDoseTampBeans(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('We are now grinding, dosing, then tamping our dose.');
      eventEmittedCoffee.emit('groundDosedTampedBeans', state);
   }

   function mountPortaFilter(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Porta filter is now being mounted.');
      eventEmittedCoffee.emit('portaFilterMounted', state);
   }

   function positionCup(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Placing cup under portafilter.');
      eventEmittedCoffee.emit('cupPositioned', state);
   }

   function preInfuse(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('10 second preinfuse now taking place.');
      eventEmittedCoffee.emit('preInfused', state);
   }

   function extract(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Cranking leaver down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      eventEmittedCoffee.emit('successfulOrder', state);
      // If you want to fail the order, replace the above two lines with the below two lines.
      // state.error = 'Oh no! That extraction came out far to fast.'
      // this.emit('failedOrder', state);
   }

   eventEmittedCoffee.on('timeToHeatEspressoMachine', heatEspressoMachine).
   on('espressoMachineHeated', grindDoseTampBeans).
   on('groundDosedTampedBeans', mountPortaFilter).
   on('portaFilterMounted', positionCup).
   on('cupPositioned', preInfuse).
   on('preInfused', extract);
}

// Make sure util.inherits is before any prototype augmentations, as it seems it clobbers the prototype if it's the other way around.
util.inherits(EventEmittedCoffee, events.EventEmitter);

// Only public method.
EventEmittedCoffee.prototype.brew = function () {
   this.emit('timeToHeatEspressoMachine', state);
};

eventEmittedCoffee = new EventEmittedCoffee();

module.exports = eventEmittedCoffee;

With raw callbacks, we have to pass them (functions) around. With events, we can have many interested parties request (subscribe) to be notified when something they are interested in happens (the event). The Observer pattern promotes loose coupling, as the thing (publisher) wanting to inform interested parties of specific events has no knowledge of its subscribers; this is essentially what a service is.

Resources


Async.js

Provides a collection of methods on the async object that:

  1. take an array and perform certain actions on each element asynchronously
  2. take a collection of functions to execute in specific orders asynchronously, some based on different criteria. The likes of async.waterfall allow you to pass the results of a previous function to the next (there’s a small waterfall sketch just after this list). Don’t underestimate these; there are a bunch of very useful routines.
  3. are asynchronous utilities
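
Here’s a minimal async.waterfall sketch (the step names are hypothetical); each function hands its results to the next, and the final callback receives either the first error or the last results:

var async = require('async');

async.waterfall([
   function grindBeans(callback) {
      callback(null, 'ground beans');
   },
   function tampDose(grounds, callback) {
      callback(null, grounds + ', tamped');
   },
   function extract(dose, callback) {
      callback(null, dose + ', extracted');
   }
], function onCompletion(error, result) {
   if (error) return console.error(error);
   console.log(result); // ground beans, tamped, extracted
});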

Here’s a fuller example, using async.series…

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var asyncCoffee = sUTDirectory('asyncCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the async coffee machine', function (done) {

      var result = function (error, resultsFromAllAsyncSeriesFunctions) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: '
               + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(
               error,
               expectedErrorOutCome,
               'brew encountered an error. The following are the error details. message: '
                  + error.message
                  + '. The finished state of the ordered coffee is: '
                  + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description
            );
         }
         done();
      };

      asyncCoffee().brew(result)
   });
});

The System Under Test

'use strict';

var async = require('async');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: null
};

module.exports = function asyncCoffee() {

   var brew = function (onCompletion) {
      async.series([
         function heatEspressoMachine(heatEspressoMachineDone) {
            // No need for callbacks. We can just pass an error to the async supplied callback at any stage and the onCompletion callback will be invoked with the error and the results immediately.

            function espressoMachineHeated() {
               console.log('Espresso machine heating cycle is done.');
               heatEspressoMachineDone(state.error);
            }
            // Flick switch, check water.
            console.log('Espresso machine has been turned on and is now heating.');
            // Mutate state.
            setTimeout(
               // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
               espressoMachineHeated, espressoMachineHeatTime.milliseconds
            );
         },
         function grindDoseTampBeans(grindDoseTampBeansDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('We are now grinding, dosing, then tamping our dose.');
            grindDoseTampBeansDone(state.error);
         },
         function mountPortaFilter(mountPortaFilterDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Porta filter is now being mounted.');
            mountPortaFilterDone(state.error);
         },
         function positionCup(positionCupDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Placing cup under portafilter.');
            positionCupDone(state.error);
         },
         function preInfuse(preInfuseDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('10 second preinfuse now taking place.');
            preInfuseDone(state.error);
         },
         function extract(extractDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Cranking leaver down and extracting pure goodness.');
            // If you want to fail the order, uncomment the below line. May as well change the description too.
            // state.error = {message: 'Oh no! That extraction came out far to fast.'};
            state.description = 'beautiful shot!';
            extractDone(state.error, state);

         }
      ],
      onCompletion);
   };

   return {
      // Publicise brew.
      brew: brew
   };
};

Other Similar Useful libraries


Adding to Prototype

Check out my post on prototypes. If profiling reveals you’re spending too much memory or processing time creating the objects that contain the functions that are going to be used asynchronously, you could add the functions to the object’s prototype, like we did with the public brew method of the EventEmitter example above.
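
A minimal sketch of that idea (the Coffee constructor is hypothetical): brew lives on the prototype, so one function object is shared by every instance instead of being recreated per instance.

function Coffee(type) {
   this.type = type;
}

// Created once, shared via the prototype chain by all Coffee instances.
Coffee.prototype.brew = function () {
   console.log('Brewing a ' + this.type + '.');
};

var shortBlack = new Coffee('short black');
var mocha = new Coffee('mocha');
shortBlack.brew();
mocha.brew();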


Promises

The concepts of promises and futures, which are quite similar, have been around a long time; their roots go back to 1976 and 1977 respectively. Often the terms are used interchangeably, but they are not the same thing. You can think of the language-agnostic promise as a proxy for the value provided by an asynchronous action’s eventual success or failure. A promise is something tangible, something you can pass around and interact with… all before or after it’s resolved or failed. The abstract concept of the future (discussed below) has a value that can be mutated once, from pending to either fulfilled or rejected, on fulfilment or rejection of the promise.

Promises provide a pattern that abstracts asynchronous operations in code, thus making them easier to reason about. Promises, which abstract callbacks, can be passed around and the methods on them chained (AKA promise pipelining). Removing temporary variables makes the code more concise and makes it clearer to readers that the extra assignments are an unnecessary step.

JavaScript Promises

  • A promise (a Promises/A+ thenable) is an object, or sometimes more specifically a function, with a then (JavaScript-specific) method.
  • A promise must only change its state once, and can only change from pending to fulfilled or from pending to rejected (see the sketch below).

  • Semantically, a future is a read-only property.
  • A future can only have its value set (sometimes called resolved, fulfilled or bound) once, by one or more (via promise pipelining) associated promises.
  • Futures are not discussed explicitly in Promises/A+, although they are discussed implicitly in the promise resolution procedure, which takes a promise as the first argument and a value as the second argument.
  • The idea is that the promise (first argument) adopts the state of the second argument if the second argument is a thenable (a promise object with a then method). This procedure facilitates the concept of the “future”.
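
As a minimal sketch of the promise behaviour just described, using when.js (the library used in the fuller example further down): the promise settles exactly once, and each then returns a new promise, so the calls can be pipelined.

var when = require('when');

var order = when.promise(function (resolve, reject) {
   resolve('beans ground');
   reject(new Error('ignored')); // No effect: the state has already transitioned once.
});

order.then(function (result) {
   return result + ', shot pulled'; // Wrapped in the next promise in the chain.
}).then(function (result) {
   console.log(result); // beans ground, shot pulled
});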

We’re getting promises in ES6. That means JavaScript implementers are starting to include them as part of the language. Until we get there, we can use the likes of these libraries.

One of the first JavaScript promise drafts was the Promises/A (1) proposal. The next stage in defining a standardised form of promises for JavaScript was the Promises/A+ (2) specification, which also has some good resources for those looking to use and implement promises based on the new spec. Just keep in mind, though, that this has nothing to do with the ECMAScript specification, although it is/was a precursor.

Then we have Domenic Denicola’s promises-unwrapping repository (3) for those that want to stay ahead of the solidified ES6 draft spec (4). Domenic’s repo is slightly less specky and may be a little more convenient to read, but if you want gospel, just go for the ES6 draft spec sections 25.4 Promise Objects and 7.5 Operations on Promise Objects, which are looking fairly solid now.

The 1, 2, 3 and 4 are the evolutionary path of promises in JavaScript.

Node Support

Although it was decided to drop promises from Node core, we’re getting them in ES6 anyway. V8 already supports the spec and we also have plenty of libraries to choose from.

Node, on the other hand, is lagging. Node stable 0.10.29 still looks to be using version 3.14.5.9 of V8, which looks to be about 17 months before the first sign of native ES6 promises, according to how I’m reading the V8 change log and the Node release notes.

So to get started using promises in your projects, whether you’re programming server or client side, you can:

  1. Use one of the excellent Promises/A+ conformant libraries which will give you the flexibility of lots of features if that’s what you need, or
  2. Use the native browser Promise API, of which all the ES6 methods on Promise work in Chrome (V8, so soon in Node), Firefox and Opera. Then polyfill using the likes of yepnope, or just check for the existence of the methods you require and load them on an as-needed basis (a small feature-detection sketch follows this list). The cujojs or jakearchibald shims would be good starting points.
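
The feature detection mentioned in point 2 can be as simple as the following sketch (loadScript is a hypothetical stand-in for whatever loader you use):

if (typeof Promise === 'undefined' || typeof Promise.resolve !== 'function') {
   // Native support is missing (or incomplete), so pull in a polyfill such as
   // one of the shims mentioned above.
   loadScript('/scripts/promise-polyfill.js');
}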

For my examples I’ve decided to use when.js for several reasons.

  • Currently in Node we have no native support. As stated above, this will be changing soon, so we’d be polyfilling everything.
  • Its performance is the least worst of the Promises/A+ compliant libraries at this stage, although don’t get too hung up on perf stats; in most cases they won’t matter in the context of your module. If you’re concerned, profile your running code.
  • It wraps non-Promises/A+-compliant promise look-alikes, like jQuery’s Deferred, which will forever remain broken.
  • It’s compliant with version 1.1 of the spec.

The following example continues with the coffee-making procedure concept. We’ve now taken this from raw callbacks, to the EventEmitter, to the Async library, and finally to what I think is the best option for most of our asynchronous work, not only in Node but in JavaScript anywhere: promises. This is just one way to implement the example; there are many, and probably many that are even more elegant. Go forth, explore and experiment.

The Test

var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the coffee machine of promises', function (done) {

      var numberOfSteps = 7;
      // We could use a then just as we've used the promises done method, but done is semantically the better choice. It makes a bigger noise about handling errors. Read the docs for more info.
      promisedCoffee().brew().done(
         function handleValue(valueOrErrorFromPromiseChain) {
            console.log(valueOrErrorFromPromiseChain);
            valueOrErrorFromPromiseChain.errors.should.have.length(0);
            valueOrErrorFromPromiseChain.stepResults.should.have.length(numberOfSteps);
            done();
         }
      );
   });

});

The System Under Test

'use strict';

var when = require('when');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   errors: [],
   stepResults: []
};
function CustomError(message) {
   this.message = message;
   // Returning a non-object from a constructor invoked with new has no effect; the new instance is still returned.
   return false;
}

function heatEspressoMachine(resolve, reject) {
   state.stepResults.push('Espresso machine has been turned on and is now heating.');
   function espressoMachineHeated() {
      var result;
      // result will be wrapped in a new promise and provided as the parameter in the promises then methods first argument.
      result = 'Espresso machine heating cycle is done.';
      // result could also be assigned another promise
      resolve(result);
      // Or call the reject
      //reject(new Error('Something screwed up here')); // You'll know where it originated from. You'll get full stack trace.
   }
   // Flick switch, check water.
   console.log('Espresso machine has been turned on and is now heating.');
   // Mutate state.
   setTimeout(
      // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
      espressoMachineHeated, espressoMachineHeatTime.milliseconds
   );
}

// The promise takes care of all the asynchronous stuff without a lot of thought required.
var promisedCoffee = when.promise(heatEspressoMachine).then(
   function fulfillGrindDoseTampBeans(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'We are now grinding, dosing, then tamping our dose.';
      // Or if something goes wrong:
      // throw new Error('Something screwed up here'); // You'll know where it originated from. You'll get full stack trace.
   },
   function rejectGrindDoseTampBeans(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillMountPortaFilter(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Porta filter is now being mounted.';
   },
   function rejectMountPortaFilter(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new Error(error.message);
   }
).then(
   function fulfillPositionCup(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Placing cup under portafilter.';
   },
   function rejectPositionCup(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillPreInfuse(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return '10 second preinfuse now taking place.';
   },
   function rejectPreInfuse(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillExtract(result) {
      state.stepResults.push(result);
      state.description = 'beautiful shot!';
      state.stepResults.push('Cranking leaver down and extracting pure goodness.');
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return state;
   },
   function rejectExtract(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);

   }
).catch(CustomError, function (e) {
      // Only deal with the error type that we know about.
      // All other errors will propagate to the next catch. whenjs also has a finally if you need it.
      // Todo: KimC. Do the dealing with e.
      e.newCustomErrorInformation = 'Ok, so we have now dealt with the error in our custom error handler.';
      return e;
   }
).catch(function (e) {
      // Handle other errors
      e.newUnknownErrorInformation = 'Hmm, we have an unknown error.';
      return e;
   }
);

function brew() {
   return promisedCoffee;
}

// when's promise.catch is only supposed to catch errors derived from the native Error (etc.) functions.
// Although in my tests, my catch(CustomError, func) wouldn't catch it. I'm assuming there's a bug as it kept giving me a TypeError instead.
// Looks like it came from within the library. So this was a little disappointing.
// Worth noting: the line below assigns the Error constructor itself rather than Error.prototype
// (e.g. CustomError.prototype = Object.create(Error.prototype);), so CustomError instances are not
// instanceof Error, which may well be contributing to the failure described above.
CustomError.prototype = Error;

module.exports = function promisedCoffee() {
   return {
      // Publicise brew.
      brew: brew
   };
};

Resources


Testing Asynchronous Code

All of the tests I demonstrated above are integration tests. Usually I’d unit test the functions individually, not worrying about the intrinsically asynchronous code, as most of it isn’t mine anyway; it’s courtesy of the EventEmitter, Async and other libraries, and there is often no point in testing what the library maintainer already tests.

When you’re driving your development with tests, there should be little code testing the asynchronicity. Most of your code should be able to be tested synchronously. This is a big part of the reason why we drive our development with tests: to make sure our code is easy to test. Testing asynchronous code is a pain, so don’t do it much. Test your asynchronous code, yes, but most of your business logic should be just functions that you join together asynchronously. When you’re unit testing, you should be testing units, not asynchronous code. When you’re concerned with testing your asynchronicity, that’s integration testing, of which you should have a lot less. I discuss the ratios here.

As of 1.18.0, Mocha has baked-in support for promises (a minimal sketch follows). For a fluent style of testing promises we have Chai as Promised.
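
A minimal sketch of Mocha’s promise support, reusing the promisedCoffee module from above: return the promise from the it callback and Mocha waits for it to settle, no done callback required.

it('Test the coffee machine of promises', function () {
   return promisedCoffee().brew().then(function (valueFromPromiseChain) {
      valueFromPromiseChain.errors.should.have.length(0);
   });
});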

Resources


There are plenty of other resources around working with promises in JavaScript. For myself, I found that I needed to actually work with them to solidify my understanding of them. With Chrome DevTools’ async option, we’ll soon have support for promise chaining.

Other Excellent Resources

And again all of the code samples can be found at GitHub.

Exploring JavaScript Closures

May 31, 2014

Just before we get started, we’ll be using the terms lexical scope and dynamic scope a bit. In computer science the term lexical scope is synonymous with static scope.

  • lexical or static scope is where name resolution of “part of a program” depends on its location in the source code
  • dynamic scope is where name resolution depends on the program state (the execution context or calling context) when the name is encountered (a small sketch follows this list)
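
A quick sketch of the difference: JavaScript is lexically scoped, so which x the function sees is fixed by where printX is defined in the source, not by where it’s called from.

var x = 'lexical';

function printX() {
   console.log(x);
}

function caller() {
   var x = 'dynamic'; // Would win under dynamic scope, but is ignored here.
   printX();
}

caller(); // Logs 'lexical'.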

What are Closures?

Now establishing the formal definition has been quite an interesting journey, with quite a few sources not quite getting it right. Although the ES3 spec talks about closure, there is no formal definition of what it actually is. The ES5 spec on the other hand does discuss what closure is in two distinct locations.

  1. “11.1.5 Object Initialiser”, under the part that talks about accessor properties. This is the relevant text (in relation to getters): “Let closure be the result of creating a new Function object as specified in 13.2 with an empty parameter list (that’s getter specific) and body specified by FunctionBody. Pass in the LexicalEnvironment of the running execution context as the Scope.”
  2. “13 Function Definition”. This is the relevant text: “Let closure be the result of creating a new Function object as specified in 13.2 with parameters specified by FormalParameterList (which are optional) and body specified by FunctionBody. Pass in funcEnv as the Scope.”

Now what are the differences here that stand out?

  1. We see that 1 specifies a function object with no parameters, and 2 specifies some parameters (optional). So from this we can establish that it’s irrelevant whether arguments are passed or not to create closure.
  2. 1 also mentions passing in the LexicalEnvironment, whereas 2 passes in funcEnv. funcEnv is the result of “calling NewDeclarativeEnvironment passing the running execution context‘s LexicalEnvironment as the argument”. So basically there is no difference.

Now 13.2 just specifies how functions are created, given an optional parameter list, a body, a LexicalEnvironment specified by Scope, and a Boolean flag (for strict mode; ignore this for the purposes of establishing a formal definition). The Scope mentioned above is the lexical environment of the running execution context (discussed here in depth) at creation time. The Scope is actually [[Scope]] (an internal property).

The ES6 spec draft runs along the same vein.

Lets get abstract

Every problem in computer science is just a more specific problem of a problem we’re familiar with in the natural world. So often it helps to find the abstract problem that we are already familiar with in order to help us understand the more specific problem we are dealing with. Patterns are an example of this. Before I was programming as a profession I was a carpenter. I find just about every problem I deal with in programming I’ve already dealt with in physical carpentry and at a higher level still with physical architecture.

In search of the true formal definition I also looked outside of JavaScript at the language-agnostic term, which should just be an abstraction of the JavaScript closure anyway. Yip… Wikipedia’s definition: “In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. A closure—unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside its immediate lexical scope.”

My abstract formal definition

A closure is a function containing a reference, via the function object’s internal [[Scope]] property (ES5 spec 13.2.9), to the lexical (static) environment it is defined within at creation time, not call time (ES5 spec 13.2.1). The closure is closed over its parent lexical environment and all of its properties. You can access these properties as variables, but not as properties, because you don’t have access to the internal [[Scope]] property directly in order to reference its properties. So this example fails. More correctly (ES5 spec 8.6.2): “Of the standard built-in ECMAScript objects, only Function objects implement [[Scope]].”

var outerObjectLiteral = {

   x: 10,

   foo: function () {
      console.log(x); // ReferenceError: x is not defined obviously
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

See here for an explanation on the differences between properties and variables. That’s basically it. Of course there are many ways we can use a closure and that’s often where confusion creeps in about what a closure actually is and is not. Feel free to bring your perspective on this in the comments section below.

When is a closure born?

So let’s get this closure closing over something. JavaScript addresses the funarg problem with closure.

var x = 10;

var outerObjectLiteral = {   

   foo: function () {
      // Because our internal [[Scope]] property now has a property (more specifically a free variable) x, we can access it directly.
      console.log(x); // Writes 10 to the console.
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

The closure is created on line 13. Now at line 9 we have access to the closed-over lexical environment. When we print x on line 7, we get 10, which is the value of x on the [[Scope]] that our closure was statically bound to at function object creation time (not the dynamically scoped x = 20). Now of course you can change the value of the free variable x and it’ll be reflected wherever you use the closed-over variable, because the closure was bound to the free variable x, not to the value of the free variable x.

This is what you’ll see in Chrome Dev Tools when execution is on line 10. Bear in mind though that both foo and invokeMe closures were created at line 13.

Closure

Now I’m going to attempt to explain what the structure looks like in a simplified form, with a simple hash. I don’t know how it’s actually implemented in the various ECMAScript implementations, but I do know what the specification (the single source of truth) tells us; it should look something like the following:

////////////////
// pseudocode //
////////////////
foo = closure {
   FormalParameterList: {}, // Optional
   FunctionBody: <...>,
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10
   }
}

The closure is born when the function is created (“the result of creating a new Function object” as stated above). Not when it's returned by the outer function (i.e. the upwards funarg problem) and not when it's invoked (as Angus Croll suggests here under the “The [[Scope]] property” section).

Angus quotes the ES5 spec 10.4.3.5-7. On studying this section I’m pretty sure it is meant for the context of actually creating the function object rather than invoking an existing function object. The clauses I’ve detailed above (11.1.5 Object Initialiser and 13 Function Definition), confirm this.

The ES6 spec draft “14.1.22 Runtime Semantics: Evaluation” also confirms this theory. Although it's titled Runtime Semantics, several of its points confirm it: the so-called runtime semantics are the runtime semantics of function object creation rather than function object invocation, as some of the steps specified are FunctionCreate, MakeMethod and MakeConstructor (not FunctionInvoke, InvokeMethod or InvokeConstructor). The ES6 spec draft “14.2.17 Runtime Semantics: Evaluation” and also 14.3.8 are similar.

Why do we care about Closure?

Without closures, we wouldn’t have the concept of modules which I’ve discussed in depth here.

Modules are used very heavily in JavaScript, both client and server side (think npm), and for good reason. Until ES6 there is no baked-in module system; in ES6 modules become part of the language. On the server, Node.js uses the CommonJS module format, and its entire ecosystem of packages is distributed and installed via npm. Modules on the client side most often use the Asynchronous Module Definition (AMD) implementation RequireJS to load modules, but can also use the likes of CommonJS via Browserify, which allows us to load Node.js packages in the browser.

As of writing this, the TC39 committee have looked at both the AMD and CommonJS approaches and come up with something completely different for the ES6 module draft spec. Modules provide another mechanism for not allowing secrets to leak into the global object.

Modules are not new. David Parnas wrote a paper titled “On the Criteria To Be Used in Decomposing Systems into Modules” in 1972. This explores the idea of secrets. Design and implementation decisions that should be hidden from the rest of the programme.

Here is an example of the Module pattern that includes both private and public methods. my.moduleMethod has access to private variables outside of its VariableEnvironment (the current scope) via the Environment record, which references the outer LexicalEnvironment via its internal [[Scope]] property.
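
A minimal sketch of the shape that pattern takes (the callCount and privateHelper names here are just illustrative, not taken from the linked example):

var my = (function () {
   // Private state and helpers (the secrets). These live in the outer function's
   // lexical environment and never touch the global object.
   var callCount = 0;

   function privateHelper() {
      callCount += 1;
      return callCount;
   }

   // The public API. moduleMethod is closed over callCount and privateHelper
   // via its internal [[Scope]] property.
   return {
      moduleMethod: function () {
         return 'moduleMethod has been called ' + privateHelper() + ' time(s)';
      }
   };
}());

console.log(my.moduleMethod()); // 'moduleMethod has been called 1 time(s)'
console.log(my.callCount);      // undefined. The secret stays hidden.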

Information hiding: state and implementation. In JavaScript we don't have access modifiers, but we don't need them either. We can hide our secrets with various patterns. Closure is a key concept for many of these patterns. Closure is a key building block for helping us to programme against contract rather than implementation, helping us to form consistent abstractions, and giving us the ability to engage with a concept while safely ignoring some of its details, thus hiding unnecessary complexity from consumers.

I think Steve McConnell explains this very well in his classic “Code Complete” book. Steve uses the house abstraction as his metaphor: “People use abstraction continuously. If you had to deal with individual wood fibers, varnish molecules, and steel molecules every time you used your front door, you'd hardly make it in or out of your house each day. Abstraction is a big part of how we deal with complexity in the real world. Software developers sometimes build systems at the wood-fiber, varnish-molecule, and steel-molecule level. This makes the systems overly complex and intellectually hard to manage. When programmers fail to provide larger programming abstractions, the system itself sometimes fails to make it through the front door. Good programmers create abstractions at the routine-interface level, class-interface level, and package-interface level - in other words, the doorknob level, door level, and house level - and that supports faster and safer programming.”

Encapsulation: you can not look at the details (the internal implementation, the secrets).

Partial function application and Currying: I have a set of posts on this topic. Closure is an integral building block of these constructs. Part 1, Part 2 and Part 3.

Functional JavaScript relies heavily on closure.
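
As a tiny taste of the partial application point above (add and addTen are illustrative names of my own), the returned function below is simply closed over the first argument:

function add(x) {
   // The returned function is a closure over the formal parameter x.
   return function (y) {
      return x + y;
   };
}

var addTen = add(10); // Partially apply the first argument.
console.log(addTen(5)); // 15
console.log(addTen(1)); // 11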

Are there any Costs or Gotchas of using Closures?

Of course. You didn't think you'd get all this expressive power without having to think about how you're going to use it, did you? As we've discussed, closures were created to address the funarg problem. In doing that, the closure references the lexical (static) environment of the outer scope. So even once the free variables are out of scope, the closure will still reference them if they were saved at function creation time. They can not be garbage collected until the function that is closed over the outer scope has itself fallen out of scope, i.e. its reference count is 0.

var x = 10;
var noOneLikesMe = 20;
var globalyAccessiblePrivilegedFunction;

function globalyScopedFunction(z) {

  var noOneLikesMeInner = 40;

  function privilegedFunction() {
    return x + z;
  }

  return privilegedFunction;

}

// This is where privilegedFunction is created.
globalyAccessiblePrivilegedFunction = globalyScopedFunction(30);

// This is where privilegedFunction is applied.
globalyAccessiblePrivilegedFunction();

Now only the free variables that are needed are saved at function creation time. We see that when execution arrives at the var noOneLikesMeInner = 40; line, the currently scoped closure has the x free variable saved to it, but not z, noOneLikesMe, or noOneLikesMeInner.

noOneLikesMe

When we step into privilegedFunction (the return x + z; line), we see the hidden [[Scope]] property has both the outer scope and the global scope saved to it.

TwoClosures

Say, for example, execution has passed the above code snippet. If the closed-over variables can still be referenced by calling globalyAccessiblePrivilegedFunction again, then they can not be garbage collected. This is a common pitfall of the upwards funarg problem. If you've got hot code that is creating many functions, make sure the functions that are closed over free variables are dropped out of scope as soon as you no longer have a need for them. This way garbage collection can deallocate the memory used by the free variables.
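
For example, carrying on from the snippet above, something along these lines lets the garbage collector reclaim the closed-over free variables once you're finished (this is my addition, not part of the original snippet):

// Use the privileged function while we need it...
globalyAccessiblePrivilegedFunction();

// ...then drop the only reference to it. Once privilegedFunction is unreachable,
// its closure (and the free variables it holds onto) can be garbage collected.
globalyAccessiblePrivilegedFunction = null;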

Looking at how the specification would look simplified, we can see that each Environment record inherits what it knows it's going to need from the Environment record of its lexical parent. This chaining inheritance goes all the way up the lexical hierarchy to the global object, as seen below. In this case the family tree is quite short. Remember this structure is formed at function creation time, not invocation time. The free variables (not their values) are statically baked in.

////////////////
// pseudocode //
////////////////
globalyScopedFunction = closure {
   FormalParameterList: { // Optional
      z: 30 // Values updated at invocation time.
   },
   FunctionBody: {
      var noOneLikesMeInner = 40;

      function privilegedFunction() {
         return x + z;
      }

      return privilegedFunction;
   },
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10 // Free variable saved because we know it's going to be used in privilegedFunction.
   },
   privilegedFunction: closure {
      FormalParameterList: {}, // Optional
      FunctionBody: {
         return x + z;
      },
      Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
         x: 10, // Free variable inherited from the outer Environment.
         z: 30 // Formal parameter saved from outer lexical environment.
      }
   }
}

Scope

I discuss closure here very briefly, and how it can be used to create block-scoped variables prior to block scoping with the let keyword in ES6, which is supposed to be officially approved by December 2014. I discuss scoping here in a little more depth.

Closure misunderstandings

Closures are created when a function is returned

“A closure is formed when one of those inner functions is made accessible outside of the function in which it was contained”, found here, is simply incorrect. There are also a lot of other misconceptions found at that link. I'd advise reading it with a bag of salt.

Now we’ve already addressed this one above, but here is an example that confirms that the closure is in fact created at function creation time, not when the inner function is returned. Yes, it does what it looks like it does. Fiddle with it?

(function () {

   var lexicallyScopedFunction = function () {
      console.log('We\'re in the lexicallyScopedFunction');
   };

   (function innerClosure() {
      lexicallyScopedFunction();
   }());

}());

When execution is paused on the lexicallyScopedFunction() call inside innerClosure, we get to see the closure that was created when the outer self-invoking function ran and the function expression was assigned to lexicallyScopedFunction.

lexicallyScopedFunction

Closures can create memory leaks

Yes they can, but not if you let the closure go out of scope. Discussed above.

Values of free variables are baked into the Closure

Also untrue. Now I’ve put in-line comments to explain what’s happening here. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      loopCountingFunctions[i] = (function printLoopCount() {
         // What you see here is that each time this code is run, it prints the last value of the loop counter i.
         // Clearly showing that for each new printLoopCount function created and saved to the loopCountingFunctions array,
         // the variable i is saved to the Environment record, not the value of the variable i.
         console.log(i);
      });
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 3
runLoopPrinter[1](); // 3
runLoopPrinter[2](); // 3

An aside… getLoopPrinter is a global function. Once execution is inside getLoopPrinter (paused on the first line of its body) you get to see that global functions also have closure… supporting my comments above.

global functions have closure too

Now in the above example, this is probably not what you want to happen, so how do we give each printLoopCount function its own value? Well, by creating a parameter for each iteration of the loop, each with the new value. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      (function (i) {
         // Now what happens here is each time the above loop runs this code,
         // inside this scope (the scope of this comment) i is a new formal parameter which of course
         // gets statically saved to each printLoopCount function's Environment record (or more simply to each printLoopCount closure).
         loopCountingFunctions[i] = (function printLoopCount() {
            console.log(i);
         });
      })(i)
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 0
runLoopPrinter[1](); // 1
runLoopPrinter[2](); // 2

As always, let me know your thoughts on this post, anything you think I may have the wrong handle on, or anything that otherwise stood out.


JavaScript Properties

October 2, 2012

In ECMAScript 5 we now have two distinct kinds of properties.

  1. Data properties
  2. Accessor properties

A property is a named collection of attributes:

  • value: any JavaScript value (Data properties only)
  • writable: boolean (Data properties only)
  • get: a function that returns a value (Accessor properties only)
  • set: a function that takes an argument as its value (Accessor properties only)
  • configurable: boolean, common to both Data and Accessor properties
  • enumerable: boolean, common to both Data and Accessor properties

configurable

If set to false, any attempt to delete the property or change its writable, configurable, or enumerable attributes will fail:
  • in strict mode we get a run time error;
  • outside of strict mode the behaviour is as it was with ES3: the deletion attempt is silently ignored.
Once configurable is set to false:
  • it can not be re-set to true;
  • we can still change the value attribute (while writable remains true) and the writable attribute, but writable only from true to false.
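
A quick sketch of those rules (lockedObj and fixed are just illustrative names):

var lockedObj = {};
Object.defineProperty(lockedObj, 'fixed', {
   value: 1,
   writable: true,
   configurable: false
});

delete lockedObj.fixed;       // Ignored (returns false). In strict mode this line throws a TypeError instead.
console.log(lockedObj.fixed); // 1, still there.

// Allowed: writable can still be flipped from true to false on a non-configurable property...
Object.defineProperty(lockedObj, 'fixed', { writable: false });

// ...but it can't be flipped back, and configurable can never be re-set to true.
// Object.defineProperty(lockedObj, 'fixed', { writable: true });     // Uncaught TypeError: Cannot redefine property: fixed
// Object.defineProperty(lockedObj, 'fixed', { configurable: true }); // Uncaught TypeError: Cannot redefine property: fixed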

enumerable

If set to true, the property will be enumerated when a for-in loop (or Object.keys) encounters the object.
If set to false, as far as enumeration is concerned it's as if the property doesn't exist; it's ignored.
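
For example (mixedObj, shown and hidden are illustrative names):

var mixedObj = {};
Object.defineProperty(mixedObj, 'shown', { value: 1, enumerable: true });
Object.defineProperty(mixedObj, 'hidden', { value: 2, enumerable: false });

for (var key in mixedObj) {
   console.log(key);                 // 'shown' only; 'hidden' is skipped.
}
console.log(Object.keys(mixedObj));  // ['shown']
console.log(mixedObj.hidden);        // 2. The property still exists, it just isn't enumerated.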

In ES5,

  • if a property is defined the old-fashioned way, without using Object.defineProperty, the boolean attributes of its default property descriptor will all default to true.
  • if a property is defined using Object.defineProperty and the boolean attribute values are not specified, they will all default to false.

I was wondering about this, as I had heard conflicting stories.
IMO this follows the Principle of Least Astonishment (POLA).

var obj1 = {};
var obj1PropertyDesc;
var obj2 = {};
var obj2PropertyDesc;

Object.defineProperty(obj1, 'propOnObj1', {
   value: 'value of propOnObj1' //,
   // writable: false,
   // enumerable: false,
   // configurable: false,
});

obj1PropertyDesc = Object.getOwnPropertyDescriptor(obj1, 'propOnObj1');

// obj1PropertyDesc {
//    configurable: false,
//    enumerable: false,
//    value: "value of propOnObj1",
//    writable: false
// }

obj2.propOnObj2 = 'value of propOnObj2';

obj2PropertyDesc = Object.getOwnPropertyDescriptor(obj2, 'propOnObj2');

// obj2PropertyDesc {
//    configurable: true,
//    enumerable: true,
//    value: "value of propOnObj2",
//    writable: true
// }

So in general:

Properties declared the old ES3 way are configurable (they can be deleted).
Properties declared using Object.defineProperty are, by default, not configurable (they can not be deleted).
See the edge cases below.

The delete operator is used to remove a property from an object.
It does not touch properties in the prototype chain.
If you have a prototype that has a property with the same name, it will now be used when your code references the derived object’s property that no longer exists.

var objLiteral = {
   aProperty: 'value of super property'
}

var anObject = Object.create(objLiteral); // create is an ES5 method, but easy enough to replicate for ES3 implementations

anObject.aProperty = 'value of derived property';

anObject.aProperty  // 'value of derived property'
delete anObject.aProperty;
anObject.aProperty  // 'value of super property'

Edge cases

JavaScript Patterns pg 12 states “Implied globals created without var (regardless if created inside functions) can be
deleted.”
Thanks to Angus Croll for pointing this out as untrue.

obj1 = 'kims global property';
var obj1PropertyDesc;
var obj2PropertyDesc;

obj1PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj1');

// obj1PropertyDesc {
//    configurable: true,
//    enumerable: true,
//    value: "kims global property",
//    writable: true,
// }

(function (){
   obj2 = 'kims global property declared within function scope';
}());

obj2PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj2');

// obj2PropertyDesc {
//    configurable: false,
//    enumerable: true,
//    value: "kims global property declared within function scope",
//    writable: true
// }

delete obj2;
// Nope, obj2 was not deleted.
// turn strict mode on and we get the following error:
// Uncaught SyntaxError: Delete of an unqualified identifier in strict mode.

When you declare a global,
you are actually defining a property of the global object.
If you use the var keyword on that global, you are still creating a property.
That property is non-configurable (it can not be deleted with the delete operator).
Only object properties with the configurable attribute set to true can be deleted.
Nothing else can be deleted.
Variables which are properties whose property descriptor we can't even access can never be deleted.

var obj1 = {};
var obj1PropertyDesc;

obj1PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj1');

// obj1PropertyDesc {
//    configurable: false
//    enumerable: true
//    value: Object
//    writable: true
// }

There are a couple of notable internal properties that are found on all ES3 and ES5 objects.
[[Get]] and [[Put]].
The Ecma specs enclose internal properties in double square brackets as a convention only.
In ES3 [[Get]] and [[Put]] are used to return and set the internal [[Value]] property.
According to the Ecma5 spec, the internal [[Get]] and [[Put]] properties appear to do the same thing, although it’s not stated explicitly.
This may just be an oversight of the spec.

Accessor Properties

All the examples so far have been showing data properties.
By default properties are data properties unless they define a getter and/or setter,
in which case they are defined as accessor properties.
There are two attributes that are distinct to accessor properties.
get and set.
Both of which allow a method (and only a method) to be assigned to them to get or set respectively.

JavaScript getter error

  • Internally the getter calls the function's internal [[Call]] method with no arguments.
  • Internally the setter calls the function's internal [[Call]] method with an arguments list containing the assigned value as its sole argument.
    The setter may, but is not required to, have an effect on the value returned by subsequent calls to the property's internal [[Get]] method.

So these attributes may or may not leverage the internal [[Get]] and [[Put]] properties that are found in ES3 and ES5 on all objects.
You can in fact define only a getter (readonly), or only a setter (write-only) accessor if you so choose.

Defining accessor properties literally:

var testObj = {
   // An ordinary data property
   dataProp: 'value',

   // An accessor property defined as a pair of functions
   // get accessorProp() { return this.dataProp; },
   set accessorProp(value) { this.dataProp = value; }
};

testObj.accessorProp = 'an updated string';
alert(testObj.accessorProp); // undefined, because only the setter is defined (the getter above is commented out)
alert(testObj.dataProp);  // an updated string

Can we create a data (default) property and then change it to be an accessor property?

var testObj = {}; // Start with no properties at all
// Add a nonenumerable data property x with value 1.
Object.defineProperty(testObj, 'x', { value : 1,
writable: true,
enumerable: false,
configurable: true});

// Check that the property is there but is non-enumerable
alert(testObj.x); // 1

// check that we can't enumerate the testObj
alert(Object.keys(testObj)); // returns an empty array of strings

// Now modify the property x so that it is read-only
Object.defineProperty(testObj, 'x', { writable: false });

// Try to change the value of the property
testObj.x = 2;
// Fails silently or throws TypeError in strict mode
alert(testObj.x); // 1

// The property is still configurable, so we can change its value like this:
Object.defineProperty(testObj, 'x', { value: 2 });

alert(testObj.x); // 2

// what happens if we change configurable to false?
Object.defineProperty(testObj, 'x', { configurable: false });
Object.defineProperty(testObj, 'x', { value: 2.5 }); // Uncaught TypeError: Cannot redefine property: x

// Now change x from a data property to an accessor property
// providing we haven't set configurable to false as above.
Object.defineProperty(testObj, 'x', {
   get: function() {
      return 0;
   }
});

alert(testObj.x); // 0

Yip.

Ok, so what does a property descriptor of an Accessor Property look like?

var objWithMultipleProperties;
var objWithMultiplePropertiesDescriptor;

objWithMultipleProperties = Object.defineProperties({}, {
   x: { value: 1, writable: true, enumerable:true, configurable:true },
   y: { value: 1, writable: true, enumerable:true, configurable:true },
   r: {
      get: function() {
         return Math.sqrt(this.x*this.x + this.y*this.y)
      },
      enumerable:true,
      configurable:true
   }
});

objWithMultiplePropertiesDescriptor = Object.getOwnPropertyDescriptor(objWithMultipleProperties, 'r');
// objWithMultiplePropertiesDescriptor {
//    configurable: true,
//    enumerable: true,
//    get: function () {
//       // other members in here
//    },
//    set: undefined,
//   // ...
// }

The Global Object

When the JavaScript interpreter starts (or whenever a web browser loads a new page),
it creates a new global object and gives it an initial set of properties that define:
• global properties like undefined, Infinity, and NaN
• global functions like isNaN(), parseInt(), and eval()
• constructor functions like Date(), RegExp(), String(), Object(), and Array()
• global objects like Math and JSON

delete undefined;  // not deleted
delete Infinity;   // not deleted
delete NaN;        // not deleted
delete isNaN;      // deleted
delete parseInt;   // deleted
delete eval;       // deleted
delete Date;       // deleted
delete RegExp;     // deleted
delete String;     // deleted
delete Object;     // deleted
delete Array;      // deleted
delete Math;       // deleted
delete JSON;       // deleted
undefined = 'kims undefined'; // nonassignable
Infinity = 'kims infinity';   // nonassignable
NaN = 'kims nan';             // nonassignable
  1. Why are undefined, Infinity and NaN not removed?
  2. Are they non-configurable?
  3. Why are they non-assignable?
  4. How do we test this?
  5. Are they constants?
  6. Are Infinity, NaN and undefined reserved words?

I’ll answer these questions shortly.

ES3 properties

According to the standard
8.6 “Each property consists of a name, a value and a set of attributes”.
8.6.1 A property can have zero or more attributes from the following set:

These attributes along with others (see ES3 spec) are reserved for internal use.

ReadOnly: The property is a read-only property. Attempts by ECMAScript code to write to the property will be ignored. (Note, however, that in some cases the value of a property with the ReadOnly attribute may change over time because of actions taken by the host environment; therefore “ReadOnly” does not mean “constant and unchanging”!)
DontEnum: The property is not to be enumerated by a for-in enumeration.
DontDelete: Attempts to delete the property will be ignored.
Internal: Internal properties have no name and are not directly accessible via the property accessor operators. This means the property is not accessible to the ECMAScript program. How these properties are accessed is implementation specific. How and when some of these properties are used is specified by the language specification.

These property attribute values can not be changed.
An interesting internal property is [[Prototype]].
There are a number of ways to access the internal [[Prototype]] property indirectly.
I've detailed them in my post on prototypes here.

More on ES5 properties

writable, enumerable and configurable replace the ES3 property attributes: ReadOnly, DontEnum, DontDelete.
The property attributes and their values define the property descriptor object (including Data or Accessor properties and those that apply to both (enumerable and configurable)).

The property attributes can be manually managed with:
the Object.defineProperty and Object.defineProperties methods (to set them), and
Object.getOwnPropertyDescriptor (to read them).

var myObj = {};

Object.defineProperty(myObj, 'propOnMyObj', {
   value: 'property descriptor',
   writable: true,    // ReadOnly = false in ES3
   enumerable: false, // DontEnum = true in ES3
   configurable: true // DontDelete = false in ES3
});

console.log(myObj.propOnMyObj); // 'property descriptor'

// getOwnPropertyDescriptor is the only way to get the properties attributes.
// They don't exist as visible properties on the property (other than for setting them as above),
// they're stored internally in the ECMAScript engine.
var myPropertyDescriptor = Object.getOwnPropertyDescriptor(myObj, 'propOnMyObj');

console.log(myPropertyDescriptor.enumerable); // false
console.log(myPropertyDescriptor.writable);   // true
// etc.

getOwnPropertyDescriptor

There’s lots of new methods defined in ES5.

Now, back to the six questions we had above.

  1. Why are undefined, Infinity and NaN not removed?
    Because their property descriptor's configurable attribute is set to false.
  2. Are they non-configurable?
    Yes, as above.
  3. Why are they non-assignable?
    Because their property descriptor's writable attribute is set to false.
  4. How do we test this?
    Check the property descriptors of NaN, Number, the global Infinity property and undefined (see the snippet after this list).
  5. Are they constants?
    Effectively, yes.
  6. Are Infinity, NaN and undefined reserved words?
    No. Avoid using their names, to remove ambiguity.
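
A minimal sketch of that test, run at global scope in a browser console (so this is the global object); the descriptors below are what I'd expect an ES5 engine to report:

console.log(Object.getOwnPropertyDescriptor(this, 'undefined'));
// { value: undefined, writable: false, enumerable: false, configurable: false }

console.log(Object.getOwnPropertyDescriptor(this, 'Infinity'));
// { value: Infinity, writable: false, enumerable: false, configurable: false }

console.log(Object.getOwnPropertyDescriptor(this, 'NaN'));
// { value: NaN, writable: false, enumerable: false, configurable: false }

console.log(Object.getOwnPropertyDescriptor(Number, 'POSITIVE_INFINITY'));
// { value: Infinity, writable: false, enumerable: false, configurable: false }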

ES3
  • Infinity: read/write (the value can be changed). Holds positive infinity.
  • Number: a property on the global object; it has read-only properties for infinity (POSITIVE_INFINITY, NEGATIVE_INFINITY) and NaN.
  • NaN: read/write (the value can be changed).
  • undefined: read/write (the value can be changed).

ES5
  • Infinity (well… Number.POSITIVE_INFINITY) is a property on the global Number property, with the value Infinity. We now also have Number.NEGATIVE_INFINITY, with the value -Infinity.
  • Infinity is also declared directly on the global object.
  • NaN is a property on the global object with the value NaN.
  • undefined is a property on the global object with the value (you guessed it) undefined.

These are all constants now.

Important differences between Properties and Variables

Variables are properties, but not vice versa.

The VariableObject in ES3 is called the VariableEnvironment in ES5.
Can be seen in the specs.
Not sure why they changed what they called it.

Each execution context (be it global or any function) has an associated VariableObject.
Variables (and functions) created within a given context are bound as properties of that context’s VariableObject.
Even function parameters are added as properties of the VariableObject.
Discussed in depth in:
ECMAScript3 spec under “10 Execution Contexts”
ECMAScript5 spec under “10.3 Execution Contexts” onwards
This is why we can access global variables as properties of the global object…
Because that’s what they are.

  • The global object is created before control enters any execution context.
  • The global object is the same as the global contexts VariableObject.
  • In the HTML DOM, the window property of the global object is the global object.
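
A quick demonstration, run at global scope in a browser (the variable names are mine):

var declaredGlobal = 'declared with var';
impliedGlobal = 'assigned without var';

// Both end up as properties of the global object (window in the browser).
console.log(window.declaredGlobal); // 'declared with var'
console.log(window.impliedGlobal);  // 'assigned without var'
console.log(this === window);       // true at global scope in the browser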

Now variables of functions are similar, but we can’t access them as properties.
Why?…
ECMAScript has an Activation Object.
When control enters the execution context of a function, an activation object is created and associated with the execution context.
The activation object is initialised with:

  1. The this value
  2. an arguments property (referred to as a binding in ES5 spec) that has the DontDelete attribute (configurable set to false in ES5).

The activation object is then used as the VariableObject.
We can access members of the activation object but not the activation object itself,
which is why we can’t access the members as properties.
Further details in:
ECMAScript3 spec under “10.2 Entering An Execution Context”
ECMAScript5 spec under “10.4 Establishing an Execution Context”
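
To see that difference in code (demo and secret are names I've made up for this sketch):

function demo() {
   var secret = 'bound on the activation object (the VariableObject for this call)';

   console.log(secret);      // Accessible as a variable within the function.
   console.log(this.secret); // undefined (non-strict call, so this is the global object).
                             // There is no object we can get hold of that exposes
                             // 'secret' as a property, because the activation object
                             // itself is not accessible.
}
demo();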

Feature Detection (Yes, Including JavaScript)

I know this is not property specific, but it was something I thought noteworthy.

There’s a library that looks to have potential for JavaScript feature detection.
“has.js”
This should be useful for detecting what your users' browsers are capable of, EcmaScript-wise.
The project lead is Peter Higgins (Dojo Toolkit project lead).
Has a good sized group of committers.
May have potential to be a better Modernizr.
The source is here.
Explanation of has.js features here.

Additional References:

Succinct explanation of Variables vs Properties in JavaScript

EcmaScript5 Objects and Properties

Slideshow by Doug Crockford on ES5’s new parts

Dmitry Soshnikov’s elaborations on the Ecma standards:

http://dmitrysoshnikov.com/ecmascript/es5-chapter-1-properties-and-property-descriptors/

http://dmitrysoshnikov.com/ecmascript/chapter-7-2-oop-ecmascript-implementation/

http://dmitrysoshnikov.com/ecmascript/chapter-2-variable-object/

Scoping & Hoisting in JavaScript

November 14, 2011

Scoping

JavaScript scoping is different to classical languages, and can take some getting used to for programmers coming from languages such as C, C++, C# or Java.
Classical languages like those just mentioned have block scope.
JavaScript has function scope.

In the following example “5” will be alerted.

var foo = 1;
function bar() {
   if (!foo) {
      // The var declaration below is hoisted to the top of bar, so at the if test
      // the local foo is undefined, !foo is true, and we enter this block.
      var foo = 5;
   }
   alert(foo); // 5
}
bar();


In the following example “1” will be alerted.

var a = 1;
function b() {
   // The function declaration below is hoisted, so 'a' here refers to the local
   // function a, not the global variable. The assignment overwrites the local a only.
   a = 10;
   return;
   function a() {}
}
b();
alert(a); // 1


In the following example Firebug will show 1, 2, 2.

var x = 1;
console.log(x); // 1
if (true) {
   var x = 2;
   console.log(x); // 2
}
console.log(x); // 2


In JavaScript, blocks such as if statements, do not create new scope. Only functions create new scope.

There is a workaround though 😉
JavaScript has Closure.
If you need to create a temporary scope within a function, do the following.

function foo() {
   var x = 1;
   if (x) {
      (function () {
         var x = 2;
         // some other code
      }());
   }
// x is still 1.
}

The (function () { line begins the closure (and a new scope); the }()); line invokes it immediately with ().

I discuss closure in depth in a later post.

Hoisting

Terminology

As far as I know…
A “function declaration” and a “function statement” are the same thing.
A “function expression” and a “variable declaration with function assignment” are the same thing.


A function statement looks like the following:

function foo( ) {}


A function expression looks like the following:

var foo = function foo( ) {};


A statement must not start with the word “function” if you want it treated as a function expression; otherwise it is parsed as a function declaration.

//anonymous function expression
var a = function () {
   return 3;
};

//named function expression
var a = function bar() {
   return 3;
};

//self invoking named function expression.
(function sayHello() {
   alert('hello!');
})();

//self invoking anonymous function expression.
(function ( ) {
   var hidden_variable;
   // This function can have some impact on
   // the environment, but introduces no new
   // global variables.
}() );


In JavaScript, a name enters a scope in one of four basic ways:

  1. Language-defined: All scopes are, by default, given the names this and arguments.
  2. Formal parameters: Functions can have named formal parameters, which are scoped to the body of that function.
  3. Function declarations: These are of the form function foo() {}.
  4. Variable declarations: These take the form var foo;.

Function declarations and variable declarations are always hoisted
invisibly to the top of their containing scope by the JavaScript interpreter.
Function parameters and language-defined names are, obviously, already there. This means that code like this:

function foo() {
   bar();
   var x = 1;
}

Is actually interpreted like this:

function foo() {
   var x;
   bar();
   x = 1;
}


It turns out that it doesn’t matter whether the line that contains the declaration would ever be executed.
The following two functions are equivalent:

function foo() {
   if (false) {
      var x = 1;
   }
   return;
   var y = 1;
}
function foo() {
   var x, y;
   if (false) {
      x = 1;
   }
   return;
   y = 1;
}

The assignment portion of the declaration is not hoisted.
Only the identifier is hoisted.
This is not the case with function declarations, where the entire function body will be hoisted as well,
but remember that there are two normal ways to declare functions. Consider the following JavaScript:

function test() {
   foo(); // TypeError 'foo is not a function'
   bar(); // 'this will run!'
   var foo = function () { // function expression assigned to local variable 'foo'
      alert('this won\'t run!');
   }
   function bar() { // function declaration, given the name 'bar'
      alert('this will run!');
   }
}
test();

In this case, only the function declaration has its body hoisted to the top. The name ‘foo’ is hoisted, but the body is left behind, to be assigned during execution.

Including named function expressions added to the local object

//'use strict';

var container;

function Container() {

   var that = this;
   var descender = 3;
   var targetMin = 0;
   var ascender = 0;
   var targetMax = 3;    

   this.inc = function () {

      if (ascender < targetMax) {
         ascender += 1;
         console.log('ascender incremented: now equals ' + ascender);
         return that;
      } else {
         that.inc = function () {
            console.log('inc now modified to return ' + targetMax);
         };
         that.inc();
         return that;
      }
   };

   // alert(dec); // Would throw 'Uncaught ReferenceError: dec is not defined'; there is no dec variable in scope and no global dec property.
   alert(this.dec); // Prints 'undefined'. The dec property hasn't been assigned to this yet, and reading a missing property returns undefined rather than throwing.

   this.dec = function () {

      if (descender > targetMin) {
         descender -= 1;
         console.log('descender decremented: now equals ' + descender);
         return that;
      } else {
         that.dec = function () {
            console.log('dec now modified to return ' + targetMin);
         };
         that.dec();
         return that;
      }
   };
}

container = new Container();
container.inc().inc().inc().inc();
console.log(container.inc);
container.dec().dec().dec().dec();
console.log(container.dec);

Out of interest, the output looks like the following:

modify routine on the fly

Name Resolution Order

The most important special case to keep in mind is name resolution order. Remember that there are four ways for names to enter a given scope. The order I listed them above is the order they are resolved in. In general, if a name has already been defined, it is never overridden by another property of the same name. This means that a function declaration takes priority over a variable declaration. This does not mean that an assignment to that name will not work, just that the declaration portion will be ignored. There are a few exceptions:

  • The built-in name arguments behaves oddly. It seems to be declared following the formal parameters, but before function declarations. This means that a formal parameter with the name arguments will take precedence over the built-in, even if it is undefined. This is a bad feature. Don’t use the name arguments as a formal parameter.
  • Trying to use the name this as an identifier anywhere will cause a Syntax Error. This is a good feature.
  • If multiple formal parameters have the same name, the one occurring latest in the list will take precedence, even if it is undefined.
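
A small sketch of the function-declaration-beats-var-declaration rule (whichOne and foo are illustrative names of my own):

function whichOne() {
   console.log(typeof foo); // 'function'. Both declarations are hoisted, but the
                            // function declaration wins the name; the var declaration is ignored.
   var foo = 'now a string';
   function foo() {}
   console.log(typeof foo); // 'string'. The assignment portion still runs at this point.
}
whichOne();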

Now that you understand scoping and hoisting, what does that mean for coding in JavaScript?
The most important thing is to always declare your variables with var statements.
Declare your variables at the top of the scope (as already mentioned JavaScript only has function scope). See the Variable Declarations section.
If you force yourself to do this, you will never have hoisting-related confusion.
However, doing this can make it hard to keep track of which variables have actually been declared in the current scope.
I generally like to follow these coding standards with JavaScript.

function foo(a, b, c) {
   var x = 1;
   var bar;
   var baz = 'something';
   // other non hoistable code here
}