Posts Tagged ‘ecmascript’

Exploring JavaScript Prototypes

June 28, 2014

Not to be confused with the GoF Prototype pattern, which defines a lot more than the simple JavaScript prototype, although the abstract concept of the prototype is the same.

My intention with this post is to arm our developers with enough information around JavaScript prototypes to know when they are the right tool for the job, as opposed to other constructs, when considering how to create polymorphic JavaScript that’s performant and easy to maintain. Performant code and easy-to-maintain code are often in conflict with each other: if you want code that’s fast, it’s often harder to read, and if you want code that’s really easy to read, it may not be as fast as it could or should be. So we make trade-offs.

Make your code as readable as possible in as many places as possible. The more eyes that are going to be on it, the more readable it generally needs to be. Where performance really matters, we may have to carefully sacrifice some precious readability to achieve the performance required. This really needs measuring though, because often the code we think we’re optimising either doesn’t matter or just isn’t fast. So we should always favour readability first, then profile the running application in an environment as close to production as possible. This removes the guesswork, which we usually get wrong anyway. I’m currently working on a Node.js performance blog post in which I’ll attempt to address many things to do with performance. What I’m finding a lot of the time is that techniques I’ve been told are essential for fast code are all too often incorrect. We must measure.

Some background

Before we do the deep dive, let’s step back for a bit. Why do prototypes matter in JavaScript? What do prototypes do for us? Where do prototypes fit into the design philosophy of JavaScript?

What do JavaScript Prototypes do for us?

Removal of Code Duplication (DRY)

Prototypes are excellent for reducing the unnecessary duplication of members that would otherwise each need garbage collecting.

Performance

Prototypes also allow us to maximise economy of memory, reducing Garbage Collection (GC) activity and thereby increasing performance. There are other ways to get this performance though. Prototypes, which obtain re-use from the parent object, are not always the best way to get the performance benefits we crave. You can see here, under the “Cached Functions in the Module Pattern” section, that closure (although not mentioned by name), which is what modules leverage, also gives us the benefit of re-use, as the free variable in the outer scope is baked into the closure. Just check the jsperf for proof.
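
A minimal sketch of that idea follows (my own illustration, not the jsperf from the linked article): a function cached in a module’s closure is created once and shared by everything the module hands out, much like a function on a prototype is shared by every instance. The names are made up for this post.

// 1. Shared via the prototype: one greet function for every instance.
var ProtoGreeter = function (name) {
   this.name = name;
};
ProtoGreeter.prototype.greet = function () {
   return 'Hello ' + this.name;
};

// 2. Shared via closure in a module: the single cached greet function is a
//    free variable baked into the closure of the factory below.
var greeterModule = (function () {
   var greet = function () {
      return 'Hello ' + this.name;
   };
   return {
      create: function (name) {
         return { name: name, greet: greet }; // Every object re-uses the one greet.
      }
   };
}());

console.log(new ProtoGreeter('prototype').greet());   // Hello prototype
console.log(greeterModule.create('closure').greet()); // Hello closure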

The Design Philosophy of JavaScript and Prototypes

Prototypal inheritance was implemented in JavaScript as a key technique to support the object oriented principle of polymorphism. Prototypal inheritance provides the flexibility of being able to choose what the more specific object is going to inherit, rather than in the classical paradigm where you’re forced to inherit all the base class’s baggage whether you want it or not.

Four obvious ways to achieve polymorphism:

  1. Composition (creating an object that is composed of other objects behind a contract) (has-a relationship). Learn the pros and cons. Use when it makes sense.
  2. Prototypal inheritance (is-a relationship). Learn the pros and cons. Use when it makes sense.
  3. Monkey patching, courtesy of call, apply and bind.
  4. Classical inheritance (is-a relationship). Why would you? Please don’t try this at home in production 😉

Of course there are other ways, and some languages have their own techniques to achieve polymorphism, like templates in C++, generics in C#, first-class polymorphism in Haskell, multimethods in Clojure, etc.

Diving into the Implementation Details

Before we dive into Prototypes…

What does Composition look like?

There are many great examples of how composing our objects from other object interfaces, whether they’re owned by the composing object (composition) or aggregated from independent objects (aggregation), provides us with the building blocks to create complex objects that look and behave the way we want them to. This gives us plenty of flexibility to swap implementations at will, overcoming the tight coupling of classical inheritance.

Many of the Gang of Four (GoF) design patterns we know and love leverage composition and/or aggregation to help create polymorphic objects. There is a difference between aggregation and composition, but both concepts are often used loosely to just mean creating objects that contain other objects. Composition implies ownership, aggregation doesn’t have to. With composition, when the owning object is destroyed, so are the objects that are contained within the owner. This is not necessarily the case for aggregation.

An example: each coffee shop is composed of its own unique culture. Each coffee shop fosters a different type of culture, and that unique culture is an aggregation of its people and their attributes. Now the people that aggregate to form a specific coffee shop’s culture can also be part of other cultures that are completely separate from the coffee shop’s culture; they could even leave the current culture without destroying it. But the culture of one coffee shop cannot be the same culture as another coffee shop’s. Every coffee shop’s culture is unique, even if only slightly.

[Image: programmer and show pony]

Below we have a coffeeShop that composes a culture. We use the Strategy pattern within the culture to aggregate the customers. The Visit constructor provides an interface that encapsulates the Concrete Strategy: the description is passed as an argument to Visit and closed over by the describe method.

// Context component of Strategy pattern.
var Programmer = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Context component of Strategy pattern.
var ShowPony = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Add more persons to make a unique culture.

var customer = {
   setCasualVisitStrategy: function (casualVisit) {
      this.casualVisit = casualVisit;
   },
   setBusinessVisitStrategy: function (businessVisit) {
      this.businessVisit = businessVisit;
   },
   doCasualVisit: function () {
      console.log(this.casualVisit.describe());
   },
   doBusinessVisit: function () {
      console.log(this.businessVisit.describe());
   }
};

// Strategy component of Strategy pattern.
var Visit = function (description) {
   // description is closed over, so it's private. Check my last post on closures for more detail
   this.describe = function () {
      return description;
   };
};

var coffeeShop;

Programmer.prototype = customer;
ShowPony.prototype = customer;

coffeeShop = (function () {
   var culture = {};
   var flavourOfCulture = '';
   // Composes culture. The specific type of culture exists to this coffee shop alone.
   var whatWeWantExposed = {
      culture: {
         looksLike: function () {
            console.log(flavourOfCulture);

         }
      }
   };

   // Other properties ...
   (function createCulture() {
      var programmer = new Programmer();
      var showPony = new ShowPony();
      var i = 0;
      var propertyName;

      programmer.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.')
      );
      programmer.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.')
      );
      showPony.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony cycles to coffee shop in lycra pretending he\'s just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.')
      );
      showPony.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony meets business friends in suits. Pretends to work on his macbook pro. Drinks latte.')
      );

      culture.members = [programmer, showPony, /*lots more*/];

      for (i = 0; i < culture.members.length; i++) {
         for (propertyName in culture.members[i]) {
            if (culture.members[i].hasOwnProperty(propertyName)) {
               flavourOfCulture += culture.members[i][propertyName].describe() + '\n';
            }
         }
      }

   }());
   return whatWeWantExposed;
}());

coffeeShop.culture.looksLike();
// Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.
// Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.
// Show pony cycles to coffee shop in lycra pretending he's just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.
// Show pony meets business friends in suits. Pretends to work on his macbook pro. Drinks latte.

Now for Prototype

EcmaScript 5

In ES5 we’re a bit spoilt as we have a selection of methods on Object that help with prototypal inheritance.

Object.create takes an object argument and an optional properties object (a collection of EcmaScript 5 property descriptors, like the second parameter of Object.defineProperties) and returns a new object with the first argument as its prototype and the properties described in the property descriptors (if present) added to the returned object.

// The object we use as the prototype for hobbit.
var person = {
   personType: 'Unknown',
   backingOccupation: 'Unknown occupation',
   age: 'Unknown'
};

var hobbit = Object.create(person);

Object.defineProperties(person, {
   'typeOfPerson': {
      enumerable: true,
      value: function () {
         if(arguments.length === 0)
            return this.personType;
         else if(arguments.length === 1 && typeof arguments[0] === 'string')
            this.personType = arguments[0];
         else
            throw 'Number of arguments not supported. Pass 0 arguments to get. Pass 1 string argument to set.';
      }
   },
   'greeting': {
      enumerable: true,
      value: function () {
         console.log('Hi, I\'m a ' + this.typeOfPerson() + ' type of person.');
      }
   },
   'occupation': {
      enumerable: true,
      get: function () {return this.backingOccupation;},
      // Would need to add some parameter checking on the setter.
      set: function (value) {this.backingOccupation = value;}
   }
});

// Add another property to hobbit.
hobbit.fatAndHairyFeet = 'Yes indeed!';
console.log(hobbit.fatAndHairyFeet); // 'Yes indeed!'
// prototype is unaffected
console.log(person.fatAndHairyFeet); // undefined

console.log(hobbit.typeOfPerson()); // 'Unknown'
hobbit.typeOfPerson('short and hairy');
console.log(hobbit.typeOfPerson()); // 'short and hairy'
console.log(person.typeOfPerson()); // 'Unknown'

hobbit.greeting(); // 'Hi, I'm a short and hairy type of person.'

person.greeting(); // 'Hi, I'm a Unknown type of person.'

console.log(hobbit.age); // 'Unknown'
hobbit.age = 'young';
console.log(hobbit.age); // 'young'
console.log(person.age); // 'Unknown'

console.log(hobbit.occupation); // 'Unknown occupation'
hobbit.occupation = 'mushroom hunter';
console.log(hobbit.occupation); // 'mushroom hunter'
console.log(person.occupation); // 'Unknown occupation'

Object.getPrototypeOf

console.log(Object.getPrototypeOf(hobbit));
// Returns the following:
// { personType: 'Unknown',
//   backingOccupation: 'Unknown occupation',
//   age: 'Unknown',
//   typeOfPerson: [Function],
//   greeting: [Function],
//   occupation: [Getter/Setter] }

 

EcmaScript 3

One of the benefits of programming in ES3 is that we have to do more work ourselves, so we learn how some of the lower-level language constructs actually work rather than just playing with syntactic sugar. Syntactic sugar is generally great for productivity, but I still think there is a danger of running into problems when you don’t really understand what’s happening under the covers.

So let’s check out what really goes on with…

Prototypal Inheritance

What is a Prototype?

All objects have a prototype, but not all objects reveal their prototype directly by a property called prototype. All prototypes are objects.

So, if all objects have a prototype and all prototypes are objects, we have an inheritance chain, right? That’s right. See the debug image below.

All properties that you may want to add to an object’s prototype are shared through inheritance by all objects sharing that prototype.

So, if all objects have a prototype, where is it stored? All objects in JavaScript have an internal property called [[Prototype]]. You won’t see this internal property. All prototypes are stored in this internal property. How the prototype is accessed depends on whether its object is a plain object (an object literal or an object returned from a constructor) or a function. I discuss how this works below. When you dereference an object in order to find a property, the engine will first look on the current object, then the prototype of the current object, then the prototype of that prototype, and so on up the prototype chain. It’s a good idea to keep your inheritance hierarchies as shallow as possible for performance reasons.
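
To make the lookup concrete, here’s a small illustrative sketch (the names are made up for this post) of a three-level chain built with constructor functions. The engine only walks up the chain when it can’t find the property on the object itself.

var GrandParent = function () {};
GrandParent.prototype.surname = 'Baggins';

var Parent = function () {};
Parent.prototype = new GrandParent(); // Parent instances inherit from a GrandParent instance.
Parent.prototype.occupation = 'farmer';

var Child = function () {};
Child.prototype = new Parent();       // Child instances inherit from a Parent instance.

var child = new Child();
child.age = 9;                        // Own property, stored on the instance itself.

console.log(child.age);        // 9         found on child, no chain walk needed.
console.log(child.occupation); // 'farmer'  found one step up the chain.
console.log(child.surname);    // 'Baggins' found two steps up the chain.
console.log(child.hasOwnProperty('surname')); // false, it's inherited.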

Prototypes in Functions

Every function object is created with a prototype property, whether it’s used as a constructor or not. The prototype property’s value is an object with a constructor property whose value is the function itself. See the example below to help clear it up. The ES3 and ES5 specs (section 13.2) say pretty much the same thing.

var MyConstructor = function () {};
console.log(MyConstructor.prototype.constructor === MyConstructor); // true

And to help with visualising, see the example and debug output below. myObj and myObjLiteral are used in the two code examples that follow the debug image.

var MyConstructor = function () {};
var myObj = new MyConstructor();
var myObjLiteral = {};

Accessing JavaScript Prototypes

 

Up above in the composition example, where we assign customer to Programmer.prototype and ShowPony.prototype, you can see how we access the prototype of the constructor. We can also access the prototype of the object returned from the constructor like this:

var MyConstructor = function () {};
var myObj = new MyConstructor();
console.log(myObj.constructor.prototype === MyConstructor.prototype); // true

We can also do similar with an object literal. See below.

Prototypes in Objects that are Not Functions

Objects that are not functions are not created with a prototype property (all objects do have the hidden internal [[Prototype]] property though). Now, sometimes you’ll see Object.prototype talked about. Even MDN makes the matter a little confusing IMHO. In this case, Object is the Object constructor function and, as discussed above, all functions have the prototype property.

When we create object literals, the object we get is the same as if we had run the expression new Object(); (see ES3 and ES5 11.1.5).
So although we can access the prototype property of functions (which may or may not be constructors), there is no such exposed prototype property directly on objects returned by constructors or on object literals.
There is, however, conveniently a constructor property accessible on all objects returned by constructors and on object literals (you can think of their construction procedures as producing the same result). This looks similar to the above debug image:

var myObjLiteral = {};
            // ES3 ->                              // ES5 ->
console.log(myObjLiteral.constructor.prototype === Object.getPrototypeOf(myObjLiteral)); // true

I’ve purposely avoided discussing the likes of __proto__ as it’s not defined in EcmaScript and there’s no valid reason to use something that’s not standard.

Polyfilling to ES5

Now to get a couple of terms used in web development well defined before we start talking about them:

  • A shim is a library that brings a new API to an environment that doesn’t support it, implemented using only what that older environment already supports.
  • A polyfill is some code in the form of a function, module, plugin, etc. that provides the functionality of a later environment (ES5, for example) if it doesn’t exist in an older environment (ES3, for example). The polyfill often acts as a fallback: the programmer writes code targeting the newer environment as though the older environment doesn’t exist, but when the code is pulled into the older environment the polyfill kicks into action because the new language feature isn’t implemented natively.

If you’re supporting older browsers that don’t have full support for ES5, you can still use the ES5 additions as long as you provide ES5 polyfills. es5-shim is a good choice for this. Check out the html5please ECMAScript 5 section for a little more detail. Also check out Kangax’s ECMAScript 5 compatibility table to see which browsers currently support which ES5 language features. A good approach, and one I like to take, is to use a custom build of a library such as Lo-Dash to provide a layer of abstraction so I don’t need to care whether it’ll be running in an ES5 or ES3 environment. Then for anything the abstraction library doesn’t provide, I’ll fall back on a customised polyfill library such as es5-shim. I prefer Lo-Dash over Underscore too, as I think Lo-Dash is starting to leave Underscore behind in terms of performance and features. I also like to use the likes of yepnope.js to conditionally load my polyfills based on whether they’re actually needed in the user’s browser, as there’s no point in loading them if we already have browser support.
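
As a rough sketch of the conditional loading idea (a simplified stand-in for what yepnope.js automates, and the script path is hypothetical): feature-detect first, and only fetch the polyfill when the browser actually needs it.

(function () {
   // Cheap feature detection for a couple of the ES5 additions we rely on.
   var needsEs5Shim = typeof Object.create !== 'function' ||
                      typeof Array.prototype.forEach !== 'function';
   if (needsEs5Shim) {
      var script = document.createElement('script');
      script.src = '/scripts/es5-shim.min.js'; // Hypothetical path to your copy of es5-shim.
      document.getElementsByTagName('head')[0].appendChild(script);
   }
}());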

Polyfilling Object.create as discussed above, to ES5

You could use something like the polyfill shown below the list, which doesn’t accommodate an object of property descriptors, or go with one of the next two choices, which is what I do:

  1. Use an abstraction like the lodash create method, which takes an optional second argument object of properties and treats them the same way.
  2. Use a polyfill like this one.
if (typeof Object.create !== 'function') {
   (function () {
      var F = function () {};
      Object.create = function (proto) {
         if (arguments.length > 1) {
            throw Error('Second argument not supported');
         }
         if (proto === null) {
            throw Error('Cannot set a null [[Prototype]]');
         }
         if (typeof proto !== 'object' && typeof proto !== 'function') { // Functions are objects too.
            throw TypeError('Argument must be an object');
         }
         F.prototype = proto;
         return new F();
      };
   })();
}

Polyfilling Object.getPrototypeOf as discussed above, to ES5

  1. Use an abstraction like the lodash isPlainObject method (source here), or…
  2. Use a polyfill like this one. Just keep in mind the gotcha (a sketch of the idea, and the gotcha, follows).
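
Here’s a sketch of the usual ES3 fallback (not the exact polyfill linked above). The gotcha: it leans on obj.constructor.prototype, which lies once a constructor’s prototype property has been reassigned, as we did earlier with Programmer.prototype = customer.

if (typeof Object.getPrototypeOf !== 'function') {
   Object.getPrototypeOf = function (obj) {
      if (obj !== Object(obj)) {
         throw TypeError('Object.getPrototypeOf called on non-object');
      }
      // Gotcha: if the constructor's prototype was reassigned, constructor no
      // longer points back to the real constructor and this answer is wrong.
      return obj.constructor ? obj.constructor.prototype : null;
   };
}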

 

EcmaScript 6

I got a bit excited when I saw an earlier proposed prototype-for (also seen with the name prototype-of) operator: <| . Additional example here. This would have provided a terse syntax for providing an object literal with an object to use as its prototype. It looks like it must have lost traction though as it was removed in the June 15, 2012 Draft.

There are a few extra methods in ES6 that deal with prototypes, but on trawling the EcmaScript 6 draft spec, nothing at this stage really stands out as revolutionising the way I write JavaScript or as being a mental effort/time saver for me. Of course I may have missed something; I’d like to hear from anyone that has seen something interesting to the contrary.

Yes, we’re getting classes in ES6, but they are just an abstraction giving us a terse and declarative mechanism for doing what we already do with the functions we use as constructors, their prototypes, and the objects (or instances, if you will) returned from those constructor functions.
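
A small sketch of what that sugar buys us, based on the draft class syntax at the time (the semantics aren’t identical, e.g. class methods are non-enumerable and a class can’t be called without new):

// What we already write today (ES5):
var PersonEs5 = function (name) {
   this.name = name;
};
PersonEs5.prototype.greet = function () {
   return 'Hi, I\'m ' + this.name;
};

// Roughly the same thing with the ES6 class sugar (needs an ES6 environment):
class PersonEs6 {
   constructor(name) {
      this.name = name;
   }
   greet() {
      return 'Hi, I\'m ' + this.name;
   }
}

console.log(new PersonEs5('Bilbo').greet()); // Hi, I'm Bilbo
console.log(new PersonEs6('Frodo').greet()); // Hi, I'm Frodo
console.log(typeof PersonEs6); // 'function' - still just a constructor function under the covers.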

Architectural Ideas that Prototypes Help With

This is a common example that I often use for domain objects that are fairly hot: a single set of accessor properties added to the business object’s prototype, as you can see where Object.defineProperties is called on Hobbit.prototype in my Hobbit module (Hobbit.js) below.

First a quick look at the tests/spec to drive the development. This is being run using mocha with the help of a Makefile in the root directory of my module under test.

  • Makefile
# The relevant section.
unit-test:
	@NODE_ENV=test ./node_modules/.bin/mocha \
		test/unit/*test.js test/unit/**/*test.js
  • Hobbit-test.js
var requireFrom = require('requirefrom');
var assert = require('assert');
var should = require('should');
var shire = requireFrom('shire/');

// Hardcode $NODE_ENV=test for debugging.
process.env.NODE_ENV='test';

describe('shire/Hobbit business object unit suite', function () {
   it('Should be able to instantiate a shire/Hobbit business object.', function (done) {
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();

      // Properties should be declared but not initialised.
      // No good checking for undefined alone, as that would be true whether it was declared or not.

      hobbit.should.have.property('id');
      (hobbit.id === undefined).should.be.true;
      hobbit.should.have.property('typeOfPerson');
      (hobbit.typeOfPerson === undefined).should.be.true;
      hobbit.should.have.property('greeting');
      (hobbit.greeting === undefined).should.be.true;
      hobbit.should.have.property('occupation');
      (hobbit.occupation === undefined).should.be.true;
      hobbit.should.have.property('emailFrom');
      (hobbit.emailFrom === undefined).should.be.true;
      hobbit.should.have.property('name');
      (hobbit.name === undefined).should.be.true;      

      done();
   });

   it('Should be able to set and get all properties of a shire/Hobbit business object.', function (done){
      // Uncomment below lines if you want to debug.
      this.timeout(444000);
      setTimeout(done, 444000);

      // Arrange
      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();      

      // Act
      hobbit.id = '32f4d01e-74dc-45e8-b3a8-9aa24840bc6a';
      hobbit.typeOfPerson = 'short and hairy';
      hobbit.greeting = {
         intro: 'Hi, I\'m a ',
         outro: ' type of person.'};
      hobbit.occupation = 'mushroom hunter';
      hobbit.emailFrom = 'Bilbo.Baggins@theshire.arn';
      hobbit.name = 'Bilbo Baggins';

      // Assert
      hobbit.id.should.equal('32f4d01e-74dc-45e8-b3a8-9aa24840bc6a');
      hobbit.typeOfPerson.should.equal('short and hairy');
      hobbit.greeting.should.equal('Hi, I\'m a short and hairy type of person.');
      hobbit.occupation.should.equal('mushroom hunter');
      hobbit.emailFrom.should.equal('Bilbo.Baggins@theshire.arn');
      hobbit.name.should.eql('Bilbo Baggins');

      done();
   });
});
  • Now the business object itself Hobbit.js

    Now what’s happening here is that on creation of a new Hobbit instance, the empty members object created via Object.defineProperty in the constructor is the only instance data. All of the Hobbit‘s accessor properties are defined once on the prototype of the constructor function that the Hobbit module exports. So what we store on each instance are just the values assigned in the Act section of Hobbit-test.js: just the strings. Very little space is used for each instance returned by invoking the Hobbit constructor that the Hobbit module exports.
// Could achieve a cleaner syntax with Object.create, but constructor functions are a little faster.
// As this will be hot code, it makes sense to favour performance in this case.
// Of course profiling may say it's not worth it, in which case this could be rewritten.
var Hobbit = (function () {
   function Hobbit (/*Optionally Construct with DTO and serializer*/) {
      // Todo: Implement pattern for enforcing new.
      Object.defineProperty (this, 'members', {
         value: {}
      });
   }

   (function definePublicAccessors (){
      Object.defineProperties(Hobbit.prototype, {
         id: {
            get: function () {return this.members.id;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.id = newValue;
            },
            configurable: false, enumerable: true
         },
         typeOfPerson: {
            get: function () {return this.members.typeOfPerson;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.typeOfPerson = newValue;
            },
            configurable: false, enumerable: true
         },
         greeting: {
            get: function () {
               return this.members.greeting === undefined ?
                  undefined :
               this.members.greeting.intro +
                  this.typeOfPerson +
                  this.members.greeting.outro;
            },
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.greeting = newValue;
            },
            configurable: false, enumerable: true
         },
         occupation: {
            get: function () {return this.members.occupation;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.occupation = newValue;
            },
            configurable: false, enumerable: true
         },
         emailFrom: {
            get: function () {return this.members.emailFrom;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.emailFrom = newValue;
            },
            configurable: false, enumerable: true
         },
         name: {
            get: function () {return this.members.name;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.name = newValue;
            },
            configurable: false, enumerable: true
         }
      });

   })();
   return Hobbit;
})();

// JSON.parse provides a hydrated hobbit from the DTO.
//    So you would call this to populate this DO from a DTO
// JSON.stringify provides the DTO from a hydrated hobbit

module.exports = Hobbit;
  • Now running the test
[Image: the test run output]

 

Flyweights using Prototypes

A couple of interesting examples of the Flyweight pattern implemented in JavaScript are by the GoF and Addy Osmani.

The GoF’s implementation of the FlyweightFactory makes extensive use of closure to store its flyweights and uses aggregation in order to create its ConcreteFlyweight from the Flyweight. It doesn’t use prototypes.

Addy Osmani has a free book “JavaScript Design Patterns” containing an example of the Flyweight pattern, which IMO is considerably simpler and more elegant. In saying that, the GoF want you to buy their product, so maybe they do a better job when you give them money. In this example closure is also used extensively, but it’s a good example of how to leverage prototypes to share your less specific behaviour.
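
To show just the prototype-sharing aspect being highlighted (a minimal sketch of my own, not Addy’s example, and not the full pattern with a factory managing shared instances): the heavy, shared (intrinsic) state and behaviour live on the prototype, while each instance only carries its small extrinsic state.

var Book = function (isbn) {
   this.isbn = isbn; // Extrinsic state: unique per instance.
};
// Intrinsic state and behaviour: created once, shared by every Book via the prototype.
Book.prototype.publisher = 'Middle Earth Press';
Book.prototype.describe = function () {
   return this.isbn + ' published by ' + this.publisher;
};

var bookA = new Book('978-0-00-000001-1');
var bookB = new Book('978-0-00-000002-8');
console.log(bookA.describe());
console.log(bookA.describe === bookB.describe); // true, one shared function for all books.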

Mixins using Prototypes

Again, the last Mixin example in Addy Osmani’s book is quite elegant.

We can even do a form of multiple inheritance using mixins, by adding whichever properties we want from whatever objects we want to the target object’s prototype, as sketched below.
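
Here’s a hedged sketch of the idea; extend is a hypothetical helper written for this post, not a library API.

// Copy enumerable own properties from each source object onto the target.
var extend = function (target /*, source1, source2, ... */) {
   for (var i = 1; i < arguments.length; i++) {
      var source = arguments[i];
      for (var key in source) {
         if (source.hasOwnProperty(key)) {
            target[key] = source[key];
         }
      }
   }
   return target;
};

var canWalk = { walk: function () { return this.name + ' is walking'; } };
var canSing = { sing: function () { return this.name + ' is singing'; } };

var Hobbit = function (name) { this.name = name; };
// "Multiple inheritance" of a sort: mix both behaviours into the one prototype.
extend(Hobbit.prototype, canWalk, canSing);

var bilbo = new Hobbit('Bilbo');
console.log(bilbo.walk()); // Bilbo is walking
console.log(bilbo.sing()); // Bilbo is singing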

This is a similar concept to the post I wrote on Monkey Patching.

Mixins support the Open/Closed principle, where objects should be able to have their behaviour modified without their source code being altered.

Keep in mind though, that you shouldn’t just expect all consumers to know you’ve added additional behaviour. So think this through before using.

Factory functions using Prototypes

Again, a decent example of the Factory function pattern is implemented in the “JavaScript Design Patterns” book here; a minimal sketch of the idea follows.
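
This sketch is my own illustration rather than the book’s example: the factory stamps out objects that all share one prototype, without consumers ever touching a constructor or the new keyword.

var hobbitPrototype = {
   greet: function () {
      return 'Hi, I\'m ' + this.name + ', a ' + this.typeOfPerson + ' type of person.';
   }
};

var createHobbit = function (name, typeOfPerson) {
   var hobbit = Object.create(hobbitPrototype); // Or the polyfilled version from earlier.
   hobbit.name = name;
   hobbit.typeOfPerson = typeOfPerson;
   return hobbit;
};

var bilbo = createHobbit('Bilbo', 'short and hairy');
console.log(bilbo.greet()); // Hi, I'm Bilbo, a short and hairy type of person.
console.log(Object.getPrototypeOf(bilbo) === hobbitPrototype); // true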

There are many other areas you can get benefits from using prototypes in your code.

Prototypal Inheritance: Not Right for Every Job

Prototypes give us the power to share only the secrets of others that need to be shared. We have fine-grained control. If you’re thinking of using inheritance, be it classical or prototypal, ask yourself: “Is the class/object I want to provide a parent for truly a more specific version of the proposed parent?”. This is the idea behind the Liskov Substitution Principle (LSP) and Design by Contract (DbC), which I posted on here. Don’t just inherit because it’s convenient. In my “javascript object creation patterns” post I also discussed inheritance.

The general consensus is that composition should be favoured over inheritance. If it makes sense to compose once you’ve considered all the options, then go for it; if not, look at inheritance. Why should composition be favoured over inheritance? Because when you compose your object against the contract of another object, your object (the one doing the composing) doesn’t inherit anything or need to know anything about the composed object’s secrets. The object being composed has complete freedom as to how it minds its own business, so long as it provides a consistent contract for consumers. This gives us the much-loved polymorphism we crave without the crazy tight coupling of classical inheritance (inherit everything, even your father’s drinking problem :-s).

I’m pretty much in agreement with this when we’re talking about classical inheritance. When it comes to prototypal inheritance, we have a lot more flexibility and control around how we use the object that we’re deriving from and exactly what we inherit from it. So we don’t suffer the same “all or nothing” buy in and tight coupling as we do with classical inheritance. We get to pick just the good parts from an object that we decide we want as our parent. The other thing to consider is the memory savings of inheriting from a prototype rather than achieving your polymorphic behaviour by way of composition, which has us creating the composed object each time we want another specific object.

So in JavaScript, we really are spoilt for choice when it comes to how we go about getting our fix of polymorphism.

When surveys are carried out on…

Why Software Projects Fail

the following are the most common causes:

  • Ambiguous Requirements
  • Poor Stakeholder Involvement
  • Unrealistic Expectations
  • Poor Management
  • Poor Staffing (not enough of the right skills)
  • Poor Teamwork
  • Forever Changing Requirements
  • Poor Leadership
  • Cultural & Ethical Misalignment
  • Inadequate Communication

You’ll notice that technical reasons are very low on the list of why projects fail. You can see the same point mentioned by many of our software greats, but when a project does fail due to technical reasons, it’s usually because the complexity got out of hand. So as developers, when focusing on the art of creating good code, our primary concern should be to reduce complexity and thus enhance the ability to maintain the code going forward.

I think one of Edsger W. Dijkstra’s phrases sums it up nicely. “Simplicity is prerequisite for reliability”.

Stratification is a design principle that focuses on keeping the different layers in code autonomous, i.e. you should be able to work in one layer without having to go up or down adjacent layers in order to fully understand the layer you’re working in. A layer’s internals should be able to move independently of the adjacent layers without affecting them, or being concerned that a change in its own implementation will affect other layers. Modules are an excellent design pattern used heavily to build medium to large JavaScript applications.

With composition, if you’re composing against contracts, this is exactly what you get.


Exploring JavaScript Closures

May 31, 2014

Just before we get started, we’ll be using the terms lexical scope and dynamic scope a bit. In computer science the term lexical scope is synonymous with static scope.

  • Lexical or static scope is where name resolution of “part of a program” depends on its location in the source code.
  • Dynamic scope is where name resolution depends on the program state (the execution context or calling context) when the name is encountered.

What are Closures?

Now establishing the formal definition has been quite an interesting journey, with quite a few sources not quite getting it right. Although the ES3 spec talks about closure, there is no formal definition of what it actually is. The ES5 spec on the other hand does discuss what closure is in two distinct locations.

  1. “11.1.5 Object Initialiser” section, under the part that talks about accessor properties. This is the relevant text (in relation to getters): “Let closure be the result of creating a new Function object as specified in 13.2 with an empty parameter list (that’s getter specific) and body specified by FunctionBody. Pass in the LexicalEnvironment of the running execution context as the Scope.”
  2. “13 Function Definition” section. This is the relevant text: “Let closure be the result of creating a new Function object as specified in 13.2 with parameters specified by FormalParameterList (which are optional) and body specified by FunctionBody. Pass in funcEnv as the Scope.”

Now what are the differences here that stand out?

  1. We see that 1 specifies a function object with no parameters, and 2 specifies some parameters (optional). So from this we can establish that it’s irrelevant whether arguments are passed or not to create closure.
  2. 1 also mentions passing in the LexicalEnvironment, whereas 2 passes in funcEnv. funcEnv is the result of “calling NewDeclarativeEnvironment passing the running execution context‘s LexicalEnvironment as the argument“. So basically there is no difference.

Now, 13.2 just specifies how functions are created: given an optional parameter list, a body, a LexicalEnvironment specified by Scope, and a Boolean flag for strict mode (ignore this for the purposes of establishing a formal definition). The Scope mentioned above is the lexical environment of the running execution context (discussed here in depth) at creation time. The Scope is actually [[Scope]] (an internal property).

The ES6 draft spec runs in the same vein.

Lets get abstract

Every problem in computer science is just a more specific problem of a problem we’re familiar with in the natural world. So often it helps to find the abstract problem that we are already familiar with in order to help us understand the more specific problem we are dealing with. Patterns are an example of this. Before I was programming as a profession I was a carpenter. I find just about every problem I deal with in programming I’ve already dealt with in physical carpentry and at a higher level still with physical architecture.

In search of the true formal definition, I also looked outside of JavaScript at the language-agnostic term, which should just be an abstraction of the JavaScript closure anyway. Yip… Wikipedia’s definition: “In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. A closure—unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside its immediate lexical scope.”

My abstract formal definition

A closure is a function containing a reference, via the function object’s internal [[Scope]] property (ES5 spec 13.2.9), to the lexical (static) environment it is defined within at creation time, not call time (ES5 spec 13.2.1). The closure is closed over its parent lexical environment and all of its properties. You can access these properties as variables, but not as properties, because you don’t have access to the internal [[Scope]] property directly in order to reference its properties. So this example fails. More correctly (ES5 spec 8.6.2): “Of the standard built-in ECMAScript objects, only Function objects implement [[Scope]].”

var outerObjectLiteral = {

   x: 10,

   foo: function () {
      console.log(x); // ReferenceError: x is not defined obviously
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

See here for an explanation on the differences between properties and variables. That’s basically it. Of course there are many ways we can use a closure and that’s often where confusion creeps in about what a closure actually is and is not. Feel free to bring your perspective on this in the comments section below.

When is a closure born?

So let’s get this closure closing over something. JavaScript addresses the funarg problem with closure.

var x = 10;

var outerObjectLiteral = {   

   foo: function () {
      // Because our internal [[Scope]] property now has a property (more specifically a free variable) x, we can access it directly.
      console.log(x); // Writes 10 to the console.
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

The closures are created when the function expressions assigned to foo and invokeMe are evaluated as the object literal is constructed. Inside foo we have access to the closed over lexical environment, so when we print x we get 10, which is the value of x on [[Scope]] that our closure was statically bound to at function object creation time (not the dynamically scoped x = 20 inside invokeMe). Now of course you can change the value of the free variable x and it’ll be reflected wherever you use the closed over variable, because the closure was bound to the free variable x, not to the value of the free variable x.

This is what you’ll see in Chrome Dev Tools when execution is paused inside invokeMe. Bear in mind though that both the foo and invokeMe closures were created when the object literal was evaluated.

[Image: Chrome Dev Tools showing the Closure scope]

Now I’m going to attempt to explain what the structure looks like in a simplified form with a simple hash. I don’t know how it’s actually implemented in the various EcmaScript implementations, but based on what the specification (the single source of truth) tells us, it should look something like the following:

////////////////
// pseudocode //
////////////////
foo = closure {
   FormalParameterList: {}, // Optional
   FunctionBody: <...>,
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10
   }
}

The closure is born when the function is created (“the result of creating a new Function object”, as stated above), not when it’s returned by the outer function (i.e. the upwards funarg problem) and not when it’s invoked, as Angus Croll mentioned here under “The [[Scope]] property” section.

Angus quotes the ES5 spec 10.4.3.5-7. On studying this section I’m pretty sure it is meant for the context of actually creating the function object rather than invoking an existing function object. The clauses I’ve detailed above (11.1.5 Object Initialiser and 13 Function Definition), confirm this.

The ES6 spec draft “14.1.22 Runtime Semantics: Evaluation” also confirms this theory. Although it’s titled Runtime Semantics, it has several points that confirm my theory… The so called runtime semantics are the runtime semantics of function object creation rather than function object invocation. As some of the steps specified are FunctionCreate, MakeMethod and MakeConstructor (not FunctionInvoke, InvokeMethod or InvokeConstructor). The ES6 spec draft “14.2.17 Runtime Semantics: Evaluation” and also 14.3.8 are similar.

Why do we care about Closure?

Without closures, we wouldn’t have the concept of modules which I’ve discussed in depth here.

Modules are used very heavily in JavaScript both client and server side (think NPM), and for good reason. Until ES6 there is no baked-in module system; in ES6, modules become part of the language. The entire Node.js ecosystem exists to install modules via the CommonJS initiative. Modules on the client side most often use the Asynchronous Module Definition (AMD) implementation RequireJS to load modules, but can also use the likes of CommonJS via Browserify, which allows us to load Node.js packages in the browser.

As of writing this, the TC39 committee have looked at both the AMD and CommonJS approaches and come up with something completely different for the ES6 module draft spec. Modules provide another mechanism for not allowing secrets to leak into the global object.

Modules are not new. David Parnas wrote a paper titled “On the Criteria To Be Used in Decomposing Systems into Modules” in 1972. This explores the idea of secrets: design and implementation decisions that should be hidden from the rest of the programme.

Here is an example of the Module pattern that includes both private and public methods. my.moduleMethod has access to private variables outside of its VariableEnvironment (the current scope) via the Environment record, which references the outer LexicalEnvironment via its internal [[Scope]] property.
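
A minimal sketch of what such a module might look like (the original example was linked rather than inlined; the counter is made up for this post):

var my = (function () {
   var privateCounter = 0; // A secret: invisible outside the module.

   var privateIncrement = function () {
      privateCounter += 1;
   };

   return {
      // Public (privileged) method, closed over the module's private state.
      moduleMethod: function () {
         privateIncrement();
         return 'called ' + privateCounter + ' time(s)';
      }
   };
}());

console.log(my.moduleMethod());        // called 1 time(s)
console.log(my.moduleMethod());        // called 2 time(s)
console.log(typeof my.privateCounter); // undefined, the secret stays hidden.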

Information hiding: state and implementation. In JavaScript we don’t have access modifiers, but we don’t need them either. We can hide our secrets with various patterns. Closure is a key concept for many of these patterns. Closure is a key building block for helping us to programme against contract rather than implementation, helping us to form consistent abstractions, giving us the ability to engage with a concept while safely ignoring some of its details, thus hiding unnecessary complexity from consumers.

I think Steve McConnell explains this very well in his classic “Code Complete” book. Steve uses the house abstraction as his metaphor. “People use abstraction continuously. If you had to deal with individual wood fibers, varnish molecules, and steel molecules every time you used your front door, you’d hardly make it in or out of your house each day. Abstraction is a big part of how we deal with complexity in the real world. Software developers sometimes build systems at the wood-fiber, varnish-molecule, and steel-molecule level. This makes the systems overly complex and intellectually hard to manage. When programmers fail to provide larger programming abstractions, the system itself sometimes fails to make it through the front door. Good programmers create abstractions at the routine-interface level, class-interface level, and package-interface level-in other words, the doorknob level, door level, and house level-and that supports faster and safer programming.”

Encapsulation: you can not look at the details (the internal implementation, the secrets).

Partial function application and Currying: I have a set of posts on this topic. Closure is an integral building block of these constructs. Part 1, Part 2 and Part 3.
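
A tiny hedged sketch of partial application built on closure (see the linked posts for the full treatment):

var partial = function (fn) {
   var presetArgs = Array.prototype.slice.call(arguments, 1);
   return function () {
      var laterArgs = Array.prototype.slice.call(arguments);
      // presetArgs lives on via the closure long after partial has returned.
      return fn.apply(null, presetArgs.concat(laterArgs));
   };
};

var add = function (a, b) { return a + b; };
var addTen = partial(add, 10);
console.log(addTen(5)); // 15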

Functional JavaScript relies heavily on closure.

Are there any Costs or Gotchas of using Closures?

Of course. You didn’t think you’d get all this expressive power without having to think about how you’re going to use it, did you? As we’ve discussed, closures were created to address the funarg problem. In doing that, the closure references the lexical (static) scope of the outer function. So even once the free variables are out of scope, the closure will still reference them if they were saved at function creation time. They cannot be garbage collected until the function that is closed over the outer scope has itself fallen out of scope, i.e. its reference count is 0.

var x = 10;
var noOneLikesMe = 20;
var globalyAccessiblePrivilegedFunction;

function globalyScopedFunction(z) {

  var noOneLikesMeInner = 40;

  function privilegedFunction() {
    return x + z;
  }

  return privilegedFunction;

}

// This is where privilegedFunction is created.
globalyAccessiblePrivilegedFunction = globalyScopedFunction(30);

// This is where privilegedFunction is applied.
globalyAccessiblePrivilegedFunction();

Now, only the free variables that are needed are saved at function creation time. We see that when execution is paused inside globalyScopedFunction, the currently scoped closure has the x free variable saved to it, but not z, noOneLikesMe, or noOneLikesMeInner.

[Image: Chrome Dev Tools showing x saved to the closure, but not noOneLikesMe]

When we enter privilegedFunction, we see the hidden [[Scope]] property has both the outer scope and the global scope saved to it.

[Image: Chrome Dev Tools showing the two closure scopes]

Say, for example, execution has passed the above code snippet. If the closed over variables can still be referenced by calling globalyAccessiblePrivilegedFunction again, then they cannot be garbage collected. This is a frequent source of leaks with the upwards funarg problem. If you’ve got hot code that is creating many functions, make sure the functions that are closed over free variables are dropped out of scope as soon as you no longer need them. This way garbage collection can deallocate the memory used by the free variables.
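
A tiny sketch of letting a closure go (my own illustration): once the only reference to the returned function is dropped, the closure and the free variables it kept alive become unreachable and can be collected.

var makeCounter = function () {
   var count = 0; // Free variable closed over by the returned function.
   return function () { return ++count; };
};

var counter = makeCounter(); // The closure keeps count alive.
console.log(counter()); // 1
console.log(counter()); // 2
counter = null;         // Drop the only reference: the closure and count can now be garbage collected.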

Looking at how the specification would look simplified, we can see that each Environment record inherits what it knows it’s going to need from the Environment record of its lexical parent. This chaining inheritance goes all the way up the lexical hierarchy to the global function object, as seen below. In this case the family tree is quite short. Remember this structure is formed at function creation time, not invocation time. The free variables (not their values) are statically baked in.

////////////////
// pseudocode //
////////////////
globalyScopedFunction = closure {
   FormalParameterList: { // Optional
      z: 30 // Values updated at invocation time.
   },
   FunctionBody: {
      var noOneLikesMeInner = 40;

      function privilegedFunction() {
         return x + z;
      }

      return privilegedFunction;
   },
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10 // Free variable saved because we know it's going to be used in privilegedFunction.
   },
   privilegedFunction: closure {
      FormalParameterList: {}, // Optional
      FunctionBody: {
         return x + z;
      },
      Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
         x: 10, // Free variable inherited from the outer Environment.
         z: 30 // Formal parameter saved from outer lexical environment.
      }
   }
}

Scope

I discuss closure very briefly here, and how it can be used to create block scoped variables prior to block scoping with the let keyword in ES6 (supposed to be officially approved by December 2014). I discuss scoping in a little more depth here.
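
For completeness, here’s the pre-ES6 way of faking a block scoped variable with an IIFE (closure), next to what let will give us (sketch only):

var i = 'outer';
(function () {
   var i = 'inner'; // Scoped to this function, shadows the outer i.
   console.log(i);  // inner
}());
console.log(i);     // outer

// ES6, once approved:
// { let i = 'block scoped'; }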

Closure misunderstandings

Closures are created when a function is returned

“A closure is formed when one of those inner functions is made accessible outside of the function in which it was contained”, found here, is simply incorrect. There are also a lot of other misconceptions at that link. I’d advise reading it with a bag of salt.

Now we’ve already addressed this one above, but here is an example that confirms that the closure is in fact created at function creation time, not when the inner function is returned. Yes, it does what it looks like it does. Fiddle with it?

(function () {

   var lexicallyScopedFunction = function () {
      console.log('We\'re in the lexicallyScopedFunction');
   };

   (function innerClosure() {
      lexicallyScopedFunction();
   }());

}());

When execution is inside innerClosure, at the call to lexicallyScopedFunction, we get to see the closure that was created when the outer IIFE was invoked.

[Image: Chrome Dev Tools showing lexicallyScopedFunction in the closure]

Closures can create memory leaks

Yes they can, but not if you let the closure go out of scope. Discussed above.

Values of free variables are baked into the Closure

Also untrue. Now I’ve put in-line comments to explain what’s happening here. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      loopCountingFunctions[i] = (function printLoopCount() {
         // What you see here is that each time this code is run, it prints the last value of the loop counter i.
         // Clearly showing that for each new printLoopCount function created and saved to the loopCountingFunctions array,
         // the variable i is saved to the Environment record, not the value of the variable i.
         console.log(i);
      });
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 3
runLoopPrinter[1](); // 3
runLoopPrinter[2](); // 3

An aside… getLoopPrinter is a global function. Once execution is inside getLoopPrinter you get to see that global functions also have closure… supporting my comments above.

[Image: Chrome Dev Tools showing that global functions have closure too]

Now in the above example, this is probably not what you want to happen, so how do we give each printLoopCount function its own value? Well, by creating a new scope with a parameter for each iteration of the loop, each capturing the current value. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      (function (i) {
         // Now what happens here is each time the above loop runs this code,
         // inside this scope (the scope of this comment) i is a new formal parameter which of course
         // gets statically saved to each printLoopCount functions Environment record (or more simply each closure of printLoopCount).
         loopCountingFunctions[i] = (function printLoopCount() {
            console.log(i);
         });
      })(i)
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 0
runLoopPrinter[1](); // 1
runLoopPrinter[2](); // 2

As always, let me know your thoughts on this post, anything you think I may have the wrong handle on, or anything that otherwise stood out.


Evaluation of AngularJS, EmberJS, BackboneJS + MarionetteJS

December 28, 2013

This post will continue to be modified for at least a month from the publish date. I just didn’t want to wait another month before publishing, so people can start to get some use out of it early. If you have resources, comments, anything you think that could be useful to others, please add a comment and if it makes sense, I may add it to the post. This will also be used as a resource for the attendees to the CHC.JS MV* Battle Royale meet-up.

Recently I’ve undertaken the task of reviewing some JavaScript MV* frameworks to help organise/structure the client side code within an application I’m currently working on. This is about the third time I’ve done this. Each time has been for a different type of application with completely different requirements, frameworks and libraries to consider.
Unlike Angular and Ember, Backbone is a small library. Marionette adds quite a lot of extra functionality and provides some nice abstractions on top. All the mentioned frameworks/libraries are free and open source.

I found a useful tool for helping with the selection process about a year ago. It’s called TodoMVC and it contains a generous collection of applications all satisfying the requirements of a single specification (a small web app that allows the person using it to add todo notes etc.). So basically they all do the same thing, but use a different JavaScript framework or library to do it. It’s still being maintained. Addy Osmani’s blog post on the project is here.

The idea is that you can work through a decent-sized selection of applications that all do the same thing. This assists the R&D developer or architect to make informed decisions on which JavaScript framework or library will suit their purposes, if any. There are also a couple of Todo apps (vanillajs and jquery) that don’t use a framework at all, and there’s a template to use as a starting point, so you can create your own.

Just bear in mind, though, that the TodoMVC app doesn’t really showcase what Ember and Angular have to offer.

On Addy’s post there is a collection of good points on how to create your selection criteria, under the heading “Our Suggested Criteria For Selecting A Framework”.

I’ve heard a few times that “all you really need to do in order to make an informed decision on which framework or library to go with is just write a small app for each of the frameworks, do a bit of reading and maybe watch a few screencasts. Shouldn’t take more than a day”. I disagree. I don’t think there is any way you can learn all or even most of the pros and cons of each framework in a day or two. Depending on how much time you have, my recommended approach would be to go through the following activities in the following order (give or take), spending as little or as much time as you have, ideally in a few iterations, for each of the offerings you’re investigating.

  1. Listen to a pod-cast (say, on your way to or from a client’s, or even in your sleep. Good time savers.)
  2. Read some of the documentation
  3. Watch a screen-cast on each one
  4. Play with some examples
  5. Evaluate the features you definitely or may want versus the features available. Features need to be learned. If you don’t need them, you will probably be better off going with the offering that doesn’t have the features you don’t need, but has the architecture to add them (thinking Backbone) if/when you do need them.
  6. Are the features implemented in an architecture that you believe is good (i.e. are the layers muddied)?
  7. Read some blog posts and tutorials.
  8. Read some opinions and evaluate them for yourself.
  9. Start testing its limits
  10. Decide whether you like the opinions it imposes
  11. Does it impose enough, or too many, opinions for you and your team?

As the JavaScript MV* landscape is constantly and very quickly changing, the outcome of your evaluation will have a short use-by date.

This is my attempt to distil the attributes of the discussed offerings. I’ve attempted to come at this with an open mind. Hopefully this will help save some work for those that come after me. Lists are sorted in the order of most useful to me. I make no apologies for the abundance of links, as I’ve also used this as a resource collection point and hope that this post will fall into the category of a “one stop shop” for what I consider to currently be the top three contenders in the client-side MV* line-up. In saying that though, there are other strong contenders, like Meteor, not discussed here, as it’s more than just a client-side MV* framework. Without further ado, here they are…

Angular.js


Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Igor Minár, Miško Hevery, Vojta Jína.
All work at Google.

Backed by the commercial giant Google (you decide whether that’s a good thing).

Community

Conferences

  1. ng-conf

Statistics

  • Version: 1.2.5
  • Payload Size:
    1. development version 716.7kb
    2. minified 99.8kb
    3. minified and compressed < 36kb
  • Age: Initial Github commits: January 2010

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. Angular.js

Screen-casts

  1. AngularJS on YouTube
  2. EggHead.io Lessons

Blog Posts, Tutorials, etc.

  1. Learning AngularJS in one day
  2. Angular docs Tutorial

Features

  • Directives: used via non-HTML-compliant tags, attributes, comments and class names, although there are options to make them compliant: via the class (not recommended) and data attributes.
  • Scope: the first half of this video shows how the scope may be confusing to those new to Angular. If I cannot tell how code will work without running it, it violates the Principle of Least Astonishment (PoLA). It seems quite clunky to me.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Two-way data binding
  3. All tests run against IE8 (good for those that are locked into legacy MS)
  4. Test driven (and more vocal about it than Ember)
  5. Payload is about 1/3 smaller than Ember

Negative

  1. Steep learning curve compared to Backbone, but not as steep as Ember.
  2. Dirty checking to keep views and models in sync is costly. Ember keeps things in sync in a more elegant way. A possible perceived downside of this is that Ember models have to inherit from DS.Model (the next point addresses this as a positive though). Also discussed here under the “Performance issues” heading.
  3. Models are Plain Old JavaScript Objects (POJOs); they don't have to be anything special. There's an argument that attempts to explain this as a selling point of Angular, but in reality what happens is a violation of the Uniform Access Principle, thus creating tight coupling. How's that? Well, now the view needs to know too much about the model's members. Discussed in more detail here. For example, if one of the model's properties is a function, the view has to know this, so you see this sort of thing in the view: {{area()}} (we're pulling our JavaScript into our view). Whereas with Ember, because its models are well defined and you can use computed properties on them, all the view needs to specify is an identifier, so you'll see this sort of thing in the view: {{area}}. The Ember model then creates a computed property with the same name. The opposing view is that in ES5 you can just hide your functions behind property getters and setters (see the sketch below this list). Most developers take the path of least resistance, so I think most will be doing it the wrong way.
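
A minimal sketch of the Uniform Access Principle point (the object and its area member below are made up purely for illustration, not taken from Angular or Ember):

// With a plain object, callers (and templates) have to know that area is a function: {{area()}}.
var circlePojo = {
   radius: 2,
   area: function () {
      return Math.PI * this.radius * this.radius;
   }
};
console.log(circlePojo.area()); // the caller must know it's a function

// ES5 lets us hide the computation behind a property getter,
// so it can be accessed uniformly, just like a data property: {{area}}.
var circle = {
   radius: 2,
   get area() {
      return Math.PI * this.radius * this.radius;
   }
};
console.log(circle.area); // uniform access: looks like a data property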

Interesting Plug-ins

  1. ?

Useful Tools

  • “AngularJS Batarang” for Chrome browser (it’s an extension)

Ember.js

EmberJS

Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Yehuda Katz, formerly of Rails and SproutCore projects.
Tom Dale, Peter Wagenet, Trek Glowacki, Erik Bryn, Kris Selden, Stefan Penner, Leah Silber, Alex Matchneer.

Backed by the JavaScript community.

Community

Conferences

Statistics

  • Version: 1.2.0
  • Payload Size:
    1. development version 1.1MB
    2. production version 1.0MB
    3. minified and GZipped 67kb
  • Age: Initial Github commits: April 2011

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. JavaScript Jabber Ember Tools
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. JavaScript Jabber Ember.js & Discourse
  4. EmberWatch

Screen-casts

  1. Building an Ember.js Application
  2. Ember101
  3. EmberWatch
  4. tutsplus
  5. EmberWatch 

Blog Posts, Tutorials, etc.

Features

  • The ember-application class gets added to the root element (body) by the Ember JavaScript file. I was wondering how this class was magically added to the markup; I couldn't find any documentation on it, so I had to look through the JavaScript.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Aggregates model data changes and updates the DOM late in the RunLoop.
  3. Well defined models and computed properties (See Angular negative point around this).
  4. Test driven

Negative

  1. Steepest learning curve of the three. Why? Because there's more in it. If you need it, great! Maybe you don't; if not, is the extra learning worth it? Part of the "more in it" may also be the elegant way things have been designed, I.E. more constraints to push users down the right path, giving a higher chance of less friction and pain in the future of your application, provided of course your application does things the way Ember says they must be done. I'm seeing some of this in the likes of the well defined models and computed properties.
  2. Payload is the largest out of all three.

Interesting Plug-ins

  1. ?

Useful Tools

  • “Ember Inspector” for Chrome browser (it’s an extension)
  • ember-tools. Listen to and/or read the pod-cast linked to above. Provides file organisation, scaffolding, template pre-compilation, generators, CommonJS (that's node.js style) modules, and other goodies. Useful for setting up your project to conform to the Ember conventions, so you don't end up fighting them.

Angular versus Ember views

Pod-casts

  1. Angular vs Ember Cage Match NDC

Screen-casts

  1. Angular vs Ember Cage Match NDC

Blog Posts, Tutorials, etc.

  1. Evil Trout Ember versus Angular (possible bias toward Ember)
  2. Why AngularJS beat EmberJS

My Thoughts

Both Frameworks Appear to be Targeting a Similar Problem Space

Don't believe everything you read; test before you buy. I've come across quite a few articles that are just incorrect, even by reputable people, sometimes because the frameworks have changed how they do things and/or their documentation has changed. So don't just take it all at face value.

The concept of MVC has changed over the past decade, but although concepts change, a pattern doesn't; that's why it's a pattern: something everybody familiar with it understands. If an implementation starts to change, then it no longer conforms to the pattern and should not be named after the pattern, as this just brings confusion. Microsoft's ASP.NET MVC framework is a perfect example of this. It does not follow the MVC pattern (documented in 1979) and so should never have been named MVC. Ask me in the comments to explain if you're not aware of how this is. In the MVC pattern, Models are not injected into Views by Controllers. With the MVC pattern, Views listen to events from a Model (the View is actually oblivious to the Model) which the Controller has hooked up, since the Controller knows about both the View and the Model. This may not be your understanding of MVC? More than likely that's due to certain frameworks being labelled as MVC when they are not, thus bringing the confusion. The following image, provided by the Gang of Four, depicts the MVC pattern.

MVC

Angular Doesn’t Pretend JavaScript has Real Classes

Personally I find both frameworks have opinions that make me nauseous.
Like Angular's scope and Ember's class-hierarchy abstraction. Yes, Harmony will have pseudo classes for the classical programmers that struggle with JavaScript's declarative prototypal inheritance (disclaimer: my roots are in classical OO languages). The way I feel about it: say a whole lot of JavaScript programmers start using a classical OO language and decide they don't like the way it does classical inheritance, so the classical object oriented language authorities decide to add syntactic sugar on top of the language to make its classical inheritance "look" more like prototypal inheritance for those that struggle with the classical paradigm. Now seriously, why would you muddy the language to cater for those that are not prepared to spend the time learning how it works?

Another, and probably the most obvious, reason why JavaScript didn't have classes is so that object hierarchies could be built up via composition (only inheriting what is actually needed) rather than having to inherit every member needlessly from a base class (essentially knowing far more than is actually needed). Once you have had to re-factor your way out of a code base that has abused inheritance, creating very tightly coupled code by violating one of the object-oriented design principles (information hiding), the perils of over-using inheritance become very clear.
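
A quick hedged sketch of composing only the capabilities needed, rather than inheriting a whole base object (the canWalk/canSwim/createDuck names are made up for illustration):

// Small capability objects.
var canWalk = {
   walk: function () { console.log(this.name + ' is walking'); }
};
var canSwim = {
   swim: function () { console.log(this.name + ' is swimming'); }
};

// Compose just what a duck needs; nothing else comes along for the ride.
function createDuck(name) {
   var duck = { name: name };
   duck.walk = canWalk.walk;
   duck.swim = canSwim.swim;
   return duck;
}

var duck = createDuck('Daffy');
duck.walk(); // 'Daffy is walking'
duck.swim(); // 'Daffy is swimming'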

I’m open to exploring what the other client side JavaScript frameworks and libraries have to offer and I’d love to hear from everyone that’s had experience with them.

Angular and Ember do a Lot For You

With all the bells and whistles, both frameworks impose strong opinions that you must follow in order to make the magic (in a lot of cases, convention) work. Once you've learnt Angular and/or Ember, productivity is maximised. But… you must be building your application the way the framework creators want you to. At this stage, I'm not super comfortable with that. This is where Backbone and friends come into their own.

Backbone.js + Marionette.js

BackboneJS + MarionetteJS

Intro

Backbone is an unopinionated library that has Models and Views, but no Controllers out of the box. That's right, a library rather than a framework, because your code needs to know about it, rather than it knowing about and executing your code. It does not follow the MVC, MVP or MVVM patterns. Its views and routers act similarly to a controller. Marionette brings the controller to Backbone (if you want or need it), thus you can keep your router doing what it should be doing (just routing, with no controller logic).

What I find strange is that a Backbone view contains a model. I’m not sure I’d even call this a MV* library, as it may introduce confusion.

Backbone's sweet spot is providing the user with brief and casual interaction. It doesn't provide help or guidance with deallocating memory and detaching events. The assumption is that the user isn't going to be using the application all day without closing the browser window. In saying that, there are many applications that use Backbone for exactly this type of thing, but they must provide explicit code to release event handlers. Marionette provided some help here for older versions of Backbone, and Backbone has improved things with newer versions. You will still need to keep in mind that event handlers need to be released (Backbone's view.remove takes care of this now). Marionette provides abstractions to deal with these, like the close method, which provides a place to add clean-up code and then calls Backbone's remove. Failing to remove event handlers is the largest cause of memory leaks in Backbone.
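
A minimal sketch of the clean-up point, assuming Backbone 1.1.x and its dependencies are loaded (the TaskView/task names are made up for illustration):

var TaskView = Backbone.View.extend({
   initialize: function () {
      // listenTo (rather than model.on) lets view.remove() unbind this handler for us.
      this.listenTo(this.model, 'change', this.render);
   },
   render: function () {
      this.$el.text(this.model.get('title'));
      return this;
   }
});

var task = new Backbone.Model({ title: 'Write blog post' });
var view = new TaskView({ model: task });

// remove() detaches the element and calls stopListening(),
// releasing the 'change' handler so the view can be garbage collected.
view.remove();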

Core Team

Backbone: Jeremy Ashkenas

Marionette: Derick Bailey

Community

IRC: #marionette on FreeNode. Little activity.

Conferences

  1. BackboneConf

Statistics

  • Version: 1.1.0
  • Payload size: Depends on Underscore development version 43kb or minified and gzipped 4.9kb
    1. Backbone development version 59kb
    2. Backbone minified and gzipped 6.4kb
  • Age: Backbone: Initial Github commits: September 2010

Performance

The second half of this video shows the difference between Backbone and Ember performance. What I’ve seen to date, is that in terms of performance, Backbone leads, second is Ember, third is Angular. You need to decide how much performance matters to your situation and whether or not it’s “good enough” for the framework/library you choose.

Documentation

Pod-casts

  1. Marionette.js
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. Backbone.js

Screen-casts

  1. How to build modular Backbone applications using MarionetteJS
  2. Tuts+ Intro to Marionette
  3. Plugging in MarionetteJS. This resource is about adding Marionette to a MongoDB document explorer. Also features source code.
  4. Github
  5. BackboneConf 2013 Talks

Blog Posts, Tutorials, etc.

  1. Github
  2. backbone and ember
  3. Marionette Wiki

Books

  1. Backbone Fundamentals

Features

  • ?

Positive

  1. Free to use any templating engine. You can use Underscore's (as Underscore is Backbone's only dependency), or any other of your choosing.
  2. A lot of excellent documentation
  3. Very flexible in how you may want to use it
  4. Minimalist library
  5. Easy to learn (not a lot of it).
  6. Payload including dependencies is the smallest out of all three. About 9 times smaller than Ember.

Negative

  1. No two way data-binding. Although if you want/need it, you could use the likes of the data binding offerings below in the Interesting Plug-ins section.
  2. No provision for handling nested views. This is where the likes of Marionette’s Backbone.BabySitter comes in
  3. More work required to build large scale applications than the likes of Angular or Ember (just a library after all).
  4. If your large complex application is written in Backbone, chances are you have added a lot of boilerplate code. Any new developers coming onto the project will have to get up to speed on this code. If your large complex application uses Angular or Ember and the new developers coming onto the project have worked with those frameworks, they more than likely won't have to learn the boilerplate code that they would with the likes of Backbone, because it's part of the framework.

Interesting Plug-ins

  1. There is a similar offering: backbone.layoutmanager, which I haven't really looked into, but according to Derick Bailey (Marionette BDFL) it is more of a framework, whereas Marionette is a library.
  2. Two-way data binding with Rivets.js, Knockback.js or backbone.stickit.
    NYTimes backbone.stickit "is a Backbone data binding plug-in that binds Model attributes to View elements with a myriad of options for fine-tuning a rich application experience". What looks to be nice about this is that, unlike most model binding plug-ins I've seen, it doesn't require you to add any extra tags to your view the way Angular does. In fact your views are not contaminated at all.
  3. Backbone.routefilter plug-in allows you to add behaviour that will be executed immediately before and/or after a route (Backbone.Router or Marionette.AppRouter) executes.

Useful Tools

  • “Backbone Debugger” for Chrome browser (it’s an extension)
  • Frameworks that leverage backbone and provide more functionality
    1. chaplinJS
    2. thoraxJS (adds handlebars integration plus other functionality)

Now for a few more concepts that I think are important to know about if you're serious about using a client side JavaScript MV* framework/library. In regards to module loading, this applies to the server side also.

Templating

Blog Posts etc.

  1. net tuts+ Best Practices When Working With JavaScript Templates
  2. net tuts+ An Introduction to Handlebars

Some Offerings

I covered some of the template engines here under “Templating Engines evaluated”, or just use the likes of the Template Engine Chooser

Coupling Domain with Framework

As Boris Smus has said, and I think he's right on the money (although I disagree with his comments around JavaScript classes, as per my comments above):
Once you bite the bullet and decide to invest in a framework, you often have no easy way to move your code out of it.
If you pick Backbone, but decide mid-cycle that it’s not for you, you are in for a world of hurt:
If you have core functionality that you want to release, release it in pure JavaScript, not as a jQuery plug-in, or some MV* module.

Because there are so many JavaScript frameworks coming and going, and we don't want to invest too heavily in any one of them,
we really need to keep our investment separate from the library/framework code.

To avoid library/framework and class-system lock-in, a good approach in regards to JavaScript MV* libraries/frameworks
is to keep the core functionality separate from the user interface code, thus giving us two separate layers.
This gives us flexibility to swap user interfaces as they come and go, yet still keep the majority of our code in an API layer.
The API layer being a logical single layer, but can be modularised, and loaded as needed, AMD style.
With this separation, we can implement the two layers in the following manner.

1) Build the base layer using pure JavaScript prototypal inheritance.
This is the part you write with the intention of keeping and possibly using parts for other projects also.
This base layer will implement an API that you will want to spend a bit of time getting right.
This is the code that will make the most use of unit tests.
To get the separation clear in your head, think of the user interface code as a client that uses this API as if it was service API sitting on the server.
This way you can avoid creating leaky abstractions.

2) Use an MV* library/framework to implement the user interface, and call into the base layer directly.
This lets you move quickly and focus entirely on writing the user interface.
This architecture should facilitate building your user interface on a solid foundation and avoid investing heavily into an offering that you may want to swap out further down the track.
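
Here's a hedged sketch of the two layers (all names, taskListProto, createTaskList and so on, are made up for illustration):

// 1) Base layer: pure prototypal JavaScript, no framework dependencies.
var taskListProto = {
   add: function (title) {
      this.tasks.push({ title: title, done: false });
      return this;
   },
   count: function () {
      return this.tasks.length;
   }
};

function createTaskList() {
   var list = Object.create(taskListProto);
   list.tasks = [];
   return list;
}

// 2) UI layer: whichever MV* offering you choose simply calls into the base layer,
// e.g. a Backbone or Angular view would hold a reference to the API and render from it.
var api = createTaskList();
api.add('learn prototypes').add('evaluate MV* frameworks');
console.log(api.count()); // 2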

Modules

In most browsers, just including a script tag will cause the rest of the page to stop rendering until the script has loaded and executed,
which is why, if loading scripts synchronously, they should be concatenated, minified, compressed and included at the bottom.
Loading scripts asynchronously doesn't block, which is why you can load multiple scripts in parallel wherever you want (any more than 2-3 concurrently and performance will degrade). Make sure to concatenate your scripts though.

What we see as our projects get larger, is that scripts start to have many dependencies in a way that may overlap and nest.

The simplest way to load asynchronously is to create a script tag and inject it into an existing DOM element on your page.
Because the DOM element already exists, the rendering is not blocked.
See the first code example here

// Create a new script element.
var script = document.createElement('script');

// Find an existing script element on the page (usually the one this code is in).
var firstScript = document.getElementsByTagName('script')[0];

// Set the location of the script.
script.src = "http://example.com/myscript.min.js";

// Inject with insertBefore to avoid appendChild errors.
firstScript.parentNode.insertBefore( script, firstScript );

If you want or need to get serious about script loading (which you're probably going to have to do at some stage), use a best-of-breed script loader. This will also push you down the path of defining modular JavaScript (AKA modules).

Next we look at employing script loaders to load our modules…

Formats available for Writing and Using Modular JavaScript

Asynchronous Module Definition (AMD)

For writing modular JavaScript in the browser. To save re-writing what's already been done… http://addyosmani.com/writing-modular-js/ (see the "AMD" section) explains it well. What does AMD actually give us? http://requirejs.org/docs/whyamd.html#amd Separation of concerns, essentially placing value on interface rather than implementation. Mapping of module IDs to different paths. Lots more. It allows asynchronous loading of modules and their dependencies, which is something we need on the client side, but is not generally a requirement for the server side. For getting started, see "Getting Started With Modules" under the AMD section here. Also check out the AMD specification and of course the most common AMD implementation: RequireJS. Then at some stage you're probably going to want to concatenate and minify your modules, and that's where the likes of r.js comes in. r.js also has a node.js adapter which allows you to use node's implementation of require.
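
A minimal AMD sketch (the module name and maths are made up for illustration; the loader, e.g. RequireJS, is assumed to be present):

// geometry.js - defines a module with no dependencies.
define(function () {
   return {
      area: function (radius) {
         return Math.PI * radius * radius;
      }
   };
});

// main.js - declares a dependency on 'geometry'; the loader fetches it
// asynchronously before invoking the callback.
require(['geometry'], function (geometry) {
   console.log(geometry.area(2)); // 12.566...
});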

Tom Dale (core team member on Ember) also has some interesting ideas around why he thinks AMD is not the answer.

CommonJS API (Optimised for the server)

Although we have the likes of browserify (a CommonJS module implementation that can run in the browser) and browser-build, which makes CommonJS modules available in the browser and is very fast, CommonJS is primarily optimised for the server. Ryan Florence discusses module loaders in the pod-cast listed above ("JavaScript Jabber Ember Tools"), where he decided to move to CommonJS rather than RequireJS for his Ember Tools, mostly due to speed. So it's horses for courses. Decide what your requirements are, then decide which module loader satisfies most of them. Also see "writing modular js" under the "CommonJS" section.
CommonJS aims to provide a rich standard library. The intention is that an application developer will be able to write an application using the CommonJS APIs and then run that application across different JavaScript interpreters and host environments. With CommonJS-compliant systems, you can use JavaScript to write the following (a minimal module sketch follows the list):

  • Server-side JavaScript applications
  • Command line tools
  • Desktop GUI-based applications
  • Hybrid applications (Titanium, Adobe AIR)
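
A minimal CommonJS sketch (file names are made up for illustration; require here is synchronous, which is fine on the server):

// geometry.js
exports.area = function (radius) {
   return Math.PI * radius * radius;
};

// main.js
var geometry = require('./geometry');
console.log(geometry.area(2)); // 12.566...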

Why it doesn’t excel in the browser “out of the box”: http://requirejs.org/docs/whyamd.html#commonjs
ES Harmony (modules implemented in the language; we're not quite there yet, but the current offerings look to be a pretty good step in the right direction).

http://addyosmani.com/writing-modular-js/ (specifically “ES Harmony” section) discusses where TC39 are going in regards to implementing modules in ES.next.

So AMD and CommonJS can be used on server side or client side. In some cases one will work better for you than the other. You’ll need to do your homework as to what to use in which scenarios. Both have advantages and disadvantages that may work for or against you.

I’m keen to get a discussion going here on peoples experiences with the three MV* offerings mentioned. Especially those that have experience with two or more.

JavaScript Properties

October 2, 2012

In ECMAScript 5 we now have two distinct kinds of properties.

  1. Data properties
  2. Accessor properties

A property is a named collection of attributes:
  • value: any JavaScript value
  • writable: boolean
  • configurable: boolean, common to both Data and Accessor properties
  • enumerable: boolean, common to both Data and Accessor properties
  • get: a function that returns a value
  • set: a function that takes an argument as its value

configurable

If set to false, any attempt to delete the property or change its writable, configurable, or enumerable attributes will fail.
If using strict mode, we get a runtime error.
If not using strict mode, the behaviour is as it was with ES3: the deletion attempt is silently ignored.
Once set to false:
  • It cannot be re-set to true.
  • We can still change the value and writable attributes, but writable only from true to false.
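
A short sketch of configurable in action (the settings/locked names are made up for illustration):

var settings = {};
Object.defineProperty(settings, 'locked', { value: 42, configurable: false });

delete settings.locked;        // silently ignored (TypeError in strict mode)
console.log(settings.locked);  // 42

// Attempting to flip configurable back to true also fails:
// Object.defineProperty(settings, 'locked', { configurable: true }); // TypeError: Cannot redefine property: locked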

enumerable

If set to true, the property will be enumerated over when a for-in loop is encountered.
If set to false, the property is skipped during enumeration, as if it didn't exist.
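
A short sketch of enumerable in action (the config/visible/hidden names are made up for illustration):

var config = {};
Object.defineProperty(config, 'visible', { value: 1, enumerable: true });
Object.defineProperty(config, 'hidden', { value: 2, enumerable: false });

for (var key in config) {
   console.log(key);              // 'visible' only; 'hidden' is skipped
}
console.log(Object.keys(config)); // ['visible']
console.log(config.hidden);       // 2 (still directly accessible)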

In ES5,

  • If the property is defined the old-fashioned way, without using Object.defineProperty, the boolean attributes of the default property descriptor will all be true.
  • If the property is defined using Object.defineProperty and the boolean attribute values are not specified, they will all default to false.

I was wondering about this, as I had heard conflicting stories.
IMO this follows the Principle of Least Astonishment (PoLA).

var obj1 = {};
var obj1PropertyDesc;
var obj2 = {};
var obj2PropertyDesc;

Object.defineProperty(obj1, 'propOnObj1', {
   value: 'value of propOnObj1' //,
   // writable: false,
   // enumerable: false,
   // configurable: false,
});

obj1PropertyDesc = Object.getOwnPropertyDescriptor(obj1, 'propOnObj1');

// obj1PropertyDesc {
//    configurable: false,
//    enumerable: false,
//    value: "value of propOnObj1",
//    writable: false
// }

obj2.propOnObj2 = 'value of propOnObj2';

obj2PropertyDesc = Object.getOwnPropertyDescriptor(obj2, 'propOnObj2');

// obj2PropertyDesc {
//    configurable: true,
//    enumerable: true,
//    value: "value of propOnObj2",
//    writable: true
// }

So in general

Properties declared the old ES3 way are configurable (can be deleted).
Properties declared using Object.defineProperty; by default are not configurable (can not be deleted).
See edge cases below.

The delete operator is used to remove a property from an object.
It does not touch properties in the prototype chain.
If you have a prototype that has a property with the same name, it will now be used when your code references the derived object’s property that no longer exists.

var objLiteral = {
   aProperty: 'value of super property'
};

var anObject = Object.create(objLiteral); // create is an ES5 method, but easy enough to replicate for ES3 implementations

anObject.aProperty = 'value of derived property';

anObject.aProperty  // 'value of derived property'
delete anObject.aProperty;
anObject.aProperty  // 'value of super property'

Edge cases

JavaScript Patterns pg 12 states "Implied globals created without var (regardless if created inside functions) can be deleted."
Thanks to Angus Croll for pointing this out as untrue.

obj1 = 'kims global property';
var obj1PropertyDesc;
var obj2PropertyDesc;

obj1PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj1');

// obj1PropertyDesc {
//    configurable: true,
//    enumerable: true,
//    value: "kims global property",
//    writable: true,
// }

(function (){
   obj2 = 'kims global property declared within function scope';
}());

obj2PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj2');

// obj2PropertyDesc {
//    configurable: false,
//    enumerable: true,
//    value: "kims global property declared within function scope",
//    writable: true
// }

delete obj2;
// Nope, obj2 was not deleted.
// turn strict mode on and we get the following error:
// Uncaught SyntaxError: Delete of an unqualified identifier in strict mode.

When you declare a global,
you are actually defining a property of the global object.
If you use the var keyword on that global, you are still creating a property.
That property is non-configurable (it cannot be deleted with the delete operator).
Only object properties with the configurable attribute set to true can be deleted.
Nothing else can be deleted.
Variables are properties whose property descriptors we can't manipulate, so they can never be deleted.

var obj1 = {};
var obj1PropertyDesc;

obj1PropertyDesc = Object.getOwnPropertyDescriptor(this, 'obj1');

// obj1PropertyDesc {
//    configurable: false
//    enumerable: true
//    value: Object
//    writable: true
// }

There are a couple of notable internal properties that are found on all ES3 and ES5 objects.
[[Get]] and [[Put]].
The Ecma specs enclose internal properties in double square brackets as a convention only.
In ES3 [[Get]] and [[Put]] are used to return and set the internal [[Value]] property.
According to the ES5 spec, the internal [[Get]] and [[Put]] properties appear to do the same thing, although it's not stated explicitly.
This may just be an oversight of the spec.

Accessor Properties

All the examples so far have been showing data properties.
By default properties are data properties unless they define a getter and/or setter,
in which case they are defined as accessor properties.
There are two attributes that are distinct to accessor properties: get and set.
Both allow a function (and only a function) to be assigned to them, to get or set the property's value respectively.

JavaScript getter error

  • Internally the getter calls the function's internal [[Call]] method with no arguments.
  • Internally the setter calls the function's internal [[Call]] method with an arguments list containing the assigned value as its sole argument.
    The setter may, but is not required to, have an effect on the value returned by subsequent calls to the property's internal [[Get]] method.

So these attributes may or may not leverage the internal [[Get]] and [[Put]] properties that are found in ES3 and ES5 on all objects.
You can in fact define only a getter (readonly), or only a setter (write-only) accessor if you so choose.

Defining accessor properties literally:

var testObj = {
   // An ordinary data property
   dataProp: 'value',

   // An accessor property; the getter is intentionally commented out here
   // to demonstrate a write-only accessor (hence the undefined alert below).
   // get accessorProp() { return this.dataProp; },
   set accessorProp(value) { this.dataProp = value; }
};

testObj.accessorProp = 'an updated string';
alert(testObj.accessorProp); // undefined
alert(testObj.dataProp);  // an updated string

Can we create a data (default) property and then change it to be an accessor property?

var testObj = {}; // Start with no properties at all
// Add a nonenumerable data property x with value 1.
Object.defineProperty(testObj, 'x', { value : 1,
writable: true,
enumerable: false,
configurable: true});

// Check that the property is there but is non-enumerable
alert(testObj.x); // 1

// check that we can't enumerate the testObj
alert(Object.keys(testObj)); // returns an empty array of strings

// Now modify the property x so that it is read-only
Object.defineProperty(testObj, 'x', { writable: false });

// Try to change the value of the property
testObj.x = 2;
// Fails silently or throws TypeError in strict mode
alert(testObj.x); // 1

// The property is still configurable, so we can change its value like this:
Object.defineProperty(testObj, 'x', { value: 2 });

alert(testObj.x); // 2

// what happens if we change configurable to false?
Object.defineProperty(testObj, 'x', { configurable: false });
Object.defineProperty(testObj, 'x', { value: 2.5 }); // Uncaught TypeError: Cannot redefine property: x

// Now change x from a data property to an accessor property
// providing we haven't set configurable to false as above.
Object.defineProperty(testObj, 'x', {
   get: function() {
      return 0;
   }
});

alert(testObj.x); // 0

Yip.

Ok, so what does a property descriptor of an Accessor Property look like?

var objWithMultipleProperties;
var objWithMultiplePropertiesDescriptor;

objWithMultipleProperties = Object.defineProperties({}, {
   x: { value: 1, writable: true, enumerable:true, configurable:true },
   y: { value: 1, writable: true, enumerable:true, configurable:true },
   r: {
      get: function() {
         return Math.sqrt(this.x*this.x + this.y*this.y)
      },
      enumerable:true,
      configurable:true
   }
});

objWithMultiplePropertiesDescriptor = Object.getOwnPropertyDescriptor(objWithMultipleProperties, 'r');
// objWithMultiplePropertiesDescriptor {
//    configurable: true,
//    enumerable: true,
//    get: function () {
//       // other members in here
//    },
//    set: undefined,
//   // ...
// }

The Global Object

When the JavaScript interpreter starts (or whenever a web browser loads a new page),
it creates a new global object and gives it an initial set of properties that define:
  • global properties like undefined, Infinity, and NaN
  • global functions like isNaN(), parseInt(), and eval()
  • constructor functions like Date(), RegExp(), String(), Object(), and Array()
  • global objects like Math and JSON

delete undefined;  // not deleted
delete Infinity;   // not deleted
delete NaN;        // not deleted
delete isNaN;      // deleted
delete parseInt;   // deleted
delete eval;       // deleted
delete Date;       // deleted
delete RegExp;     // deleted
delete String;     // deleted
delete Object;     // deleted
delete Array;      // deleted
delete Math;       // deleted
delete JSON;       // deleted
undefined = 'kims undefined'; // nonassignable
Infinity = 'kims infinity';   // nonassignable
NaN = 'kims nan';             // nonassignable
  1. Why are undefined, Infinity and NaN not removed?
  2. Are they non-configurable?
  3. Why are they non-assignable?
  4. How do we test this?
  5. Are they constants?
  6. Are Infinity, NaN and undefined reserved words?

I’ll answer these questions shortly.

ES3 properties

According to the standard
8.6 “Each property consists of a name, a value and a set of attributes”.
8.6.1 A property can have zero or more attributes from the following set:

These attributes along with others (see ES3 spec) are reserved for internal use.

  • ReadOnly: The property is a read-only property. Attempts by ECMAScript code to write to the property will be ignored. (Note, however, that in some cases the value of a property with the ReadOnly attribute may change over time because of actions taken by the host environment; therefore "ReadOnly" does not mean "constant and unchanging"!)
  • DontEnum: The property is not to be enumerated by a for-in enumeration.
  • DontDelete: Attempts to delete the property will be ignored.
  • Internal: Internal properties have no name and are not directly accessible via the property accessor operators, meaning the property is not accessible to the ECMAScript program. How these properties are accessed is implementation specific. How and when some of these properties are used is specified by the language specification.

These property attribute values cannot be changed.
An interesting internal property is [[Prototype]].
There are a number of ways to access the internal [[Prototype]] property indirectly.
I’ve detailed them in my post on prototypes here.

More on ES5 properties

writable, enumerable and configurable replace the ES3 property attributes: ReadOnly, DontEnum, DontDelete.
The property attributes and their values define the property descriptor object (including Data or Accessor properties and those that apply to both (enumerable and configurable)).

The property attributes can be manually managed with the Object.defineProperty and Object.defineProperties methods,
and inspected with Object.getOwnPropertyDescriptor.

var myObj = {};

Object.defineProperty(myObj, 'propOnMyObj', {
   value: 'property descriptor',
   writable: true,    // ReadOnly = false in ES3
   enumerable: false, // DontEnum = true in ES3
   configurable: true // DontDelete = false in ES3
});

console.log(myObj.propOnMyObj); // 'property descriptor'

// getOwnPropertyDescriptor is the only way to get the properties attributes.
// They don't exist as visible properties on the property (other than for setting them as above),
// they're stored internally in the ECMAScript engine.
var myPropertyDescriptor = Object.getOwnPropertyDescriptor(myObj, 'propOnMyObj');

console.log(myPropertyDescriptor.enumerable); // false
console.log(myPropertyDescriptor.writable);   // true
// etc.

getOwnPropertyDescriptor

There are lots of new methods defined in ES5.

Now, back to the six questions we had above.

  1. Why are undefined, Infinity and NaN not removed?
    Because their property descriptor's configurable attribute is set to false.
  2. Are they non-configurable?
    Yes, as above.
  3. Why are they non-assignable?
    Because their property descriptor's writable attribute is set to false.
  4. How do we test this? (a quick sketch follows this list)
    NaN
    Number
    global Infinity property
    undefined
  5. Are they constants?
    Effectively, yes.
  6. Are Infinity, NaN and undefined reserved words?
    No. Avoid using their names to remove ambiguity.
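
Here's a quick way to check for yourself, assuming you run it at the top level of a browser console (non-strict), where this is the global object:

console.log(Object.getOwnPropertyDescriptor(this, 'undefined'));
// { value: undefined, writable: false, enumerable: false, configurable: false }
console.log(Object.getOwnPropertyDescriptor(this, 'NaN'));
// { value: NaN, writable: false, enumerable: false, configurable: false }
console.log(Object.getOwnPropertyDescriptor(this, 'Infinity'));
// { value: Infinity, writable: false, enumerable: false, configurable: false }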

ES3
Infinity read/write (the value can be changed). Holds positive infinity.
Number is a property on the global object, which has readonly properties Infinity and NaN.
NaN read/write (the value can be changed).
undefined

ES5
Infinity (well… POSITIVE_INFINITY) is a property on the global Number property with the value Infinity
We now also have NEGATIVE_INFINITY with the value –Infinity
Number
Infinity is also declared directly on the global object.
NaN is a property on the global object with the value NaN
undefined is a property on the global object with the value (you guessed it) undefined.
These are all constants now.

Important differences between Properties and Variables

Variables are properties, but not vice versa.

The VariableObject in ES3 is called the VariableEnvironment in ES5.
Can be seen in the specs.
Not sure why they changed what they called it.

Each execution context (be it global or any function) has an associated VariableObject.
Variables (and functions) created within a given context are bound as properties of that context’s VariableObject.
Even function parameters are added as properties of the VariableObject.
Discussed in depth in:
ECMAScript3 spec under “10 Execution Contexts”
ECMAScript5 spec under “10.3 Execution Contexts” onwards
This is why we can access global variables as properties of the global object…
Because that’s what they are.

  • The global object is created before control enters any execution context.
  • The global object is the same as the global context's VariableObject.
  • In the HTML DOM, the window property of the global object refers to the global object itself.

Now variables of functions are similar, but we can’t access them as properties.
Why?…
ECMAScript has an Activation Object.
When control enters the execution context of a function, an activation object is created and associated with the execution context.
The activation object is initialised with:

  1. The this value
  2. an arguments property (referred to as a binding in ES5 spec) that has the DontDelete attribute (configurable set to false in ES5).

The activation object is then used as the VariableObject.
We can access members of the activation object from within the function, but never the activation object itself,
which is why we can't reference those members as properties of any object.
Further details in:
ECMAScript3 spec under “10.2 Entering An Execution Context”
ECMAScript5 spec under “10.4 Establishing an Execution Context”
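
A small sketch of the difference, assuming it's run in a browser's global scope (the variable names are made up for illustration):

var globalVar = 'I am a property of the global object';
console.log(window.globalVar); // 'I am a property of the global object'

function demo(param) {
   var localVar = 'bound to the activation object';
   // localVar and param become properties of the activation object,
   // but that object itself is never reachable from code,
   // so there is no equivalent of window.localVar here.
   console.log(localVar + ', ' + param);
}
demo('so is this argument');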

Feature Detection (Yes, Including JavaScript)

I know this is not property specific, but it was something I thought noteworthy.

There’s a library that looks to have potential for JavaScript feature detection.
“has.js”
This should be useful for detecting what your users' browsers are capable of, ECMAScript-wise.
The project lead is Peter Higgins (Dojo Toolkit project lead).
Has a good sized group of committers.
May have potential to be a better Modernizr.
The source is here.
Explanation of has.js features here.

Additional References:

Succinct explanation of Variables vs Properties in JavaScript

EcmaScript5 Objects and Properties

Slideshow by Doug Crockford on ES5’s new parts

Dmitry Soshnikov’s elaborations on the Ecma standards:

http://dmitrysoshnikov.com/ecmascript/es5-chapter-1-properties-and-property-descriptors/

http://dmitrysoshnikov.com/ecmascript/chapter-7-2-oop-ecmascript-implementation/

http://dmitrysoshnikov.com/ecmascript/chapter-2-variable-object/

Scoping & Hoisting in JavaScript

November 14, 2011

Scoping

JavaScript scoping is different to classical languages, and can take some getting used to for programmers used to languages such as C, C++, C#, Java.
Classical languages like the before mentioned have block scope.
JavaScript has function scope.

In the following example “5” will be alerted.

var foo = 1;
function bar() {
   if (!foo) {
      var foo = 5;
   }
   alert(foo);
}
bar();


In the following example “1” will be alerted.

var a = 1;
function b() {
   a = 10;
   return;
   function a() {}
}
b();
alert(a);


In the following example Firebug will show 1, 2, 2.

var x = 1;
console.log(x); // 1
if (true) {
   var x = 2;
   console.log(x); // 2
}
console.log(x); // 2


In JavaScript, blocks such as if statements, do not create new scope. Only functions create new scope.

There is a workaround though 😉
JavaScript has Closure.
If you need to create a temporary scope within a function, do the following.

function foo() {
   var x = 1;
   if (x) {
      (function () {
         var x = 2;
         // some other code
      }());
   }
// x is still 1.
}

Line 4: begins the closure (the immediately invoked function expression)
Line 7: the closure invokes itself with ()

I discuss closure in depth in a later post.

Hoisting

Terminology

As far as I know…
function declaration or function statement
are the same thing.
function expression or variable declaration with function assignment
are the same thing.


A function statement looks like the following:

function foo( ) {}


A function expression looks like the following:

var foo = function foo( ) {};


A statement must not start with the word "function" if a function expression is intended, otherwise it will be parsed as a function declaration (which is why the self-invoking examples below are wrapped in parentheses).

//anonymous function expression
var a = function () {
   return 3;
};

//named function expression
var a = function bar() {
   return 3;
};

//self invoking named function expression.
(function sayHello() {
   alert('hello!');
})();

//self invoking anonymous function expression.
(function ( ) {
   var hidden_variable;
   // This function can have some impact on
   // the environment, but introduces no new
   // global variables.
}() );


In JavaScript, a name enters a scope in one of four basic ways:

  1. Language-defined: All scopes are, by default, given the names this and arguments.
  2. Formal parameters: Functions can have named formal parameters, which are scoped to the body of that function.
  3. Function declarations: These are of the form function foo() {}.
  4. Variable declarations: These take the form var foo;.

Function declarations and variable declarations are always hoisted
invisibly to the top of their containing scope by the JavaScript interpreter.
Function parameters and language-defined names are, obviously, already there. This means that code like this:

function foo() {
   bar();
   var x = 1;
}

Is actually interpreted like this:

function foo() {
   var x;
   bar();
   x = 1;
}


It turns out that it doesn’t matter whether the line that contains the declaration would ever be executed.
The following two functions are equivalent:

function foo() {
   if (false) {
      var x = 1;
   }
   return;
   var y = 1;
}
function foo() {
   var x, y;
   if (false) {
      x = 1;
   }
   return;
   y = 1;
}

The assignment portion of the declaration is not hoisted.
Only the identifier is hoisted.
This is not the case with function declarations, where the entire function body will be hoisted as well,
but remember that there are two normal ways to declare functions. Consider the following JavaScript:

function test() {
   foo(); // TypeError 'foo is not a function'
   bar(); // 'this will run!'
   var foo = function () { // function expression assigned to local variable 'foo'
      alert('this won\'t run!');
   }
   function bar() { // function declaration, given the name 'bar'
      alert('this will run!');
   }
}
test();

In this case, only the function declaration has its body hoisted to the top. The name ‘foo’ is hoisted, but the body is left behind, to be assigned during execution.

Including named function expressions added to the local object

//'use strict';

var container;

function Container() {

   var that = this;
   var descender = 3;
   var targetMin = 0;
   var ascender = 0;
   var targetMax = 3;    

   this.inc = function () {

      if (ascender < targetMax) {
         ascender += 1;
         console.log('ascender incremented: now equals ' + ascender);
         return that;
      } else {
         that.inc = function () {
            console.log('inc now modified to return ' + targetMax);
         };
         that.inc();
         return that;
      }
   };

   alert(dec); // Uncaught ReferenceError: dec is not defined. The identifier dec doesn't resolve to anything; we end up looking for a dec property on the global object, which doesn't exist.
   alert(this.dec); // Prints 'undefined' because this.dec is simply a property access on an existing object;
   // the property hasn't been assigned yet, so we get undefined rather than a ReferenceError.

   this.dec = function () {

      if (descender > targetMin) {
         descender -= 1;
         console.log('descender decremented: now equals ' + descender);
         return that;
      } else {
         that.dec = function () {
            console.log('dec now modified to return ' + targetMin);
         };
         that.dec();
         return that;
      }
   };
}

container = new Container();
container.inc().inc().inc().inc();
console.log(container.inc);
container.dec().dec().dec().dec();
console.log(container.dec);

Out of interest, the output looks like the following:

modify routine on the fly

Name Resolution Order

The most important special case to keep in mind is name resolution order. Remember that there are four ways for names to enter a given scope. The order I listed them above is the order they are resolved in. In general, if a name has already been defined, it is never overridden by another property of the same name. This means that a function declaration takes priority over a variable declaration (see the sketch after the list below). This does not mean that an assignment to that name will not work, just that the declaration portion will be ignored. There are a few exceptions:

  • The built-in name arguments behaves oddly. It seems to be declared following the formal parameters, but before function declarations. This means that a formal parameter with the name arguments will take precedence over the built-in, even if it is undefined. This is a bad feature. Don’t use the name arguments as a formal parameter.
  • Trying to use the name this as an identifier anywhere will cause a Syntax Error. This is a good feature.
  • If multiple formal parameters have the same name, the one occurring latest in the list will take precedence, even if it is undefined.
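
A small sketch of the function-declaration-beats-variable-declaration rule (names are made up for illustration):

function example() {
   var foo;                 // the declaration portion is ignored; foo already names the function below
   function foo() {}        // function declarations take priority over variable declarations

   console.log(typeof foo); // 'function'

   foo = 'reassigned';      // an assignment to the same name still works
   console.log(typeof foo); // 'string'
}
example();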

Now that you understand scoping and hoisting, what does that mean for coding in JavaScript?
The most important thing is to always declare your variables with var statements.
Declare your variables at the top of the scope (as already mentioned JavaScript only has function scope). See the Variable Declarations section.
If you force yourself to do this, you will never have hoisting-related confusion.
However, doing this can make it hard to keep track of which variables have actually been declared in the current scope.
I generally like to follow these coding standards with JavaScript.

function foo(a, b, c) {
   var x = 1;
   var bar;
   var baz = 'something';
   // other non hoistable code here
}