Keeping Your Linux Server/s In Time With Your Router

March 28, 2015

Your NTP Server

With this set-up, we’ve got one or more Linux servers in a network that all want to be synced with the same up-stream Network Time Protocol (NTP) server/s that your router (or whatever server you choose to be your NTP authority) uses.

On your router, or whatever your NTP server host is, add the NTP server pools. How you do this really depends on what you’re using for your NTP server, so I’ll leave this part out of scope. There are many NTP pools you can choose from. Pick one, or a collection, that’s as close to your NTP server as possible.

If your NTP daemon is running on your router, you’ll need to decide which router interfaces you want the NTP daemon supplying time to. You almost certainly won’t want it on the WAN interface (unless you’re a pool member), if your router has one.

Make sure you restart your NTP daemon.

Your Client Machines

If you have ntpdate installed, /etc/default/ntpdate says to take the server list from /etc/ntp.conf, which doesn’t exist unless ntp is installed. It looks like this:

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.
NTPDATE_USE_NTP_CONF=yes

but you’ll see that it also has a default NTPSERVERS variable set which is overridden if you add your time server to /etc/ntp.conf. If you enter the following and ntpdate is installed:

dpkg-query -W -f='${Status} ${Version}\n' ntpdate

You’ll get output like:

install ok installed 1:4.2.6.p5+dfsg-3ubuntu2

Otherwise install the ntp package:

apt-get install ntp

The public NTP server/s can be added straight to the bottom of the /etc/ntp.conf file, but because we want to use our own NTP server, we add the IP address of our server that’s configured with our NTP pools to the bottom of the file.

server <IP address of your local NTP server here>
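
For example, assuming your NTP server is your router at 192.168.1.1 (a placeholder, substitute your own address), the tail of /etc/ntp.conf might look like this; iburst just speeds up the initial synchronisation:

server 192.168.1.1 iburst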

Now if your NTP daemon is running on your router, hopefully you have everything blocked on its interface/s by default and are using a white-list for egress filtering.

In which case you’ll need to add a firewall rule to each interface of the router that you want NTP served up on.

NTP talks over UDP and listens on port 123 by default.
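
If your router’s firewall is iptables based, the rule would be along these lines (just a sketch; br-lan and 192.168.1.0/24 are placeholders for your LAN interface and subnet):

iptables -A INPUT -i br-lan -p udp -s 192.168.1.0/24 --dport 123 -j ACCEPT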

After any configuration changes to your ntpd make sure you restart it. On most routers this is done via the web UI.

On the client (Linux) machines:

sudo service ntp restart

Now issuing the date command on your Linux machine will provide the current time, yes with seconds.

Trouble-shooting

The main two commands I use are:

sudo ntpq -c lpeer

Which should produce output like:

            remote                       refid         st t when poll reach delay offset jitter
===============================================================================================
*<server name>.<domain name> <upstream ntp ip address> 2  u  54   64   77   0.189 16.714 11.589

and the standard NTP query program, followed by the as command at its prompt:

ntpq

Which will drop you at ntpq’s prompt:

ntpq> as

Which should produce output like:

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 15720  963a   yes   yes  none  sys.peer    sys_peer  3

Now in the first output, the * in front of the remote means the server is getting its time successfully from the upstream NTP server/s, which needs to be the case in our scenario. Often you may also get a refid of .INIT., which is one of the “Kiss-o’-Death Codes” and means “The association has not yet synchronized for the first time”. See the NTP parameters. I’ve found that sometimes you just need to be patient here.

In the second output, if you get a condition of reject, it’s usually because your local ntp daemon can’t access the NTP server you set up. Check your firewall rules etc.

Now check all the times are in sync with the date command.
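
If you have a few client machines, a quick way to eyeball them all at once is something like the following (a sketch; server1 and server2 are hypothetical host names you can reach over SSH):

for host in server1 server2; do echo -n "$host: "; ssh "$host" date; done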

350Z Hi-fi Install

February 28, 2015

When we acquired the 2004 350Z, it had a low quality hi-fi installed in it. The FM radio broadcast band in Japan is 76–90 MHz, which misses a large portion of the New Zealand stations.

It’s been a few years since I performed a car hi-fi installation, so I leaned on some friends, specifically Matthew Fung, who sold me the AVK6s and the LRx 5.1MT.

Previous Hi-fi install

Now this install was in my VY Ute. This image shows the sub woofer enclosure in place under the deck right next to the fuel tank. The metal deck panel gets fixed over this and then the tray liner goes over top of that.

VY sub woofer enclosure in place

 

Unlike the 350Z install, this enclosure didn’t have to be pretty at all because it would never be seen.

VY sub woofer and enclosure

 

I learnt a lot from this install that I would take forward to future installs, namely the 350Z install.

350Z Hi-fi Install

Some of Matthew’s advice was to spend in the following fashion:

35% of your budget on speakers, 35% on the amplifier, 20% on the head unit and 10% on the sub woofer. This is good advice.

Required Components

Other than the 350Z

  • Head unit: Alpine CDE-148EBT From Quality Car Audio
  • DIN/DDIN kit: 99-7402 from Quality Car Audio
  • Power amplifier: Italian made Audison LRx 5.1MT from Matthew Fung
  • Front speakers: Audison AVK6 6.5’s from Matthew Fung
  • Rear speakers: Existing factory from Matthew Fung
  • Sub woofers: 2 x Alpine SWR-12D4 From Quality Car Audio. One of the reasons for two sub woofers was appearance.
  • Sub woofer box: Fabricated with some old 19mm MDF cover sheets that have been in my garage since I left the carpentry trade about 15 years ago.
  • Square drive (because they are far superior to posi) super screws
  • Sound deadening: Single layer of Soundstream Deathmat from HyperDrive. Once finished, about 15kg will have been applied to the floors (front and boot) and the front doors.
  • In-line fuse block: 80amp from JayCar
  • Battery terminals: from JayCar
  • Power cable: 8m of STINGER SPW14TR 4 AWG GAUGE from Ebay
  • Speaker cable: Stinger SHW512B 100 ft Roll of HPM 12 Gauge Matte Blue Car Speaker Wire run to all speakers from Ebay
  • RCA cables: 3 x Stinger SI8217 Audio RCA Interconnect Cable 2 Channels 8000 Series 17 ft from Ebay
  • Internet of course
  • Carbon fiber vinyl wrap
  • Front speaker mounts: Rubber drain pipe transitions
  • Approximately one week labour

Yes, I was tempted to skimp on the likes of the power cable, speaker wire and RCA cables, because the good ones are much dearer than the cheap ones. This is where you really get what you pay for. If you want to produce a great result, aim for the best you can get here. A few hundred dollars really makes a big difference. I’ve used cheaper parts here on previous installs and the overall difference is very noticeable. It also gives you room to upgrade components in the future without having to run all your wires again, which is where most of the install time gets sucked up.

Day One

Design and Construct Sub Woofer Enclosure

Generally speaking if you are running two of the exact same sub woofers in the same air space you can just combine the air spaces, but if you swap one of the sub woofers for one that behaves differently, then you are likely to run into issues where the sound waves interfere with each other. Also if you have a sub woofer enclosure that has an air space for each sub woofer you also get bracing for free from the divider. Bracing is all about making the enclosure more rigid so that the sound waves don’t cause the casing to vibrate/move with the waves more than it should. The more rigid you can make the enclosure the more you help the enclosure do what it’s designed to do… produce accurate low frequencies.

sub woofer enclosure

I chose to have one enclosure with two separate air spaces, thus providing a rigid casing. There are three distinct volume calculations here.

  1. One for the main air space
  2. One for the arch way (Looks like a ‘D’ rotated 90° counter clockwise) you can see cut in the rear of the enclosure
  3. The space behind the rear of the main enclosure. This had to be done to accommodate the cast aluminium frame of the SWR-12D4.

I also allowed for adjustment in air space with the two ends of the enclosure, as they could be moved in or out depending on whether I needed more or less air space. As you can see in the image above, the rear and the front of the enclosure have tapered cuts down toward the bottom of the enclosure. These tapered cuts were made once A) the box had been constructed, B) the ends had been adjusted and fixed in place and C) I was certain of the internal air space.

All joints are wood glued and sealed with an acrylic sealant. Solvent based sealants can play havoc with some of the sub woofer internals, thus causing premature failure. The screws are varying lengths depending on the angle of the join. All screw holes were pre-drilled: the MDF being retained was drilled with a bit the diameter of the thread, so that the thread doesn’t grip onto the MDF being retained, and the MDF that the screws actually bite into was pre-drilled with a bit the diameter of the shank (which is a little smaller than the thread). If you don’t do this second part, the MDF will just split, which means very little grip and essentially a very weak joint.

All screws were countersunk with a… countersink bit. This is done so that when you cover the box in your chosen covering, it all appears flat. The two circular speaker holes were cut with a jig saw. The holes that the speaker cable passes through are also sealed with acrylic sealant.

I also chose not to use terminals on the sub woofer enclosure to fix speaker wires to, but rather run the speaker wires straight from the power amplifier to the sub woofers, thus removing one of the couplings. The sub woofer enclosure can still be disconnected at the power amplifier and at the speaker. If you’re planning on swapping sub woofers in and out frequently though, terminals on the enclosure may be the more convenient option.

As I’m a carpenter by trade (from approximately 1990–2000), this construction was trivial.

For those of you unfamiliar with the 350Z, this is what the target space looks like (plan view) with the dimensions:

350Z sub woofer space

 

You’ll also notice the cross section arrows. They should actually be facing the other direction; the A:A section is the below image in reverse. If I was to go full width with the enclosure, it wouldn’t be able to be installed, as the rear hatch opening is considerably narrower than the enclosure’s target location. There ends up being just over a 100mm gap on each side of the enclosure once installed. I just fill this space with plumbers foam pipe lagging, which actually looks fine. There is also a small gap in front of the enclosure (between enclosure and rear speakers) and behind it (between enclosure and strut brace). I’ve just pushed 12mm black PEF rod down the gap and it again looks fine. One of the rules in carpentry is: if you can’t hide a transition, then emphasise it. That’s what we’ve done with these gaps.

The enclosure doesn’t need to be fixed in place in case of a head on collision; it can’t move forward, only tilt forward a little, and as the roof is too low to allow it to move far, it’s safe. Just make sure those huge sub woofers are well fixed, as they would be like cannon balls in an accident.

Volume Calculations

The gross internal target volume for the sub woofer was 0.85 ft³, with an acceptable range of 0.23 ft³ either way. I ended up just below 0.85 ft³, so I was happy.

  1. For the main air space (the largest one), I just worked out the area of the quadrilateral (bottom, front, top, rear) then multiplied by the depth. All in cm, then convert cm³ to ft³. As I’m lazy, the easiest way is to just throw your values into the calculator here or here. These dimensions are all internal and specified in mm unless stated otherwise. See the quick calculation sketch after this list.
    EnclosureQuadrilateral
  2. For the semi-circle (the ‘D’ rotated 90° counter clockwise), it’s just πr²/2 for the area, then multiplied by the depth of 1.9cm
  3. The following was the calculation for the triangle behind the ‘D’. This space is also used to mount the power amplifier; the outer panel here carries across two of these internal spaces. Hope that makes sense? You’ll see an image below anyway of how the power amplifier is mounted on the rear of the enclosure. I also had the very rear panel of the enclosure over-hang the internal dimension by about 50mm. This was so that I could mount some LED strip lighting if I wanted to. The Audison LRx 5.1MT has its logo light up, which has for now satisfied my geek addiction for LED mood lighting.
    enclosure triangle volume calculation
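
To give a feel for the arithmetic, here’s a quick sketch using bc with made-up dimensions (the numbers are placeholders, not my actual measurements). Everything is in cm, and 1 ft³ is 28,316.8 cm³:

# quadrilateral cross-section area (treated as a trapezoid for this example) x depth, converted to ft³
echo "scale=3; area = ((20 + 30) / 2) * 35; area * 30 / 28316.8" | bc -l
# the 'D' cut: (pi * r^2 / 2) x the 1.9cm panel thickness, converted to ft³
echo "scale=3; (3.14159 * 12^2 / 2) * 1.9 / 28316.8" | bc -l

Add the pieces together (and subtract the volume the driver itself displaces if you want to be pedantic), then compare against the 0.85 ft³ target.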

 

SWR-12D4 Dimensions

SWR-12D4 dimensions

SWR-12D4 Design

SWR-12D4 design

 

Day Two

Just more of the same thing plus fitment and making small adjustments. I found that by draping a towel over the rear strut brace and just sliding the enclosure over it and tilting it forward it’d just slot into place nicely.

Day Three

Removal of Internals

Removed all the internal panels that I’d need out of the way in order to run cables and apply sound deadening. I numbered each piece I pulled off in order, so that it was easier to work out the re-fit order when the time came. I also taped any plugs or screws etc. to the panel they came from so they don’t get lost.

  • Remove center console
  • Remove panels to get to rear speakers and to run power cable and speaker wires. I also decided to make use of the empty space which is about 2 ft³ behind the drivers seat. For now this is still left open. I’m looking around wreckers for another cubby-hole like the one behind the passenger seat, as they are identical. This will allow me to claim back the space that we’ve lost due to the enclosure going into the limited boot space.
  • Removed door sill panels and kick panels. Both of which can be pulled off gently.
  • Remove seats. Also followed the sequence in this post to reset the windows going all the way up.

Once I was done with the full install and started the car, the Engine Management Light (EML) came on. This will continue to happen until you reset it. The first thing to do though is find out what the fault is, if there is one. Follow the sequence of events here. Diagnose the fault code based on the flashes detailed here and here. My code ended up being “P0000” (No Self Diagnostic Failure Indicated). That’s:

10 (0) Long Blinks
10 (0) Short Blinks
10 (0) Short Blinks
10 (0) Short Blinks

So I reset the ECU and I was done.

Apply Sound Deadening

The 350Z has a lot of road noise by default. If you want to keep that out so it doesn’t interfere with your new sounds, and keep your sounds inside the car, it can be a good idea to apply this liberally. Plus it’s a good time to do this when you have most of your internal panels removed. I’ve found that it made quite a big difference. The sounds are barely audible outside the vehicle when played at a modest level, and believe it or not, that includes the lower frequencies. When cranked, you can still feel the low frequencies outside more than you can hear them. I applied this a little higher than you can see in the below image. All the rattles you hear in a lot of vehicles with large sub woofer setups are gone. This means the low frequencies are not losing energy in the panels but are instead going into the cabin space, which is exactly where we want them.

Sound Deadening to floors

Day Four

Power from Battery to Power Amplifier

Run power cable from battery through firewall to boot of car where amplifier will sit. This link shows a left hand drive 350Z. For right hand drive Z’s, it’s pretty similar.

Run RCA Leads

from the front console to where the sub woofer enclosure will live. Be careful to keep speaker and RCA leads as far away from power cables as possible to reduce interference.

Here you can see the power cable on the left of the drive-shaft tunnel (top of image) and the three RCA leads on the right of the drive-shaft tunnel (bottom of image) zip-tied together and passing through the body just behind where the carpet finishes. You can also see the RCA lead ends looped around the gear stick in the image waiting to be plugged into the new head unit once installed.

Also notice the fold-out work light far left of the image. This is cordless and has magnets on the rear of both LED panels and the middle shaft. The LED panels can be pivoted around the shaft. This is one of the most handy work lights I’ve ever used. It also has a torch light in the middle shaft. It can be run with A) one panel running, B) two panels running, or C) just the torch light running. It’s rechargeable and seems to run for as long as I need it each day. I think it’s supposed to run for three hours with only one panel running, but it seemed to run for longer than that. $30 from Bunnings.

RCA leads and power cable

Run Speaker Wires

All except front left door and sub woofers.

For the front speakers, the hardest part of just about the entire install is getting the 12 gauge speaker wire through the plastic door harness without joiners. This post provided the detail I needed. Previous car hi-fi installs have been similar in that this task has been the most frustrating. I started with the right door (drivers door) which is harder than the passenger door for several reasons.

  • There is next to no visibility from underneath the steering wheel looking at where the speaker wire must pass through the plastic harness
  • The plastic harness has a greater population of used pins that you have to be very careful not to hit with your knife or drill. I used both tools.

The other thing to keep in mind is that the plastic harness that clicks into place on the car body must go in top first from the inside, not bottom first. This cost me a bit of time (I bent the metal retainer out of shape because I put bottom in first) and I ended up coming back to the driver side after I’d done the passenger side successfully and learnt a few more tricks.

For the front speakers, I ran the speaker wires down the door sills right next to the plastic clips that hold the sill covers on. Then through the holes at the back of the sill and through into the boot near where the power amplifier would be mounted.

I also ran 12 gauge wire for the rear speakers. There are plenty of holes to thread them through and keep them well concealed. Keeping in mind that they need to stay as far away from power cables as possible.

 

Factory Speaker Wire Colours

I took note of factory speaker wire colours in case I needed them later. I didn’t, but here they are anyway for anyone needing them:

Front Left

  • Red with silver loops
  • Blue with silver loops

Front Right: Didn’t capture these

Rear Left

  • Light green with silver loops
  • Light brown with dark brown stripes

Rear Right

  • Light orange with silver loops
  • Black and pink stripes with silver loops

 

Day Five

Fit Tweeters

See the image on Day Six for how the tweeter looks mounted. The small plastic triangle panel that the tweeter is fixed to, from memory, needs to be pulled in toward the car where it meets the window pane and then slid down. Mine had an existing round grill that looked like a tweeter was mounted underneath, but all it was was a grill. I drilled a hole just big enough to pass the tweeter’s existing wire through it. At this stage I’ve used double sided sponge tape. The tweeters are fairly heavy, so we’ll wait and see if the tape continues to hold them on. If it doesn’t, I’ll have to resort to some glue. Currently it’s been about a week and they’re still staying put. I used some acetone to clean the surface of both the rear of the tweeter and the area that it was going to be fixed to. Just be careful with this though, as it will start to melt the plastic, so make sure you only do it where it’s not going to be seen.

As you can see in the image, no wires are visible and it’s a good position for the sound stage. I’ve got the crossovers set to the highest gain for the tweeters. I thought this may have been too much, but it seems perfect. The lower frequencies are surprisingly well handled by the Audison AVK6 6.5″ drivers.

Run Speaker Wire to Left Front Door

Similar to the right door.

 

Day Six

Apply Sound Deadening to Side Doors

A lot of road noise comes in through the doors on the 350Z. Also, with your new hi-fi setup, you’d lose a lot of energy through the doors. Applying sound deadening liberally to the doors as well as the floors stops a lot of this. It’ll also kill any rattles you may have had otherwise.

Fit Front Speakers with Crossovers

There is a sunken space where I fixed my crossovers. I soldered and crimped terminals to the speaker wires that get screwed to the crossover. I soldered and applied heat-shrink sleeving to the tweeter wires / 12 gauge wire junction.

Audison AVK6 6.5

 

Below is a closer look at what the 6.5″ and tweeter mounts look like. I used a rubber drainage transition pipe for the 6.5″ driver. These are great for mitigating vibration and for cutting to the exact right size.
Now you have to get the off-set right here. The speaker, from the rear of the rim to the back, is 70mm deep. I cut my rubber mounts to 40mm, which left 30mm sitting inside the door panel. The window pane, when it’s wound down, goes behind the speaker, so you don’t have a lot of room there. I think you’ve got about 35mm, so I had about 5mm clearance. If it’s not enough, the glass is going to hit the rear of your speaker, which could be disastrous. 40mm mounts were actually too wide; it was quite hard to fit the door panels on as they were touching the speaker rim. I’d suggest making your mounts anything over 35mm (don’t go less), with a target of about 36–37mm. So as you can see, this has got to be fairly accurate, else either your window will hit the rear of the speaker or your door panel will be up against the speaker rim.
There is an excellent step by step guide to most of this here.

TweeterAndDoorSpeaker

Fit Rear Speakers

I used the existing factory speakers, as they don’t really matter that much. Why don’t they matter much? Most of your sound stage should be coming from in front of you.

In my case there were no signs of which speaker terminal was positive or negative. The best way to work this out is to use a 9v battery and touch wires from the battery’s + and – terminals to each of the speaker’s terminals. If the speaker cone pops out slightly when connected, you’ve connected the battery’s + to the speaker’s +. If the speaker pops in slightly when connected, you’ve connected the battery’s + to the speaker’s –. This way you can tell which terminal you should connect your designated positive speaker wire to. This method doesn’t harm your speaker, as the voltage is too low.

 

Day Seven

Install Head Unit

Most of the steps you would need are here.

Fit DIN/DDIN kit 99-7402 to the dash assembly. As the CDE-148EBT is a single DIN unit, we get some space back in the form of another compartment below the head unit.

The Alpine CDE-148EBT comes with a harness. I only needed to solder and heat shrink the following harness wires to the existing wires. The existing wires were all factory labelled with tags, which was obviously very helpful. I had already located a wiring guide for this operation that I now wouldn’t need. The strange thing with the guide was that the colours were incorrect for my Z.

  • Orange (head unit illumination)
  • Red (ignition)
  • Yellow (battery)
  • Black (ground) just replaced existing ground wire
  • Blue/white (remote turn on, supplies 12v to the power amp to tell it to turn on when the head unit is on). I missed this one and had to pull the console out again and connect it. I go through this in day eight.

Connect Aux in, USB extension lead, all six RCA’s.

There are a lot of wires behind the head unit and it can be a bit tricky getting everything back in. Be gentle and patient.

Re-fit Seats and Some Panels

Self explanatory.

Apply Sound Deadening

Around where sub woofer enclosure is going to live. I applied it to the sides also.

SoundDeadeningAroundEnclosure

Fit 80 amp In-line Fuse

I fitted the fuse block about 150mm from the battery. The end of the in-line fuse block is just visible in the below image.

80 amp inline fuse

Install Sub Woofer Box

EnclosureFromTop

Fit Sub Woofers

If you plan on swapping your subs in and out a bit then I’d advise using the likes of these inserts to screw into. Screw these into your MDF then screw into them:

EnclosureMDFScrewFixings

It was about here that I realised that I should have gone with the SWR-12D2 (the 2 ohm version) sub woofers. As I hadn’t actually used the sub woofers yet, I thought I’d be able to just exchange the 4 ohm subs for the 2 ohm subs, but Quality Car Audio refused to provide an exchange for the still new subs. Both 4 ohm and 2 ohm models were exactly the same price.

Some reading on under-powering sub woofers:

Connected Power Cable to Amplifier

Ran Earth Cable to Bare Metal

Unscrewed a bracket that comes through the left side of the car in the spare wheel well and fitted the terminal I had soldered and crimped onto my 4 gauge earth cable between the bracket and the body of the car, after sanding the paint off for a good connection. Cranked that bolt up nice and tight.

Connected all Speaker Wires to Power Amplifier

Connected RCA Leads

Test Front and Rear Speakers

Re-install internal panels

 

Day Eight

Power Amplifier Not Auto Powering On

Took the power amp in to be tested, as it wouldn’t power on. The car power amplifiers I’d used in the past would turn on when they received signal from the pre-outs, that is, signal that’s pre-amplified. It turns out a plastic jumper carrying a single 12v remote turn on feed from the head unit was needed on the bottom left pin of the power amplifier. So out with the head unit again to run the extra wire. That’s why you want to leave the re-fitting of panels as late as possible.

Cleaned up

Charged Battery

Initial Tuning

Power Amplifier

Currently I have the Audison LRx 5.1MT gains set to (where 0 is 0 and 10 is max):

  • Fronts: 8
  • Rears: 3 (existing very crappy factory speakers)
  • Mono sub woofer channel: 8

Head unit

  • Turn off internal power amp as it’s not used and helps to reduce power supply interference/noise
  • The Alpine CDE-148EBT has a lot of options and features when it comes to tuning your sound. This will keep me busy for a long time I’d say.

Sub Woofer Configuration

  1. I first tried a 2 ohm setup with one speaker as a benchmark, because I knew that it was ideal
  2. I then tried parallel at 1 ohm, which I don’t think provided enough energy to the sub woofers
  3. The third configuration I tried was series at 4 ohm, which worked quite well (see the quick impedance check below)
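
For what it’s worth, here’s the impedance arithmetic behind those three options as a quick bc sketch, assuming each SWR-12D4 is a dual 4 ohm voice coil driver:

# one sub, both 4 ohm coils in parallel: the benchmark 2 ohm load
echo "scale=4; 1 / (1/4 + 1/4)" | bc -l
# two subs, coils in parallel (2 ohm each), subs in parallel: the 1 ohm load
echo "scale=4; 1 / (1/2 + 1/2)" | bc -l
# two subs, coils in series (4 + 4 = 8 ohm each), subs in parallel: the 4 ohm load
echo "scale=4; 1 / (1/8 + 1/8)" | bc -l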

I’ve also noticed that pointing the sub woofers forward, so I can actually hear them, may not have been the best design decision, especially with two of them running in mono, as it affects the sound stage a bit. It probably would have produced a slightly better result if they had been pointing toward the rear of the car under the strut brace, so that the bass could be felt more than heard. In saying that, it’s still early days and I have a lot of tuning to do to get the levels just right. Also, as I didn’t want to remove the spare wheel (which sits under the boot base) in order to mount the power amplifier under there, one of the only places left was to mount it on the sub woofer enclosure. Although it is rather a good looking piece of hardware, I think the 12″ sub woofers look better.

EnclosureFromFront

A bit more reading on parallel versus series wiring of your sub woofers.

Mounted Power Amplifier to Rear of Sub Woofer Enclosure

Audison LRx 5.1MT

Overall Sound

I’m kind of surprised that I’ve managed to achieve such a truthful reproduction of my recordings. I wasn’t sure this was possible in a car. Think of studio monitors and that’s what this system reminds me of. All of a sudden many recordings sound bad and the good recordings sound outstanding, producing the emotion that high quality music (recording, mix, mastering) is renowned for. For example, my personal recordings that I did a few years ago sound amazing, as I had help from one of New Zealand’s best sound engineers / producers (Ian McAllister).

I think it also helps that I didn’t take any short-cuts that I’m aware of. It’s all the small details that add up, as well as using great components.

Ongoing

Hunting down boot carpet for the reduced space. The factory carpet in the Z is not actually carpet but just felt, so I’m going to get some real carpet with the ‘Z’ embroidered into it. There are a few places that actually supply this.

 

GnuPG Key-Pair with Sub-Keys

January 31, 2015

There are quite a few other posts on this topic, but my set-up hasn’t been exactly the same as any I found, so I found myself using quite a few resources to achieve exactly what I wanted.

Synopsis

For my personal work, I mostly use GNU/Linux distributions. All of the following operations have been carried out on such platforms and should work on any Debian derivative.

The initial set-up was performed on a machine other than a laptop. Then I discuss the process I took to get my key pairs into a laptop environment.

All keys are created using the RSA cryptosystem.

I’m going to create a large (4096 bit) RSA key-pair as my master (often called primary) key and then create a smaller (2048 bit) key-pair for signing and then another (2048 bit) key-pair for encrypting/decrypting.

Most of the work is done on the command line.

If you haven’t already got gnupg installed (accessed by the gpg command), run the following command as root. More than likely it’s already installed by default though:

apt-get install gnupg

Run gpg from command line. If it’s the first time it’s been run it’ll produce output like the following. This initialises your .gnupg directory and configuration:

gpg: directory `/home/<you>/.gnupg' created
gpg: new configuration file `/home/<you>/.gnupg/gpg.conf' created
gpg: WARNING: options in `/home/<you>/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/home/<you>/.gnupg/secring.gpg' created
gpg: keyring `/home/<you>/.gnupg/pubring.gpg' created
gpg: Go ahead and type your message ...

Just press Ctrl+d to terminate gpg.

Use the sks key-server pool

This section is optional apart from the first three lines that need to be added to the ~/.gnupg/gpg.conf file. The step of using the pool over TLS can of course be done later.

Rather than rely on a single specific key-server, we use the sks key-server pool, and we talk to it over an encrypted channel by using the hkps protocol. If a single server in the pool is not functioning properly, it’s automatically removed from the pool.

In order to use the hkps protocol (hkp over TLS):

sudo apt-get install gnupg-curl

Now that you have a ~/.gnupg/gpg.conf file, you can add the following lines to the end of it (SHA-1, the default, is no longer considered secure):

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
keyid-format 0xlong
with-fingerprint

There may be keyserver and keyserver-options options in the ~/.gnupg/gpg.conf already. If so, modify them; if not, add them.

keyserver hkps://hkps.pool.sks-keyservers.net
keyserver-options ca-cert-file=/home/kim/.gnupg/sks-keyservers.netCA.pem

This assumes you downloaded the sks-keyservers.net CA certificate and put it in ~/.gnupg/ . You can of course put it anywhere, but the keyserver-options path will need to reflect your placement.

Verify the certificate’s fingerprint.
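
One way to do that is with openssl (a sketch, assuming you dropped the certificate into ~/.gnupg/ as above); compare the output against the fingerprint published on the sks-keyservers.net site:

openssl x509 -in ~/.gnupg/sks-keyservers.netCA.pem -noout -fingerprint -sha1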

Anywhere below that the --keyserver option is specified, it can be omitted if you’ve set up the key-server pool.

Master Key-Pair Generation

This process will create a master key-pair that can be used for signing, plus a sub key-pair for encrypting/decrypting. We’re actually only going to keep the master key-pair that’s created out of this process, and we won’t use it for anything other than simply being a master: creating other key-pairs with it, signing other people’s keys etc. We won’t be using it for day to day signing or encrypting/decrypting; we’ll create two additional sub-keys for that purpose in a bit.

This allows us to remove the master key from our computer and put it in a safe place (disconnected entirely from the network) that can’t be easily accessed. This means that if any of our computers are compromised, the attacker only gets access to our sub-keys which are the keys we use to actually do our day to day work of signing, encrypting outgoing messages and decrypting incoming messages.

On top of this, they also need our pass phrase in order to compromise our identity. If in fact an attacker is able to compromise that as well, then we bring our master key out of hiding and can easily revoke the compromised sub key-pair(s), of which the public part is probably on a key-server or your blog or website. This way, whenever anyone gets your public sub-keys from one of the many key-servers or your blog or website, they will see that the public key(s) have been revoked and thus deprecated.
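
As a rough sketch of what that revocation could look like when the time comes (run against the copy of the keyring that still holds the master key; which sub-key you select is just an example):

gpg --homedir /media/<your encrypted USB device>/.gnupg --edit-key F90A5A4E
# at the gpg> prompt: select the compromised sub-key, revoke it, then save
key 1
revkey
save
# then publish the updated public key so others see the revocation
gpg --send-keys F90A5A4E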

Now run:

gpg --gen-key

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?

I chose 1. That’s (1) RSA and RSA (default)

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)

Now because this is the master, we’re not actually going to be using it for signing our own messages or encrypting/decrypting, and in theory we’ll probably just keep extending its expiry date indefinitely, so we make it 4096 bit. Why? Because hardware is getting faster all the time and at some stage 2048 bit keys will not be large enough for cryptographic security. Why would we keep extending the master key-pair expiry date? Because we’ve worked hard to acquire other people’s trust (signatures on it) and we don’t really want to go through all that again. That’s why I’ve decided to not actually use the master for day to day work and to do everything in my power to make sure it’s never compromised. If somehow the master key-pair was compromised, I’d still have a revocation certificate that I could use to revoke it. It’d just be a pain though. I go through the creation of the revocation certificate for the master key-pair below.

4096 # Use smaller for sub-keys, as we can replace them easily when it becomes easier to crack them.
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)

I chose 5y

Because I want my master key to expire eventually if it’s compromised along with the pass-phrase and somehow I lost the multiple copies of the master revocation certificate. If it never gets compromised, I’ll just keep extending the expiry date.

Key expires at Fri 06 Dec 2019 23:32:56 NZDT
Is this correct? (y/N)

I chose:

y
You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name:

Enter your real name:

Kim Carter
Email address:

Enter your email address:

Kim.Carter@binarymist.net
Comment:

Here you can enter something like your website address, your on-line handle, or whatever is useful for providing some more identification:

lethalduck
You selected this USER-ID:
    "Kim Carter (lethalduck) <Kim.Carter@binarymist.net>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?

Enter ‘O’ to continue:

O

Now you’re asked for a passphrase. Make this long and hard to guess. I don’t remember this myself; that’s why I use a password vault, so I can have unique credentials for everything.

You need a Passphrase to protect your secret key.

This is not my passphrase, but it’s a good example of one. Adding the extra characters that are all the same actually makes for a much harder code to crack. Oh, and you’ll be prompted to enter this twice.

....................MW$]T&LP[=:[f/8=RQQ0M!++kMreX"....................

Now you’re asked to generate the entropy. This is done by interacting with the computer: keystrokes, mouse movements, storage media work. I find running my rsync scripts at this point is quite effective.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 187 more bytes)

I added a pass phrase and waited for the entropy to be collected.
Once gpg has enough entropy, your key-pairs (master for signing, sub-key for encrypting/decrypting) will be created.

gpg: /home/kim/.gnupg/trustdb.gpg: trustdb created
gpg: key F90A5A4E marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2019-12-06
pub 4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
Key fingerprint = D6B6 1E46 4DC9 A3E9 F450 F7F8 C9FA 6F23 F90A 5A4E
uid Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
sub 4096R/65CA12E5 2014-12-07 [expires: 2019-12-06]

Add photo to a uid

Now I wanted to add a photo to my master key-pair.
PGP specifies that the image be no greater than 120×144. GPG recommends it be 240×288. So I chose the smaller size and reduced the quality as much as possible. I could only get it down to 10kb before the image became unrecognisable.

gpg --edit-key F90A5A4E
# or safer...
gpg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

gpg>

To add a jpeg:

addphoto

gpg complained that my 10kb image was very large, so I ditched adding the photo.

Add a sub-key for signing

Now before we go any further I just want to make note of the prefixes and suffixes that you’ll often encounter with gpg commands.

Listing your keys with

gpg -K # list secret keys

or

gpg -k # list public keys

will show the following prefixes for your keys.

sec === (sec)ret key
ssb === (s)ecret (s)u(b)-key
pub === (pub)lic key
sub === public (sub)-key

Roles of the key-pair will be represented by the middle character below.

Constant           Character  Explanation
PUBKEY_USAGE_SIG   S          Key is good for signing
PUBKEY_USAGE_CERT  C          Key is good for certifying other signatures
PUBKEY_USAGE_ENC   E          Key is good for encryption
PUBKEY_USAGE_AUTH  A          Key is good for authentication

When we add sub-keys, they are bound to the master key. The master key is modified to reference the sub-keys.

What we want to do is add a sub-key for signing, so we can move the master key-pair off of the machine and into a safe place.
The new signing sub-key will be smaller (2048 bit) with a shorter expiry date, and we’ll also create another 2048 bit sub-key for encryption, again with a shorter expiry date.

Create backup of your ~/.gnupg directory:

umask 077; tar -cf $HOME/gnupg-backup.tar -C $HOME .gnupg

To add a signing sub-key:

gpg --edit-key F90A5A4E
# or safer...
gpg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

gpg>

Now we add the key

addkey
Key is protected.

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <Kim.Carter@binarymist.net>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
Your selection?

Now we want (4) RSA (sign only)

4

Output:

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)

Choose 2048 because we can easily regenerate this key-pair or extend the expiry date and at this stage 2048 is secure enough.

2048

Output:

Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)

I set this to 2y

Key expires at Wed 07 Dec 2016 01:21:11 NZDT
Is this correct? (y/N)

y

Really create? (y/N)

y

After this gpg collects more entropy. When it’s done it dumps you back to the gpg prompt

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 186 more bytes)
.......+++++
.+++++

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
[ultimate] (1). Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

gpg>

Now you can see from the ‘S’ usage flag that we now have a sub-key that’s “good for signing”.

Same again but for encrypting

While still at the gpg prompt, run addkey again but choose option 6.

That’s (6) RSA (encrypt only)
Choose 2048 for the keysize.
Choose 2y (two years) for how long the key is valid for.

Eventually you’ll see:

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

gpg>

Now you can see from the ‘E’ usage flag that we now have a sub-key that’s “good for encryption”.

To save the new keys before finishing with gpg, type save.

Create Revocation Certificate for Master Key

gpg --output F90A5A4E.gpg-revocation-certificate --gen-revoke F90A5A4E

Output:

sec  4096R/F90A5A4E 2014-12-07 Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

Create a revocation certificate for this key? (y/N)

Type y

Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision?

Type 1

Enter an optional description; end it with an empty line:
>

Enter anything you like here.

This revocation certificate was generated when the key was created.
>
Reason for revocation: Key has been compromised
This revocation certificate was generated when the key was created.
Is this okay? (y/N)

y

Output:

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <Kim.Carter@binarymist.net>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

ASCII armored output forced.
Revocation certificate created.

Please move it to a medium which you can hide away; if Mallory gets
access to this certificate he can use it to make your key unusable.
It is smart to print this certificate and store it away, just in case
your media become unreadable.  But have some caution:  The print system of
your machine might store the data and make it available to others!

Now store your master key-pair revocation certificate somewhere off of the network. Preferably in more than one place also.

Copy ~/.gnupg to an external device (/media/<your encrypted USB device>) for safe keeping before we remove the master key-pair from your computer.
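
Something along these lines does the trick (a sketch; the mount point is a placeholder, and -a preserves the file permissions):

cp -a ~/.gnupg /media/<your encrypted USB device>/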

Remove master key

Following are the commands to do this.

gpg --export-secret-subkeys F90A5A4E > /media/<your encrypted USB device>/subkeys
gpg --delete-secret-key F90A5A4E

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec  4096R/F90A5A4E 2014-12-07 Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

Delete this key from the keyring? (y/N)

Type y

This is a secret key! - really delete? (y/N)

Type y

gpg --import /media/<your encrypted USB device>/subkeys

Output:

gpg: key F90A5A4E: secret key imported
gpg: key F90A5A4E: "Kim Carter (lethalduck) <Kim.Carter@binarymist.net>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1

Now check to make sure that the master key-pair is no longer on your computer but is on your USB device:

gpg -K

Output:

sec#  4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
uid                  Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
ssb   4096R/65CA12E5 2014-12-07
ssb   2048R/7A3122BD 2014-12-07
ssb   2048R/8FF9669C 2014-12-07
gpg --home=/media/<your encrypted USB device>/.gnupg/ -K

Output:

sec   4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
uid                  Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
ssb   4096R/65CA12E5 2014-12-07
ssb   2048R/7A3122BD 2014-12-07
ssb   2048R/8FF9669C 2014-12-07

You can see that the first command shows sec#. The # means the secret part of the master key-pair is no longer in your ~/.gnupg/ directory.

Upload your Public Keys to KeyServer

Remember, if you set up the key-server pool, the --keyserver option can be omitted anywhere it’s specified.

I’ve chosen https://pgp.mit.edu/
You can choose any public keyserver. They all communicate with each other and sync updates at least daily. You can also send more than one public key by adding additional IDs after the --send-keys.

gpg --keyserver hkp://pgp.mit.edu/ --send-keys F90A5A4E

Output:

gpg: sending key F90A5A4E to hkp server pgp.mit.edu

Download public keys from KeyServer

gpg --keyserver hkp://pgp.mit.edu/ --recv-keys <key id to receive and merge signatures>

A safer way to do this is to not just trust every key from a key-server, but rather to verify that the key belongs to who you think it belongs to before you download and trust it. Try one at a time and use the fingerprint rather than just the short ID.

gpg --keyserver hkp://pgp.mit.edu/ --recv-key '<fingerprint>'

Quotes are required around the fingerprint because of the spaces in it; single or double quotes both work.

Refreshing local Keys from Key-Server

gpg --refresh-keys

Set-up the Laptop with your key-pairs

Copy the contents of the desktop’s ~/.gnupg/ to the laptop’s ~/.gnupg/ . I just used the same USB drive for this, but made sure I didn’t mix this .gnupg/ up with the one that holds the master key. Then delete the copy without the master key once it’s copied, to save any confusion. Also keep in mind that when you delete files from a flash drive they are not actually deleted; that’s why it’s important to use an encrypted USB drive. Also keep it in a very safe place, make a copy of it, and keep that off site in a very safe place also.

Make sure you check the permissions of the ~/.gnupg files you just copied to the laptop, so that they are the same as the files created with the gpg command.
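
If in doubt, the following is a rough guide to sane permissions (gpg will warn you about unsafe permissions on the home directory otherwise):

chmod 700 ~/.gnupg
chmod 600 ~/.gnupg/*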

Adding another E-Mail Address

Now it’s easier if you do this here in the sequence, but I didn’t think about it until after I’d uploaded the public keys. If you do want to add another uid once you’ve moved the master key and copied your master-key’less sub-keys to your laptop, it just means you’ve got to operate on the master key that you moved into /media/<your encrypted USB device>/.gnupg/, then copy the contents of /media/<your encrypted USB device>/.gnupg/ back to ~/.gnupg/ on both your desktop and laptop machines (not forgetting to change file permissions again), remove the master key from ~/.gnupg/, and upload the modified public keys again.

This is how you would add the additional uid:

gpg --home=/media/<your encrypted USB device>/.gnupg --edit-key F90A5A4E
# or safer...
gpg --home=/media/<your encrypted USB device>/.gnupg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

gpg: DBG: locking for `/media/<your encrypted USB device>/.gnupg/trustdb.gpg.lock' done via O_EXCL
pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <Kim.Carter@binarymist.net>

gpg>

Add the extra uid now:

adduid

Output:

Real name:

Enter your real name:

Kim Carter

Output:

Email address:

Enter the additional email address you want:

kim.carter@owasp.org

Output:

Comment:

Add the web page that adds some proof of identity:

https://www.owasp.org/index.php/New_Zealand

Output:

You selected this USER-ID:
    "Kim Carter (https://www.owasp.org/index.php/New_Zealand) <kim.carter@owasp.org>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?

Type O

Output:

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <Kim.Carter@binarymist.net>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <kim.carter@owasp.org>

gpg>

Now we want the same trust level applied to the second uid as the existing:

trust

Output:

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <kim.carter@owasp.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision?

Type 5

Output:

Do you really want to set this key to ultimate trust? (y/N)

Type y

Output

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <Kim.Carter@binarymist.net>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <kim.carter@owasp.org>

gpg>

Don’t worry that it still looks like it’s unknown. Once you save and try to edit again, you’ll see the change has been saved.

If you want to make the uid that you’ve tentatively just added your primary, select it:

uid 2

issue the command:

primary

and finally save:

save

Sign Someone Else’s Public Key

You’re going to have to download and import the person’s key into your ~/.gnupg/pubring.gpg

If you’ve got a key-server pool configured, you won’t need the --keyserver option.

gpg --recv-key '<fingerprint of public key you want to import>'
gpg --home=/media/<your encrypted USB device>/.gnupg/ --primary-keyring=~/.gnupg/pubring.gpg --sign-key '<fingerprint of public key you want to sign>'

There will be some other output here. I wasn’t actually asked which trust level I wanted to provide, so I carried out the following edit.

gpg --edit-key '<fingerprint of public key you want to sign>'

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: unknown       validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>

gpg>

Issue the trust command:

trust

Output:

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: unknown       validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision?
3

Output:

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: marginal      validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg>

Email the Signed Public-Key

In order to send an email with the freshly signed public-key, attach the file generated with the following command, encrypt and send the email to the owner of the public key specified by their uid. Details on how to encrypt the e-mail are specific to the e-mail client you choose to use.

gpg --armor --output <long id of receivers public key>.signed-by.0xc9fa6f23f90a5a4e --export '<fingerprint of public key you just signed>'

Upload the Signed Public-Key to a Key Server

 

gpg --send-key '<fingerprint of public key you just signed>'

Output:

gpg: sending key <long id of receivers public key> to hkps server hkps.pool.sks-keyservers.net

Verify on the key server that your signature is now showing against the public key you signed.

Import Your Public-Key Signed by Someone Else

At some stage you may need to import a copy of your own public-key in the form of a file that someone else has added their signature to:

gpg --import ./0xC9FA6F23F90A5A4E.signed-by-<someone else's long id>.asc

Then view your new signatures:

gpg --list-sigs 0xC9FA6F23F90A5A4E

Then upload them again with --send-key
and pull them down to your other machines with --refresh-keys. You’ll also need to --recv-key their keys so that your key recognises who the signatories are. Or… just copy over your ~/.gnupg/ directory. Make sure to check your permissions before and after the copy though. We don’t want anyone other than you being able to read these files, especially secring.gpg and any PEM certs you have.
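
A quick sketch of the sort of permission check I mean, assuming the default GnuPG 1.x layout used throughout this post:

ls -ld ~/.gnupg
ls -l ~/.gnupg
chmod 700 ~/.gnupg                                    # only you can enter the directory
chmod 600 ~/.gnupg/secring.gpg ~/.gnupg/trustdb.gpg   # only you can read/write the sensitive files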

Browser based E-Mail

Two browser extensions that look OK are:

  1. Mailvelope for Firefox and Chrome (I’m using this)
    Getting set-up details
    Details of how this works here
  2. Mymail-Crypt for Gmail

Desktop based E-Mail

Thunderbird with enigmail

Additional Resources I’ve Collected

Posts/articles, Documentation

Podcasts

Installation and Hardening of Debian Web Server

December 27, 2014

These are the steps I took to set-up and harden a Debian web server before being placed into a DMZ and undergoing additional hardening before opening the port from the WWW to it. Most of the steps below are fairly simple to do, and in doing so, remove a good portion of the low hanging fruit for nasty entities wanting to gain a foot-hold on your server and, from there, your network.

Install and Set-up

Debian wheezy, currently stable (supported by the Debian security team for a year or so).

Creating ESXi 5.1 guest

First thing to do is to set up a virtual switch for the host under the Configuration tab. Now I had several quad port Gbit Ethernet adapters in this server, so I created a virtual switch and assigned a physical adapter to it. Now when you create your VM, you choose the VM Network assigned to the virtual switch you created. Provision your disks. Check “Edit the virtual machine settings before completion” and Continue. You will now be able to modify your settings before you boot the VM. I chose 512MB of RAM at this stage, which is far more than it actually needs. While I’m provisioning and hardening the Debian guest, I have the new virtual switch connected to the client’s LAN.

ESX Network Configuration

Once we’re done, we can connect the virtual switch up to the new DMZ physical switch or straight into the router. Upload the Debian .iso that you downloaded to the ESXi datastore. Then edit the VM settings and select the CD/DVD drive. Select the “Datastore ISO File” option, browse to the .iso file and select the “Connect at power on” option.

6_NewVMSelectIso

Kick the VM in the guts and flick to the VM’s Console tab.

OS Installation

Partitioning

Deleted all the current partitions and added the following. / was added to the start and the rest to the end, in the following order.
/, /var, /tmp, /opt, /usr, /home, swap.

Partitioning Disks

Now the sizes should be set up according to your needs. If you have plenty of RAM, make your swap small; if you have minimal RAM (barely sufficient, if at all), you could double the RAM size for your swap. It’s usually a good idea to think about what mount options you want to use for your specific directories. This may shape how you set up your partitions. For example, you may want to have the options nosuid,noexec on /var, but you can’t because there are shell scripts in /var/lib/dpkg/info, so you could set up four partitions: /var without nosuid,noexec, and /var/tmp, /var/log, /var/account with nosuid,noexec. Look ahead to the Mounting of Partitions section for more info on this.
In saying this, you don’t need to partition as finely grained as you want options for. You can still mount directories on directories and alter the options at that point. This can be done in the /etc/fstab file and also ad-hoc (using the mount command) if you want to test options out.

You can think about changing /opt (static data) to mount read-only in the future as another security measure.

Continuing with the Install

When you’re asked for a mirror to pull packages from, if you have an apt-cacher[-ng] proxy somewhere on your network, this is the chance to make it work for you thus speeding up your updates and saving internet bandwidth. Enter the IP address and port and leave the rest as default. From the Software selection screen, select “Standard system utilities” and “SSH server”.

10_SoftwareSelection

When prompted to boot into your new system, we need to remove our installation media from the VM’s settings. Under the Device Status settings for your VM (if you’re using ESXi), uncheck “Connected” and “Connect at power on”. Make sure no other boot media are connected at power on. Now the first thing we do is SSH into our new VM, because it’s a right pain working through the VM host’s console. When you first try to SSH to it, you’ll be shown the ECDSA key fingerprint to confirm that the machine you think you are SSHing to is in fact the machine you want to SSH to. Follow the directions here but change that command line slightly to the following:

ssh-keygen -lf ssh_host_ecdsa_key.pub

This will print the key’s fingerprint from the actual machine. Compare it with what you were given from your remote machine. Make sure they match, accept, and you should be in. Now I use terminator so I have a lovely CLI experience. Of course you can take things much further with Screen or Tmux if/when you have the need.

Next I tell apt about the apt-proxy-ng I want it to use to pull its packages from. This will have to be changed once the server is plugged into the DMZ. Create the file /etc/apt/apt.conf if it doesn’t already exist and add the following line:

Acquire::http::Proxy "http://[IP address of the machine hosting your apt cache]:[port that the cacher is listening on]";

Replace the apt proxy references in /etc/apt/sources.list with the internet mirror you want to use, so we contain all the proxy related config in one line in one file. This will allow the requests to be proxied and packages cached via the apt cache on your network when requests are made to the mirror of your choosing.
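
As a sketch, the resulting /etc/apt/sources.list could look something like the following. The mirror shown is just an example; pick one close to you:

deb http://ftp.nz.debian.org/debian/ wheezy main
deb-src http://ftp.nz.debian.org/debian/ wheezy main
deb http://security.debian.org/ wheezy/updates main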

Update the list of packages, then upgrade them with the following command line. If you’re using sudo, you’ll need to add that to each command:

apt-get update && apt-get upgrade # only run apt-get upgrade if apt-get update is successful (exits with a status of 0)



The steps you take to harden a server that will have many user accounts will be considerably different to this. Many of the steps I’ve gone through here will be insufficient for a server with many users.
The hardening process is not a one time procedure. It ends when you decommission the server. Be prepared to stay on top of your defenses. It’s much harder to defend against attacks than it is to exploit a vulnerability.

Passwords

After a quick look at this, I can in fact verify that we are shadowing our passwords out of the box. It may be worth looking at and modifying /etc/shadow . Consider changing the “maximum password age” and “password warning period”. Consult the man page for shadow for full details. Check that you’re happy with which encryption algorithms are currently being used. The files you’ll need to look at are: /etc/shadow and /etc/pam.d/common-password . The man pages you’ll probably need to read in conjunction with each other are the following:

  • shadow
  • pam.d
  • crypt 3
  • pam_unix

Out of the box, crypt supports MD5, SHA-256 and SHA-512, with a bit more work for Blowfish via bcrypt. The default of SHA-512 enables salted passwords. How can you tell which algorithm you’re using, the salt size, etc.? The crypt 3 man page explains it all.
So by default we’re using SHA-512, which is better than MD5 and the smaller SHA-256.

Now by default I didn’t have a “rounds” option in my /etc/pam.d/common-password module-arguments. Having a large iteration count (the number of times the hashing algorithm is run, i.e. key stretching), and an attacker not knowing what that number is, will slow down an attack. I’d suggest adding this and re-creating your passwords. As your normal user run:

passwd

providing your existing password then your new one twice. You should now be able to see your password in the /etc/shadow file with the added rounds parameter:

$6$rounds=[chosen number of rounds specified in /etc/pam.d/common-password]$[8 character salt]$0LxBZfnuDue7.n5<rest of string>
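
For reference, the corresponding line in /etc/pam.d/common-password might end up looking something like this. Treat it as a sketch: check your own file for the options already present and just append rounds=<your chosen number>; the value below is only an example:

password    [success=1 default=ignore]    pam_unix.so obscure sha512 rounds=65536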

Check /var/log/auth.log .
Reboot and check you can still log in as your normal user. If all is good, do the same with the root account.

bcrypt, with the slowpoke Blowfish cipher, is a much slower algorithm, so it’s even better for password hashing, but it’s more work to set up at this stage.

Some References

Consider setting a password for GRUB, especially if your server is directly on physical hardware. If it’s on a hypervisor, an attacker has another layer to go through before they can access the guest’s boot screen. If an attacker can access your VM through the hypervisor’s management app, you’re pretty well screwed anyway.

Disable Remote Root Logins

Review /etc/pam.d/login so we’re only permitting local root logins. By default this was setup that way.
Review /etc/security/access.conf . Make sure root logins are limited as much as possible. Un-comment rules that you want. I didn’t need to touch this.
Confirm which virtual consoles and text terminal devices you have by reviewing /etc/inittab then modify /etc/securetty by commenting out all the consoles you don’t need (all of them preferably). Or better just issue the following command to fill the file with nothing:

cat /dev/null > /etc/securetty

I back up this file before I do this.
Now test that you can’t log into any of the text terminals listed in /etc/inittab . Just try logging into the likes of your ESX/i vSphere guest’s console as root. You shouldn’t be able to now.

Make sure that if your server is not physical hardware but a VM, the host’s password is long and made up of a random mix of upper case, lower case, numbers and special characters.

Additional Resources

http://www.debian.org/doc/manuals/securing-debian-howto/ch4.en.html#s-restrict-console-login

SSH

My feeling after a lot of reading is that currently RSA with large keys (The default RSA size is 2048 bits) is a good option for key pair authentication. Personally I like to go for 4096, but with the current growth of processing power (following Moore’s law), 2048 should be good until about 2030. Update: I’m not so sure about the 2030 date for this now.

Create your key pair if you haven’t already and set up key pair authentication. Key-pair auth is more secure and allows you to log in without a password. Your pass-phrase should be stored in your keyring. You’ll just need to provide your local password once (each time you log into your local machine) when the keyring prompts for it. Of course your pass-phrase needs to be kept secret. If it’s compromised, it won’t matter how much you’ve invested into your hardening effort. To tighten security up considerably, make the necessary changes to your server’s /etc/ssh/sshd_config file. Start with the changes I’ve listed here.
When you change things like setting up AllowUsers, or make any other change that could potentially lock you out of the server, it’s a good idea to stay logged in via one shell while you exit another and test it. That way, if you have locked yourself out, you’ll still be logged in on one shell to adjust the changes you’ve made. Unless you have a need for multiple users, lock it down to a single user. You can even lock it down to a single user from a specific host.
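
As a rough sketch of the sort of thing I mean (the port, user and host below are placeholders, and the directive list is far from exhaustive):

# Generate a 4096 bit RSA key pair on your local machine if you don't already have one
ssh-keygen -t rsa -b 4096

# A few of the directives worth reviewing in the server's /etc/ssh/sshd_config
Port <your chosen port>
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers <your user>@<your admin host>
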
After a set of changes, issue the following restart command as root or sudo:

service ssh restart

You can check the status of the daemon with the following command:

service ssh status

Consider changing the port that SSH listens on. It may slow down an attacker slightly. Consider whether it’s worth adding the extra characters to your SSH command. Consider keeping the port that sshd binds to in the privileged range (below 1024), where only root can bind a process.

We’ll need to tunnel SSH once the server is placed into the DMZ. I’ve discussed that in this post.

Additional Resources

Check SSH login attempts. As root or via sudo, type the following to see all failed login attempts:

cat /var/log/auth.log | grep 'sshd.*Invalid'

If you want to see successful logins, type the following:

cat /var/log/auth.log | grep 'sshd.*opened'

Consider installing and configuring denyhosts

Disable Boot Options

All the major hypervisors should provide a way to disable all boot options other than the device you will be booting from. VMware allows you to do this in vSphere Client.

Set BIOS passwords.

Lock Down the Mounting of Partitions

Getting started with your fstab.

Make a backup of your /etc/fstab before you make changes. I ended up needing this later. Read the man page for fstab and also the options section in the mount man page. The Linux File System Hierarchy (FSH) documentation is worth consulting also for directory usages.
Add the noexec mount option to /tmp, but not /var, because executable shell scripts such as the pre, post and removal scripts reside within /var/lib/dpkg/info .
You can also add the nodev and nosuid options to /tmp .
You can add the nodev option to /var, /usr, /opt and /home also.
You can also add the nosuid option to /home .
You can add ro to /usr .
Pulled together, these might look like the fstab sketch below.
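
A sketch of how those options might come together in /etc/fstab. The UUIDs are placeholders and the exact option set is up to you:

UUID=<tmp partition UUID>  /tmp  ext4 defaults,nodev,nosuid,noexec 0 2
UUID=<usr partition UUID>  /usr  ext4 defaults,nodev,ro            0 2
UUID=<opt partition UUID>  /opt  ext4 defaults,nodev               0 2
UUID=<home partition UUID> /home ext4 defaults,nodev,nosuid        0 2
UUID=<var partition UUID>  /var  ext4 defaults,nodev               0 2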

To add the mount options nosuid,noexec to /var/tmp, /var/log and /var/account, we need to bind the target mount onto an existing directory. The following procedure details how to do this for /var/tmp. As usual, you can do all of this without a reboot. This way you can modify to your heart’s content, then be confident that a reboot will not destroy anything or lock you out of your system.
The not-yet-mounted entries in your /etc/fstab can be tested like this:

sudo mount -a

Then check the difference with

mount

mount options can be set up on a directory by directory basis for finer grained control. For example my /var mount in my /etc/fstab may look like this:

UUID=<block device ID goes here> /var ext4 defaults,nodev 0 2

Then add another line below that in your /etc/fstab that looks like this:

/var /var/tmp none nosuid,noexec,bind 0 2

The file system type above should be specified as none (as stated in the “The bind mounts” section of the mount man page http://man.he.net/man8/mount). The bind option binds the mount. There was a bug with the suidperl package in Debian where setting nosuid created an insecurity. suidperl is no longer available in Debian.

If you want this to take effect before a reboot, execute the following command:

sudo mount --bind /var/tmp /var/tmp

Then to pickup the new options from /etc/fstab:

sudo mount -o remount /var/tmp

For further details consult the remount option of the mount man page.

At any point you can check the options that you have your directories mounted as, by issuing the following command:

mount

You can test this by putting a script in /var and copying it to /var/tmp. Then try running each of them. Of course the executable bits should be on. You should only be able to run the one that is in the directory mounted without the noexec option. My file “kimsTest” looks like this:

#!/bin/sh
echo "Testing testing testing kim"

Then I…

myuser@myserver:/var$ ./kimsTest
Testing testing testing kim
myuser@myserver:/var$ ./tmp/kimsTest
-bash: ./tmp/kimsTest: Permission denied

You can set the same options on the other /var sub-directories (not /var/lib/dpkg/info).

Enable read-only / mount

There are some contradictions on /run/shm size allocation. Increase the size vs Don’t increase the size

Additional Resources

Work Around for Apt Executing Packages from /tmp

Disable Services we Don’t Need

RPC portmapper

dpkg-query -l '*portmap*'

portmap is not installed by default, so we don’t need to remove it.

Exim

dpkg-query -l '*exim*'

Exim4 is installed.
You can see from the netstat output below (in the “Remove Services” area) that exim4 is listening on localhost and it’s not publicly accessible. Nmap confirms this, but we don’t need it, so let’s disable it. We should probably be using ss too.
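
For what it’s worth, the rough ss equivalent of the netstat commands used below is the following (a sketch; the output columns differ slightly):

sudo ss -tlpn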

When a run level is entered, init executes the target files that start with K with a single argument of stop, followed by the files that start with S with a single argument of start. So by renaming /etc/rc2.d/S15exim4 to /etc/rc2.d/K15exim4 you’re causing init to run the service with the stop argument when it moves to run level 2. Just for interest’s sake, the scripts at the end of the links with the lower two digit numbers are executed before scripts at the end of links with the higher two digit numbers. Now go ahead and check the directories for run levels 3-5 as well and do the same. You’ll notice that all the links in /etc/rc0.d (which are the links executed on system halt) start with ‘K’. Making sense?
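
A sketch of that rename for run level 2 (repeat for rc3.d through rc5.d), plus the update-rc.d way of achieving the same thing if you’d rather not manage the links by hand:

sudo mv /etc/rc2.d/S15exim4 /etc/rc2.d/K15exim4

# Or let update-rc.d adjust the links across run levels for you
sudo update-rc.d exim4 disable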

Follow up with

sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:<my ssh port number> 0.0.0.0:* LISTEN 1910/sshd
tcp6 0 0 :::<my ssh port number> :::* LISTEN 1910/sshd

And that’s all we should see.

Additional resources for the above

Disable Network Information Service (NIS). NIS lets several machines in a network share the same account information, such as the password file (Allows password sharing between machines). Originally known as Yellow Pages (YP). If you needed centralised authentication for multiple machines, you could set-up an LDAP server and configure PAM on your machines in order to contact the LDAP server for user authentication. We have no need for distributed authentication on our web server at this stage.

dpkg-query -l '*nis*'

NIS is not installed by default, so we don’t need to remove it.

Additional resources for the above

Remove Services

First thing I did here was run nmap from my laptop

nmap -p 0-65535 <serverImConfiguring>
PORT STATE SERVICE
23/tcp filtered telnet
111/tcp open rpcbind
<mySshPortNumber>/tcp open <ssh>

Now because I’m using a non-default port for SSH, nmap thinks some other service is listening. Although I’m sure if I was a bad guy and really wanted to find out what was listening on that port, it’d be fairly straightforward.

To obtain a list of currently running servers (determined by LISTEN) on our web server (not forgetting that man is your friend):

sudo netstat -tap | grep LISTEN

or

sudo netstat -tlp

I also like to add the ‘n’ option to see the ports. This output was created before I had disabled exim4 as detailed above.

tcp 0 0 *:sunrpc *:* LISTEN 1498/rpcbind
tcp 0 0 localhost:smtp *:* LISTEN 2311/exim4
tcp 0 0 *:57243 *:* LISTEN 1529/rpc.statd
tcp 0 0 *:<my ssh port number> *:* LISTEN 2247/sshd
tcp6 0 0 [::]:sunrpc [::]:* LISTEN 1498/rpcbind
tcp6 0 0 localhost:smtp [::]:* LISTEN 2311/exim4
tcp6 0 0 [::]:53309 [::]:* LISTEN 1529/rpc.statd
tcp6 0 0 [::]:<my ssh port number> [::]:* LISTEN 2247/sshd

Rpcbind

Here we see that sunrpc is listening on a port and was started by rpcbind with the PID of 1498.
Now Sun Remote Procedure Call is running on port 111 (also the portmapper port); netstat can tell you the port, confirmed by the nmap scan above. This is used by NFS, and as we don’t need NFS because our server isn’t a file server, we can get rid of the rpcbind package.

dpkg-query -l '*rpc*'

Shows us that rpcbind is installed and gives us other details. Now if you’ve been following along with me and have made the /usr mount read only, some stuff will be left behind when we try to purge:

sudo apt-get purge rpcbind

Following are the outputs of interest:

The following packages will be REMOVED:
nfs-common* rpcbind*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
Do you want to continue [Y/n]? y
Removing nfs-common ...
[ ok ] Stopping NFS common utilities: idmapd statd.
dpkg: error processing nfs-common (--purge):
cannot remove `/usr/share/man/man8/rpc.idmapd.8.gz': Read-only file system
Removing rpcbind ...
[ ok ] Stopping rpcbind daemon....
dpkg: error processing rpcbind (--purge):
cannot remove `/usr/share/doc/rpcbind/changelog.gz': Read-only file system
Errors were encountered while processing:
nfs-common
rpcbind
E: Sub-process /usr/bin/dpkg returned an error code (1)

Another

dpkg-query -l '*rpc*'

Will result in pH. That’s a desired action of (p)urge and a package status of (H)alf-installed.
Now the easiest thing to do here is to rename your /etc/fstab to something else, and rename the backup you took of it before making changes back to /etc/fstab. Then, because you know that fstab is good,

reboot

Then try the purge, dpkg-query and netstat commands again to make sure rpcbind is gone and of course no longer listening. I had to actually do the purge twice here as config files were left behind from the first purge.
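
Another option, which isn’t the route I took above so treat it as an untested sketch, is to temporarily remount /usr read-write for the purge and then put it back:

sudo mount -o remount,rw /usr
sudo apt-get purge rpcbind
sudo mount -o remount,ro /usr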

Also you can remove unused dependencies now after you get the following message:

The following packages were automatically installed and are no longer required:
libevent-2.0-5 libgssglue1 libnfsidmap2 libtirpc1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
rpcbind*

sudo apt-get -s autoremove

Because I want to simulate what’s going to be removed. I’m paranoid, having made stupid mistakes with autoremove years ago, and that pain has stuck with me. I autoremoved a meta-package which depended on many other packages. A subsequent autoremove for packages that had a sole dependency on the meta-package meant they would be removed. Yes, it was a painful experience. /var/log/apt/history.log has your recent apt history. I used this to piece my system back together.

Then follow up with the real thing… just remove the -s and run it again. Remember, the fewer packages your system has, the less code there is for an attacker to exploit.

Telnet

telnet installed:

dpkg-query -l '*telnet*'
sudo apt-get remove telnet

telnet gone:

dpkg-query -l '*telnet*'

Ftp

We’ve got scp, why would we want ftp?
ftp installed:

dpkg-query -l '*ftp*'
sudo apt-get remove ftp

ftp gone:

dpkg-query -l '*ftp*'

Don’t forget to swap your new fstab back and test that the mounts are mounted as you expect.

Secure Services

The following provide good guidance on securing whatever is left.

Scheduled Backups

Make sure all data and VM images are backed up routinely. Make sure you test that restoring your backups works. Back up system files and whatever else is important to you. There is a good selection of tools here to help. Also make sure you are backing up the entire VM if your machine is a virtual guest by exporting/importing OVF files. I also like to back up all the VM files. Disk space is cheap. Is there such a thing as being too prepared for disaster? It’s just a matter of time before you’ll be calling on your backups.

Keep up to date

Consider whether it would make sense for you or your admin/s to set-up automatic updates and possibly upgrades. Start out the way you intend to go. Work out your strategy for keeping your system up to date and patched. There are many options here.

Logging, Alerting and Monitoring

From here on, I’ve made it less detailed and more about just getting you to think about things and ways in which you can improve your stance on security. Also, if any of the offerings cost money to buy, I make note of it, because this is the exception to my rule. Why? Because I prefer free software, especially when it’s free and open source (FOSS).

Some of the following cross the “logging” boundaries, so in many cases it’s difficult to put them into categorical boxes.

Attackers like to try and cover their tracks by modifying information that’s distributed to the various log files. Make sure you know who has write access to these files and keep the list small. As a Sysadmin you need to read your log files often and familiarise yourself with them so you get used to what they should look like.

SWatch

Monitors “a” log file for each instance you run (or schedule), matches your defined patterns and acts. You can define different message types with different font styles. If you want to monitor a lot of log files, it’s going to be a bit messy.

Logcheck

Monitors system log files, and emails anomalies to an administrator. Once installed it needs to be set-up to run periodically with cron. Not a bad run-down here. How to use and customise it. Man page and more docs here.

NewRelic

Is more of a performance monitoring tool than a security tool. It has free plans which are OK; it comes into its own in larger deployments. I’ve used this and it’s been useful for working out what was causing performance issues on the servers.

Advanced Web Statistics (AWStats)

Unlike NewRelic which is a Software as a Service (SaaS), AWStats is FOSS. It kind of fits a similar market space as NewRelic though, but also has Host Intrusion Prevention System (HIPS) features. Docs here.

Pingdom

Similar to NewRelic but not as feature rich. Update: Recently stumbled into Monit which is a better alternative. Free and open source. I’ve been writing about it here.

Multitail

Does what its name sounds like. Tails multiple log files at once. Provides realtime multi log file monitoring. Example here. Great for seeing strange happenings before an intruder has time to modify logs, if you’re watching them that is. Good for a single system if you’ve got a spare screen to throw on the wall.

PaperTrail

Targets a similar problem to MultiTail, except that it collects logs from as many servers as you want and copies them off-site to PaperTrail’s service, aggregating them into a single, easily searchable web interface. Allows you to set up alerts on anything. Has a free plan, but you only get 100MB per month. The plans are reasonably cheap for the features it provides and can scale as you grow. I’ve used this and have found it to be excellent.

Logwatch

Monitors system logs. Not continuously, so they could be open to modification without you knowing, like SWatch and Logcheck from above. You can configure it to reduce the number of services that it analyses the logs of. It creates a report of what it finds based on your level of paranoia. It’s easy to set-up and get started though. Source and docs here.

Logrotate

Use logrotate to make sure your logs will be around long enough to examine them. Some usage examples here. Ships with Debian. It’s just a matter of applying any extra config.
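
A minimal sketch of an extra drop-in config, say /etc/logrotate.d/myapp (the path and retention values are made up purely for illustration):

/var/log/myapp/*.log {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
}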

Logstash

Targets a similar problem to logrotate, but goes a lot further in that it routes and has the ability to translate between protocols. Requires Java to be installed.

Fail2ban

Ban hosts that cause multiple authentication errors, or just email events. Of course you need to think about false positives here too. An attacker can spoof many IP addresses, potentially causing them all to be banned, thus creating a DoS.
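
A sketch of what the SSH jail might look like in /etc/fail2ban/jail.local. The jail name and defaults vary between fail2ban versions, so check your jail.conf first:

[ssh]
enabled  = true
port     = <your ssh port>
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600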

Rsyslog

Configure syslog to send a copy of the most important data to a secure system. This is mitigation against an attacker modifying the logs. See the @ option in the syslog.conf man page. Check the /etc/(r)syslog.conf file to determine where syslogd is logging various messages. Some important notes around syslog here, like locking down the users that can read and write to /var/log.
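
For example, a line like the following in /etc/rsyslog.conf would forward authentication messages to a remote log host. The address is a placeholder; a single @ forwards over UDP, @@ over TCP:

auth,authpriv.*    @<IP address of your log host>:514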

syslog-ng

Provides a lot more flexibility than just syslogd. Checkout the comprehensive feature-set.

Some Useful Commands

  • Checking who is currently logged in to your server and what they are doing with the who and w commands
  • Checking who has recently logged into your server with the last command
  • Checking which user has failed login attempts with the faillog command
  • Checking the most recent login of all users, or of a given user with the lastlog command. lastlog comes from the binary file /var/log/lastlog.

This is a list of log files, their names/locations, and their purpose in life.

Host-based Intrusion Detection System (HIDS)

Tripwire

Is a HIDS that stores a good known state of vital system files of your choosing and can be set up to notify an administrator upon change in the files. Tripwire stores cryptographic hashes (deltas) in a database and compares them with the files it’s been configured to monitor changes on. Not a bad tutorial here. Most of what you’ll find with Tripwire now are the commercial offerings.

RkHunter

A similar offering to Tripwire. It scans for rootkits, backdoors, checks on the network interfaces and local exploits by running tests such as:

  • MD5 hash changes
  • Files commonly created by root-kits
  • Wrong file permissions for binaries
  • Suspicious strings in kernel modules
  • Hidden files in system directories
  • Optionally scan within plain-text and binary files

Version 1.4.2 (24/02/2014) now checks ssh, sshd and telnet (although you shouldn’t have telnet installed). This could be useful for mitigating non-root users running a modified sshd on a 1025-65535 port. You can run ad-hoc scans, then set them up to be run with cron. Debian Jessie has this release in its repository. Any Debian distro before Jessie is on 1.4.0-1 or earlier.

The latest version you can install for Linux Mint Qiana (17) and Rebecca (17.1) within the repositories is 1.4.0-3 (01/05/2012)

Change-log here.

Chkrootkit

It’s a good idea to run a couple of these types of scanners. Hopefully what one misses the other will not. Chkrootkit scans for many system programs, some of which are cron, crontab, date, echo, find, grep, su, ifconfig, init, login, ls, netstat, sshd, top and many more. All the usual targets for attackers to modify. You can specify if you don’t want them all scanned. Runs tests such as:

  • System binaries for rootkit modification
  • If the network interface is in promiscuous mode
  • lastlog deletions
  • wtmp and utmp deletions (logins, logouts)
  • Signs of LKM trojans
  • Quick and dirty strings replacement

Stealth

The idea of Stealth is to do a similar job as the above file integrity scanners, but to leave almost no sediments on the tested computer (called the client). A potential attacker therefore has no clue that Stealth is in fact scanning the integrity of its client files. Stealth is installed on a different machine (called the controller) and scans over SSH.

Ossec

Is a HIDS that also has some preventative features. This is a pretty comprehensive offering with a lot of great features.

Unhide

While not strictly a HIDS, this is quite a useful forensics tool for working with your system if you suspect it may have been compromised.

Unhide is a forensic tool to find hidden processes and TCP/UDP ports by rootkits / LKMs or by another hidden technique. Unhide runs in Unix/Linux and Windows Systems. It implements six main techniques.

  1. Compare /proc vs /bin/ps output
  2. Compare info gathered from /bin/ps with info gathered by walking through the procfs. ONLY for unhide-linux version
  3. Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning)
  4. Full PIDs space occupation (PIDs bruteforcing). ONLY for unhide-linux version
  5. Compare /bin/ps output vs /proc, procfs walking and syscall. ONLY for unhide-linux version. Reverse search, verifying that all threads seen by ps are also seen by the kernel.
  6. Quick compare of /proc, procfs walking and syscall vs /bin/ps output. ONLY for unhide-linux version. It’s about 20 times faster than tests 1+2+3 but may give more false positives.

It includes two utilities: unhide and unhide-tcp.

unhide-tcp identifies TCP/UDP ports that are listening but are not listed in /bin/netstat through brute forcing of all TCP/UDP ports available.

Can also be used by rkhunter in its daily scans. Unhide was number one in the top 10 toolswatch.org security tools poll.

Web Application Firewalls (WAF’s)

WAFs are just another part in the defense in depth model for web applications; they get more specific in what they are trying to protect. They operate at the application layer, so they don’t have to deal with all the network traffic. They apply a set of rules to HTTP conversations. They can also be either network or host based, and are able to block attacks such as Cross Site Scripting (XSS) and SQL injection.

ModSecurity

Is a mature and full-featured WAF that is designed to work with web servers such as IIS, Apache2 and NGINX. Loads of documentation. They also look to be open to committers and challengers alike. You can find the OWASP Core Rule Set (CRS) here to get you started, which has the following:

  • HTTP Protocol Protection
  • Real-time Blacklist Lookups
  • HTTP Denial of Service Protections
  • Generic Web Attack Protection
  • Error Detection and Hiding

Or for about $500US a year you get the following rules:

  • Virtual Patching
  • IP Reputation
  • Web-based Malware Detection
  • Webshell/Backdoor Detection
  • Botnet Attack Detection
  • HTTP Denial of Service (DoS) Attack Detection
  • Anti-Virus Scanning of File Attachments

Fusker

for Node.js. Although doesn’t look like a lot is happening with this project currently. You could always fork it if you wanted to extend.

The state of the Node.js ecosystem in terms of security is pretty poor, which is something I’d like to invest time into.

Fire-walling

This is one of the last things you should look at when hardening an internet facing or perimeterless system. Why? Because each machine should be hard enough that it doesn’t need a firewall to cover it like a blanket with services underneath being soft and vulnerable. Rather, all the services should be either un-exposed or patched and securely configured.

Most of the servers and workstations I’ve been responsible for over the last few years I’ve administered as though there was no firewall and they were open to the internet. Most networks are reasonably easy to penetrate, so we really need to think of the machines behind them as being open to the internet. This is what de-perimeterisation (the concept initiated by the Jericho Forum) is all about.

Some thoughts on firewall logging.

Keep your eye on nftables too, it’s looking good!

Additional Resources

Just keep in mind the above links are quite old. A lot of it’s still relevant though.

Machine Now Ready for DMZ

Confirm DMZ has

  • Network Intrusion Detection System (NIDS), Network Intrusion Prevention System (NIPS) installed and configured. Snort is a pretty good option for the IDS part, although with some work Snort can help with the Prevention also.
  • incoming access from your LAN or where ever you plan on administering it from
  • rules for outgoing and incoming access to/from LAN, WAN tightly filtered.

Additional Web Server Preparation

  • setup and configure soft web server
  • setup and configure caching proxy. Ex:
    • node-http-proxy
    • TinyProxy
    • Varnish
    • nginx
  • deploy application files
  • Hopefully you’ve been baking security into your web app right from the start. This is an essential part of defense in depth. Rather than having your application completely rely on other entities to protect it, it should also be standing up for itself and understanding when it’s under attack and actually fighting back.
  • set static IP address
  • double check that the only open ports on the web server are 80 and whatever you’ve chosen for SSH.
  • setup SSH tunnel
  • decide on and document VM backup strategy and set it up.

Machine Now In DMZ

Set up your CNAME or whatever type of DNS record you’re using.

Now remember, keeping any machine on a network (not just the internet) requires constant consideration and effort in keeping the system as secure as possible.

Work through using the likes of harden and Lynis for your server and harden-surveillance for monitoring your network.

Consider combining “Port Scan Attack Detector” (psad) with fwsnort and Snort.

Hack your own server and find the holes before someone else does. If you’re not already familiar with the tricks of how systems on the internet get attacked, read up on “Attacks and Threats”. Run OpenVAS. Run web vulnerability scanners.

From here on is in scope for other blog posts.

Journey To Self Hosting

November 29, 2014

I was recently tasked with working out the best options for hosting web applications and their data for a client. This was their foray into whether to throw all their stuff into the cloud or to build their own infrastructure to host everything on.

Hosting Options

There are a lot of options available now. Most of which are derivatives of either external cloud or internal (possibly cloud). All of which come with features and some price tags that need to be weighed up. I’ve been collecting resources of providers and their offerings (both cloud and in-house) for quite a while. So I didn’t have to go far to pull them together for comparison.

All sites and apps require a different amount of each resource type to be allocated to them. For example, many web sites are still predominantly static, which requires more network bandwidth than any other resource, some memory, a little processing power and, provided they’re being cached on the server, not a lot else. These resources are very cheap.

If you’re running an e-commerce site, then you can potentially add more disk I/O (which is usually the first bottleneck), processing power and space for your data store. Add in redundancy, backups and the administration of them.
Fast disks (or let’s just call it storage) are cheap. In fact most hardware is cheap.

Administration of redundancy, backups and staying on top of security starts to cost more. Although the “staying on top of security” will need to be done whether you’re on someone else’s hardware or on your own. It’s just that it’s a lot easier on your own because you’re in control and dictate the amount of visibility you have.

The Cloud

The Cloud

Pros

It’s out of your hands.
Indeed it is, in more ways than one. Your trust is going to have to be honoured here (or not). Yes, you have SLAs, but what guarantee do the SLAs give you that the people working on your system and data are not having a bad day? Maybe they’ve broken up with their girlfriend, or whatever. It takes very little to miss something that could drastically compromise your system and/or data.

VPS’s can be spun up quickly, but remember, good things take time. Everything has a cost. Things are quick and easy for a reason. There is a cost to this, think about what those (often hidden) costs are.

In some cases it can be cheaper, but you get what you pay for.

Cons

You are trusting others with your data, even others that you are not aware of. In many cases, hosting providers can be (and in many cases are) forced by governments and other agencies to give up your secrets. This is very commonplace now and you may not even know it’s happened.

Your provider may go out of business.

There is an inherent lack of security in all the cloud providers I’ve looked at and worked with. They will tell you they take security seriously, but when someone that understands security inspects how they do things, the situation often looks like Swiss cheese.

In-House Cloud

In-House Cloud

Pros

You are in control of your data and your application, providing you or “your” staff:

  • And/or external consultants are competent and haven’t made mistakes in setting up your infrastructure
  • Are patching all software/firmware involved
  • Are fastidiously hardening your server/s (this is continuous. It doesn’t stop at the initial set-up)
  • Have set-up the routes and firewall rules correctly
  • Have the correct alerts set-up
  • Have implemented Intrusion Detection and Prevention Systems (IDS’s/IPS’s)
  • Have penetration tested the set-up and not just from a technical perspective. It’s often best to get pairs to do the reviews.

The list goes on. If you are at all in doubt, that’s where you consider the alternatives. In saying that, most hosting and cloud providers perform abysmally, despite their claims that your applications and data are safe with them.

It “can” cost less than entrusting your system and data to someone (or many someones) on the other side of the planet. Weigh up the costs. They will not always be what they appear at face value.

Hardware is very cheap.

Cons

Potential lack of in-house skills.

People with the right skills and attitudes are not cheap.

It may not be core business. You may not have the necessary capabilities in-house to scope, architect, cost, set-up and administer it. Potentially you could hire someone to do the initial work and the ongoing administration. The amount of ongoing administration will be partly determined by what you’re hosting. Generally speaking, hosting company web sites, blogs etc. will require less work than systems with distributed components and redundancy.

Spinning up an instance to develop or prototype on doesn’t have to be hard. In fact, if you have some hardware, provisioning of VM images is usually quick and easy. There is actually a pro in this too… you decide how much security you want baked into these images and the processes used to configure them.

Consider download latencies from people you want to reach possibly in other countries.

In some cases it can be more expensive, but you get what you pay for.

Outcome

The decision for this client was made to self host. There will be a follow up post detailing some of the hardening process I took for one of their Debian web servers.

Procurement & Config of Sun Fire V240 & ALOM

October 25, 2014

This is the sequence of events I took to prepare a Sun Fire V240 for hosting pfSense which is a free and open source FreeBSD based enterprise grade routing solution for a client of mine.

Recently I was tasked with setting up a network with what I considered to be enterprise grade hardware and software as cheaply as possible. When I take on these sorts of tasks, security is forefront in my mind, so I often look toward components that are as open as possible and that don’t sport any known (to me at least) back-doors and are able to be easily upgraded and patched at little to no cost.

A requirement was clean shut-downs on power failure events at least for the critical servers.

Procured Kit

  1. APC Smart-UPS 5000 with batteries in good condition. Worth a little under $6k if you’re buying new. I wouldn’t buy new. If you shop around, these can be picked up at a fraction of that cost. From my experience the APC kit is some of the best UPS gear available.
    APC Smart-UPS 5000
  2. AP9630 UPS network management card $92 new. Most of the details around setting these UPS’s up I’ve already posted on. If you search my blog for “APC UPS” you’ll find it.
    APC AP9630
  3. Enterprise grade router/firewall:
    Sun Fire V240 (RISC architecture). 2 x UltraSparc-IIIi 1.5Ghz CPU. 4Gbit on-board Ethernet ports. Lights-out management port. 4GB RAM. 2U. Dual redundant PSU’s. 2 x 72GB Hot Swap 10k SCSI HDD’s. With rack mount rails. Currently going for around $1.5k on Ebay. Price paid: $160 incl shipping. I doubt you’d find anything of these specifications off the shelf for under a $1000. This is a lot of server for a very small amount of money.
    Sun Fire V240
  4. Firmware: pfSense. Free and open source.

Planning

As part of my planning I evaluated (again) whether or not free software routing solutions are actually up to the task of the enterprise. My research led me to believe some were… based on others that had already been down this route ( PTP ;-) ). Openness is a biggie for me. I like to know that eyes are on the software rather than it being closed up in a proprietary package.

I evaluated m0n0Wall, ipCop (Linux based), smoothwall and pfSense. pfSense had been used in quite a few large environments successfully. When I had made my decision on the firmware to use, I went through the hardware requirements and of course started looking for high quality second-hand gear.

For the router hardware I was going to need at least a 1GHz CPU as I wanted to run Snort as my IDS/IPS, and PCI-X or PCI-e network adapters (which of course I didn’t need to worry about with the Sun Fire server). Snort needs 512MB RAM minimum, preferably at least 1GB.

Gaining Access to the Sun Fire V240

Now I had no idea of how the previous owner had set up the configuration of the ALOM (Advanced Lights Out Management). In fact I hadn’t administered a Sun Fire server before at all. On page 11 of the Sun Fire V210 and V240 Servers Getting Started Guide it states the following:

“The system console is directed to ALOM by default and is configured to show server console information on startup.
ALOM enables you to monitor and control your server over either a serial
connection (using the SERIAL MGT port), or Ethernet connection (using the NET MGT port).
For information about configuring an Ethernet connection, refer to the Sun Advanced Lights Out Manager Software User’s Guide.” The NET MGT port can also be disabled and in my case it turned out it was, but I’ll get to that later. I didn’t have a spare DB-9 to RJ-45 adapter lying around to wire it up and connect to the SERIAL MGT port.

Sun Fire V240 rear

Telnet?

(but didn’t get that far)

Since I was going to go down the path of trying to connect to the ALOM console via the NET MGT Ethernet port, I thought telnet would probably be the path of least resistance.

Page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide” stated the following:

The 10-Mbyte Ethernet port enables you to access ALOM from within your company
network. You can connect to ALOM remotely using any standard Telnet client”. On the V240, the
ALOM Ethernet port is referred to as the NET MGT port.

Using a laptop with Kali Linux installed (because it has lots of great tools for network reconnaissance), running

ethtool eth0

told me that my NIC supported:
10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full

Wireshark?

Tried connecting directly to the NET MGT port with wireshark running on my laptop. Didn’t get any packets from the device. At the time I thought it may have been because my laptop’s NIC was using 100baseT, but later on I found out that the NET MGT port was disabled.

Tried pinging my broadcast address ping -b 255.255.255.255 then checked my arp table arp -a. No results that looked like what I was looking for. Of course this strategy would have taken quite some time to complete… and in my case it would have yielded no results anyway.

NMap?

I started with the private IPv4 address spaces. Using Wi-fi on my Kali box, tried the 16 bit block:

nmap -sn 192.168.*.*

Got a false positive of a cable modem. How did I work out that it was a false positive?

nmap -A <falsePositiveIPOfCableModem> # Gave me the model and everything I needed to know about the device to rule it out.

Next up the 20 bit block

nmap -sn 172.16.0.0/12
Nmap done: 1048576 IP addresses (0 hosts up) scanned in 108670.97 seconds

In earlier releases of nmap the -sn switch was known as -sP

I decided I needed to try and speed up the scan, so I connected directly to the V240 NET MGT port with a Cat5 patch cable (ethtool told me my laptop’s NIC had MDI-X on (force crossover mode)) and made sure my network card supported 10baseT, which the “Sun Advanced Lights Out Manager Software User’s Guide” told me it needed for the NET MGT port. Turns out the NET MGT port didn’t support 10baseT. Details a bit further down.

Added a static IP address to the /etc/network/interfaces. Currently it looked like:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet dhcp

So I commented out the auto wlan0 and iface wlan0 inet dhcp and added the following:

auto eth0
iface eth0 inet static
address 10.1.1.6
netmask 255.255.255.0
broadcast 10.1.1.255
#gateway 10.1.1.1 # Make sure you don't add a gateway, as we're connecting directly to the V240

followed by:

service networking restart

then changed managed=true to managed=false in my /etc/NetworkManager/NetworkManager.conf,
so NetworkManager didn’t keep interfering with my interfaces.

I followed this with a

service network-manager restart

followed with ifconfig to make sure my network interface was using the correct IP address, netmask and broadcast. It wasn’t, so…

ifdown eth0
ifup eth0
ifconfig

Success, it now was.

Now to make sure my network card was communicating in a manner that the V240’s NET MGT port would understand.

Using ethtool

ethtool eth0

told me 10baseT was supported, but it also told me my current speed was 100Mb/s. So I tried changing the speed with

ethtool -s eth0 speed 10

and received Cannot advertise speed 10. So I made the following temporary changes, as they’ll be lost on reboot. Changed the duplex… ran the following:

ethtool -s eth0 speed 10 duplex half

Now with a:

ethtool eth0

I got:

Speed: unknown!
Duplex: Unknown! (255)

So turned the auto negotiation off:

ethtool -s eth0 speed 10 duplex half autoneg off

Now with a:

ethtool eth0

I got:

Speed: 10Mb/s
Duplex: Half
Auto-negotiation: off
#and some other settings.

Some useful ethtool resources:

With these settings the NET MGT port didn’t have its green link LED on. So I kept playing with the settings. It turns out it would only work with speed 100, duplex full, contrary to page 10 of the “Sun Advanced Lights Out Manager Software User’s Guide”.
These were the settings that gave me link:

Supported pause frame use: No #Don't think I fiddled with this.
Supports auto-negotiation: Yes
Advertised link modes: Not reported #Don't think I fiddled with this.
Advertised pause frame use: Symmetric #Don't think I fiddled with this.
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: full
Port: Twisted Pair #Don't think I fiddled with this.
PHYAD: 1 #Don't think I fiddled with this.
Transceiver: internal #Don't think I fiddled with this.
Auto-negotiation: off
MDI-X: on
Supports Wake-on: g #Don't think I fiddled with this.
Wake-on: d #Don't think I fiddled with this.
Current message level: 0x000000ff (255)
drv probe link timer ifdown ifup rx_err tx_err
Link detected: yes

I was now confident that if the Sun Fire V240 NET MGT port was enabled, we’d find its IP address if it was using one from the private space. It was time to try the last and largest private address space. Oh, I also used Wireshark to make sure nmap was doing what I expected on my laptop when I ran:

nmap -v -sn 10.0.0.0/8

I was a little confused to start with, as nmap told me it was Scanning 4096 hosts. After checking the CIDR (Classless Inter-Domain Routing) notation and the output nmap produced, I soon realised that nmap was doing the scanning in chunks. As there was going to be a lot of results, I set up the output to files:

nmap -v -sn -oA 'scan-%Y-%m-%d_%H-%M' 10.0.0.0/8

This produces the output in all three formats as discussed here.

SERIAL MGT Port?

This private address range was going to take a few days to scan, so I decided to have a poke at the SERIAL MGT port on the Sun Fire V240.

To use the SERIAL MGT port, an RJ-45 patch cable connected to a DB-9 adapter ($4.50 from globalpc) is required, unless you get the official Sun adapter “530-3100-01”, or still have the one that came in the new box. So I splashed out and went with the $4.50 option. It cost me more in gas to get to the shop than to buy the part. I wired it up according to page 25 of the “Sun Fire V210 and V240 Servers Installation Guide”.

RJ-45 to DB-9 Adapter Crossovers
SERIAL MGT Port (RJ-45) Pin    Adapter (DB-9) Pin
1 (RTS)                        8 (CTS)
2 (DTR)                        6 (DSR)
3 (TXD)                        2 (RXD)
4 (Signal Ground)              5 (Signal Ground)
5 (Signal Ground)              5 (Signal Ground)
6 (RXD)                        3 (TXD)
7 (DSR)                        4 (DTR)
8 (CTS)                        7 (RTS)

Red wire in with green.

RJ45-DB9 RJ45

Installed minicom and setserial and did pretty much the same as I did here. Plugged the console cable in and tried to establish a connection.

Then I found that, by default, ALOM only communicates through the SERIAL MGT port at startup. I thought this meant at startup of ALOM, but it seems to apply at power on of the server as well.

At the {1} ok prompt, I typed #. (that’s hash followed by dot) to escape from the system console to the ALOM sc> prompt.

I then entered the showsc command and found that the NET MGT port was disabled.
I then ran a

usershow

to see which user accounts existed, and was prompted to set a password for the admin user.
“When you connect to ALOM for the first time, you are automatically connected as the admin account.”
So obviously the seller of the system had reset ALOM.

SettingAdminPassword

Also audited the user accounts, and the details on the permission levels are here.

Ran the following script. A nice little dialog from Ramesh here (see step 4) too.

setupsc
  • Turned NET MGT port on
  • Changed the default if_connection from none to ssh
  • Answered no to email alerts (only for logged in users)
  • Yes to configure the network interfaces
  • No to DHCP
  • Entered the IP address for the NET MGT port
  • Entered the netmask for the NET MGT port
  • Entered the gateway for the Net Mgt port
  • Should powerstate memory be enabled [y]? y
  • Enabled power on sequencing

Then we need to restart the ALOM to apply the new settings.

resetsc -y

If you still have minicom running, it’ll show you what happens during the boot sequence and then present you with a login prompt.

Extra Resources

SSH

At this point I plugged the Ethernet cable from my test switch (10 Mbit/s capable) back into the NET MGT port of the Sun Fire V240 and tested that ALOM was responding on the IP address that I set the NET MGT port to.

ping <myNetMgtIP>

It was answering. So I attempted to SSH in on a different machine.

ssh admin@<myNetMgtIP>

I was presented with the host’s key fingerprint:

The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myExistingHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)?

I wanted to know I was connecting to what I thought I was connecting to, so answered no.
Then in minicom I queried the host’s key fingerprint:

ssh-keygen -l -t rsa

I was provided with the key fingerprint that matched what I was presented with when I attempted to SSH, so I knew I was actually communicating with the server I thought I was.

I then regenerated the host key:

ssh-keygen -r -t rsa

and was provided with the new key. A restart of the SSH daemon is required to load the new host key.

sc> restartssh

Then SSH in. Confirm when prompted that the host key matches the newly provided key.

ssh admin@<myNetMgtIP>
The authenticity of host '<myNetMgtIP> (<myNetMgtIP>)' can't be established.
RSA key fingerprint is <myNewHostKeyInHex>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<myNetMgtIP>' (RSA) to the list of known hosts.

Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.

Sun(tm) Advanced Lights Out Manager <versionHere> ()

Please login: admin
Please Enter password: *********

sc>

We’re in!

At any time for a list of commands, you can type help.

logout
Connection to <myNetMgtIP> closed.

We’re out!

Node.js Asynchronicity and Callback Nesting

July 26, 2014

Just a heads up before we get started on this… Chrome DevTools (v35) now has the ability to show the full call stack of asynchronous JavaScript callbacks. As of writing this, if you develop on Linux you’ll want the dev channel. Currently my Linux Mint 13 is 3 versions behind. So I had to update to the dev channel until I upgraded to the LTS 17 (Qiana).

All code samples can be found at GitHub.

Deep Callback Nesting

AKA callback hell, temple of doom, often the functions that are nested are anonymous and often they are implicit closures. When it comes to asynchronicity in JavaScript, callbacks are our bread and butter. In saying that, often the best way to use them is by abstracting them behind more elegant APIs.

Being aware of when new functions are created and when you need to make sure the memory being held by closure is released (dropped out of scope) can be important for code that’s hot otherwise you’re in danger of introducing subtle memory leaks.

What is it?

Passing functions as arguments to functions which return immediately. The function (callback) that’s passed as an argument will be run at some time in the future, when the potentially time-expensive operation is done. By convention, this callback has its first parameter as the error on error, or as null on success of the expensive operation. In JavaScript we should never block on potentially time-expensive operations such as I/O and network operations. We only have one thread in JavaScript, so we allow the JavaScript implementations to place our discrete operations on the event queue.

One other point I think that’s worth mentioning is that we should never call asynchronous callbacks synchronously unless of course we’re unit testing them, in which case we should be rarely calling them asynchronously. Always allow the JavaScript engine to put the callback into the event queue rather than calling it immediately, even if you already have the result to pass to the callback. By ensuring the callback executes on a subsequent turn of the event loop you are providing strict separation of the callback being allowed to change data that’s shared between itself (usually via closure) and the currently executing function. There are many ways to ensure the callback is run on a subsequent turn of the event loop. Using asynchronous API’s like setTimeout and setImmediate allow you to schedule your callback to run on a subsequent turn. The Promises/A+ specification (discussed below) for example specifies this.

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var nestedCoffee = sUTDirectory('nestedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the ugly nested callback coffee machine', function (done) {

      var result = function (error, state) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: ' + state.description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(error, expectedErrorOutCome, 'brew encountered an error. The following are the error details: ' + error.message);
         }
         done();
      };

      nestedCoffee().brew(result);
   });
});
(Image: lets test)

The System Under Test

'use strict';

module.exports = function nestedCoffee() {

   // We don't do instant coffee ####################################

   var boilJug = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
   };
   var addInstantCoffeePowder = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Crappy instant coffee powder is being added.');
   };
   var addSugar = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Sugar is being added.');
   };
   var addBoilingWater = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Boiling water is being added.');
   };
   var stir = function () {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Coffee is being stirred. Hmm...');
   };

   // We only do real coffee ########################################

   var heatEspressoMachine = function (state, callback) {
      var error = undefined;
      var wrappedCallback = function () {
         console.log('Espresso machine heating cycle is done.');
         if(!error) {
            callback(error, state);
         } else
            console.log('wrappedCallback encountered an error. The following are the error details: ' + error);
      };
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      // If there is an error, wrap callback with our own error function

      // Even if you call setTimeout with a time of 0 ms, the callback you pass is placed on the event queue to be called on a subsequent turn of the event loop.
      // Also be aware that setTimeout has a minimum granularity of 4ms for timers nested more than 5 deep. For several reasons we prefer to use setImmediate if we don't want a 4ms minimum wait.
      // setImmediate will schedule your callbacks on the next turn of the event loop, but it goes about it in a smarter way. Read more about it here: https://developer.mozilla.org/en-US/docs/Web/API/Window.setImmediate
      // If you are using setImmediate and it's not available in the browser, use the polyfill: https://github.com/YuzuJS/setImmediate
      // For this, we need to wait for our huge hunk of copper to heat up, which takes a lot longer than a few milliseconds.
      setTimeout(
         // Once espresso machine is hot callback will be invoked on the next turn of the event loop...
         wrappedCallback, espressoMachineHeatTime.milliseconds
      );
   };
   var grindDoseTampBeans = function (state, callback) {
      // Perform long running action.
      console.log('We are now grinding, dosing, then tamping our dose.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var mountPortaFilter = function (state, callback) {
      // Perform long running action.
      console.log('Porta filter is now being mounted.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var positionCup = function (state, callback) {
      // Perform long running action.
      console.log('Placing cup under portafilter.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var preInfuse = function (state, callback) {
      // Perform long running action.
      console.log('10 second preinfuse now taking place.');
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.
      callback(null, state);
   };
   var extract = function (state, callback) {
      // Perform long running action.
      console.log('Cranking leaver down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      // To save on writing large amounts of code, the callback would get passed to something that would run it at some point in the future.
      // We would then return immediately with the expectation that callback will be run in the future.

      // Uncomment the below to test the error.
      //callback({message: 'Oh no, something has gone wrong!'})
      callback(null, state);
   };
   var espressoMachineHeatTime = {
      // if you don't want to wait for the machine to heat up assign minutes: 0.2.
      minutes: 30,
      get milliseconds() {
         return this.minutes * 60000;
      }
   };
   var state = {
      description: ''
      // Other properties
   };
   var brew = function (onCompletion) {
      // Some prep work here possibly.
      heatEspressoMachine(state, function (err, resultFromHeatEspressoMachine) {
         if(!err) {
            grindDoseTampBeans(state, function (err, resultFromGrindDoseTampBeans) {
               if(!err) {
                  mountPortaFilter(state, function (err, resultFromMountPortaFilter) {
                     if(!err) {
                        positionCup(state, function (err, resultFromPositionCup) {
                           if(!err) {
                              preInfuse(state, function (err, resultFromPreInfuse) {
                                 if(!err) {
                                    extract(state, function (err, resultFromExtract) {
                                       if(!err)
                                          onCompletion(null, state);
                                       else
                                          onCompletion(err, null);
                                    });
                                 } else
                                    onCompletion(err, null);
                              });
                           } else
                              onCompletion(err, null);
                        });
                     } else
                        onCompletion(err, null);
                  });
               } else
                  onCompletion(err, null);
            });
         } else
            onCompletion(err, null);
      });
   };
   return {
      // Publicise brew.
      brew: brew
   };
};

What’s wrong with it?

  1. It’s hard to read, reason about and maintain
  2. The debugging experience isn’t very informative
  3. It creates more garbage than adding your functions to a prototype
  4. Dangers of leaking memory due to retaining closure references
  5. Many more…

What’s right with it?

  • It’s asynchronous

Closures are one of the language features in JavaScript that they got right. There are often issues in how we use them though. Be very careful of what you’re doing with closures. If you’ve got hot code, don’t create a new function every time you want to execute it.

Resources

  • Chapter 7 Concurrency of the Effective JavaScript book by David Herman

 

Alternative Approaches

Ranging from marginally good approaches to better approaches. Keep in mind that all these techniques add value, and some make more sense in some situations than others. They are all approaches for making callback hell more manageable, often encapsulating it completely, so much so that the underlying workings are no longer just a bunch of callbacks but rather well thought out implementations offering up a consistent, well recognised API. Try them all, get used to them all, then pick the one that suits your particular situation. The first two examples from here are blocking though, so I wouldn’t use them as they are; they are just an example of how to make some improvements.

Name your anonymous functions

  1. They’ll be easier to read and understand
  2. You’ll get a much better debugging experience, as stack traces will reference named functions rather than “anonymous function”
  3. It helps when you want to know where the source of an exception was
  4. Reveals your intent without adding comments
  5. In itself will allow you to keep your nesting shallow
  6. A first step to creating more extensible code

We’ve made some improvements in the next two examples, but introduced blocking in the Array prototype’s forEach loop, which we really don’t want to do.

Example of Anonymous Functions

var boilJug = function () {
   // Perform long running action
};
var addInstantCoffeePowder = function () {
   // Perform long running action
   console.log('Crappy instant coffee powder is being added.');
};
var addSugar = function () {
   // Perform long running action
   console.log('Sugar is being added.');
};
var addBoilingWater = function () {
   // Perform long running action
   console.log('Boiling water is being added.');
};
var stir = function () {
   // Perform long running action
   console.log('Coffee is being stirred. Hmm...');
};
var heatEspressoMachine = function () {
   // Flick switch, check water.
   console.log('Espresso machine is being turned on and is now heating.');
};
var grindDoseTampBeans = function () {
   // Perform long running action
   console.log('We are now grinding, dosing, then tamping our dose.');
};
var mountPortaFilter = function () {
   // Perform long running action
   console.log('Portafilter is now being mounted.');
};
var positionCup = function () {
   // Perform long running action
   console.log('Placing cup under portafilter.');
};
var preInfuse = function () {
   // Perform long running action
   console.log('10 second preinfuse now taking place.');
};
var extract = function () {
   // Perform long running action
   console.log('Cranking leaver down and extracting pure goodness.');
};

(function () {
   // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
   return [
      'heatEspressoMachine',
      'grindDoseTampBeans',
      'mountPortaFilter',
      'positionCup',
      'preInfuse',
      'extract',
   ].forEach(
      function (brewStep) {
         this[brewStep]();
      }
   );
}());

(Image: anonymous functions)

Example of Named Functions

Now satisfies all the points above, providing the same output. Hopefully you’ll be able to see a few other issues I’ve addressed with this example. We’re also no longer clobbering the global scope. We can now also make any of the other types of coffee simply with an additional single line function call, so we’re removing duplication.

var BINARYMIST = (function (binMist) {
   binMist.coffee = {
      action: function (step) {

         return {
            boilJug: function () {
               // Perform long running action
            },
            addInstantCoffeePowder: function () {
               // Perform long running action
               console.log('Crappy instant coffee powder is being added.');
            },
            addSugar: function () {
               // Perform long running action
               console.log('Sugar is being added.');
            },
            addBoilingWater: function () {
               // Perform long running action
               console.log('Boiling water is being added.');
            },
            stir: function () {
               // Perform long running action
               console.log('Coffee is being stirred. Hmm...');
            },
            heatEspressoMachine: function () {
               // Flick switch, check water.
               console.log('Espresso machine is being turned on and is now heating.');
            },
            grindDoseTampBeans: function () {
               // Perform long running action
               console.log('We are now grinding, dosing, then tamping our dose.');
            },
            mountPortaFilter: function () {
               // Perform long running action
               console.log('Portafilter is now being mounted.');
            },
            positionCup: function () {
               // Perform long running action
               console.log('Placing cup under portafilter.');
            },
            preInfuse: function () {
               // Perform long running action
               console.log('10 second preinfuse now taking place.');
            },
            extract: function () {
               // Perform long running action
               console.log('Cranking leaver down and extracting pure goodness.');
            }
         }[step]();
      },
      coffeeType: function (type) {
         return {
            'cappuccino': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'instant': {
               brewSteps: function () {
                  return [
                     'addInstantCoffeePowder',
                     'addSugar',
                     'addBoilingWater',
                     'stir'
                  ];
               }
            },
            'macchiato': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'mocha': {
               brewSteps: function () {
                  return [
                     // Lots of actions
                  ];
               }
            },
            'short black': {
               brewSteps: function () {
                  return [
                     'heatEspressoMachine',
                     'grindDoseTampBeans',
                     'mountPortaFilter',
                     'positionCup',
                     'preInfuse',
                     'extract',
                  ];
               }
            }
         }[type];
      },
      'brew': function (requestedCoffeeType) {
         var that = this;
         var brewSteps = this.coffeeType(requestedCoffeeType).brewSteps();
         // Array.prototype.forEach executes your callback synchronously (that's right, it's blocking) for each element of the array.
         brewSteps.forEach(function runCoffeeMakingStep(brewStep) {
            that.action(brewStep);
         });
      }
   };
   return binMist;

} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.coffee.brew('short black');

(Image: named functions)


Web Workers

I’ll address these in another post.

Create Modules

Everywhere.

Legacy Modules (Server or Client side)

AMD Modules using RequireJS

CommonJS type Modules in Node.js

In most of the examples I’ve created in this post I’ve exported the system under test (SUT) modules and then required them into the test. Node modules are very easy to create and consume. requireFrom is a great way to require your local modules without explicit directory traversal, thus removing the need to change your require statements when you move your files that are requiring your modules.

NPM Packages

Browserify

Here we get to consume npm packages in the browser.

Universal Module Definition (UMD)

ES6 Modules

That’s right, we’re getting modules as part of the specification (15.2). Check out this post by Axel Rauschmayer to get you started.


Recursion

I’m not going to go into this here, but recursion can be used as a lightweight solution to provide some logic that determines when to run the next asynchronous piece of work. Item 64 “Use Recursion for Asynchronous Loops” of the Effective JavaScript book provides some great examples. Do yourself a favour and get a copy of David Herman’s book. Oh, and we’re also getting tail-call optimisation in ES6.
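
As a hedged sketch (the names here are mine, not the book’s and not from the post’s samples), recursion lets us run one asynchronous step per element without blocking, by recursing from inside the completion callback:

'use strict';

function downloadOneAfterAnother(urls, downloadAsync, onCompletion) {
   function next(index) {
      if (index >= urls.length) {
         onCompletion(null);
         return;
      }
      downloadAsync(urls[index], function (error) {
         if (error) {
            onCompletion(error);
            return;
         }
         // The next item is processed on a later turn of the event loop
         // (as long as downloadAsync is genuinely asynchronous), so the stack doesn't grow.
         next(index + 1);
      });
   }
   next(0);
}

// Usage with a fake asynchronous download.
downloadOneAfterAnother(
   ['a.txt', 'b.txt'],
   function fakeDownloadAsync(url, callback) {
      setImmediate(function () {
         console.log('downloaded ' + url);
         callback(null);
      });
   },
   function (error) {
      console.log(error ? error.message : 'all done');
   }
);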

 


EventEmitter

Still creates more garbage unless your functions are on the prototype, but it does provide asynchronicity. Now we can put our functions on the prototype, but then they’ll all be public, and if they’re part of a process then we don’t want our coffee process spilling all its secrets about how it makes perfect coffee. In saying that, if our code is hot and we’ve profiled it and it’s a stand-out for using too much memory, we could refactor EventEmittedCoffee to have its function declarations added to EventEmittedCoffee.prototype and perhaps hidden another way, but I wouldn’t worry about it until it’s been proven to be using too much memory.

Events are used in the well known Gang of Four Observer (behavioural) pattern (which I discussed the C# implementation of here) and, at a higher level, the Enterprise Integration Publish/Subscribe pattern. The Observer pattern is used in quite a few other patterns also; the ones that spring to mind are Model View Presenter and Model View Controller. The pub/sub pattern is slightly different to the Observer in that it has a topic/event channel that sits between the publisher and the subscriber, and it uses contractual messages to encapsulate and transmit its events.

Here’s an example of the EventEmitter …

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var eventEmittedCoffee = sUTDirectory('eventEmittedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the event emitted coffee machine', function (done) {

      function handleSuccess(state) {
         var stateOutCome = 'The state of the ordered coffee is: ' + state.description;
         stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         done();
      }

      function handleFailure(error) {
         assert.fail(error, 'brew encountered an error. The following are the error details: ' + error.message);
         done();
      }

      // We could even assign multiple event handlers to the same event. We're not here, but we could.
      eventEmittedCoffee.on('successfulOrder', handleSuccess).on('failedOrder', handleFailure);

      eventEmittedCoffee.brew();
   });
});

The System Under Test

'use strict';

var events = require('events'); // Core node module.
var util = require('util'); // Core node module.

var eventEmittedCoffee;
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: ''
};

function EventEmittedCoffee() {

   var eventEmittedCoffee = this;

   function heatEspressoMachine(state) {
      // No need for callbacks. We can emit a failedOrder event at any stage and any subscribers will be notified.

      function emitEspressoMachineHeated() {
         console.log('Espresso machine heating cycle is done.');
         eventEmittedCoffee.emit('espressoMachineHeated', state);
      }
      // Flick switch, check water.
      console.log('Espresso machine has been turned on and is now heating.');
      // Mutate state.
      setTimeout(
         // Once espresso machine is hot event will be emitted on the next turn of the event loop...
         emitEspressoMachineHeated, espressoMachineHeatTime.milliseconds
      );
   }

   function grindDoseTampBeans(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('We are now grinding, dosing, then tamping our dose.');
      eventEmittedCoffee.emit('groundDosedTampedBeans', state);
   }

   function mountPortaFilter(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Porta filter is now being mounted.');
      eventEmittedCoffee.emit('portaFilterMounted', state);
   }

   function positionCup(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Placing cup under portafilter.');
      eventEmittedCoffee.emit('cupPositioned', state);
   }

   function preInfuse(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('10 second preinfuse now taking place.');
      eventEmittedCoffee.emit('preInfused', state);
   }

   function extract(state) {
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      console.log('Cranking leaver down and extracting pure goodness.');
      state.description = 'beautiful shot!';
      eventEmittedCoffee.emit('successfulOrder', state);
      // If you want to fail the order, replace the above two lines with the below two lines.
      // state.error = 'Oh no! That extraction came out far to fast.'
      // this.emit('failedOrder', state);
   }

   eventEmittedCoffee.on('timeToHeatEspressoMachine', heatEspressoMachine).
   on('espressoMachineHeated', grindDoseTampBeans).
   on('groundDosedTampedBeans', mountPortaFilter).
   on('portaFilterMounted', positionCup).
   on('cupPositioned', preInfuse).
   on('preInfused', extract);
}

// Make sure util.inherits is before any prototype augmentations, as it seems it clobbers the prototype if it's the other way around.
util.inherits(EventEmittedCoffee, events.EventEmitter);

// Only public method.
EventEmittedCoffee.prototype.brew = function () {
   this.emit('timeToHeatEspressoMachine', state);
};

eventEmittedCoffee = new EventEmittedCoffee();

module.exports = eventEmittedCoffee;

With raw callbacks, we have to pass them (functions) around. With events, we can have many interested parties request (subscribe) to be notified when something they are interested in happens (the event). The Observer pattern promotes loose coupling, as the thing (publisher) wanting to inform interested parties of specific events has no knowledge of its subscribers; this is essentially what a service is.

Resources


Async.js

Provides a collection of methods on the async object that:

  1. take an array and perform certain actions on each element asynchronously
  2. take a collection of functions to execute in specific orders asynchronously, some based on different criteria. The likes of async.waterfall allow you to pass the results of a previous function to the next (see the sketch after this list). Don’t underestimate these; there are a bunch of very useful routines.
  3. are asynchronous utilities
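
For point 2, here’s a minimal async.waterfall sketch (the step names are mine) where each task passes its results to the next:

'use strict';

var async = require('async');

async.waterfall([
   function grindBeans(callback) {
      // The first task only receives the callback.
      callback(null, 'ground beans');
   },
   function tampDose(grounds, callback) {
      // The previous task's results arrive as arguments.
      callback(null, grounds + ', tamped');
   },
   function extract(dose, callback) {
      callback(null, dose + ', extracted');
   }
], function onCompletion(error, finalResult) {
   // Invoked with the error as soon as any task fails, otherwise with the last task's results.
   console.log(error || finalResult); // 'ground beans, tamped, extracted'
});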

Here’s an example…

The Test

var assert = require('assert');
var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var asyncCoffee = sUTDirectory('asyncCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the async coffee machine', function (done) {

      var result = function (error, resultsFromAllAsyncSeriesFunctions) {
         var stateOutCome;
         var expectedErrorOutCome = null;
         if(!error) {
            stateOutCome = 'The state of the ordered coffee is: '
               + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description;
            stateOutCome.should.equal('The state of the ordered coffee is: beautiful shot!');
         } else {
            assert.fail(
               error,
               expectedErrorOutCome,
               'brew encountered an error. The following are the error details. message: '
                  + error.message
                  + '. The finished state of the ordered coffee is: '
                  + resultsFromAllAsyncSeriesFunctions[resultsFromAllAsyncSeriesFunctions.length - 1].description
            );
         }
         done();
      };

      asyncCoffee().brew(result)
   });
});

The System Under Test

'use strict';

var async = require('async');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   error: null
};

module.exports = function asyncCoffee() {

   var brew = function (onCompletion) {
      async.series([
         function heatEspressoMachine(heatEspressoMachineDone) {
            // No need for callbacks. We can just pass an error to the async supplied callback at any stage and the onCompletion callback will be invoked with the error and the results immediately.

            function espressoMachineHeated() {
               console.log('Espresso machine heating cycle is done.');
               heatEspressoMachineDone(state.error);
            }
            // Flick switch, check water.
            console.log('Espresso machine has been turned on and is now heating.');
            // Mutate state.
            setTimeout(
               // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
               espressoMachineHeated, espressoMachineHeatTime.milliseconds
            );
         },
         function grindDoseTampBeans(grindDoseTampBeansDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('We are now grinding, dosing, then tamping our dose.');
            grindDoseTampBeansDone(state.error);
         },
         function mountPortaFilter(mountPortaFilterDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Porta filter is now being mounted.');
            mountPortaFilterDone(state.error);
         },
         function positionCup(positionCupDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Placing cup under portafilter.');
            positionCupDone(state.error);
         },
         function preInfuse(preInfuseDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('10 second preinfuse now taking place.');
            preInfuseDone(state.error);
         },
         function extract(extractDone) {
            // Perform long running action, delegating async tasks passing callback and returning immediately.
            console.log('Cranking leaver down and extracting pure goodness.');
            // If you want to fail the order, uncomment the below line. May as well change the description too.
            // state.error = {message: 'Oh no! That extraction came out far to fast.'};
            state.description = 'beautiful shot!';
            extractDone(state.error, state);

         }
      ],
      onCompletion);
   };

   return {
      // Publicise brew.
      brew: brew
   };
};

Other Similar Useful libraries


Adding to Prototype

Check out my post on prototypes. If profiling reveals you’re spending too much memory or processing time creating the objects that contain the functions that are going to be used asynchronously, you could add the functions to the object’s prototype, like we did with the public brew method of the EventEmitter example above.
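
As a hedged sketch (the constructor names are mine), here’s the difference between creating a function per instance and sharing one on the prototype:

'use strict';

// Every PerInstanceCoffee gets its own copy of describe, created on construction.
function PerInstanceCoffee(description) {
   this.description = description;
   this.describe = function () {
      return 'The state of the ordered coffee is: ' + this.description;
   };
}

// Every PrototypeCoffee shares the single describe on the prototype.
function PrototypeCoffee(description) {
   this.description = description;
}
PrototypeCoffee.prototype.describe = function () {
   return 'The state of the ordered coffee is: ' + this.description;
};

var a = new PerInstanceCoffee('beautiful shot!');
var b = new PerInstanceCoffee('beautiful shot!');
console.log(a.describe === b.describe); // false: two function objects to garbage collect.

var c = new PrototypeCoffee('beautiful shot!');
var d = new PrototypeCoffee('beautiful shot!');
console.log(c.describe === d.describe); // true: one shared function object.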


Promises

The concepts of promises and futures, which are quite similar, have been around a long time; their roots go back to 1976 and 1977 respectively. Often the terms are used interchangeably, but they are not the same thing. You can think of the language agnostic promise as a proxy for the value provided by an asynchronous action’s eventual success or failure. A promise is something tangible, something you can pass around and interact with… all before or after it’s resolved or failed. The abstract concept of the future (discussed below) has a value that can be mutated once, from pending to either fulfilled or rejected, on fulfilment or rejection of the promise.

Promises provide a pattern that abstracts asynchronous operations in code, thus making them easier to reason about. Promises, which abstract callbacks, can be passed around and the methods on them chained (AKA promise pipelining). Removing temporary variables makes the code more concise and makes it clearer to readers that the extra assignments are an unnecessary step.

JavaScript Promises

A promise (a Promises/A+ “thenable”) is an object, or sometimes more specifically a function, with a then method (the JavaScript specific part).
A promise must only change its state once, and can only change from pending to fulfilled or from pending to rejected.

Semantically a future is a read-only property.
A future can only have its value set (sometimes called resolved, fulfilled or bound) once, by one or more (via promise pipelining) associated promises.
Futures are not discussed explicitly in Promises/A+, although they are discussed implicitly in the promise resolution procedure, which takes a promise as the first argument and a value as the second argument.
The idea is that the promise (first argument) adopts the state of the second argument if the second argument is a thenable (a promise object with a then method). This procedure facilitates the concept of the “future”.

We’re getting promises in ES6. That means JavaScript implementers are starting to include them as part of the language. Until we get there, we can use the likes of these libraries.
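
As a minimal sketch of the ES6 Promise API (assuming an environment that already provides it natively or via a polyfill):

'use strict';

var heated = new Promise(function (resolve, reject) {
   // The executor runs immediately; resolve or reject moves the promise from
   // pending to fulfilled or rejected, exactly once.
   setTimeout(function () {
      resolve('Espresso machine heating cycle is done.');
   }, 10);
});

heated
   .then(function (result) {
      console.log(result);
      return 'We are now grinding, dosing, then tamping our dose.';
   })
   .then(function (result) {
      console.log(result);
   })
   .catch(function (error) {
      // Any rejection or thrown error in the chain above lands here.
      console.log(error);
   });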

One of the first JavaScript promise drafts was the Promises/A (1) proposal. The next stage in defining a standardised form of promises for JavaScript was the Promises/A+ (2) specification which also has some good resources for those looking to use and implement promises based on the new spec. Just keep in mind though, that this has nothing to do with the EcmaScript specification, although it is/was a precursor.

Then we have Dominic Denicola’s promises-unwrapping repository (3) for those that want to stay ahead of the solidified ES6 draft spec (4). Dominic’s repo is slightly less specky and may be a little more convenient to read, but if you want gospel, just go for the ES6 draft spec sections 25.4 Promise Objects and 7.5 Operations on Promise Objects which is looking fairly solid now.

The 1, 2, 3 and 4 are the evolutionary path of promises in JavaScript.

Node Support

Although it was decided to drop promises from Node core, we’re getting them in ES6 anyway. V8 already supports the spec and we also have plenty of libraries to choose from.

Node on the other hand is lagging. Node stable 0.10.29 still looks to be using version 3.14.5.9 of V8, which looks to be about 17 months behind the first signs of native ES6 promises, according to how I’m reading the V8 change log and the Node release notes.

So to get started using promises in your projects, whether you’re programming server or client side, you can:

  1. Use one of the excellent Promises/A+ conformant libraries which will give you the flexibility of lots of features if that’s what you need, or
  2. Use the native browser promise API of which all ES6 methods on Promise work in Chrome (V8 -> Soon in Node), Firefox and Opera. Then polyfill using the likes of yepnope, or just check the existence of the methods you require and load them on an as needed basis. The cujojs or jakearchibald shims would be good starting points.

For my examples I’ve decided to use when.js for several reasons.

  • Currently in Node we have no native support. As stated above, this will be changing soon, so we’d be polyfilling everything.
  • Its performance is the least worst of the Promises/A+ compliant libraries at this stage. Although don’t get too hung up on perf stats; in most cases they won’t matter in the context of your module. If you’re concerned, profile your running code.
  • It wraps non Promises/A+ compliant promise look-alikes like jQuery’s Deferred, which will forever remain broken.
  • It’s compliant with version 1.1 of the spec.

The following example continues with the coffee making procedure concept. Now we’ve taken this from raw callbacks to using the EventEmitter to using the Async library and finally to what I think is the best option for most of our asynchronous work, not only in Node but JavaScript anywhere. Promises. Now this is just one way to implement the example. There are many and probably many of which are even more elegant. Go forth explore and experiment.

The Test

var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('nodejsAsynchronicityAndCallbackNesting post integration test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the coffee machine of promises', function (done) {

      var numberOfSteps = 7;
      // We could use a then just as we've used the promises done method, but done is semantically the better choice. It makes a bigger noise about handling errors. Read the docs for more info.
      promisedCoffee().brew().done(
         function handleValue(valueOrErrorFromPromiseChain) {
            console.log(valueOrErrorFromPromiseChain);
            valueOrErrorFromPromiseChain.errors.should.have.length(0);
            valueOrErrorFromPromiseChain.stepResults.should.have.length(numberOfSteps);
            done();
         }
      );
   });

});

The System Under Test

'use strict';

var when = require('when');
var espressoMachineHeatTime = {
   // if you don't want to wait for the machine to heat up assign minutes: 0.2.
   minutes: 30,
   get milliseconds() {
      return this.minutes * 60000;
   }
};
var state = {
   description: '',
   // Other properties
   errors: [],
   stepResults: []
};
function CustomError(message) {
   this.message = message;
   // return false
   return false;
}

function heatEspressoMachine(resolve, reject) {
   state.stepResults.push('Espresso machine has been turned on and is now heating.');
   function espressoMachineHeated() {
      var result;
      // result will be wrapped in a new promise and provided as the parameter in the promises then methods first argument.
      result = 'Espresso machine heating cycle is done.';
      // result could also be assigned another promise
      resolve(result);
      // Or call the reject
      //reject(new Error('Something screwed up here')); // You'll know where it originated from. You'll get full stack trace.
   }
   // Flick switch, check water.
   console.log('Espresso machine has been turned on and is now heating.');
   // Mutate state.
   setTimeout(
      // Once espresso machine is hot, heatEspressoMachineDone will be invoked on the next turn of the event loop...
      espressoMachineHeated, espressoMachineHeatTime.milliseconds
   );
}

// The promise takes care of all the asynchronous stuff without a lot of thought required.
var promisedCoffee = when.promise(heatEspressoMachine).then(
   function fulfillGrindDoseTampBeans(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'We are now grinding, dosing, then tamping our dose.';
      // Or if something goes wrong:
      // throw new Error('Something screwed up here'); // You'll know where it originated from. You'll get full stack trace.
   },
   function rejectGrindDoseTampBeans(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillMountPortaFilter(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Porta filter is now being mounted.';
   },
   function rejectMountPortaFilter(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new Error(error.message);
   }
).then(
   function fulfillPositionCup(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return 'Placing cup under portafilter.';
   },
   function rejectPositionCup(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillPreInfuse(result) {
      state.stepResults.push(result);
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return '10 second preinfuse now taking place.';
   },
   function rejectPreInfuse(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);
   }
).then(
   function fulfillExtract(result) {
      state.stepResults.push(result);
      state.description = 'beautiful shot!';
      state.stepResults.push('Cranking leaver down and extracting pure goodness.');
      // Perform long running action, delegating async tasks passing callback and returning immediately.
      return state;
   },
   function rejectExtract(error) {
      // Deal with the error. Possibly augment some additional insight and re-throw.
      if(state.errors[state.errors.length -1] !== error.message)
         state.errors.push(error.message);
      throw new CustomError(error.message);

   }
).catch(CustomError, function (e) {
      // Only deal with the error type that we know about.
      // All other errors will propagate to the next catch. whenjs also has a finally if you need it.
      // Todo: KimC. Do the dealing with e.
      e.newCustomErrorInformation = 'Ok, so we have now dealt with the error in our custom error handler.';
      return e;
   }
).catch(function (e) {
      // Handle other errors
      e.newUnknownErrorInformation = 'Hmm, we have an unknown error.';
      return e;
   }
);

function brew() {
   return promisedCoffee;
}

// when's promise.catch is only supposed to catch errors derived from the native Error (etc) functions.
// Although in my tests, my catch(CustomError func) wouldn't catch it. I'm assuming there's a bug as it kept giving me a TypeError instead.
// Looks like it came from within the library. So this was a little disappointing.
CustomError.prototype = Error;
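// Note (my assumption, not from the original post): this assigns the Error constructor itself rather than
// something derived from Error.prototype (e.g. CustomError.prototype = Object.create(Error.prototype)),
// so instances won't pass an instanceof Error check, which may be why the typed catch above fell through.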

module.exports = function promisedCoffee() {
   return {
      // Publicise brew.
      brew: brew
   };
};

Resources


Testing Asynchronous Code

All of the tests I demonstrated above have been integration tests. Usually I’d unit test the functions individually, not worrying about the intrinsically asynchronous code, as most of it isn’t mine anyway; it’s care of the EventEmitter, Async and the other libraries, and there is often no point in testing what the library maintainer already tests.

When you’re driving your development with tests, there should be little code testing the asynchronicity. Most of your code should be able to be tested synchronously. This is a big part of the reason why we drive our development with tests: to make sure our code is easy to test. Testing asynchronous code is a pain, so don’t do it much. Test your asynchronous code, yes, but most of your business logic should just be functions that you join together asynchronously. When you’re unit testing, you should be testing units, not asynchronous code. When you’re concerned about testing your asynchronicity, that’s integration testing, of which you should have a lot less. I discuss the ratios here.

As of 1.18.0 Mocha now has baked in support for promises. For fluent style of testing promises we have Chai as Promised.
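
As a hedged sketch of what that buys us (reusing the promisedCoffee module from the example above; the test wording here is mine), a Mocha test can simply return the promise instead of juggling done:

var should = require('should');
var requireFrom = require('requirefrom');
var sUTDirectory = requireFrom('post/nodejsAsynchronicityAndCallbackNesting');
var promisedCoffee = sUTDirectory('promisedCoffee');

describe('promise aware mocha test suite', function () {
   // if you don't want to wait for the machine to heat up assign minutes: 2.
   var minutes = 32;
   this.timeout(60000 * minutes);
   it('Test the coffee machine of promises by returning the promise', function () {
      // No done callback: Mocha (1.18.0 and later) waits on the returned promise
      // and fails the test if it rejects.
      return promisedCoffee().brew().then(function (valueOrErrorFromPromiseChain) {
         valueOrErrorFromPromiseChain.errors.should.have.length(0);
      });
   });
});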

Resources


There are plenty of other resources around working with promises in JavaScript. For myself, I found that I needed to actually work with them to solidify my understanding of them. With Chrome DevTools’ async option, we’ll soon have support for promise chaining.

Other Excellent Resources

And again all of the code samples can be found at GitHub.

Exploring JavaScript Prototypes

June 28, 2014

Not to be confused with the GoF Prototype pattern that defines a lot more than the simple JavaScript prototype. Although the abstract concept of the prototype is the same.

My intention with this post is to arm our developers with enough information around JavaScript prototypes to know when they are the right tool for the job, as opposed to other constructs, when considering how to create polymorphic JavaScript that’s performant and easy to maintain. Often performant code and easy to maintain code are in conflict with each other, i.e. if you want code that’s fast, it’s often hard to read, and if you want code that’s really easy to read, it “may” not be as fast as it could/should be. So we make trade-offs.

Make your code as readable as possible in as many places as possible. The more eyes that are going to be on it, generally the more readable it needs to be. Where performance really matters, we “may” have to carefully sacrifice some precious readability to achieve the performance required. This really needs measuring though, because often we think we’re writing fast code that either doesn’t matter or just isn’t fast. So we should always favour readability first, then profile our running application in an environment as close to production as possible. This removes the guess work, which we usually get wrong anyway. I’m currently working on a Node.js performance blog post in which I’ll attempt to address many things to do with performance. What I’m finding a lot of the time is that techniques I’ve been told are essential for fast code are all too often incorrect. We must measure.

Some background

Before we do the deep dive thing, let’s step back for a bit. Why do prototypes matter in JavaScript? What do prototypes do for us? Where do prototypes fit into the design philosophy of JavaScript?

What do JavaScript Prototypes do for us?

Removal of Code Duplication (DRY)

Excellent for reducing unnecessary duplication of members that will need garbage collecting

Performance

Prototypes also allow us to maximise economy of memory, thus reducing Garbage Collection (GC) activity and increasing performance. There are other ways to get this performance though. Prototypes, which obtain re-use of the parent object, are not always the best way to get the performance benefits we crave. You can see here, under the “Cached Functions in the Module Pattern” section, that using closure (although not mentioned by name), which is what modules leverage, also gives us the benefit of re-use, as the free variable in the outer scope is baked into the closure. Just check the jsperf for proof.
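
As a hedged sketch (the module and names are mine) of the closure alternative mentioned above: the module creates the shared function once and bakes it into the closure, so every call re-uses it without touching a prototype:

'use strict';

var coffeeGreeter = (function () {
   // Created once; every call to greetCustomer re-uses this single function object.
   function greet(name) {
      return 'Welcome to the coffee shop, ' + name + '.';
   }
   return {
      greetCustomer: function (name) {
         return greet(name);
      }
   };
}());

console.log(coffeeGreeter.greetCustomer('programmer'));
console.log(coffeeGreeter.greetCustomer('show pony'));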

The Design Philosophy of JavaScript and Prototypes

Prototypal inheritance was implemented in JavaScript as a key technique to support the object oriented principle of polymorphism. Prototypal inheritance provides the flexibility of being able to choose what the more specific object is going to inherit, rather than in the classical paradigm where you’re forced to inherit all the base class’s baggage whether you want it or not.

A few obvious ways to achieve polymorphism:

  1. Composition (creating an object that composes a contract to another object)(has-a relationship). Learn the pros and cons. Use when it makes sense
  2. Prototypal inheritance (is-a relationship). Learn the pros and cons. Use when it makes sense
  3. Monkey Patching courtesy of call, apply and bind (see the sketch below)
  4. Classical inheritance (is-a relationship). Why would you? Please don’t try this at home in production ;-)

Of course there are other ways, and some languages have unique techniques to achieve polymorphism, like templates in C++, generics in C#, first-class polymorphism in Haskell, multimethods in Clojure, etc.
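
To illustrate the monkey patching point above, here’s a hedged sketch (mine, not from the post) of borrowing behaviour with call, apply and bind rather than inheriting it:

'use strict';

var barista = {
   name: 'Kim',
   describe: function (greeting, drink) {
      return greeting + ', I\'m ' + this.name + ' and I\'m pulling you a ' + drink + '.';
   }
};

var showPony = { name: 'show pony' };

// call and apply invoke describe immediately with showPony bound to this.
console.log(barista.describe.call(showPony, 'Hi', 'latte'));
console.log(barista.describe.apply(showPony, ['Hi', 'chai latte']));

// bind returns a new function with this (and optionally some leading arguments) fixed.
var showPonyGreets = barista.describe.bind(showPony, 'Gidday');
console.log(showPonyGreets('macchiato'));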

Diving into the Implementation Details

Before we dive into Prototypes…

What does Composition look like?

There are many great examples of how composing our objects from other object interfaces, whether they’re owned by the composing object (composition) or aggregated from independent objects (aggregation), provides us with the building blocks to create complex objects that look and behave the way we want them to. This generally gives us plenty of flexibility to swap implementations at will, thus overcoming the tight coupling of classical inheritance.

Many of the Gang of Four (GoF) design patterns we know and love leverage composition and/or aggregation to help create polymorphic objects. There is a difference between aggregation and composition, but both concepts are often used loosely to just mean creating objects that contain other objects. Composition implies ownership, aggregation doesn’t have to. With composition, when the owning object is destroyed, so are the objects that are contained within the owner. This is not necessarily the case for aggregation.

An example: each coffee shop is composed of its own unique culture. Each coffee shop fosters a different type of culture, and that unique culture is an aggregation of its people and their attributes. Now the people that aggregate a specific coffee shop’s culture can also be part of other cultures that are completely separate to the coffee shop’s culture; they could even leave the current culture without destroying it, but the culture of one specific coffee shop can not be the same culture as another coffee shop’s. Every coffee shop’s culture is unique, even if only slightly.

(Image: programmer show pony)

Following we have a coffeeShop that composes a culture. We use the Strategy pattern within the culture to aggregate the customers. The Visit function provides an interface to encapsulate the Concrete Strategy, which is passed as an argument to the Visit constructor and closed over by the describe method.

// Context component of Strategy pattern.
var Programmer = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Context component of Strategy pattern.
var ShowPony = function () {
   this.casualVisit = {};
   this.businessVisit = {};
   // Add additional visit types.
};
// Add more persons to make a unique culture.

var customer = {
   setCasualVisitStrategy: function (casualVisit) {
      this.casualVisit = casualVisit;
   },
   setBusinessVisitStrategy: function (businessVisit) {
      this.businessVisit = businessVisit;
   },
   doCasualVisit: function () {
      console.log(this.casualVisit.describe());
   },
   doBusinessVisit: function () {
      console.log(this.businessVisit.describe());
   }
};

// Strategy component of Strategy pattern.
var Visit = function (description) {
   // description is closed over, so it's private. Check my last post on closures for more detail
   this.describe = function () {
      return description;
   };
};

var coffeeShop;

Programmer.prototype = customer;
ShowPony.prototype = customer;

coffeeShop = (function () {
   var culture = {};
   var flavourOfCulture = '';
   // Composes culture. The specific type of culture exists to this coffee shop alone.
   var whatWeWantExposed = {
      culture: {
         looksLike: function () {
            console.log(flavourOfCulture);

         }
      }
   };

   // Other properties ...
   (function createCulture() {
      var programmer = new Programmer();
      var showPony = new ShowPony();
      var i = 0;
      var propertyName;

      programmer.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.')
      );
      programmer.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.')
      );
      showPony.setCasualVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony cycles to coffee shop in lycra pretending he\'s just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.')
      );
      showPony.setBusinessVisitStrategy(
         // Concrete Strategy component of Strategy pattern.
         new Visit('Show pony meets business friends in suites. Pretends to work on his macbook pro. Drinks latte.')
      );

      culture.members = [programmer, showPony, /*lots more*/];

      for (i = 0; i < culture.members.length; i++) {
         for (propertyName in culture.members[i]) {
            if (culture.members[i].hasOwnProperty(propertyName)) {
               flavourOfCulture += culture.members[i][propertyName].describe() + '\n';
            }
         }
      }

   }());
   return whatWeWantExposed;
}());

coffeeShop.culture.looksLike();
// Programmer walks to coffee shop wearing jeans and T-shirt. Brings dog, Drinks macchiato.
// Programmer brings software development team. Performs Sprint Planning. Drinks long macchiato.
// Show pony cycles to coffee shop in lycra pretending he's just done a hill ride. Struts past the ladies chatting them up. Orders Chai Latte.
// Show pony meets business friends in suites. Pretends to work on his macbook pro. Drinks latte.

Now for Prototype

EcmaScript 5

In ES5 we’re a bit spoilt as we have a selection of methods on Object that help with prototypal inheritance.

Object.create takes an argument that’s an object, and an optional properties object which is an EcmaScript 5 property descriptor object (like the second parameter of Object.defineProperties). It returns a new object with the first argument as its prototype and the properties described in the property descriptor (if present) added to the returned object.

(Image: prototypal inheritance)
// The object we use as the prototype for hobbit.
var person = {
   personType: 'Unknown',
   backingOccupation: 'Unknown occupation',
   age: 'Unknown'
};

var hobbit = Object.create(person);

Object.defineProperties(person, {
   'typeOfPerson': {
      enumerable: true,
      value: function () {
         if(arguments.length === 0)
            return this.personType;
         else if(arguments.length === 1 && typeof arguments[0] === 'string')
            this.personType = arguments[0];
         else
            throw 'Number of arguments not supported. Pass 0 arguments to get. Pass 1 string argument to set.';
      }
   },
   'greeting': {
      enumerable: true,
      value: function () {
         console.log('Hi, I\'m a ' + this.typeOfPerson() + ' type of person.');
      }
   },
   'occupation': {
      enumerable: true,
      get: function () {return this.backingOccupation;},
      // Would need to add some parameter checking on the setter.
      set: function (value) {this.backingOccupation = value;}
   }
});

// Add another property to hobbit.
hobbit.fatAndHairyFeet = 'Yes indeed!';
console.log(hobbit.fatAndHairyFeet); // 'Yes indeed!'
// prototype is unaffected
console.log(person.fatAndHairyFeet); // undefined

console.log(hobbit.typeOfPerson()); // 'Unknown'
hobbit.typeOfPerson('short and hairy');
console.log(hobbit.typeOfPerson()); // 'short and hairy'
console.log(person.typeOfPerson()); // 'Unknown'

hobbit.greeting(); // 'Hi, I'm a short and hairy type of person.'

person.greeting(); // 'Hi, I'm a Unknown type of person.'

console.log(hobbit.age); // 'Unknown'
hobbit.age = 'young';
console.log(hobbit.age); // 'young'
console.log(person.age); // 'Unknown'

console.log(hobbit.occupation); // 'Unknown occupation'
hobbit.occupation = 'mushroom hunter';
console.log(hobbit.occupation); // 'mushroom hunter'
console.log(person.occupation); // 'Unknown occupation'

Object.getPrototypeOf

console.log(Object.getPrototypeOf(hobbit));
// Returns the following:
// { personType: 'Unknown',
//   backingOccupation: 'Unknown occupation',
//   age: 'Unknown',
//   typeOfPerson: [Function],
//   greeting: [Function],
//   occupation: [Getter/Setter] }

 

EcmaScript 3

One of the benefits of programming in ES 3, is that we have to do more work ourselves, thus we learn how some of the lower level language constructs actually work rather than just playing with syntactic sugar. Syntactic sugar is generally great for productivity, but I still think there is danger of running into problems when you don’t really understand what’s happening under the covers.

So lets check out what really goes on with….

Prototypal Inheritance

What is a Prototype?

All objects have a prototype, but not all objects reveal their prototype directly by a property called prototype. All prototypes are objects.

So, if all objects have a prototype and all prototypes are objects, we have an inheritance chain right? That’s right. See the debug image below.

All properties that you may want to add to an object’s prototype are shared through inheritance by all objects sharing that prototype.

So, if all objects have a prototype, where is it stored? All objects in JavaScript have an internal property called [[Prototype]]. You won’t see this internal property. All prototypes are stored in this internal property. How this internal property is accessed depends on whether its object is a plain object (an object literal or an object returned from a constructor) or a function. I discuss how this works below. When you dereference an object in order to find a property, the engine will first look on the current object, then the prototype of the current object, then the prototype of that prototype, and so on up the prototype chain. It’s a good idea to try and keep your inheritance hierarchies as shallow as possible for performance reasons.

Prototypes in Functions

Every function object is created with a prototype property, whether it’s a constructor or not. The prototype property’s value is an object with a constructor property whose value is the function itself. See the below example to help clear it up. The ES3 and ES5 specs (section 13.2) say pretty much the same thing.

var MyConstructor = function () {};
console.log(MyConstructor.prototype.constructor === MyConstructor); // true

and to help with visualising, see the below example and the debugger output. myObj and myObjLiteral relate to the two code examples below the debug image.

var MyConstructor = function () {};
var myObj = new MyConstructor();
var myObjLiteral = {};

Accessing JavaScript Prototypes

 

Up above in the composition example, where we assigned customer to Programmer.prototype and ShowPony.prototype, you can see how we access the prototype property of the constructor. We can also access the prototype of the object returned from the constructor like this:

var MyConstructor = function () {};
var myObj = new MyConstructor();
console.log(myObj.constructor.prototype === MyConstructor.prototype); // true

We can also do similar with an object literal. See below.

Prototypes in Objects that are Not Functions

Objects that are not functions are not created with a prototype property (all objects do have the hidden internal [[Prototype]] property though). Now sometimes you'll see Object.prototype talked about. Even MDN makes the matter a little confusing IMHO. In this case, Object is the Object constructor function and, as discussed above, all functions have the prototype property.

When we create object literals, the object we get is the same as if we ran the expression new Object(); (see ES3 and ES5 11.1.5)
So although we can access the prototype property of functions (which may or may not be constructors), there is no such exposed prototype property directly on objects returned by constructors or on object literals.
There is however conveniently a constructor property directly on all objects returned by constructors and on object literals (as you can think of their construction procedure producing the same result). This looks similar to the above debug image:

var myObjLiteral = {};
            // ES3 ->                              // ES5 ->
console.log(myObjLiteral.constructor.prototype === Object.getPrototypeOf(myObjLiteral)); // true

I’ve purposely avoided discussing the likes of __proto__ as it’s not defined in EcmaScript and there’s no valid reason to use something that’s not standard.

Polyfilling to ES5

Now to get a couple of terms used in web development well defined before we start talking about them:

  • A shim is a library that brings a new API to an environment that doesn't support it, using only what that older environment already supports.
  • A polyfill is some code in the form of a function, module, plugin, etc that provides the functionality of a later environment (ES5 for example) if it doesn’t exist for an older environment (ES3 for example). The polyfill often acts as a fallback. The programmer writes code targeting the newer environment as though the older environment doesn’t exist, but when the code is pulled into the older environment the polyfill kicks into action as the new language feature isn’t yet implemented natively.

If you're supporting older browsers that don't have full support for ES5, you can still use the ES5 additions so long as you provide ES5 polyfills. es5-shim is a good choice for this. Check out the html5please ECMAScript 5 section for a little more detail. Also check out Kangax's ECMAScript 5 compatibility table to see which browsers currently support which ES5 language features. A good approach, and one I like to take, is to use a custom build of a library such as Lo-Dash to provide a layer of abstraction so I don't need to care whether it'll be in an ES5 or ES3 environment. Then for anything that the abstraction library doesn't provide I'll fall back on a customised polyfill library such as es5-shim. I prefer Lo-Dash over Underscore too, as I think Lo-Dash is starting to leave Underscore behind in terms of performance and features. I also like to use the likes of yepnope.js to conditionally load my polyfills based on whether they're actually needed in the user's browser, as there's no point in loading them if we already have browser support.
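
As a rough sketch of the conditional loading idea (the script path is a placeholder, and this assumes yepnope's standard test/nope/complete options):

// Only fetch the shim if the browser lacks a couple of representative ES5 features.
yepnope({
   test: typeof Object.create === 'function' && typeof Object.getPrototypeOf === 'function',
   nope: ['/scripts/es5-shim.min.js'], // Hypothetical path to your customised shim build.
   complete: function () {
      // From here on the ES5 additions are available, natively or via the shim.
   }
});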

Polyfilling Object.create as discussed above, to ES5

You could use something like the polyfill below, which doesn't accommodate an object of property descriptors, or just go with one of the following two choices, which is what I do:

  1. Use an abstraction like the lodash create method, which takes an optional second argument object of properties and assigns them to the created object.
  2. Use a polyfill like this one.
if (typeof Object.create !== 'function') {
   (function () {
      var F = function () {};
      Object.create = function (proto) {
         if (arguments.length > 1) {
            throw Error('Second argument not supported');
         }
         if (proto === null) {
            throw Error('Cannot set a null [[Prototype]]');
         }
         if (typeof proto !== 'object') {
            throw TypeError('Argument must be an object');
         }
         F.prototype = proto;
         return new F();
      };
   })();
}
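
Either way, usage looks the same. A quick sanity check (the protoHobbit object is made up for illustration):

var protoHobbit = {
   greeting: function () { return 'Hi, I\'m a ' + this.typeOfPerson + ' type of person.'; }
};

var bilbo = Object.create(protoHobbit); // Native in ES5, the polyfill above in ES3.
bilbo.typeOfPerson = 'short and hairy';
console.log(bilbo.greeting()); // 'Hi, I'm a short and hairy type of person.'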

Polyfilling Object.getPrototypeOf as discussed above, to ES5

  1. Use an abstraction like the lodash isPlainObject method (source here), or…
  2. Use a polyfill like this one. Just keep in mind the gotcha.
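
For reference, a commonly seen (simplified) polyfill looks roughly like the sketch below. The gotcha is that it relies on the object's constructor property, so it gives the wrong answer if constructor has been reassigned (for example when an entire prototype object has been replaced); treat it as best-effort rather than a faithful Object.getPrototypeOf:

if (typeof Object.getPrototypeOf !== 'function') {
   Object.getPrototypeOf = function (object) {
      // The gotcha: this breaks if object.constructor has been reassigned,
      // e.g. when someone has replaced the whole prototype object.
      return object.constructor.prototype;
   };
}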

 

EcmaScript 6

I got a bit excited when I saw an earlier proposed prototype-for (also seen with the name prototype-of) operator: <| . Additional example here. This would have provided a terse syntax for providing an object literal with an object to use as its prototype. It looks like it must have lost traction though as it was removed in the June 15, 2012 Draft.

There are a few extra methods in ES6 that deal with prototypes, but on trawling the EcmaScript 6 draft spec, there is nothing at this stage that really stands out as revolutionising the way I write JavaScript or saving me significant mental effort or time. Of course I may have missed something; I'd like to hear from anyone who has seen something interesting to the contrary.

Yes, we're getting classes in ES6, but they are just an abstraction: a terse and declarative mechanism for doing what we already do with the functions we use as constructors, their prototypes, and the objects (or instances, if you will) returned from those constructor functions.
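
To illustrate the point, the two snippets below are near enough equivalent (ignoring details such as method enumerability and classes requiring new); this is a sketch, not transpiler output:

// ES6 class syntax.
class Es6Hobbit {
   constructor(name) {
      this.name = name;
   }
   greeting() {
      return 'Hi, I\'m ' + this.name + '.';
   }
}

// Essentially what we already do with a constructor function and its prototype.
var Es5Hobbit = function (name) {
   this.name = name;
};
Es5Hobbit.prototype.greeting = function () {
   return 'Hi, I\'m ' + this.name + '.';
};

console.log(new Es6Hobbit('Bilbo').greeting()); // 'Hi, I'm Bilbo.'
console.log(new Es5Hobbit('Frodo').greeting()); // 'Hi, I'm Frodo.'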

Architectural Ideas that Prototypes Help With

This is a common example that I often use for domain objects that are fairly hot: a single set of accessor properties added to the business object's prototype, as you can see on line 13 of my Hobbit module (Hobbit.js) below.

First a quick look at the tests/spec to drive the development. This is being run using mocha with the help of a Makefile in the root directory of my module under test.

  • Makefile
# The relevant section.
unit-test:
	@NODE_ENV=test ./node_modules/.bin/mocha \
		test/unit/*test.js test/unit/**/*test.js
  • Hobbit-test.js
var requireFrom = require('requirefrom');
var assert = require('assert');
var should = require('should');
var shire = requireFrom('shire/');

// Hardcode $NODE_ENV=test for debugging.
process.env.NODE_ENV='test';

describe('shire/Hobbit business object unit suite', function () {
   it('Should be able to instantiate a shire/Hobbit business object.', function (done) {
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();

      // Properties should be declared but not initialised.
      // No good checking for undefined alone, as that would be true whether it was declared or not.

      hobbit.should.have.property('id');
      (hobbit.id === undefined).should.be.true;
      hobbit.should.have.property('typeOfPerson');
      (hobbit.typeOfPerson === undefined).should.be.true;
      hobbit.should.have.property('greeting');
      (hobbit.greeting === undefined).should.be.true;
      hobbit.should.have.property('occupation');
      (hobbit.occupation === undefined).should.be.true;
      hobbit.should.have.property('emailFrom');
      (hobbit.emailFrom === undefined).should.be.true;
      hobbit.should.have.property('name');
      (hobbit.name === undefined).should.be.true;      

      done();
   });

   it('Should be able to set and get all properties of a shire/Hobbit business object.', function (done){
      // Uncomment below lines if you want to debug.
      //this.timeout(444000);
      //setTimeout(done, 444000);

      // Arrange
      var Hobbit = shire('Hobbit');
      var hobbit = new Hobbit();      

      // Act
      hobbit.id = '32f4d01e-74dc-45e8-b3a8-9aa24840bc6a';
      hobbit.typeOfPerson = 'short and hairy';
      hobbit.greeting = {
         intro: 'Hi, I\'m a ',
         outro: ' type of person.'};
      hobbit.occupation = 'mushroom hunter';
      hobbit.emailFrom = 'Bilbo.Baggins@theshire.arn';
      hobbit.name = 'Bilbo Baggins';

      // Assert
      hobbit.id.should.equal('32f4d01e-74dc-45e8-b3a8-9aa24840bc6a');
      hobbit.typeOfPerson.should.equal('short and hairy');
      hobbit.greeting.should.equal('Hi, I\'m a short and hairy type of person.');
      hobbit.occupation.should.equal('mushroom hunter');
      hobbit.emailFrom.should.equal('Bilbo.Baggins@theshire.arn');
      hobbit.name.should.eql('Bilbo Baggins');

      done();
   });
});
  • Now the business object itself Hobbit.js
    Now what's happening here is that on creation of a new Hobbit instance, the empty members object you see created on line 9 is the only per-instance data. All of the Hobbit's accessor properties are defined once, on the prototype of the constructor function that the Hobbit module exports. So what we store on each instance are just the values assigned in Hobbit-test.js from lines 47 through 54: the strings themselves. So very little space is used for each instance returned by invoking the Hobbit constructor.
// Could achieve a cleaner syntax with Object.create, but constructor functions are a little faster.
// As this will be hot code, it makes sense to favour performance in this case.
// Of course profiling may say it's not worth it, in which case this could be rewritten.
var Hobbit = (function () {
   function Hobbit (/*Optionally Construct with DTO and serializer*/) {
      // Todo: Implement pattern for enforcing new.
      Object.defineProperty (this, 'members', {
         value: {}
      });
   }

   (function definePublicAccessors (){
      Object.defineProperties(Hobbit.prototype, {
         id: {
            get: function () {return this.members.id;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.id = newValue;
            },
            configurable: false, enumerable: true
         },
         typeOfPerson: {
            get: function () {return this.members.typeOfPerson;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.typeOfPerson = newValue;
            },
            configurable: false, enumerable: true
         },
         greeting: {
            get: function () {
               return this.members.greeting === undefined ?
                  undefined :
               this.members.greeting.intro +
                  this.typeOfPerson +
                  this.members.greeting.outro;
            },
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.greeting = newValue;
            },
            configurable: false, enumerable: true
         },
         occupation: {
            get: function () {return this.members.occupation;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.occupation = newValue;
            },
            configurable: false, enumerable: true
         },
         emailFrom: {
            get: function () {return this.members.emailFrom;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.emailFrom = newValue;
            },
            configurable: false, enumerable: true
         },
         name: {
            get: function () {return this.members.name;},
            set: function (newValue) {
               // Todo: Validation goes here.
               this.members.name = newValue;
            },
            configurable: false, enumerable: true
         }
      });

   })();
   return Hobbit;
})();

// JSON.parse provides a hydrated hobbit from the DTO.
//    So you would call this to populate this DO from a DTO
// JSON.stringify provides the DTO from a hydrated hobbit

module.exports = Hobbit;
  • Now running the test
[Screenshot: the mocha test run output]

 

Flyweights using Prototypes

A couple of interesting examples of the Flyweight pattern implemented in JavaScript are by the GoF and Addy Osmani.

The GoF's implementation of the FlyweightFactory makes extensive use of closure to store its flyweights and uses aggregation in order to create its ConcreteFlyweight from the Flyweight. It doesn't use prototypes.

Addy Osmani has a free book “JavaScript Design Patterns” containing an example of the Flyweight pattern, which IMO is considerably simpler and more elegant. In saying that, the GoF want you to buy their product, so maybe they do a better job when you give them money. In this example closure is also used extensively, but it’s a good example of how to leverage prototypes to share your less specific behaviour.
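
As a rough sketch of the underlying idea (not Addy's example verbatim), the intrinsic, shared state and behaviour live on the prototype, while each instance only carries its extrinsic state:

var Book = function (copyId, checkoutMember) {
   // Extrinsic state: differs per physical copy.
   this.copyId = copyId;
   this.checkoutMember = checkoutMember;
};

// Intrinsic state and behaviour: identical for every copy, stored once on the prototype.
Book.prototype.title = 'There and Back Again';
Book.prototype.author = 'B. Baggins';
Book.prototype.describe = function () {
   return this.title + ' by ' + this.author + ' (copy ' + this.copyId + ')';
};

var copy1 = new Book(1, 'Samwise');
var copy2 = new Book(2, 'Meriadoc');
console.log(copy1.describe()); // 'There and Back Again by B. Baggins (copy 1)'
console.log(copy1.title === copy2.title); // true - one shared value, not duplicated per copy.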

Mixins using Prototypes

Again if you check out the last example of Mixins in Addy Osmani’s book, there is quite an elegant example.

We can even do multiple inheritance using mixins, by adding whichever properties we want from whatever objects we want to the target object's prototype.
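
A minimal sketch of that idea (the mixInto helper and the source objects are made up for illustration):

// Copy only the properties we ask for onto the target constructor's prototype.
function mixInto(targetConstructor, source, propertyNames) {
   for (var i = 0; i < propertyNames.length; i++) {
      targetConstructor.prototype[propertyNames[i]] = source[propertyNames[i]];
   }
}

var canWalk = { walk: function () { return this.name + ' is walking.'; } };
var canSing = {
   sing: function () { return this.name + ' is singing.'; },
   dance: function () { return this.name + ' is dancing.'; }
};

var Hobbit = function (name) { this.name = name; };

// "Multiple inheritance": take whichever behaviours we want from whatever objects we want.
mixInto(Hobbit, canWalk, ['walk']);
mixInto(Hobbit, canSing, ['sing']); // Deliberately leaving dance behind.

var pippin = new Hobbit('Pippin');
console.log(pippin.walk()); // 'Pippin is walking.'
console.log(pippin.sing()); // 'Pippin is singing.'
console.log(pippin.dance);  // undefined - we chose not to mix that in.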

This is a similar concept to the post I wrote on Monkey Patching.

Mixins support the Open/Closed principle, where objects should be open to having their behaviour extended without their source code being altered.

Keep in mind though, that you shouldn’t just expect all consumers to know you’ve added additional behaviour. So think this through before using.

Factory functions using Prototypes

Again a decent example of the Factory function pattern is implemented in the “JavaScript Design Patterns” book here.
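
Again not the book's example verbatim, but a minimal sketch of a factory function that hands back objects sharing a single prototype:

// A factory function that stamps out objects sharing one prototype.
var hobbitProto = {
   greeting: function () { return 'Hi, I\'m ' + this.name + ', a ' + this.occupation + '.'; }
};

function createHobbit(name, occupation) {
   var hobbit = Object.create(hobbitProto); // Or the polyfilled Object.create from earlier.
   hobbit.name = name;
   hobbit.occupation = occupation;
   return hobbit;
}

var bilbo = createHobbit('Bilbo', 'burglar');
var sam = createHobbit('Sam', 'gardener');
console.log(bilbo.greeting()); // 'Hi, I'm Bilbo, a burglar.'
console.log(Object.getPrototypeOf(bilbo) === Object.getPrototypeOf(sam)); // true - the behaviour is shared.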

There are many other areas you can get benefits from using prototypes in your code.

Prototypal Inheritance: Not Right for Every Job

Prototypes give us the power to share only the secrets of others that need to be shared. We have fine grained control. If you're thinking of using inheritance, be it classical or prototypal, ask yourself "Is the class/object I'm wanting to provide a parent for truly a more specific version of the proposed parent?". This is the idea behind the Liskov Substitution Principle (LSP) and Design by Contract (DbC), which I posted on here. Don't just inherit because it's convenient. In my "javascript object creation patterns" post I also discussed inheritance.

The general consensus is that composition should be favoured over inheritance. If it makes sense to compose once you've considered all options, then go for it; if not, look at inheritance. Why should composition be favoured over inheritance? Because when you compose your object from the contract of another object, your sub object (the object doing the composing) doesn't inherit anything or need to know anything about the composed object's secrets. The object being composed has complete freedom as to how it minds its own business, so long as it provides a consistent contract for consumers. This gives us the much loved polymorphism we crave without the crazy tight coupling of classical inheritance (inherit everything, even your father's drinking problem :-s).

I’m pretty much in agreement with this when we’re talking about classical inheritance. When it comes to prototypal inheritance, we have a lot more flexibility and control around how we use the object that we’re deriving from and exactly what we inherit from it. So we don’t suffer the same “all or nothing” buy in and tight coupling as we do with classical inheritance. We get to pick just the good parts from an object that we decide we want as our parent. The other thing to consider is the memory savings of inheriting from a prototype rather than achieving your polymorphic behaviour by way of composition, which has us creating the composed object each time we want another specific object.

So in JavaScript, we really are spoilt for choice when it comes to how we go about getting our fix of polymorphism.

When surveys are carried out on..

Why Software Projects Fail

the following are the most common causes:

  • Ambiguous Requirements
  • Poor Stakeholder Involvement
  • Unrealistic Expectations
  • Poor Management
  • Poor Staffing (not enough of the right skills)
  • Poor Teamwork
  • Forever Changing Requirements
  • Poor Leadership
  • Cultural & Ethical Misalignment
  • Inadequate Communication

You’ll notice that technical reasons are very low on the list of why projects fail. You can see the same point mentioned by many of our software greats, but when a project does fail due to technical reasons, it’s usually because the complexity got out of hand. So as developers when focusing on the art of creating good code, our primary concern should be to reduce complexity, thus enhance the ability to maintain the code going forward.

I think one of Edsger W. Dijkstra’s phrases sums it up nicely. “Simplicity is prerequisite for reliability”.

Stratification is a design principle that focuses on keeping the different layers in code autonomous, i.e. you should be able to work in one layer without having to go up or down adjacent layers in order to fully understand the current layer you're working in. Its internals should be able to move independently of the adjacent layers without affecting them, or being concerned that a change in its own implementation will affect other layers. Modules are an excellent design pattern used heavily to build medium to large JavaScript applications.

With composition, if you're composing against contracts, this is exactly what you get.

References and interesting reads

 

Exploring JavaScript Closures

May 31, 2014

Just before we get started, we’ll be using the terms lexical scope and dynamic scope a bit. In computer science the term lexical scope is synonymous with static scope.

  • lexical or static scope is where name resolution of “part of a program” depends on the location in the source code
  • dynamic scope is where name resolution depends on the program state (the execution or calling context) when the name is encountered (see the sketch below).
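
JavaScript name resolution is lexical, which the following sketch shows; whoWins resolves x from where it was defined, not from where it is called:

var x = 'outer';

function whoWins() {
   // Lexical scope: x resolves to the x in scope where whoWins was defined,
   // regardless of the calling context.
   return x;
}

function caller() {
   var x = 'caller';
   return whoWins();
}

console.log(caller()); // 'outer' - under dynamic scope this would have been 'caller'.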

What are Closures?

Now establishing the formal definition has been quite an interesting journey, with quite a few sources not quite getting it right. Although the ES3 spec talks about closure, there is no formal definition of what it actually is. The ES5 spec on the other hand does discuss what closure is in two distinct locations.

  1. “11.1.5 Object Initialiser” section under the section that talks about accessor properties This is the relevant text: (In relation to getters): “Let closure be the result of creating a new Function object as specified in 13.2 with an empty parameter list (that’s getter specific) and body specified by FunctionBody. Pass in the LexicalEnvironment of the running execution context as the Scope.
  2. “13 Function Definition” section This is the relevant text: “Let closure be the result of creating a new Function object as specified in 13.2 with parameters specified by FormalParameterList (which are optional) and body specified by FunctionBody. Pass in funcEnv as the Scope.

Now what are the differences here that stand out?

  1. We see that 1 specifies a function object with no parameters, and 2 specifies some parameters (optional). So from this we can establish that it’s irrelevant whether arguments are passed or not to create closure.
  2. 1 also mentions passing in the LexicalEnvironment, whereas 2 passes in funcEnv. funcEnv is the result of "calling NewDeclarativeEnvironment passing the running execution context's LexicalEnvironment as the argument". So basically there is no difference.

Now 13.2 just specifies how functions are created. Given an optional parameter list, a body, a LexicalEnvironment specified by Scope, and a Boolean flag (for strict mode (ignore this for the purposes of establishing a formal definition)). Now the Scope mentioned above is the lexical environment of the running execution context (discussed here in depth) at creation time. The Scope is actually [[Scope]] (an internal property).

The ES6 spec draft runs along the same vein.

Lets get abstract

Every problem in computer science is just a more specific problem of a problem we’re familiar with in the natural world. So often it helps to find the abstract problem that we are already familiar with in order to help us understand the more specific problem we are dealing with. Patterns are an example of this. Before I was programming as a profession I was a carpenter. I find just about every problem I deal with in programming I’ve already dealt with in physical carpentry and at a higher level still with physical architecture.

In search of the true formal definition I also looked outside of JavaScript at the language agnostic term, which should just be an abstraction of the JavaScript closure anyway. Yip… Wikipedia's definition: "In programming languages, a closure (also lexical closure or function closure) is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables or upvalues) of that function. A closure—unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside its immediate lexical scope."

My abstract formal definition

A closure is a function containing a reference to the lexical (static) environment it is defined within at creation time, not call time (ES5 spec 13.2.1), via the function object's internal [[Scope]] property (ES5 spec 13.2.9). The closure is closed over its parent lexical environment and all of its properties. You can access these properties as variables, but not as properties, because you don't have access to the internal [[Scope]] property directly in order to reference its properties. So this example fails. More correctly (ES5 spec 8.6.2): "Of the standard built-in ECMAScript objects, only Function objects implement [[Scope]]."

var outerObjectLiteral = {

   x: 10,

   foo: function () {
      console.log(x); // ReferenceError: x is not defined obviously
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

See here for an explanation on the differences between properties and variables. That’s basically it. Of course there are many ways we can use a closure and that’s often where confusion creeps in about what a closure actually is and is not. Feel free to bring your perspective on this in the comments section below.

When is a closure born?

So lets get this closure closing over something. JavaScript addresses the funarg problem with closure.

var x = 10;

var outerObjectLiteral = {   

   foo: function () {
      // Because our internal [[Scope]] property now has a property (more specifically a free variable) x, we can access it directly.
      console.log(x); // Writes 10 to the console.
   },
   invokeMe: function (funArg) {
      var x = 20;
      funArg();
   }
};

outerObjectLiteral.invokeMe(outerObjectLiteral.foo);

The closure is created on line 13. Now at line 9 we have access to the closed over lexical environment. When we print x on line 7, we get 10, which is the value of x on [[Scope]] that our closure was statically bound to at function object creation time (not the dynamically scoped x = 20). Now of course you can change the value of the free variable x and it'll be reflected wherever you use the closed over variable, because the closure was bound to the free variable x, not to the value of the free variable x.
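
A quick sketch of that last point; the closure is bound to the variable itself, so later changes to it are observed:

var y = 10;

var readY = function () {
   return y; // Closed over the variable y, not the value 10.
};

console.log(readY()); // 10
y = 42;
console.log(readY()); // 42 - the change to the free variable is reflected.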

Back in the outerObjectLiteral example, this is what you'll see in Chrome Dev Tools when execution is on line 10. Bear in mind though that both the foo and invokeMe closures were created at line 13.

[Screenshot: Chrome Dev Tools showing the Closure scope]

Now I'm going to attempt to explain what the structure looks like in a simplified form with a simple hash. I don't know how it's actually implemented in the various EcmaScript implementations, but I do know what the specification (the single source of truth) tells us; it should look something like the following:

////////////////
// pseudocode //
////////////////
foo = closure {
   FormalParameterList: {}, // Optional
   FunctionBody: <...>,
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10
   }
}

The closure is born when the function is created ("the result of creating a new Function object" as stated above). Not when it's returned by the outer function (i.e. the upwards funarg problem) and not when it's invoked, as Angus Croll mentioned here under the "The [[Scope]] property" section.

Angus quotes the ES5 spec 10.4.3.5-7. On studying this section I’m pretty sure it is meant for the context of actually creating the function object rather than invoking an existing function object. The clauses I’ve detailed above (11.1.5 Object Initialiser and 13 Function Definition), confirm this.

The ES6 spec draft “14.1.22 Runtime Semantics: Evaluation” also confirms this theory. Although it’s titled Runtime Semantics, it has several points that confirm my theory… The so called runtime semantics are the runtime semantics of function object creation rather than function object invocation. As some of the steps specified are FunctionCreate, MakeMethod and MakeConstructor (not FunctionInvoke, InvokeMethod or InvokeConstructor). The ES6 spec draft “14.2.17 Runtime Semantics: Evaluation” and also 14.3.8 are similar.

Why do we care about Closure?

Without closures, we wouldn’t have the concept of modules which I’ve discussed in depth here.

Modules are used very heavily in JavaScript both client and server side (think NPM), and for good reason. Until ES6 there is no baked in module system. In ES6 modules become part of the language. The entire Node.js ecosystem exists to install modules via the CommonJS initiative. Modules on the client side most often use the Asynchronous Module Definition (AMD) implementation RequireJS to load modules, but can also use the likes of CommonJS via Browserify, which allows us to load node.js packages in the browser.

As of writing this, the TC39 committee have looked at both the AMD and CommonJS approaches and come up with something completely different for the ES6 module draft spec. Modules provide another mechanism for not allowing secrets to leak into the global object.

Modules are not new. David Parnas wrote a paper titled “On the Criteria To Be Used in Decomposing Systems into Modules” in 1972. This explores the idea of secrets. Design and implementation decisions that should be hidden from the rest of the programme.

Here is an example of the Module pattern that includes both private and public methods. my.moduleMethod has access to private variables outside of its VariableEnvironment (the current scope) via the Environment record, which references the outer LexicalEnvironment via its internal [[Scope]] property.
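
A minimal sketch along those lines (not the original example; the my and moduleMethod names are taken from the description above):

var my = (function () {
   // Private state: only reachable via the closures below.
   var privateCounter = 0;

   function privateIncrement() {
      privateCounter += 1;
   }

   return {
      // Public method: closes over privateCounter and privateIncrement
      // via its internal [[Scope]] property.
      moduleMethod: function () {
         privateIncrement();
         return privateCounter;
      }
   };
}());

console.log(my.moduleMethod()); // 1
console.log(my.moduleMethod()); // 2
console.log(my.privateCounter); // undefined - the secret stays hidden.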

Information hiding: state and implementation. In JavaScript we don’t have access modifiers, but we don’t need them either. We can hide our secrets with various patterns. Closure is a key concept for many of these patterns. Closure is a key building block for helping us to programme against contract rather than implementation, helping us to form consistent abstractions, giving us the ability to engage with a concept while safely ignoring some of its details. Thus hiding unnecessary complexity from consumers.

I think Steve McConnell explains this very well in his classic “Code Complete” book. Steve uses the house abstraction as his metaphor. “People use abstraction continuously. If you had to deal with individual wood fibers, varnish molecules, and steel molecules every time you used your front door, you’d hardly make it in or out of your house each day. Abstraction is a big part of how we deal with complexity in the real world. Software developers sometimes build systems at the wood-fiber, varnish-molecule, and steel-molecule level. This makes the systems overly complex and intellectually hard to manage. When programmers fail to provide larger programming abstractions, the system itself sometimes fails to make it through the front door. Good programmers create abstractions at the routine-interface level, class-interface level, and package-interface level-in other words, the doorknob level, door level, and house level-and that supports faster and safer programming.

Encapsulation: you can not look at the details (the internal implementation, the secrets).

Partial function application and Currying: I have a set of posts on this topic. Closure is an integral building block of these constructs. Part 1, Part 2 and Part 3.
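
As a tiny taster of how closure underpins these constructs (a sketch, not the implementation from those posts):

// Partial application via closure: fix the first argument now, supply the rest later.
function add(a, b) {
   return a + b;
}

function partial(fn, first) {
   // The returned function closes over fn and first.
   return function (second) {
      return fn(first, second);
   };
}

var addFive = partial(add, 5);
console.log(addFive(3));  // 8
console.log(addFive(10)); // 15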

Functional JavaScript relies heavily on closure.

Are there any Costs or Gotchas of using Closures?

Of course. You didn't think you'd get all this expressive power without having to think about how you're going to use it, did you? As we've discussed, closures were created to address the funarg problem. In doing that, the closure references the lexical (static) scope of the outer scope. So even once the free variables are out of scope, the closure will still reference them if they were saved at function creation time. They can not be garbage collected until the function that is closed over the outer scope has itself fallen out of scope, i.e. its reference count is 0.

var x = 10;
var noOneLikesMe = 20;
var globalyAccessiblePrivilegedFunction;

function globalyScopedFunction(z) {

  var noOneLikesMeInner = 40;

  function privilegedFunction() {
    return x + z;
  }

  return privilegedFunction;

}

// This is where privilegedFunction is created.
globalyAccessiblePrivilegedFunction = globalyScopedFunction(30);

// This is where privilegedFunction is applied.
globalyAccessiblePrivilegedFunction();

Now only the free variables that are needed are saved at function creation time. We see that when execution arrives at line 7, the currently scoped closure has the x free variable saved to it, but not z, noOneLikesMe, or noOneLikesMeInner.

[Screenshot: Chrome Dev Tools showing that noOneLikesMe is not saved to the closure]

When we enter privilegedFunction on line 10, we see the hidden [[Scope]] property has both the outer scope and the global scope saved to it.

[Screenshot: Chrome Dev Tools showing the two closure scopes]

Say for example execution has passed the above code snippet. If the closed over variables can still be referenced by calling globalyAccessiblePrivilegedFunction again, then they can not be garbage collected. This is a frequent source of leaks with the upwards funarg problem. If you've got hot code that is creating many functions, make sure the functions that are closed over free variables are dropped out of scope as soon as you no longer have a need for them. This way garbage collection can deallocate the memory used by the free variables.
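
In the example above, that simply means dropping the last reference once we're finished with it (a sketch of the idea; exactly when deallocation happens is up to the engine):

// Once we no longer need the privileged function, drop the reference so the
// closed over environment (and x along with it) becomes eligible for garbage collection.
globalyAccessiblePrivilegedFunction = null;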

Looking at how the specification would look simplified, we can see that each Environment record inherits what it knows it's going to need from the Environment record of its lexical parent. This chaining inheritance goes all the way up the lexical hierarchy to the global function object, as seen below. In this case the family tree is quite short. Remember this structure is formed at function creation time, not invocation time. The free variables (not their values) are statically baked in.

////////////////
// pseudocode //
////////////////
globalyScopedFunction = closure {
   FormalParameterList: { // Optional
      z: 30 // Values updated at invocation time.
   },
   FunctionBody: {
      var noOneLikesMeInner = 40;

      function privilegedFunction() {
         return x + z;
      }

      return privilegedFunction;
   },
   Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
      x: 10 // Free variable saved because we know it's going to be used in privilegedFunction.
   },
   privilegedFunction: closure {
      FormalParameterList: {}, // Optional
      FunctionBody: {
         return x + z;
      },
      Environment: { // ES5 10.5 VariableEnvironment's Environment record. This is actually the internal [[Scope]] property (set to the outer lexical environment).
         x: 10, // Free variable inherited from the outer Environment.
         z: 30 // Formal parameter saved from outer lexical environment.
      }
   }
}

Scope

I discuss closure here very briefly and how it can be used to create block scoped variables prior to block scoping with the let keyword in ES6 (supposed to be officially approved by December 2014); the closure based approach and the let equivalent are sketched below. I discuss scoping here in a little more depth.
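
For completeness, the closure based approach next to the ES6 let equivalent (a sketch):

// Pre-ES6: an immediately invoked function expression gives us a throwaway scope.
(function () {
   var blockScopedish = 'only visible in here';
   console.log(blockScopedish); // 'only visible in here'
}());
// console.log(blockScopedish); // ReferenceError: blockScopedish is not defined

// ES6: let gives us real block scoping.
{
   let blockScoped = 'only visible in here';
   console.log(blockScoped); // 'only visible in here'
}
// console.log(blockScoped); // ReferenceError: blockScoped is not defined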

Closure misunderstandings

Closures are created when a function is returned

"A closure is formed when one of those inner functions is made accessible outside of the function in which it was contained", found here, is simply incorrect. There are also a lot of other misconceptions found at that link. I'd advise reading it with a bag of salt.

Now we’ve already addressed this one above, but here is an example that confirms that the closure is in fact created at function creation time, not when the inner function is returned. Yes, it does what it looks like it does. Fiddle with it?

(function () {

   var lexicallyScopedFunction = function () {
      console.log('We\'re in the lexicallyScopedFunction');
   };

   (function innerClosure() {
      lexicallyScopedFunction();
   }());

}());

On line 8, we get to see the closure that was created from the execution of line 11.

[Screenshot: Chrome Dev Tools showing the lexicallyScopedFunction closure]

Closures can create memory leaks

Yes they can, but not if you let the closure go out of scope. Discussed above.

Values of free variables are baked into the Closure

Also untrue. Now I’ve put in-line comments to explain what’s happening here. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      loopCountingFunctions[i] = (function printLoopCount() {
         // What you see here is that each time this code is run, it prints the last value of the loop counter i.
         // Clearly showing that for each new printLoopCount function created and saved to the loopCountingFunctions array,
         // the variable i is saved to the Environment record, not the value of the variable i.
         console.log(i);
      });
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 3
runLoopPrinter[1](); // 3
runLoopPrinter[2](); // 3

An aside… getLoopPrinter is a global function. Once execution is on line 3 you get to see that global functions also have closure, supporting my comments above.

[Screenshot: Chrome Dev Tools showing that global functions have closure too]

Now in the above example, this is probably not what you want to happen, so how do we give each printLoopCount function its own value? By creating a parameter for each iteration of the loop, each with the new value. Fiddle with the below example?

var numberOfFunctionsRequired = 3;
var getLoopPrinter = function () {
   var loopCountingFunctions = new Array(numberOfFunctionsRequired);
   for (var i = 0; i < numberOfFunctionsRequired; i++) {
      (function (i) {
         // Now what happens here is each time the above loop runs this code,
         // inside this scope (the scope of this comment) i is a new formal parameter which of course
         // gets statically saved to each printLoopCount function's Environment record (or more simply, each printLoopCount closure).
         loopCountingFunctions[i] = (function printLoopCount() {
            console.log(i);
         });
      })(i)
   }
   return loopCountingFunctions;
};

var runLoopPrinter = getLoopPrinter();
runLoopPrinter[0](); // 0
runLoopPrinter[1](); // 1
runLoopPrinter[2](); // 2

As always, let me know your thoughts on this post, any thing you think I may have the wrong handle on, or anything that otherwise stood out.

References and interesting reads

Culture in the workplace

April 26, 2014

What is Climate?

The ups and downs, the hot and cold.
It’s easier to change than the culture.
The mood of an organisation can be seasonal which fluctuates more than a culture.
Climate refers to perceptions of organisational practices reported by the people who work there (Rousseau 1988). It describes the work setting as experienced by those directly involved with it.

  • Communication: How open are people?
  • Dealing with conflict: Is it constructive or dysfunctional?
  • Leadership: dictatorship or servanthood?

Why does it matter?

A positive climate increases motivation, innovation and productivity, and encourages extra effort (potentially by 30%), whereas a negative climate inhibits it (HayGroup).

What is Culture?

In order to change a culture you first need to understand the environment in which it exists.

Often we think of the different cultural groups we participate in as the language we use, the architecture we create, visual arts, literature, music.
These are just manifestations of what culture really is.

Culture does not exist with only one person, individuals exist within a culture.

Culture rules almost all areas of our lives.
The culture is the values behind the behaviours/manifestations of individuals within a culture.
These values are learned.

Why does it matter?

My primary focus in this post is one of providing maximum benefit to our customers.

Getting the best out of our people and putting the best back into our people is secondary. I’ll explain in a section below why our customers should take primary focus and that if they do, most other aspects will fall into place.

Focusing on the Negative biases

Known as deficit-based management, this happens when businesses focus on tackling their biggest problems, i.e. focusing on the negative and how they can remove the problem or reduce its effects. Though this technique can be successful in dealing with impediments and removing or reducing areas of poor performance, it does have side effects, causing people to feel overworked and stressed. It produces a generally negative attitude and working environment amongst workers. It also misses some of the largest opportunities to increase the strengths of the business: because it has us focusing on how we can remove the problems, we miss the opportunities to increase (build on) our strengths.

Focussing on the Positive

What if we focused on our top three customers and which of our strengths have helped to make them successful? Then focus on these strengths, how we can maximise them and broaden their reach to benefit our other customers. This can help to realign where the organisation is going and bring clarity to what our goals actually are. In a section below I discuss why we shouldn't focus on the success of our workers but rather on the customer.

Organisational Culture Types

Below are the four commonly accepted organisational culture types. The two dimensional view:

Clan

Focusing on Collaboration, how the members can work together in a family-like manner. Focusing on mentoring, nurturing and working together to achieve the result.

Adhocracy

Dynamic and entrepreneurial, focus on taking risks to achieve optimal result. Innovative. Doing things first… driving your designs out with tests. Reactive, ability to move quickly with changing goals. Often appearing as unmanageable chaos. Empirical. Companies like Google embrace this type of culture in which they utilise the skills of entrepreneurial software engineers, cutting edge processes and technologies (Bruce M. Tharp: Haworth).

Hierarchy

Structured and controlled, focusing on efficiency, stability and doing things “the right way”.

Market

Results oriented. Focused on competition, achievement, getting the job done.

The Third Dimension

The third dimension comprises another three organisational culture types. A culture can be created in which it is giving, taking or matching. Attributed to the organisation and/or people within.

Taking

A taking organisation is one where they try to get the best value out of their workers. Workers will often feel used and burnt out. Workers know that they have to work extra hard to prove that they are worth it. Often the workers come to the organisation as takers as well and this helps to solidify the taking culture even more. I’m here to get what I can and then I’ll leave once I have it.

Often have a high staff turnover.

Primary focus: What am I getting, what will my reward look like? It’s all about me.

Matching

Matchers give as much as they take. They stick to the rules. This is one of the attributes of the Hierarchy culture type. They don’t do any extra work unless they’re paid for it. Don’t show much initiative. Parties take account of what they are owed. Workers often stay for a long time, don’t burn out. Don’t innovate or add value to the relationships within the culture.

Primary focus: What am I getting, what will my reward look like? I’m happy to give so long as I get in return.

Giving

A giving culture is one based around serving others. The focus is on how I can make our clients successful.

In a giving culture, a business measures its success by the satisfaction of its clients, rather than by the quantity of effort its employees are giving.
Focus clearly on the value of pleasing the client rather than on measuring the value of your employees' effort.
How can we create more value for our clients?

The motivation is targeted at the customer by all parties of the organisation. The consequence (not the focus) is the law of what goes around comes around. You receive what you give.

The organisations that do very well, and those at the opposite end of the spectrum that do very poorly, often fall into the same category: givers.

Successful givers work out how the giving will feed back so that they are able to give more, whereas at the other end of the scale the unsuccessful organisations that give just keep giving without working out how they can sustain it.

How to change a culture of giving to one of taking or matching

Start rewarding your workers. Provide bonuses and commissions.
If you're a giving culture, your focus is on benefiting your customers.
If you start rewarding your workers, their focus shifts to whether they have received the reward rather than to the customers.

Often organisations setup reward systems for their employees. One in which the employees are recognised for doing good things. This moves the focus of the organisation from providing benefit to the customers to providing benefit to the employees.

How to change a culture of taking or matching to a giving culture

Stay focused on the value you are providing to your customers.
Focus on the organisation's vision of how you're making your customers' lives better.
The mission statement needs to be centred around your customers, not your employees or the organisation. Employ people who share the vision of serving the organisation's customers rather than the organisation itself. Don't reward your workers, but talk about how your workers affected your customers in a positive way.
Don’t tell your customers what they want, you can tell them what they need if they don’t know, because you are the specialist.
Remember you are in business to serve your customers.
The measure of your organisation's success should be your clients' feedback. Ask your customers what they want. Gather their feedback and feed it into your organisation.
Share the success of your customers rather than your employees. Fix your vision externally rather than looking inward.

Primary focus: What can I give, how can I give.

Effecting Change

Org charts don't show how influence actually takes place in a business. In reality businesses don't function through the organisational hierarchy but through their hidden social networks.
People do not resist change or innovation, but they resist the insecurity created by change beyond their influence.
Have you heard the argument that “the quickest way to introduce a new approach is to mandate its use”?
A level of immediate compliance may be achieved, but the commitment won’t necessarily be (Fearless Change 2010).
If you want to bring change, the most effective way is from the bottom up. In saying that, bottom-up takes longer and is harder. Like anything. No pain, no gain. Or as my wife puts it… it’s the difference between instant coffee and espresso.
Top-down change is imposed on people and tries to make change occur quickly and deals with the problems (rejection, rebellion) only if necessary.
Bottom-up change is triggered at a personal level and is focused on first obtaining trust, loyalty, respect (through serving; servant leadership) and the right to speak (have you served your time, done the hard yards?).

Because the personal relationship and involvement is not usually present with top-down, people will appear to be doing what you mandated, but secretly, still doing things the way they always have done.

The most effective way to bring change is on a local and personal level once you have built good relationships of trust. Anyone can effect change. The most effective change agents are level 5 leaders. These can be found anywhere in an organisation. Not just at the top. Level 5 leaders are:

  1. They are very confident in themselves. They actively seek out successors and enable them to take over.
  2. They are humble, modest and self sacrificing.
  3. They have “unwavering resolve.”
  4. They are work horses rather than show ponies.
  5. They give credit to others for their success and take full responsibility for poor results. They attribute much of their success to ‘good luck’ rather than personal greatness.
  6. They often don’t step forward when a leader is asked for.

Often I’ve thought that if I have an idea I’m sure is better than the existing way of doing things and I can explain logically why it’s better, then people will buy it. All too often this just isn’t the case. People base their decisions on emotions and then justify them with facts.

What I've come to realise is that it doesn't matter how much power or authority you think you have. There is no reason, if you're able to build a relationship of trust with your peers or even your bosses, that you can not lead them to accept your ideas. The speed at which this may happen is governed by acts, influences and facts such as:

  • Your level of drive tempered with patience
  • The quality of your relationships and the level of trust others have in you
  • To what level do you hold captive their emotions?
  • A genuine appreciation and respect of your people and a belief in them
  • Understanding that people and their acceptance levels are different and how they differ
  • Gentleness
  • Knowing what it means to be a servant leader and being one
  • Mastery of Communication
  • Ability to work well with others
  • A need or problem to be solved
  • Realisation that you shouldn’t attempt to solve everything at once
  • Have you earnt the right to speak (done the hard yards)?
  • The level of support and desire to embrace change that the culture you work within provides
  • The people you want to accept your ideas

This post was leveraged in my talk at AgileNZ 2014. Slide deck here.

