Up and Running with Kali Linux and Friends

March 29, 2014

When it comes to measuring the security posture of an application or network, the best defence against an attacker is offence. What does that mean? It means your best defence is to have someone with your best interests at heart (generally someone you employ) assess the vulnerabilities of your assets and attempt to exploit them.

In the words of Offensive Security (the creators of Kali Linux), Kali Linux is an advanced Penetration Testing and Security Auditing Linux distribution. For those familiar with BackTrack, Kali is essentially a new creation based on Debian rather than Ubuntu, with significant improvements over BackTrack.

When it comes to actually getting Kali on some hardware, there is a multitude of options available.

All externally listening services are disabled by default, but are very easy to turn on if/when required. The idea is to reduce the chance of Kali’s presence being detected.

I’ve found the Kali Linux documentation to be of a high standard and plentiful.

In this article I’ll go over getting Kali Linux installed and set up. I’ll go over a few of the packages that come out of the box in a low level of detail (due to the sheer number of them). On top of that I’ll also go over a few programmes I like to install separately. In a subsequent article I’d like to continue with additional programmes that come with Kali Linux, as there are just too many to cover in one go.

System Requirements

  1. Minimum of 8 GB disk space is required for the Kali install
  2. Minimum RAM 512 MB
  3. CD/DVD Drive or USB boot support

Supported Hardware

Officially supported architectures

i386, amd64, ARM (armel and armhf)

Unofficial (but maintained) images

You can download Kali Linux images for the following; these are maintained on a best-effort basis by Offensive Security.

  • VMware (pre-made vm with VMware tools installed)

ARM images

  • rk3306 mk/ss808
    CPU: dual-core 1.6 GHz A9
    RAM: 1 GB
  • Raspberry Pi
  • ODROID U2
    CPU: quad-core 1.7 GHz
    RAM: 2GB
    Ethernet: 10/100Mbps
  • ODROID X2
    CPU: quad-core Cortex-A9 MPCore
    RAM: 2GB
    USB 2: 6 ports
    Ethernet: 10/100Mbps
  • MK802/MK802 II
  • Samsung Chromebook
  • Galaxy Note 10.1
  • CuBox
  • Efika MX
  • BeagleBone Black

Create a Customised Kali Image

Kali also provides a simple way to create your own ISO image from the latest source. You can include the packages you want and exclude the ones you don’t. You can customise the kernel. The options are virtually limitless.
The default desktop environment is Gnome, but Kali also provides an easy way to configure which desktop environment you use before building your custom ISO image.
The alternative options provided are: KDE, LXDE, XFCE, I3WM and MATE.
Kali has really embraced the Debian ethos of being able to run on pretty much any hardware, with extreme flexibility. This is great to see.

Installation

You should find most if not all of what you need here. Just follow the links specific to your requirements.
As with BackTrack, the default user is “root” without the quotes. If you’re installing, make sure you use a decent password: not a dictionary word or similar. It’s generally a good idea to use a mix of upper-case and lower-case characters, numbers and special characters, and of a decent length.

I’m not going to repeat what’s already documented on the Kali site, as I think they’ve done a pretty good job of it already, but I will go over some things that I think may not be 100% clear at first attempt. Also just to be clear, I’ve done this on a Linux box.

Now once you have downloaded the image that suits your target platform, you’re going to want to check its validity by verifying the SHA1 checksums. This is where the instructions can be a little confusing. You’ll need to make sure that the SHA1SUMS file containing the specific checksum you’re going to use to verify your downloaded image is in fact the authentic SHA1SUMS file. The instructions say “When you download an image, be sure to download the SHA1SUMS and SHA1SUMS.gpg files that are next to the downloaded image (i.e. in the same directory on the server).”. You’ve got to read between the lines a bit here. A little further down the page is the key to where these files are: it’s buried in a wget command, plus you have to add another directory to find them. The location was here. Now that you’ve got these two files downloaded in the same directory, verify the SHA1SUMS.gpg signature as follows:

$ gpg --verify SHA1SUMS.gpg SHA1SUMS
gpg: Signature made Thu 25 Jul 2013 08:05:16 NZST using RSA key ID 7D8D0BF6
gpg: Good signature from "Kali Linux Repository <devel@kali.org>"

You’ll also get a warning about the key not being certified with a trusted signature.

Now verify the checksum of the image you downloaded against the checksum in the (authentic) SHA1SUMS file.
Compare the output of the following two commands. They should be the same.

# Calculate the checksum of your downloaded image file.
$ sha1sum [name of your downloaded image file]
# Print the checksum from the SHA1SUMS file for your specific downloaded image file name.
$ grep [name of your downloaded image file] SHA1SUMS
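
Putting those pieces together, the whole download-and-verify sequence looks something like the following sketch. The URLs and the image file name are placeholders only (use whichever directory on the Kali download server actually holds the SHA1SUMS files next to your image):

# Placeholder URLs and image name; substitute the real locations.
wget http://cdimage.kali.org/current/SHA1SUMS
wget http://cdimage.kali.org/current/SHA1SUMS.gpg
# Check the SHA1SUMS file is authentic, then check the image against it in one step.
gpg --verify SHA1SUMS.gpg SHA1SUMS
grep kali-linux-1.0.6-amd64.iso SHA1SUMS | sha1sum -c -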

Kali also has a live USB Install including persistence to your USB drive.

Community

IRC: #kali-linux on FreeNode. Stick to the rules.

What’s Included

More than 300 security programmes come packaged with the operating system.
Before installation you can view the tools included in the Kali repository,
or, once installed, list them by issuing the following command:

# prints complete list of installed packages.
dpkg --get-selections | less

To find out a little more about the application:

dpkg-query -l '*[some text you think may exist in the package name]*'

Or if you know the package name you’re after:

dpkg -l [package name]

Want more info still?

man [package name]

Some of the notable applications installed by default

Metasploit

A framework that provides the infrastructure to create, re-use and automate a wide variety of exploitation tasks.

If you require database support for Metasploit, start the postgresql service.

# I like to see the ports that get opened, so I run ss -ant before and after starting the services.
ss -ant
service postgresql start
ss -ant

ss, or “socket statistics”, is the replacement programme for the old netstat command. ss gets its information from kernel space via Netlink.
Start the Metasploit service:

ss -ant
service metasploit start
ss -ant

When you start the metasploit service, it will create a database and a user, both named msf3, provided you have your database service started. Now you can run msfconsole.

Start msfconsole:

msfconsole

The following is an image of terminator, where I use the top pane for stopping/starting services, the middle pane for checking which ports are opened/closed, and the bottom pane for running msfconsole. terminator is not installed by default, but installing it is as simple as apt-get install terminator

metasploit

You can find full details of setting up Metasploit’s database and starting/stopping the services here.
You can also find the Metasploit Framework’s database commands simply by typing help database at the msf prompt.

# Print the switches that you can run msfconsole with.
msfconsole -h

Once you’re in msf, type help at the prompt to get yourself started.
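
For example, a first session might look something like the following (the module name is purely illustrative; use whatever your search turns up):

msfconsole
msf > db_status                                # confirm the database connection is up
msf > search samba                             # find modules matching a keyword
msf > use exploit/multi/samba/usermap_script   # illustrative module choice
msf > show options                             # see what the module needs configured
msf > back                                     # leave the module without running it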

There is also a really easy-to-navigate, all-encompassing set of documentation provided for msfconsole here.
You can also set up PostgreSQL and Metasploit to launch on start-up like this:

update-rc.d postgresql enable
update-rc.d metasploit enable

Offensive Security also has a Metasploit online course here.

Armitage

Just as it was included in BackTrack (which Armitage no longer supports), you’ll find Armitage installed out of the box in version 1.0.4 of Kali Linux. Armitage is a GUI that assists with Metasploit visualisation. You can find the official documentation here. Offensive Security has also done a good job of providing their own documentation for Armitage over here. To get started with Armitage, just make sure you’ve got the postgresql service running; Armitage will start the metasploit service for you if it’s not already running. Armitage allows your red team to collaborate by using a single instance of Metasploit, and it can connect to an existing instance of Metasploit on another host. There is also a commercial offering called Cobalt Strike, developed by Raphael Mudge’s company Strategic Cyber LLC, which also created Armitage. Cobalt Strike currently costs $2500 per user per year, although there is a 21 day trial. Cobalt Strike offers a bunch of great features. Check them out here.

NMap

Its target use is network discovery and auditing. It provides host information for anything it can access from a network, and now also has a scripting engine that can execute arbitrary custom tasks.
I’m guessing we’ve probably all used NMap? ZenMap, which Kali Linux also provides out of the box, is a GUI for NMap. This was also included in BackTrack.

Intercepting Web Proxies

Burp Suite

I use Burp quite regularly and have a few blog posts where I’ve detailed some of its use. In fact I’ve used it to reverse engineer the comms between VMware vSphere and ESXi to create a UPS solution that deals with not only the virtual hosts but also the clients.

WebScarab

I haven’t really found out what WebScarab’s sweet spot is, if it has one. I’d love to know what it does better than Burp, ZAP and w3af combined. There is also a next generation version which, according to the Google Code repository, hasn’t had any work done on it since March 2011, whereas the classic version is still receiving fixes. The documentation has always seemed fairly minimalistic too.

In terms of web proxy/interceptors I’ve also used Fiddler, which relies on the .NET framework; as Mono is not installed out of the box on Kali, neither is Fiddler.

OWASP Zed Attack Proxy (ZAP)

ZAP is an OWASP flagship project, so it’s free and open source, and cross platform. It was forked from the Paros Proxy project, which is no longer supported. It includes automated, passive, brute force and port scanners, traditional and AJAX spiders, and it can even find unlinked files. It provides fuzzing, can be run without the UI in headless mode, and can be accessed via a REST API. It supports anti-CSRF tokens. The Script Console, one of the add-ons, supports any language that JSR (Java Specification Requests) 223 supports: languages such as JavaScript, Groovy, Python, Ruby and many more. There is plenty of info on the add-ons here. OWASP also provide directions on how to write your own extensions, along with some sample templates. Following is the list of current extensions, which can also be managed from within Zap: “Manage Add-ons” menu → Marketplace tab, select and click “Install Selected”.

OWASP Zap

The idea is to first set Zap up as a proxy for your browser and fetch some web pages to build a history of URLs. You then right-click the item of interest, click Attack → [one of the spider options], then click the play button and watch the progress bar as Zap crawls all the pages you have access to according to your permissions. Then, under the Analyse menu → Scan Policy…, set up your scan policy so you’re only scanning what you want to scan, and hit Scan to assess your target application. Out of the box you’ve got many scan options. Zap does a lot for you. I’m really loving this tool, OWASP!

As usual with OWASP, Zap has a wealth of documentation. If Zap doesn’t provide enough out of the box, extend it. OWASP also provide an API for Zap.
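
Zap can also be started headless and driven over its REST API. The -daemon switch is standard; the JSON endpoint paths below are from memory of the 2.x API, so treat them as assumptions and confirm them against the API documentation for your version:

# Start Zap without the UI, listening/proxying on port 8080.
zap.sh -daemon -port 8080
# Kick off a spider of the target via the JSON API (assumed endpoint path).
curl "http://127.0.0.1:8080/JSON/spider/action/scan/?url=http://target.example/"
# Pull back any alerts raised so far (assumed endpoint path).
curl "http://127.0.0.1:8080/JSON/core/view/alerts/"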

FoxyProxy

Although it has nothing to do with Kali Linux and could possibly sit in the IceWeasel add-ons section below, I’ve added it here instead as it really reduces friction with web proxy interception. FoxyProxy is a very handy add-on for both Firefox and Chromium, although it seems to have more options for Firefox, or at least they are more easily accessible. It allows you to set up a list of proxies and then switch between them as you need. When I run Chromium as a non-root user I can’t change the proxy settings once the browser is running; I have to set the proxy to my intermediary before run time like this:

chromium-browser --temp-profile --proxy-server=localhost:3001

Firefox is a little easier, but neither browser allows you to build up lists of proxies and then switch them in mid flight. FoxyProxy provides a menu button, so with two clicks you can disable the add-on completely to revert to your previous settings, or select any of your predefined proxies. This is a real time saver.

Vulnerability Scanners

Open Vulnerability Assessment System (OpenVAS)

Forked from the last free version (closed in 2005) of Nessus. OpenVAS plugins are written in the same language that Nessus uses. OpenVAS looks for known misconfigurations and vulnerabilities common in out of date software. In fact it covers the following OWASP Top 10 items:

  • No.5 Security Misconfiguration
  • No.7 Missing Function Level Access Control (formerly known as “failure to restrict URL access”)
  • No.9 Using Components with Known Vulnerabilities.

OpenVAS also has some SQLi and other probes to test application input, but its primary purpose is to scan networks of machines for out-of-date software and bad configurations.
Tests continue to be added. It currently stands at 32413 Network Vulnerability Tests (NVTs), details here.

OpenVAS

The Greenbone Security Desktop (gsd) package is a GUI that uses the Greenbone Security Manager, OpenVAS Manager, or any other service that offers the OpenVAS Management Protocol (omp). It is currently at version 1.2.2 and licensed under the GPLv2. The Greenbone Security Assistant (gsad) is currently at version 4.0.0. The German government also sponsors OpenVAS.
From the menu: Kali Linux → Vulnerability Analysis → OpenVAS, we have a couple of short-cuts visible: openvas-gsd, which is actually just the gsd package, and openvas-setup, which is the set-up script.

Before you run openvas-gsd, you can either:

  1. Run openvas-setup, which does all the set-up (I think this is already done on Kali). At the end of this, you will be prompted to add a password for a user with the Admin role. The password you add here is for a new user called “admin” (of course it doesn’t say that, so it can be a little confusing as to what the password is for).
  2. Or you can just run the following command, which is much quicker because you don’t run the set-up procedure:
openvasad -c 'add_user' -n [a new administrative username of your choosing] -r Admin

You’ll be prompted to add a new password. Make sure you remember it.
Check out the man page for further options. For example, the -c switch is short for --command, and it lists a selection of commands you can use.
I think -n is for --name, although it’s not listed in the man page. The -r switch is --role: either User or Admin.

The user you’ve just added is used to connect the gsd to the:

  1. openvasmd (OpenVAS Manager daemon) which listens on port 9390
  2. openvassd (OpenVAS Scanner daemon) which listens on port 9391
  3. gsad (Greenbone Security Assistant daemon) which listens on port 9392. This is a web app, which also listens on port 443
  4. openvasad (OpenVAS Administrator daemon) which listens on 9393

The core functionality is provided by the scanner and the manager. The manager handles and organises scan results. The gsad or assistant connects to the manager and administrator to provide a fully featured user interface. There is also a CLI (omp) but I haven’t been able to get this going on Kali Linux yet. You’ll also find that the previous link has links to all the man pages for OpenVAS. You can read more about the architecture and how the different components fit together.

I’ve also found that sometimes the daemons don’t automatically start when gsd starts. So you have to start them manually.

openvasmd && openvassd && gsad && openvasad

You can also use the web app https://127.0.0.1/omp

Then try logging in to the openvasmd. When you’re finished with gsd you can kill the running daemons if you like. I like to keep an eye on the listening ports when I’m done, to keep things as quiet as possible.

Check the ports.

ss -anp

Optionally, to see the processes running (not strictly necessary):

ps -e
kill -9 <PID of openvasad> <PID of gsad> <PID of openvassd> <PID of openvasmd>

There are also plenty of options when it comes to the report. It can be output in HTML, PDF, XML, emailed and quite a few other formats. The reports are colour coded and you can choose what gets put in them. The vulnerabilities are classified by risk: High, Medium and Low. OpenVAS can take quite a while to scan, as it runs so many tests.

This is how to get started with gsd.

Web Vulnerability Scanners

These are the generally accepted criteria for a tool to be considered a Web Application Security Scanner.

SkipFish

A high performance active reconnaissance tool written in C. From the documentation: “Multiplexing single-thread, fully asynchronous network I/O and data processing model that eliminates memory management, scheduling, and IPC inefficiencies present in some multi-threaded clients.”. OK, so it’s fast.
SkipFish prepares an interactive sitemap by carrying out a recursive crawl and probes based on existing dictionaries or ones you build up yourself. Further details are in the documentation linked below.
It doesn’t conform to most of the criteria outlined in the Web Application Security Scanner criteria above.
SkipFish v2.05 is the current version packaged with Kali Linux.
SkipFish v2.10b (released Dec 2012) is the latest version.
Free and you can view the source code. Apache license 2.0
Performs a similar role to w3af.
Project details can be found here.
You can find the tests here.
How do you use it though? This is a good place to start. Instead of reading through the non-existent doc/dictionaries.txt, I think you can do as well by reading through /usr/share/skipfish/dictionaries/README-FIRST.
The other two documentation sources are the man page and skipfish with the -h option.
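
A minimal run looks something like the following sketch (the output directory, dictionary and target are illustrative; read README-FIRST to choose the dictionary that suits your situation):

# -o: report output directory (should be a new directory)
# -W: wordlist to use and learn new keywords into
skipfish -o ~/skipfish-output -W /usr/share/skipfish/dictionaries/minimal.wl http://target.example/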

Web Application Attack and Audit Framework (w3af)

Andres Riancho has created a masterpiece. The main behaviour of this application is to assess and identify vulnerabilities in a web application by sending customised HTTP requests. Results can be output in quite a few formats, including email. It can also proxy, but Burp Suite is more focused on that role and does it well.
It can be run with a GUI (w3af_gui) or from the terminal (w3af_console). Written in Python, it runs on Linux, BSD or Mac. Older versions used to work on Windows, but it’s not currently being tested on Windows. It’s open source on GitHub and released under the GPLv2 license.
You can write your own plug-ins, but check first to make sure they don’t already exist. The plugins are listed within the application and on the w3af.org web site, along with links to their source code, unit tests and descriptions. If it doesn’t appear that the plug-in you want exists, contact Andres Riancho to make sure, write it and submit a pull request. It also looks like Andres Riancho is driving the development TDD style, which means he’s obviously serious about creating quality software. Well done Andres!
w3af provides the ability to inject your payloads into almost every part of the HTTP request by way of its fuzzing engine, including: query string, POST data, headers, cookie values, content of form files, URL file-names and paths.
There’s a good set of documentation found here and you can watch the training videos. I’m really looking forward to using this in anger.
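
As a rough sketch of how a console run hangs together (prompts trimmed, and the plugin names are only illustrative, so check them against the plugin list in your version):

w3af_console
# At the console prompt:
plugins
audit sqli xss            # enable a couple of audit plugins
output console            # send findings to the console
back
target
set target http://target.example/
back
start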

w3af

Nikto

Nikto is a web server scanner that’s not overly stealthy. It’s built on Rain Forest Puppy’s LibWhisker2, which has a BSD license.
Nikto is free and open source with a GPLv3 license, and can be run on any platform that runs a Perl interpreter. Its source can be found here. The first release of Nikto was in December of 2001 and it is still under active development. Pull requests are encouraged.
It supports SSL and HTTP proxies (so you can see what Nikto is actually sending), host authentication and attack encoding, and it can update its local databases and plugins via the -update argument. It checks for server configuration items like multiple index files and HTTP server options, and attempts to identify installed web servers and software.
It looks like the LibWhisker web site no longer exists. The last release of LibWhisker was at the beginning of 2010.
Nikto v2.1.4 (released Feb 20 2011) is the current version packaged with Kali Linux. It tests for multiple items, including > 6400 potentially dangerous files/CGIs, outdated versions of > 1200 servers, and insecurities of specific versions of > 270 servers.
Nikto v2.1.5 (released Sep 16 2012) is the latest version. It tests for multiple items, including > 6500 potentially dangerous files/CGIs, outdated versions of > 1250 servers, and insecurities of specific versions of > 270 servers.
Just spoke with the Kali developers about the old version. They are now building a package of 2.1.5 as I write this. So should be an apt-get update && apt-get upgrade away by the time you read this all going well. Actually I can see it in the repo now. Man those guys are responsive!
Most of the info you will need can be found here.
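
A typical invocation, including the local database update mentioned above, might look something like this (the host and report name are placeholders):

# Refresh the local databases and plugins first.
nikto -update
# Scan a host on ports 80 and 443 and write an HTML report.
nikto -h target.example -p 80,443 -output nikto-report.html -Format htm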

SQLNinja

sqlninja: Targets Microsoft SQL Servers. Uses SQL injection vulnerabilities on a web app. Focuses on popping remote shells on the target database server and uses them to gain a foothold over the target network. You can set-up graphical access via a VNC server injection. Can upload executables by using HTTP requests via vbscript or debug.exe. Supports direct and reverse bindshell. Quite a few other methods of obtaining access. Documentation here.

Text Editors

  1. Vim. Shouldn’t need much explanation.
  2. Leafpad. This is a very basic graphical text editor. A bit like Windows Notepad.
  3. Gvim. This is the Graphical version of Vim. I’ve mostly used sublime text 2 & 3, gedit on Linux, but Gvim is really quite powerful too.

Note Keeping

  1. KeepNote. Supported on Linux, Windows and MacOS X. Easy to transport notes by zipping or copying a folder. Notes stored in HTML and XML.
  2. Zim Desktop Wiki.

Other Notable Features

  • Offensive Security’s Kali Linux is free and always will be. It’s also completely open (as it’s based on Debian) to modification of its OS or programmes.
  • FHS compliant. That means the file system complies with the Linux Filesystem Hierarchy Standard.
  • Wireless device support is vast. Including USB devices.
  • Forensics Mode. As with BackTrack 5, the Kali ISO also has an option to boot into the forensic mode. No drives are written to (including swap). No drives will be auto mounted upon insertion.

Customising installed Kali

Wireless Card

I had a little trouble with my laptop wireless card not being activated. It turned out to be me just not realising that an external wi-fi switch had to be turned on; I had wireless enabled in the BIOS. The following were the steps I took to resolve it:
I read the Kali Linux documentation on Troubleshooting Wireless Drivers and found the card listed with lspci. I opened /var/log/dmesg with vi and searched for the name of the card:

# From command mode, make the search case insensitive:
:set ic
# From command mode, search:
/[name of my wireless card]

There were no errors, so I ran iwconfig (similar to ifconfig but dedicated to wireless interfaces). I noticed that the card was definitely present but the Tx-Power was off. I then thought I’d give rfkill a spin, and its output made me realise I must have missed a hardware switch somewhere.
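
If you want to reproduce that check, something like the following shows whether a soft or hard block is in place (a hard block means a physical switch or BIOS setting, which is what caught me out):

# List all radio devices and their soft/hard block status.
rfkill list all
# Clear any soft block on wi-fi; a hard block can only be cleared physically.
rfkill unblock wifi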

rfkill

I found the hard switch, turned it on, and we now have wireless.

Adding Shortcuts to your Panel

[Alt]+[right click]->[Add to Panel...]

Caching Debian Packages

If you want to:

  1. save on bandwidth
  2. have a large number of your packages delivered at your network speed rather than your internet speed
  3. have several debian based machines on your network

I’d recommend using apt-cacher-ng. If you haven’t already, you’ll have to set this up on a server, then on each of your Debian-based machines add the file /etc/apt/apt.conf with the following contents, and set its permissions to be the same as your sources.list:

Acquire::http::Proxy "http://[ip address of your apt-cacher server]:3142";
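
For example, if your sources.list has the usual root-owned 644 permissions, something like this brings apt.conf into line (check with ls -l /etc/apt/sources.list first if unsure):

chown root:root /etc/apt/apt.conf
chmod 644 /etc/apt/apt.conf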

IceWeasel add-ons

  • Firebug
  • NoScript
  • Web Developer
  • FoxyProxy (more details mentioned above)
  • HackBar. Somewhat useful for (en/de)coding (Base64, Hex, MD5, SHA-(1/256), etc), manipulating and splitting URLs

SQL Inject Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. SQL Inject Me is a component of the Exploit-Me suite. It allows you to test all or any number of input fields on all or any of a page’s forms. You just fill in the fields with valid data, then test with all the tool’s attacks or with the top <number> that you’ve defined in the options menu. It then looks for database errors which are rendered into the returned HTML as a result of sending escape strings, so it doesn’t cater for blind injection. You can also add and remove the escape strings and resulting error strings that SQL Inject Me should look for in the response. The order in which each escape string is tried can also be changed. All you need to know can be found here.

XSS Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. XSS Me is also a component of the Exploit-Me suite. This tool’s behaviour is very similar to SQL Inject Me (it follows the POLA), which makes using the tools very easy. Both these add-ons have next to no learning curve; the level of entry is very low, and I think they are exactly what web developers who make excuses for not testing their own security need. The other thing is that it helps developers understand how these attacks can be carried out. XSS Me currently only tests for reflected XSS and doesn’t attempt to compromise the security of the target system. Both XSS Me and SQL Inject Me are reconnaissance tools, where the information is the vulnerabilities found. XSS Me doesn’t support stored XSS, or user supplied data from sources such as cookies, links, or HTTP headers. How effective XSS Me is in finding vulnerabilities is also determined by the list of attack strings the tool has available. Out of the box, the list of XSS attack strings is derived from RSnake’s collection, which was donated to OWASP, who now maintain it as one of their cheat sheets. Multiple encodings are not yet supported, but are planned for the future. You can help to keep the collection up to date by submitting new attack strings.

Chromium

Because it’s got great developer tools that I’m used to using. In order to run it under the root account, you’ll need to add the following parameter to /etc/chromium/default, between the quotes of CHROMIUM_FLAGS="":
--user-data-dir
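
In other words, the relevant line in /etc/chromium/default ends up looking something like this:

CHROMIUM_FLAGS="--user-data-dir"
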
I like to install the following extensions: Cookies, ScriptSafe

Terminator

Because I like a more powerful console than the default. Terminator adds split screen on top of multi tabs. If you live at the command line, you owe it to yourself to get the best console you can find. So far terminator still fits this bill for me.

KeePass

The password database app. Because I like passwords to be long, complex, unique for everything and as secure as possible.

Exploits

I was going to go over a few exploits we could carry out with the Kali Linux set-up, but I ran out of time and page space. In fact there are still many tools I wanted to review, but there just isn’t enough time or room in this article. Feel free to subscribe to my blog and you’ll get an update when I make posts. I’d like to extend this by reviewing more of the tools offered in Kali Linux.

Input Sanitisation

This has been one of my pet topics for a while. Why? Because the lack of it is so often abused. In fact this is one of the primary techniques behind No.1 (Injection) and No.3 (XSS) of this year’s OWASP Top 10 List (unchanged from 2010). I’d encourage any serious web developers to look at my “Sanitising User Input From Browser” Part 1 and Part 2.
Part 1 deals with the client side (untrusted) code.
Part 2 deals with the server side (trusted) code.
I provide source code, sources and discuss the following topics:

  1. Minimising the attack surface
  2. Defining maximum field lengths (validation)
  3. Determining a white list of allowable characters (validation)
  4. Escaping untrusted data
  5. External libraries, cheat sheets, useful code and sites I used. I also discuss the less useful resources and why.
  6. The point of validating client side when the server side is going to do it again anyway
  7. Full set of server side tests to test the sanitisation is doing what is expected

Automating Specification by Example for .NET Web Applications

February 22, 2014

If you or your organisation:

  1. are/is constrained to running your .NET tests (unit, acceptance) on-site rather than in the cloud
  2. would like some guidance on how to set-up Continuous Integration

read on.

Introduction

Purpose

Remember, an acceptance test system as a tool is only as good as the specification provided by its humans. The most important ingredients therefore are the relationships between the people creating the tests and the interactions performed by those people. Or, as the Agile Manifesto states, value “Individuals and interactions over processes and tools”. In order for an acceptance test system to be successful, the relationships of the Developers creating the increment and the interactions between them and the stake holders must be in good shape first. Once this is in order, you can take the next step and find some tools that will assist in creating working software that does what the stake holders want it to do.

It’s my intention that the following details will help you to create a system that automates “Specification by Example”.

The purpose of providing an automated Specification by Example Implementation, A.K.A Automated Acceptance Test System, is clearly explained here.

Do not fall into the trap of inverting the test triangle. Instead invest where it matters.

Scope

Create a system that can be triggered from

  1. Every developer’s workstation
  2. A build on the build machine, preferably from a best-of-breed build tool. TFS is not a best-of-breed build tool, and if you want to get serious about Continuous Integration (CI), nightly builds and continuous deployment, I’d recommend not going down the path of TFS. Even Microsoft uses Git. Doesn’t that tell you something? Do you see TFS here? Last time I evaluated build tools, Jenkins (previously named Hudson) came out on top.

jenkins

The system will include

  1. An acceptance test framework that will run all the acceptance tests
  2. A Unit test framework. UI tests need to be run in parallel on a collection of VMs (see the section on supported browsers for why). There are three immediately obvious approaches we could take here.
    1. We could try and rely on a unit test framework to distribute the tests. MSTest 2012 doesn’t provide the ability to run tests in parallel, but 2010 does. In order to have 2012 run tests in parallel, you can force it to use the 2010 test settings file. Only a maximum of 5 tests can be run concurrently though. Not a great option, considering it’s not going to be supported going forward.
    2.  My ParallelBrowser. If this link is not active and you’re interested in this, contact me.
    3. PNUnit. An example of how this works is here under the “PNunit Framework for writing selenium test cases” heading. I wrote the ParallelBrowser before Selenium had good support for running the same tests on multiple supported browsers. Both my ParallelBrowser and this option are reasonable, but I’d go for the latter now; that way someone else can maintain the parallel aspect, as unless people are interested in ParallelBrowser I won’t be doing any further work on it.
  3. A Web User Interface Test Framework that will be driven by the acceptance test framework. Selenium in this case.
  4. A set of tests that run Selenium tests. These will of course need to be thread-safe.
  5. As per the Supported Browsers section, a collection of VMs with our supported browsers installed.
    1. Each with a standalone selenium server set up with a role of webdriver. Details further on.
  6. A stand-alone selenium server set up with a role of hub

High Level Flow

Many organisations bound to .NET seem to be locked into using sub-standard tooling like TFS for their build. If you are in this predicament and cannot break free, I’d suggest that once all the unit tests and integration tests have run, you have the build kick off a psake script to:

  1. Clean out the existing target web app
  2. Deploy the newly built and tested web app
  3. Drop the database
  4. Create database by using latest DDL and DML scripts pulled from source control
  5. Apply any specific configurations
  6. Stop and start the target web server
  7. Run the acceptance tests which will include any Web UI tests.

If it’s within your power to choose a real CI Tool to run in-house, there are a handful of very solid contenders. A good proportion of which are free and open source.

Audience

Whoever is setting up the system. Often a developer or two. It’s important to make sure more than one person knows how it all hangs together, otherwise you have a single point of failure.

Chosen Tools

Evaluation Criteria I used

  • Who is the creator? I favour teams rather than individuals, as individuals move on, often leaving projects stranded.
  • Does it do what you need it to do?
  • Does it suit the way you and your team want to work?
  • Does it integrate well with all of your other chosen components? This is based on communicating with those that have used the offerings more so than using Proof Of Concepts (POC).
  • Works with the versions of dependencies you currently use.
  • Cost in money. Is it free? Are there catches once you get further down the road? Usually open source projects are offered as-is, with no catches.
  • Cost in time. Is the set-up painful? Customisation feedback? Upgrade feedback?
  • How well does it appear to be supported? What do the users say?
  • Documentation. Is there any / much? What is its quality?
  • Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have valid reasons)?
  • Release schedule. How often are releases being made? When was the last release?
  • Intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choices based on the above criteria.

Acceptance Test Framework

The following offerings are all free and open source.

If you’re not using User Stories and/or Test Conditions, the context/specification offerings provide greater flexibility than the xBehave style frameworks. As most Scrum teams use User Stories for their Product Backlog items and drive their acceptance tests with test conditions, xBehave offerings are a great choice. In saying that, there is probably no reason why both couldn’t be used where it makes sense to do so. In this section I’ve provided the results of evaluating the current xSpec and xBehave offerings for .NET ordered by best first for the categories.

xBehave (test conditions)

SpecFlow

specflow

  • Sourcecode: https://github.com/techtalk/SpecFlow/
  • Age: Over 4 years
  • Actively maintained: Yes
  • Large number of active committers
  • Community: Lively
  • Visual Studio Plug-in has been downloaded 70 times as many times as NBehave
  • Documentation: Excellent
  • Integrates well with Selenium (I’ve set up a couple of systems using SpecFlow and it’s been a joy to work with). The stake holders loved the visibility it provided too. I discussed it here in a recent presentation.

NBehave

  • Not a lot of activity
  • Only two committers

StoryQ

  • Only two coordinators
  • Well established framework

xSpec (context/specification)

Machine.Specification (MSpec)
NSpec

Web User Interface Test Framework

selenium

For me, when I look at this category of tools for .NET, Selenium is always at the top and it just keeps getting better. If anyone has any questions around Selenium, feel free to contact me or leave a comment on this post. I can’t guarantee I’ll have the answer, but I’ll try. All the documentation can be found here. I would recommend installing the Selenium IDE for initially recording tests, and be sure to check out the IDE plug-ins. All the documentation you’ll need for the IDE is here. Once you get familiar with the code it generates, you won’t use it much. I would recommend using the newer WebDrivers rather than the Selenium server by itself. The user group is very active and looks like a good place to ask questions also, although I haven’t needed to, as there is a huge amount of great documentation.

The tools I would use are detailed here. Specifically we would be using

  1. Selenium 2 (aka WebDriver)
  2. The IDE for recording tests initially
  3. Selenium Server which is used by WebDriver and RC (now considered legacy) now includes built-in grid capabilities.

Supported Browsers

What I’ve done in the past is have each of our supported versions from each supported browser vendor installed on a single VM. So each VM has all the vendors’ browsers installed, but just a single version of each, obviously.

Mid Level Flow

These are the same points listed above under “High Level Flow”.

1. Build Kicks off PSake Script

psake

The choice to use PSake over the likes of NAnt, Rake and the other build scripting languages is reasonably straightforward for me. PSake (a PowerShell-based build scripting language) gives us access to the full .NET environment. NAnt, with all its angle brackets, was never a very nice scripting language to use. Rake is excellent and a possible option if you have Ruby installed; if you don’t, why install it when you have .NET? There are many resources for PowerShell on the inter-webs. The wiki for PSake is good.

In the case where you have a TFS Build running, I would suggest that once all the unit tests and integration tests have run, the build kicks off (possibly pre-build and post-build) psake scripts to perform the following operations. This is how you do this. Oh, before you try to actually run a PSake script, download and import the module, or install the NuGet package. Once you have your PSake scripts running, just start adding PowerShell scripts to do the following work. PSake is just syntactic sugar around PowerShell, so anything you can do with PS, you can do with PSake.

2. Clean out the existing target web application

Using your PSake script, use the Web Deploy cmdlets. You will find everything you need for it here. You can also install the NuGet package.

3. Deploy the newly built and unit tested web application

As above, just use the Web Deploy cmdlets.

4. Drop the database

As above, just use the Web Deploy cmdlets.

5. Create database by using latest DDL and DML scripts pulled from source control

Database update via Application

Kind of related, but not specific to CI.

Depending on your needs, there are quite a few ways you could do this.

One way of doing this is to have your application utilise a library that determines which version of the database the application needs and be able to update the database accordingly. This library would use similar or the same upgrade scripts that we would use in this test process.

Your applications should create the database (if it doesn’t exist) and update it on run. So all the DDL and DML code per database lives in a library, and each application that uses a specific database references that database’s DDL code library. Script all stored procedures, views, functions and triggers so they’re recreated as part of a deployment script.

When the application is deployed, and the database created or updated, anything that must be there for the application to run out of the box should be part of the scripts, and of course versioned. This includes the part of our data that is constant or configuration data. Tables, stored procedures, views, functions and triggers. For the variable part of your data, you will need a synthetic data generation plan for testing.

Database Process for Versioning

Also related, but not specific to CI.

DBA, Devs, Product Owner and consultants must be aware of the process.

When any schema, constant data, configuration data or test data is updated, the (version controlled) scripts must also be updated, else the updates will get overwritten.

As part of the nightly build, if you’re supporting multiple versions of your application, you could also hydrate the collection of database versions, then run the appropriate upgrade scripts against each one to verify the upgrades work. If any don’t, the build fails.

Create set of well defined processes that:

  1. In most cases, looks after itself
  2. Upgrades existing databases if they are not on the latest version, to the latest version
  3. Creates databases for those applications that don’t have a database
  4. Informs the user on deployment if the database is corrupt, or can not be upgraded
  5. Outlines who is responsible for, and who may update the DDL and DML scripts for your projects
  6. Clearly documents that any changes made to any databases by unauthorised personnel will more than likely be overwritten.

A User Story for this might look something like the following:

As the team, we need to create a set of well defined processes that clearly outline what is required in regards to setting up the development team’s database versioning, creation and upgrade systems, and the process strategy for our organisation’s databases, so that all team personnel are aware of the benefits and dangers of making changes to the databases, and understand the change process.

Possibly useful tools

1. DB Ghost
2. http://www.red-gate.com/products/sql-development/sql-source-control/index-2
3. http://www.sqlaccessories.com/SQL_Data_Examiner/

6. Apply any specific configurations

As above, just use the Web Deploy cmdlets.

7. Stop and start the target web server

As above, just use the Web Deploy cmdlets.

8. Run the acceptance tests which will include any Web UI tests

As above, just use the Web Deploy cmdlets.

  1. Start each VM that hosts a set of browsers you want to farm your tests out to. From memory, you do not need to start each browser. There are of course many ways to do this; PS provides the Start-VM and Stop-VM cmdlets, which would be my first options.
  2. Start the selenium standalone server (see the sketch just below). All details are found here. Or just work through the “Distributed Testing with Selenium Grid” chapter until you get to the “Creating and executing Selenium script in parallel with TestNG” heading, at which point switch to this documentation to replace TestNG with PNUnit.
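
For reference, the hub and the nodes are started with plain java invocations, so they’re easy to script. The jar version and host name below are placeholders:

# On the hub machine:
java -jar selenium-server-standalone-2.39.0.jar -role hub
# On each browser VM, register a node with the hub (the role the docs call "webdriver"):
java -jar selenium-server-standalone-2.39.0.jar -role webdriver -hub http://hub-host:4444/grid/register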

If I’ve failed to explain anything in enough detail for you, drop me a message below and I’ll do my best to help :-)

Essentials for Creating and Maintaining a High Performance Development Team

January 25, 2014

How and Why Many Software Development Shops Fail

What I see a lot of, is organisations hiring code monkeys rather than professionals. Either they hire:

  • the cheapest talent they can get their hands on. Now they want the best, but how much they have to pay the developer is the most important factor to them.

or

  • the person that completes feature implementations as fast as possible (sometimes known or thought of as a rock star). Often these are young developers without the depth of experience that causes the more Professional Developers to slow down a bit and think tasks through a little more.

Now, both approaches are short sighted. They hire code monkeys rather than professionals. Code monkeys write code fast and incur technical debt that is hidden at first, but over time slows the Development Team down until it can barely move.

The scenario

Code Monkey finishes his task much faster than Professional Developer.

code monkey

Code Monkey is solely focused on completing his task as fast as possible. He cuts some code and declares the task done. Professional Developer thinks the problem through, does a little research to satisfy himself that his proposed approach  is in fact the most appropriate approach for the problem. He organises a test condition workshop which solidifies requirements and drives out design defects via active stake holder participation. He drives his low level design with TDD. Makes sure he follows the coding standards, thus making future maintenance to his code easier, as it’s much easier to read. Asks for a pair to review his code or perhaps requests a fellow team member to sit with him and pair programme for a bit on some complex areas of the code base. Makes sure his code is being run in the continuous integration suite and his acceptance tests (which have been driving his feature) are passing. Checks that his work complies with the Definition of Done. You do have a Definition of Done right?

Now, what the Product Owner or software development manager often fails to understand is that it’s the slow (Professional) developer who is creating code that can be maintained and extended at a sustainable pace. Professional Developer is investing time and effort into creating a better quality of code than the developer (Code Monkey) who appears to be producing code faster. The Product Owner and/or manager don’t necessarily see this, in which case Code Monkey clearly looks to be the superior developer, right (the rock star)? What also often happens is that Code Monkey rides on Professional Developer’s quality and adds his spaghetti code on top, thus making Code Monkey look like a god.

The Product Owner sees output immediately by Code Monkey that “appears” to be working very fast. He doesn’t see the quality being created by Professional Developer that “appears” to be working slower.

Time goes by. Sprint Review rolls around. The stake holders love the new features that have been implemented and now want some additions and refinements. They ask the Product Owner to add some more User Stories to the Product Backlog. The Development Team pull these stories into a Sprint and start work. New functionality is added on top of the code that Professional Developer wrote previously. New functionality is added on top of the code that Code Monkey wrote previously.

Sprint Review rolls around again and the stake holders are happy with the new features that have been added on top of Professional Developer’s code. Of course they have no idea that the underlying code was created by Professional Developer. Now the stake holders have been using the software that had the new features added on top of Code Monkey’s spaghetti code, and they are starting to notice other areas of the application that are no longer behaving the way they are supposed to. This continues to happen and the stake holders are oblivious to the fact that it’s due to the code that Code Monkey is writing. They still think he’s a rock star because he appears to pump out code so fast.

So… while Professional Developer seems to be slowing the Team down, and Code Monkey clearly seems amazing because he delivers his features so much faster, the actual truth is exactly the opposite. Professional Developer is creating SOLID code and running at a pace that’s sustainable (a key principle of the Agile Manifesto).

The code that Professional Developer wrote is easier to modify and extend as its design is superior, due to being well thought out and driven by tests. His code satisfies the acceptance tests, so everyone knows it meets the living specification. It’s faster to add changes to his code because it’s easier to read and there are fewer surprises. If any other team member changes Professional Developer’s code in a way that makes it no longer conform to the specification, the Acceptance Tests around his code fail, thus providing instant feedback to the developer making the change.

It’s the practices of Professional Developer that:

  1. provide the entire Development Team with assurance that the software satisfies the requirements of the specification at all times.
  2. allow The Development Team to run at a sustainable pace.
  3. provide confidence in ongoing future estimations due to less surprises.
  4. produce code that everyone wants to work with.
  5. produce less error prone software that does what it says it will do on the box.

So… next time you, as a Product Owner, Manager, or person responsible for hiring, are looking for talent, be very careful what you’re measuring. Don’t favour speed over excellent attitude. I’ve set out some ideas on what to look for in a Professional Developer here.

Scrum Teams can Fail Too

Velocity of the Development Team starts high then declines. Often it’s hard for people (including the Product Owner) to pin-point why this is happening. The Scrum Team may have started out delivering at a consistently high cadence. They appeared to be really on fire.

The code base is small but growing fast. As it starts to get larger, the Development Team starts to feel the weight of a lot of code that’s been hacked together in a rush. This causes the team’s ability to release software quickly to wane. A Scrum Team can get to this point quite fast, as they are a high performance team. When you get to this point, almost every change to the code base is hard. Make one change and something else fails. Routines are hundreds of lines long, and developers have to understand hundreds of lines of code in order to make a small change. Names are not as meaningful as they should be. Routines have multiple levels of abstraction, so multiple levels of code need to be understood to make a single change. Inheritance is overused, thus creating unnecessarily tight coupling. There are many aspects of the code that have become terrible to work with.

How does this happen?

How does the Product Owner know that the quality of the code being created is not good? The Product Owner isn’t generally a developer, so doesn’t know, and even if he is, he’s not in the code day in, day out. Also, it’s not generally the most important concern of the Product Owner; getting new functionality out the door is, so this is what the Development Team are rewarded for. When they pass a Sprint, the Product Owner is happy and praises the Development Team. The Product Owner has no idea that the quality of the code is not as good as it needs to be to sustain a code base that is easy to extend.

So the Development Team does whatever it needs to right now, to make sure they deliver right now (the current Sprint). Quality becomes secondary, because no one is rewarding them for it. This is a lack of professionalism on the Development Team’s part. Bear in mind, though, that each developer is competing against every other to appear as though they have produced the most. After all, that’s what they get rewarded for. Often what this means is that they are working too fast and not thinking enough about what they are doing, and thus the quality of the code base deteriorates, like in the example above with Code Monkey.

I’ve personally seen this on close to 100% of all non Scrum projects I’ve been involved with. Scrum Teams are sometimes better off because they have other practices in place that ensure quality remains high, but these practices are not prescribed by Scrum.

So… What do we do?

We not only reward the Development Team for delivering features fast but we also reward them for the sort of practices that Professional Developer (from our example above) performs.

How do we do this?

We add the practices that Professional Developer from above performs to our Definition of Done.

The Product Owner runs through the Acceptance Criteria (which should be included on every Product Backlog item), preferably during the Sprint, indicating acceptance. He also runs through the Definition of Done, querying the Development Team that each point has in fact been done for the Backlog item in question. This of course should be done by the developers themselves first. This provides the Team with confidence that the Sprint Backlog item is actually complete. Essentially, the work is not Done until all the Acceptance Criteria points and Definition of Done points are checked off. This way the Development Team is being rewarded for delivering fast and also for delivering high quality features that do what the stake holders expect them to do. No nasty surprises.

On top of what our solo Professional Developer did above, we should:

  1. Measure test speed and reward fast running tests
  2. Measure cyclomatic Complexity
  3. Run static code analysis and use productivity enhancing tools. This is not cheating, it’s allowing the developers to work faster and create cleaner code. This can even be set-up as pre-commit hooks etc on source control.
  4. Code reviews need to be based on the coding standards and guidelines.
  5. Encourage developers to commit regularly, thus their code is being run against the entire test suite, providing confidence that their code plays nicely with everyone else’s code. Commit frequency can be measured.
  6. The Development team should shame developers when they break the CI build. Report on how long builds stay broken for and shame when the duration is longer than an agreed on time.
  7. Most of these practices can be added to the Definition of Done, this way Developers can and should be rewarded for doing these activities. Even better, you can automate most of these practices.

Evaluation of AngularJS, EmberJS, BackboneJS + MarionetteJS

December 28, 2013

This post will continue to be modified for at least a month from the publish date. I just didn’t want to wait another month before publishing, so people can start to get some use out of it early. If you have resources, comments, anything you think that could be useful to others, please add a comment and if it makes sense, I may add it to the post. This will also be used as a resource for the attendees to the CHC.JS MV* Battle Royale meet-up.

Recently I’ve undertaken the task of reviewing some JavaScript MV* frameworks to help organise/structure the client side code within an application I’m currently working on. This is about the third time I’ve done this. Each time has been for a different type of application with completely different requirements, frameworks and libraries to consider.
Unlike Angular and Ember, Backbone is a small library. Marionette adds quite a lot of extra functionality and provides some nice abstractions on top. All the frameworks/libraries mentioned are free and open source.

I found a useful tool for helping with the selection process about a year ago. It’s called TodoMVC and it contains a generous collection of applications all satisfying the requirements of a single specification (a small web app that allows the person using it to add todo notes etc.). So basically they all do the same thing, but use a different JavaScript framework or library to do it. It’s still being maintained. Addy Osmani’s blog post on the project is here.

The idea is that you can work through a decent size selection of applications that all do the same thing.
This assists the R&D developer or architect in making informed decisions on which JavaScript framework or library will suit their purposes, if any.
There are also a couple of Todo apps (vanillajs and jquery) that don’t use a framework at all.
There’s a template to use as a starting point, so you can create your own.

Just bear in mind, though, that the TodoMVC app doesn’t really showcase what Ember and Angular have to offer.

On Addy’s post there is a collection of good points on how to create your selection criteria, under the heading “Our Suggested Criteria For Selecting A Framework”.

I’ve heard a few times that “all you really need to do in order to make an informed decision on which framework or library to go with is just write a small app for each of the frameworks, do a bit of reading and maybe watch a few screen casts. Shouldn’t take more than a day”. I disagree with this. I don’t think there is any way you can learn all or most of the pros and cons of each framework in a day or even two. Depending on how much time you have, my recommended approach would be to go through the following activities in the following order (give or take), spending as little or as much time as you have, ideally in a few iterations, for each of the offerings you’re investigating.

  1. Listen to a pod-cast (say, on your way to/from a clients or even in your sleep. Good time savers.)
  2. Read some of the documentation
  3. Watch a screen-cast on each one
  4. Play with some examples
  5. Evaluate on features you (definitely or may) want versus features available. Features need to be learned. If you don’t need them, you will probably be better off going with the offering that doesn’t have the features you don’t need, but has the architecture to add them (thinking Backbone) if/when you do need them.
  6. Are the features implemented in an architecture that you believe is good (i.e. are the layers muddied)?
  7. Read some blog posts, tutorials.
  8. Read some opinions and evaluate for yourself.
  9. Start testing its limits
  10. Decide whether you like its opinions imposed
  11. Does it impose enough or too many opinions for you and your team?

As the JavaScript MV* landscape is constantly and very quickly changing, the outcome of your evaluation will have a short use-by date.

This is my attempt to distil the attributes of the discussed offerings. I’ve attempted to come at this with an open mind. Hopefully this will help save some work for those that come after me. lists are sorted in the order of most useful to me. I make no apologies for the abundance of links, as I’ve also used this for a resource collection point and hope that this post will fall into the category of a “one stop shop” for what I consider to “currently” be the top three contenders in the client side MV* line-up. In saying that though, there are other strong contenders like Meteor not discussed here, as it’s more than just a client side MV* framework. Without further ado, here they are…

Angular.js

AngularJS

Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Igor Minár, Miško Hevery, Vojta Jína.
All work at Google.

Backed by the commercial giant Google (you decide whether that’s a good thing).

Community

Conferences

  1. ng-conf

Statistics

  • Version: 1.2.5
  • Payload Size: Depends on handlebars development version 85kb
    1. development version 716.7kb
    2. minified 99.8kb
    3. minified and compressed < 36kb
  • Age: Initial Github commits: January 2010

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. Angular.js

Screen-casts

  1. AngularJS on YouTube
  2. EggHead.io Lessons

Blog Posts, Tutorials, etc.

  1. Learning AngularJS in one day
  2. Angular docs Tutorial

Features

  • Directives: used via non-HTML-compliant tags, attributes, comments and class names, although there are options to make them compliant: via the class (not recommended) and data attributes.
  • Scope. The first half of this video shows how the scope may be confusing to those new to Angular. If I cannot tell how code will work without running it, it violates the Principle of Least Astonishment (PoLA). It seems quite clunky to me.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Two-way data binding
  3. All tests run against IE8 (good for those that are locked into legacy MS)
  4. Test driven (and more vocal about it than Ember)
  5. Payload is about 1/3 smaller than Ember

Negative

  1. Steep learning curve compared to Backbone, but not as steep as Ember.
  2. Dirty checking to keep views and models in sync is costly. Ember keeps sync in a more elegant way. Possible perceived downside to this is Ember models have to inherit from DS.Model (next point addresses this as a positive though). Also discussed here under the “Performance issues” heading.
  3. Models are Plain Old JavaScript Objects (POJOs). They don’t have to be anything special. Now there’s an argument here that attempts to explain this as being a selling point of Angular, but in reality what happens is a violation of the Uniform Access Principle, thus creating tight coupling. How’s that? Well, now the view needs to know too much about the model’s members. Discussed in more detail here. For example, if one of the model’s properties is a function, the view has to know this, so you see this sort of thing in the view: {{area()}} (we’re pulling our JavaScript into our view). Whereas with Ember, because its models are well defined and you can use computed properties on them, all the view needs to specify is an identifier, so you’ll see this sort of thing in the view: {{area}}. The Ember model then creates a computed property with the same name. The opposing view is that in ES5 you can just hide your functions etc. behind property getters and setters. Most developers take the path of least resistance, though, so I think most will be doing it the wrong way (a small sketch of the two styles follows this list).
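
To make the difference concrete, here’s a minimal sketch of the two styles (assuming Angular 1.x and Ember 1.x are loaded on the page; the rectangle model and its property names are purely illustrative):

// Angular: the model is a POJO on $scope, so the view has to know area is a function: {{area()}}
function RectangleCtrl($scope) {
   $scope.width = 2;
   $scope.height = 3;
   $scope.area = function () {
      return $scope.width * $scope.height;
   };
}

// Ember: a well defined model with a computed property, so the view just uses {{area}}
var Rectangle = Ember.Object.extend({
   width: 2,
   height: 3,
   area: function () {
      return this.get('width') * this.get('height');
   }.property('width', 'height')
});

var rectangle = Rectangle.create();
rectangle.get('area'); // 6, recalculated when width or height change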

Interesting Plug-ins

  1. ?

Useful Tools

  • “AngularJS Batarang” for Chrome browser (it’s an extension)

Ember.js

EmberJS

Intro

Opinionated framework that has Models, Views and Controllers, but does not conform to the MVC pattern.

Core Team

Yehuda Katz, formerly of Rails and SproutCore projects.
Tom Dale, Peter Wagenet, Trek Glowacki, Erik Bryn, Kris Selden, Stefan Penner, Leah Silber, Alex Matchneer.

Backed by the JavaScript community.

Community

Conferences

Statistics

  • Version: 1.2.0
  • Payload Size:
    1. development version 1.1MB
    2. production version 1.0MB
    3. minified and GZipped 67kb
  • Age: Initial Github commits: April 2011

Performance

See Backbone Performance below.

Documentation

Pod-casts

  1. JavaScript Jabber Ember Tools
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. JavaScript Jabber Ember.js & Discourse
  4. EmberWatch

Screen-casts

  1. Building an Ember.js Application
  2. Ember101
  3. EmberWatch
  4. tutsplus

Blog Posts, Tutorials, etc.

Features

  • The ember-application class gets added to the root element (body) in the Ember JavaScript file. I was wondering how this class was magically added to the markup; I couldn’t find any documentation on it, so I had to look through the JavaScript.

Positive

  1. Good for long running and complex applications with deep nested view hierarchies
  2. Aggregates model data changes and updates the DOM late in the RunLoop.
  3. Well defined models and computed properties (See Angular negative point around this).
  4. Test driven

Negative

  1. Steepest learning curve out of the three. Why? Because there’s more in it. If you need it, great! Maybe you don’t. If not, is the extra learning worth using it? Part of the “more in it” may also be around the elegant way things have been designed, i.e. more constraints to push users down the right path, thus a higher chance of less friction and pain in the future of your application; that is, of course, if your application does things the way Ember says they must be done. I’m seeing some of these things in the likes of the well defined models and computed properties.
  2. Payload is the largest out of all three.

Interesting Plug-ins

  1. ?

Useful Tools

  • “Ember Inspector” for Chrome browser (it’s an extension)
  • ember-tools. Listen to and/or read the pod-cast linked to above. Provides file organisation, scaffolding, template pre-compilation, generators, CommonJS (that’s node.js style) modules, and other goodies. Useful for setting up your project to conform to the Ember conventions, so you don’t end up fighting them.

Angular versus Ember views

Pod-casts

  1. Angular vs Ember Cage Match NDC

Screen-casts

  1. Angular vs Ember Cage Match NDC

Blog Posts, Tutorials, etc.

  1. Evil Trout Ember versus Angular (possible bias toward Ember)
  2. Why AngularJS beat EmberJS

My Thoughts

Both Frameworks Appear to be Targeting a Similar Problem Space

Don’t believe everything you read. Test it before you buy it. I’ve come across quite a few articles that are just incorrect, even by reputable people. Sometimes that’s because the frameworks have changed how they do things and/or their documentation has changed, so don’t just take it all at face value. The concept of MVC has changed over the past decade. Although concepts have changed, a pattern doesn’t change; that’s why it’s a pattern: something everybody familiar with the pattern understands. If an implementation starts to change, then it no longer conforms to the pattern and should not be named after the pattern, as this just brings confusion. Microsoft’s ASP.NET MVC framework is a perfect example of this. It does not follow the MVC pattern (documented in 1979) and so should never have been named MVC. Ask me in the comments to explain if you’re not aware of how this is. In the MVC pattern, Models are not injected into Views by Controllers. With the MVC pattern, Views listen to events from a Model (the View is actually oblivious to the Model) which the Controller has hooked up, since the Controller knows about both the View and the Model. This may not be your understanding of MVC? More than likely that’s due to certain frameworks being labelled as MVC when they are not, thus bringing the confusion. The following image provided by the Gang-Of-Four depicts the MVC pattern.

MVC

Angular Doesn’t Pretend JavaScript has Real Classes

Personally I find both frameworks have opinions that make me nauseous.
Like Angular’s scope and Ember’s class-hierarchy abstraction. Yes, Harmony will have pseudo classes for the classical programmers that struggle with JavaScript’s declarative prototypal inheritance (disclaimer: my roots are in classical OO languages). The way I feel about it: say a whole lot of JavaScript programmers start using a classical OO language and decide they don’t like the way it does classical inheritance, so the classical object oriented language authorities decide to add syntactic sugar on top of the language to make its classical inheritance “look” more like prototypal inheritance for those that struggle with the classical paradigm. Now seriously, why would you muddy the language to cater for those that are not prepared to spend the time learning how it works?

Another, and probably the most obvious, reason why JavaScript didn’t have classes is so that object hierarchies could be built up via composition (only inheriting what is actually needed) rather than having to inherit every member needlessly from a base class (essentially knowing far more than is actually needed). Once you have to re-factor your way out of a code base that has abused inheritance, thus creating very tightly coupled code by violating one of the object-oriented design principles (information hiding), the perils of overusing inheritance will become very clear.

I’m open to exploring what the other client side JavaScript frameworks and libraries have to offer and I’d love to hear from everyone that’s had experience with them.

Angular and Ember do a Lot For You

With all the bells and whistles, both frameworks impose strong opinions that you must follow in order to make the magic (in a lot of cases convention) work. Once you’ve learnt Angular and/or Ember, productivity is maximised. But… you must be building your application the way the framework creators want you to. At this stage, I’m not super comfortable with that. This is where Backbone and friends come into their own.

Backbone.js + Marionette.js

BackboneJS + MarionetteJS

Intro

Backbone is an unopinionated library that has Models and Views, but no Controllers out of the box. That’s right, a library rather than a framework, because your code needs to know about it, rather than it knowing about and executing your code. It does not follow the MVC, MVP or MVVM patterns. Its views and routers act similarly to a controller. Marionette brings the controller to Backbone (if you want or need it), thus you can keep your router doing what it should be doing (just routing, with no controller logic).

What I find strange is that a Backbone view contains a model. I’m not sure I’d even call this a MV* library, as it may introduce confusion.

Backbone’s sweet spot is providing the user with brief and casual interaction. It doesn’t provide help or guidance with deallocating memory and detaching events. The assumption is that the user isn’t going to be using this application all day without closing the browser window. In saying that, there are many applications that use Backbone for this type of thing, but they must provide explicit code to release event handlers. Marionette provided some help here for older versions of Backbone, and Backbone has improved things with newer versions. You will still need to keep in mind that event handlers need to be released though (Backbone’s view.remove takes care of this now). Marionette provides abstractions to deal with these, like the close method, which provides a place to add clean-up code and then calls Backbone’s remove. Failing to remove event handlers is the largest cause of memory leaks in Backbone.
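
As a minimal sketch of the clean-up story (assuming Backbone 1.x with Underscore and jQuery on the page; the view and model here are illustrative):

var ProfileView = Backbone.View.extend({
   initialize: function () {
      // listenTo ties the handler's lifetime to the view, unlike this.model.on(...).
      this.listenTo(this.model, 'change', this.render);
   },
   render: function () {
      this.$el.text(this.model.get('name'));
      return this;
   }
});

var profileView = new ProfileView({ model: new Backbone.Model({ name: 'Kim' }) });
profileView.render();

// remove() detaches the element and calls stopListening(), releasing the change handler.
profileView.remove();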

Core Team

Backbone: Jeremy Ashkenas

Marionette: Derick Bailey

Community

IRC: #marionette on FreeNode. Little activity.

Conferences

  1. BackboneConf

Statistics

  • Version: 1.1.0
  • Payload size: Depends on Underscore development version 43kb or minified and gzipped 4.9kb
    1. Backbone development version 59kb
    2. Backbone minified and gzipped 6.4kb
  • Age: Backbone: Initial Github commits: September 2010

Performance

The second half of this video shows the difference between Backbone and Ember performance. What I’ve seen to date, is that in terms of performance, Backbone leads, second is Ember, third is Angular. You need to decide how much performance matters to your situation and whether or not it’s “good enough” for the framework/library you choose.

Documentation

Pod-casts

  1. Marionette.js
  2. JavaScript Jabber Ember.js (also covers some backbone)
  3. Backbone.js

Screen-casts

  1. How to build modular Backbone applications using MarionetteJS
  2. Tuts+ Intro to Marionette
  3. Plugging in MarionetteJS. This resource is about adding Marionette to a MongoDB document explorer. Also features source code.
  4. Github
  5. BackboneConf 2013 Talks

Blog Posts, Tutorials, etc.

  1. Github
  2. backbone and ember
  3. Marionette Wiki

Books

  1. Backbone Fundamentals

Features

  • ?

Positive

  1. Free to use any templating engine. You can use underscore as it’s the only dependency of backbone, or any other of your choosing.
  2. A lot of excellent documentation
  3. Very flexible in how you may want to use it
  4. Minimalist library
  5. Easy to learn (not a lot of it).
  6. Payload including dependencies is the smallest out of all three. About 9 times smaller than Ember.

Negative

  1. No two way data-binding. Although if you want/need it, you could use the likes of the data binding offerings below in the Interesting Plug-ins section.
  2. No provision for handling nested views. This is where the likes of Marionette’s Backbone.BabySitter comes in
  3. More work required to build large scale applications than the likes of Angular or Ember (just a library after all).
  4. If your large complex application is written in Backbone, chances are you have added a lot of boiler plate code. Any new developers coming onto the project will have to get up to speed on this code. If your large complex application uses Angular or Ember and the new developers coming onto the project have worked with these frameworks, they more than likely won’t have to learn the boiler plate code that they would have to with the likes of Backbone, because it’s part of the framework.

Interesting Plug-ins

  1. There is a similar offering: backbone.layoutmanager, which I haven’t really looked into, but according to Derick Bailey (Marionette BDFL) is more of a framework whereas Marionette is a library.
  2. Two-way data binding with Rivets.js, Knockback.js or backbone.stickit
    NYTimes backbone.stickit “is a Backbone data binding plug-in that binds Model attributes to View elements with a myriad of options for fine-tuning a rich application experience”. What looks to be nice about this is that unlike most model binding plug-ins I’ve seen, it doesn’t require you to add any extra tags like Angular to your view. In fact your views are not contaminated at all.
  3. Backbone.routefilter plug-in allows you to add behaviour that will be executed immediately before and/or after a route (Backbone.Router or Marionette.AppRouter) executes.

Useful Tools

  • “Backbone Debugger” for Chrome browser (it’s an extension)
  • Frameworks that leverage backbone and provide more functionality
    1. chaplinJS
    2. thoraxJS (adds handlebars integration plus other functionality)

Now a few more concepts that I think are important to know about if you’re serious about using a client side JavaScript MV* framework/library; in regards to module loading, this applies to the server side also.

Templating

Blog Posts etc.

  1. net tuts+ Best Practices When Working With JavaScript Templates
  2. net tuts+ An Introduction to Handlebars

Some Offerings

I covered some of the template engines here under “Templating Engines evaluated”, or just use the likes of the Template Engine Chooser

Coupling Domain with Framework

As Boris Smus has said, and I think he’s right on the money (although I disagree with his comments around JavaScript classes, as per my comments above):
Once you bite the bullet and decide to invest in a framework, you often have no easy way to move your code out of it.
If you pick Backbone, but decide mid-cycle that it’s not for you, you are in for a world of hurt:
If you have core functionality that you want to release, release it in pure JavaScript, not as a jQuery plug-in, or some MV* module.

Because there are so many JavaScript frameworks coming and going, and we don’t want to invest too heavily into any one of them,
we really need to keep our investment separate from the library/framework code.

To avoid library/framework and class-system lock-in, a good approach in regards to JavaScript MV* libraries/frameworks
is to keep the core functionality separate from the user interface code, thus giving us two separate layers.
This gives us flexibility to swap user interfaces as they come and go, yet still keep the majority of our code in an API layer.
The API layer being a logical single layer, but can be modularised, and loaded as needed, AMD style.
With this separation, we can implement the two layers in the following manner.

1) Build the base layer using pure JavaScript prototypal inheritance.
This is the part you write with the intention of keeping and possibly using parts for other projects also.
This base layer will implement an API that you will want to spend a bit of time getting right.
This is the code that will make the most use of unit tests.
To get the separation clear in your head, think of the user interface code as a client that uses this API as if it was a service API sitting on the server.
This way you can avoid creating leaky abstractions.

2) Use an MV* library/framework to implement the user interface, and call into the base layer directly.
This lets you move quickly and focus entirely on writing the user interface.
This architecture should facilitate building your user interface on a solid foundation and avoid investing heavily into an offering that you may want to swap out further down the track.
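
A minimal sketch of what that separation could look like (the invoice API and its names are made up; the UI layer happens to use Backbone here, but could be swapped for anything):

// Base/API layer: pure prototypal JavaScript, no framework references. Unit test this heavily.
var invoiceApi = (function () {
   var invoiceProto = {
      total: function () {
         var sum = 0, i;
         for (i = 0; i < this.lines.length; i += 1) {
            sum += this.lines[i].amount;
         }
         return sum;
      }
   };
   return {
      createInvoice: function (lines) {
         var invoice = Object.create(invoiceProto);
         invoice.lines = lines || [];
         return invoice;
      }
   };
}());

// UI layer: a thin Backbone view that treats the API layer like a service.
var InvoiceView = Backbone.View.extend({
   initialize: function (options) {
      this.lines = options.lines;
   },
   render: function () {
      var invoice = invoiceApi.createInvoice(this.lines);
      this.$el.text('Total: ' + invoice.total());
      return this;
   }
});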

Modules

In most browsers, just including a script tag will cause the rest of the page to stop rendering until the script has loaded then executed.
Which is why if loading scripts synchronously, they should be concatenated, minified, compressed and included at the bottom.
Loading scripts asynchronously doesn’t block, which is why you can load multiple scripts in parallel wherever you want (any more than 2-3 concurrently and performance will degrade). Make sure to concatenate your scripts though.

What we see as our projects get larger, is that scripts start to have many dependencies in a way that may overlap and nest.

The simplest way to load asynchronously is to create a script tag and inject it into an existing DOM element on your page.
Because the DOM element already exists, the rendering is not blocked.
See the first code example here

// Create a new script element.
var script = document.createElement('script');

// Find an existing script element on the page (usually the one this code is in).
var firstScript = document.getElementsByTagName('script')[0];

// Set the location of the script.
script.src = "http://example.com/myscript.min.js";

// Inject with insertBefore to avoid appendChild errors.
firstScript.parentNode.insertBefore( script, firstScript );

If you want or need to get serious about script loading (which you’re probably going to have to do at some stage), use a best-of-breed script loader. This will also push you down the path of defining modular JavaScript (AKA modules).

Next we look at employing script loaders to load our modules…

Formats available for Writing and Using Modular JavaScript

Asynchronous Module Definition (AMD)

AMD is for writing modular JavaScript in the browser. To save re-writing what’s already been done, http://addyosmani.com/writing-modular-js/ (see the “AMD” section) explains it well. What does AMD actually give us? http://requirejs.org/docs/whyamd.html#amd covers it: Separation of Concerns (essentially placing value on interface rather than implementation), mapping of module IDs to different paths, and lots more. It allows asynchronous loading of modules and their dependencies, which is something we need on the client side, but is not generally a requirement for the server side. For getting started, see “Getting Started With Modules” under the AMD section here. Also check out the AMD specification and of course the most common AMD implementation: RequireJS. Then at some stage you’re probably going to want to concatenate and minify your modules, and that’s where the likes of r.js comes in. r.js also has a node.js adapter which allows you to use node’s implementation of require.
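
A minimal AMD sketch (each define would normally live in its own file; the module names are made up):

// cart.js: a module with no dependencies.
define(function () {
   var items = [];
   return {
      add: function (item) { items.push(item); },
      count: function () { return items.length; }
   };
});

// checkout.js: a module that declares cart as a dependency.
define(['cart'], function (cart) {
   return {
      summary: function () { return cart.count() + ' item(s) in the cart'; }
   };
});

// main.js: RequireJS loads checkout and its dependencies asynchronously, then runs the callback.
require(['checkout'], function (checkout) {
   console.log(checkout.summary());
});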

Tom Dale (core team member on Ember) also has some interesting ideas around why he thinks AMD is not the answer.

CommonJS API (Optimised for the server)

CommonJS is optimised for the server, although we have the likes of browserify (a CommonJS module implementation that can run in the browser) and browser-build, which make CommonJS modules available in the browser and are very fast. Ryan Florence discusses module loaders in the pod-cast listed above, “JavaScript Jabber Ember Tools”, where he decided to move to CommonJS rather than RequireJS for his Ember Tools, mostly due to speed. So it’s horses for courses. Decide what your requirements are, then decide which module loader satisfies the most of them. Also see “writing modular js” under the “CommonJS” section.
CommonJS aims to provide a rich standard library. The intention is that an application developer will be able to write an application using the CommonJS APIs and then run that application across different JavaScript interpreters and host environments. With CommonJS-compliant systems, you can use JavaScript to write the following (a minimal module example follows this list):

  • Server-side JavaScript applications
  • Command line tools
  • Desktop GUI-based applications
  • Hybrid applications (Titanium, Adobe AIR)
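
A minimal CommonJS sketch (two files; the names are made up). Exports are explicit and require() is synchronous, which is why it suits the server out of the box:

// cart.js
var items = [];

exports.add = function (item) {
   items.push(item);
};

exports.count = function () {
   return items.length;
};

// app.js: run with node, or bundle for the browser with the likes of browserify.
var cart = require('./cart');
cart.add({ name: 'widget' });
console.log(cart.count()); // 1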

Why it doesn’t excel in the browser “out of the box”: http://requirejs.org/docs/whyamd.html#commonjs
ES Harmony (modules implemented in the language; we’re not quite there yet, but the current offerings look to be a pretty good step in the right direction).

http://addyosmani.com/writing-modular-js/ (specifically “ES Harmony” section) discusses where TC39 are going in regards to implementing modules in ES.next.

So AMD and CommonJS can be used on server side or client side. In some cases one will work better for you than the other. You’ll need to do your homework as to what to use in which scenarios. Both have advantages and disadvantages that may work for or against you.

I’m keen to get a discussion going here on peoples experiences with the three MV* offerings mentioned. Especially those that have experience with two or more.

Evaluation of .Net Mocking libraries

December 14, 2013

I’ve recently undertaken another round of evaluating .NET mocking (fake/substitute/dummy/stub/ or what ever you want to call them now) libraries. Interestingly the landscape has changed quite a bit since last time I went through this exercise, which was about two years ago. The outcome of the previous investigation is at the bottom of this post.

Evaluation criterion

  1. Who is the creator? I’ve favoured teams rather than individuals, as individuals move on, and then where does that leave the product? RhinoMocks is a prime example of this. It was an excellent library; maybe it will get a new owner, maybe not.
  2. Does it do what we need it to do?
  3. Are there any integration problems with all of our other chosen components? Works with .Net versions the development team are using. Any other complaints around integration?
  4. Cost in money. Is it free? Are there catches once you get further down the road? Usually open source projects are marketed as-is, with no catches.
  5. Cost in time. Is the set-up painful? Customisation feedback? Upgrade feedback?
  6. How well does it appear to be supported? What do the users say?
  7. Documentation. Is there any / much? What is its quality?
  8. Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have a valid reason)?
  9. Release schedule. How often are releases being made? When was the last release?

Following is the collection of libraries I looked at, numbered from highest scorers to lowest. All have NuGet packages:

How the Playing Field Looks Today

NSubstitute (new style)

Free and open source.
Source code: https://github.com/nsubstitute/NSubstitute/
BDFL has 534 commits. Next highest is 30.
4.5 years old. Recent activity.
Stackoverflow 69 tagged questions
Has an active Google discussion group
Regular releases
Documentation looks very good.
Very easy to read, well thought out syntax.

FakeItEasy (new style)

Free and open source.
Source code: https://github.com/FakeItEasy/FakeItEasy/
Nice spread across contributors. No single point of failure.
Almost 4 years old.
Plenty of current activity. About 30% more than NSubstitute
Stackoverflow 85 tagged questions
Regular releases
Documentation looks OK.
Syntax looks OK.

JustMock

Not free and closed source.
If you happen to have a Telerik Devcraft bundle you’ll be entitled to one free JustMock license. Not much help if you want to use all the features across the team.
There is a light free version which has most/all of the features that most development teams would require.
It would have to be head and shoulders above the rest to warrant paying for it. Going on the feature set I don’t think it is, but I haven’t used it. Plus I have more confidence in the right open source offerings.
$US400 license per user.
Light edition is free, but I don’t see any reason why they couldn’t remove this offering or put a price tag on it.
NuGet package
Are we prepared to invest building code around this with the possibility of it becoming not free?
Lite vs full: http://www.telerik.com/freemocking.aspx#comparison
Doesn’t appear to be a lot of community around the free edition.

Moq

Free and open source.
Source code: https://github.com/Moq/moq4
Last release was 2013-11-18; previous to that it was 2.5 years ago.
Very small learning curve

Rhino Mocks

Free and open source.
Source code: https://github.com/hibernating-rhinos/rhino-mocks
Last activity: 3 years ago.
Has a new owner (Mike Meisinger), but I haven’t seen any new work yet.
There were also NMock and TypeMock which didn’t evaluate high enough this time or last time.
if it walks like a duck and quacks like a duck

How the Playing Field Looked Two Years Ago

Rhino Mocks

Free and open source.
Very full featured.
Easy enough to use.
Logical and consistent syntax.
Most up to date documentation (best place to start)
Somewhat out of date documentation, but more of it than the above link.
Community, Download, More code examples here.
Example of the old record/playback syntax as opposed to the new AAA syntax.
Keeping up to date on the progress of Rhino Mocks.
The most popular mocking framework two years ago.

Moq

Clean discoverable API design and lack of complicated record/playback model, which is nice.
Have used this, and haven’t had any issues I couldn’t get around.
Very easy to learn and use.

TypeMock

Commercial product (expensive, so not really viable).
Ability to mock anything including statics, privates and events on multiple languages.

NMock

Appears to be abandoned

I’ve just started using NSubstitute and have used Rhino Mocks, Moq and NMock previously.
Feel free to offer your experiences on the mocking libraries you have used and comparisons. I’d love to hear your experiences with these and other mocking libraries.

Up and Running with Sass (scss) and Less in Visual Studio

November 26, 2013

I recently evaluated the support for the top two CSS preprocessors (Sass and Less) for the environment my client team and myself are currently constrained to (Visual Studio 2012).

I set up Less and Sass and decided to go with Sass. Below I outline what the process was, briefly what each has to offer and why the decision went the Sass way. Both Sass and Less are supersets of CSS, so you can just rename your CSS file extension to .less if using Less, or .scss (the file extension for the newer Sass format (stands for Sassy CSS)) if you decide to use Sass, and save the file. A CSS file will be automatically generated. Now you can just start changing the CSS to Sass or Less to start using all the new functionality you have at your disposal. There is no preprocessor lock-in. If you decided to change to Less or even back to CSS, you’d just do the same thing with the generated CSS file.

If you’re not up to speed on what a CSS preprocessor is or why you would want to use one on your web project, now’s a good time to find out before we carry on with the two setup procedures. Go and checkout Sass and Less.

Less

Less was originally written in Ruby, but later re-written in JavaScript.

Less setup is reasonably straight forward, but didn’t work straight away like the last time I used it. Turns out it was just an install order thing.
In Visual Studio…
Install “Web Essentials 2012” through TOOLS -> Extensions and Updates
Then

Install Web Developer Tools 2012.2. You may have them already installed. Install them again if you do.
Web Essentials actually provides us with a lot of other features I’d promote like JSHint (a better JSLint) and quite a few other goodies.

In Visual Studio TOOLS -> Options -> Web Essentials -> LESS
You’ll see something like this…

Less Options

This is how I set up the options. I usually turn off the “Show preview window”, but it’s fine to have it on to start with, so you can see the .css getting generated side by side with the Less. There are other extensions that support Less also, but I think this was the easiest setup.

Sass (scss)

Written in Ruby.

I tried the SassyStudio extension. It has

  1. syntax highlighting
  2. region outlining
  3. some intellisense support
  4. CSS generation.

I also tried Mindscape Web Workbench which also has CoffeeScript and LESS support. It has what SassyStudio has plus

  1. (by the look of it, better intellisense support)
  2. warnings of syntax errors
  3. warnings of unknown variables and mixins
  4. go to variable or mixin definition
  5. CSS file minification (pro edition only. You probably don’t need this as there are plenty of other ways to minify)
  6. more customisation capabilities than SassyStudio.

The Setup

Sass (scss) Options

You may also need to Install Web Developer Tools 2012.2 as I did for the Less trial above. You may as well do this anyway to make sure you’re on the latest version.
In Visual Studio…
Install “Mindscape Web Workbench” through TOOLS -> Extensions and Updates.

Better Sass (or more correctly, with the new format, scss) extensions will keep appearing I’d say, so when they do it’d probably be worth looking at them. The only thing so far I don’t like about Mindscape Web Workbench is that it has a small “Go Pro” label at the bottom of the scss file. I don’t think it’ll bother anyone too much though.

Adding a scss file

Right click on the folder you want to add the new file to -> Add -> New Scss File… -> Save

Add new scss file

Once you’ve created a new scss file or renamed a CSS file to have the .scss extension and saved it, for now we can commit both of these files to source control. You’ll need to set up a preprocessor on the build server in order to avoid having the generated CSS files in source control.

Why I chose Scss over Less

Sass has similar functionality to Less (nested rules, variables, mixins, functions, inheritance, operators), but it can be used with Compass and Susy. Compass provides a framework of functions and add-ons built on top of Sass. Compass automatically handles image spriting, writes cleaner code, provides page layout tools, resets and lots of other useful features. Susy is a responsive grid add-on for Compass. Mindscape Web Workbench also provides support for Compass. So if we want to take advantage of these sometime, they are available.

Sass (scss) Resources

Up and Running with Express on Node.js … and friends

July 27, 2013

This is a result of a lot of trial and error, reading, notes taken, advice from more knowledgeable people than myself over a period of a few months in my spare time. This is the basis of a web site I’m writing for a new business endeavour.

Web Frameworks evaluated

  1. ExpressJS Version 3.1. I talked to quite a few people on the #Node.js IRC channel and the preference in most cases was Express. I took notes around the web frameworks, but as there were not that many good contenders, and I hadn’t thought about pushing this to a blog post at the time, I’ve pretty much just got a decision here.
  2. Geddy Version 0.6

MV* Frameworks evaluated

  1. CompoundJS (old name = RailwayJS) Version 1.1.2-7
  2. Locomotive Version 0.3.6. built on Express

At this stage I worked out that I don’t really need a server side MV* framework, as Express.js routes are near enough to controllers. My mind may change on  this further down the track, if and when it does, I’ll re-evaluate.

Templating Engines evaluated

  1. jade Version 0.28.2, but reasonably mature and stable. 2.5 years old. A handful of active contributors headed by Chuk Holoway. Plenty of support on the net. NPM: 4696 downloads in the last day, 54 739 downloads in the last week, 233 570 downloads in the last month (as of 2013-04-01). Documentation: Excellent. The default view engine when running the express binary without specifying the desired view engine. Discussion on LinkedIn. Discussed in the Learning Node book. Easy to read and intuitive. Encourages you down the path of keeping your logic out of the view. The documentation is found here and you can test it out here.
  2. handlebars Version 1.0.10 A handful of active contributors. NPM: 191 downloads in the last day, 15 657 downloads in the last week, 72 174 downloads in the last month (as of 2013-04-01). Documentation: Excellent: nettuts. Also discussed in Nicholas C. Zakas’s book under Chapter 5 “Loose Coupling of UI Layers”.
  3. EJS Most of the work done by the Chuk Holoway (BDFL). NPM: 258 downloads in the last day, 13 875 downloads in the last week, 56 962 downloads in the last month (as of 2013-04-01). Documentation: possibly a little lacking, but the ASP.NET syntax makes it kind of intuitive for developers from the ASP.NET world. Discussion on LinkedIn. Discussed in the “Learning Node” book by Shelley Powers. Plenty of support on the net. deoxxa from #Node.js mentioned: “if you’re generating literally anything other than all-html-all-the-time, you’re going to have a tough time getting the job done with something like jade or handlebars (though EJS can be a good contender there). For this reason, I ended up writing node-ginger a while back. I wouldn’t suggest using it in production at this stage, but it’s a good example of how you don’t need all the abstractions that some of the other libraries provide to achieve the same effects.”
  4. mu (Mustache template engine for Node.js) NPM: 0 downloads in the last day, 46 downloads in the last week, 161 downloads in the last month (as of 2013-04-01).
  5. hogan-express NPM: 1 downloads in the last day, 183 downloads in the last week, 692 downloads in the last month (as of 2013-04-01). Documentation: lacking

Middleware AKA filters

Connect

Details here: https://npmjs.org/package/connect. express.js (http://expressjs.com/api.html#app.use) shows that connect().use([takes a path defaulting to '/' here], andACallbackHere). The body of andACallbackHere will only get executed if the request had the sub-directory that matches the first parameter of connect().use.
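
A minimal sketch of that mounting behaviour (assuming Connect 2.x / Express 3 style; the paths and responses are made up):

var connect = require('connect');

var app = connect()
   // Only runs when the request path starts with /admin.
   .use('/admin', function (req, res, next) {
      console.log('admin request: ' + req.url);
      next();
   })
   // No path given, so the mount path defaults to '/' and this runs for every request.
   .use(function (req, res) {
      res.end('hello');
   });

app.listen(3000);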

Styling extensions etc evaluated

  1. less (CSS3 extension and (preprocessor) compilation to CSS3) Version 1.4.0 Beta. A couple of solid committers plus many others. runs on both server-side and client-side. NPM: 269 downloads in the last day, 16 688 downloads in the last week, 74 992 downloads in the last month (as of 2013-04-01). Documentation: Excellent. Wiki. Introduction.
  2. stylus (CSS3 extension and (preprocessor) compilation to CSS3) Worked on since 2010-12. Written by the Chuk Holoway (BDFL) that created Express, Connect, Jade and many more. NPM: 282 downloads in the last day, 16 284 downloads in the last week, 74 500 downloads in the last month (as of 2013-04-01).
  3. sass (CSS3 extension and (preprocessor) compilation to CSS3) Version 3.2.7. Worked on since 2006-06. Still active. One solid committer with lots of other help. NPM: 12 downloads in the last day, 417 downloads in the last week, 1754 downloads in the last month (as of 2013-04-01). Documentation: Looks pretty good. Community looks strong: #sass on irc.freenode.net. forum. - less, stylus, sass comparison on nettuts. -
  • rework (processor) Version 0.13.2. Worked on since 2012-08. Written by the Chuk Holoway (BDFL) that created Express, Connect, Jade and many more. NPM: 77 downloads in the last week, 383 downloads in the last month (as of 2013-04-01). As explained and recommended by mikeal from #Node.js, it’s basically a library for building something like stylus and less, but you can turn on the features you need and add them easily. No new syntax to learn. Just CSS syntax, enables removal of prefixes and provides variables. Basically I think the idea is that rework is going to use the likes of less, stylus, sass, etc as plugins. So by using rework you get what you need (extensibility) and nothing more.

Responsive Design (CSS grid system for Responsive Web Design (RWD))

There are a good number of offerings here to help guide the designer in creating styles that work with the medium they are displayed on (leveraging media queries).

Keeping your Node.js server running

Development

During development nodemon works a treat. Automatically restarts node when any source file is changed and notifies you of the event. I install it locally:

$ npm install nodemon

Start your node app wrapped in nodemon:

$ nodemon [your node app]

Production

There are a few modules here that will keep your node process running and restart it if it dies or gets into a faulted state. forever seems to be one of the best options. forever usage. deoxxa’s jesus seems to be a reasonable option also, ningu from #Node.js is using it as forever was broken for a bit due to problems with lazy.

Reverse Proxy

I’ve been looking at reverse proxies to forward requests to different processes on the same machine based on different domain names and cname prefixes. At this stage the picks have been node-http-proxy and NGinx. node-http-proxy looks perfect for what I’m trying to do. It’s always worth chatting to the hordes of developers on #Node.js for personal experience. If using Express, you’ll need to enable the ‘trust proxy’ setting.
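
node-http-proxy wraps all of this up for you; just to show the routing idea, here’s a bare-bones sketch using only Node’s core http module (the host names and ports are made up):

var http = require('http');

// Map incoming host names to the local ports the processes are listening on.
var routes = {
   'blog.example.com': 3001,
   'shop.example.com': 3002
};

http.createServer(function (req, res) {
   var port = routes[req.headers.host] || 3001;

   // Forward the request to the chosen local process and stream the response back.
   var proxied = http.request({
      host: '127.0.0.1',
      port: port,
      path: req.url,
      method: req.method,
      headers: req.headers
   }, function (targetRes) {
      res.writeHead(targetRes.statusCode, targetRes.headers);
      targetRes.pipe(res);
   });

   req.pipe(proxied);
}).listen(80);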

Adding less-middleware

I decided to add less after I had created my project and structure with the express executable.
To do this, I needed to do the following:
Update my package.json in the projects root directory by adding the following line to the dependencies object.
"less-middleware": "*"

Usually you’d specify the version, so that when you update in the future, npm will see that you want to stay on a particular version; this way npm won’t update to a newer version and potentially break your app. By using the “*”, npm will download the latest package. So now I just copy the version of the less-middleware that was installed and replace the “*”.
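
After the npm install below, the dependencies object then ends up looking something like this (0.1.11 being the version npm pulled down at the time):

"dependencies": {
   "less-middleware": "0.1.11"
}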

Run npm install from within your project root directory:

my-command-prompt npm install
npm WARN package.json my-apps-name@0.0.1 No README.md file found!
npm http GET https://registry.npmjs.org/less-middleware
npm http 200 https://registry.npmjs.org/less-middleware
npm http GET https://registry.npmjs.org/less-middleware/-/less-middleware-0.1.11.tgz
npm http 200 https://registry.npmjs.org/less-middleware/-/less-middleware-0.1.11.tgz
npm http GET https://registry.npmjs.org/less
npm http GET https://registry.npmjs.org/mkdirp
npm http 200 https://registry.npmjs.org/mkdirp
npm http 200 https://registry.npmjs.org/less
npm http GET https://registry.npmjs.org/less/-/less-1.3.3.tgz
npm http 200 https://registry.npmjs.org/less/-/less-1.3.3.tgz
npm http GET https://registry.npmjs.org/ycssmin
npm http 200 https://registry.npmjs.org/ycssmin
npm http GET https://registry.npmjs.org/ycssmin/-/ycssmin-1.0.1.tgz
npm http 200 https://registry.npmjs.org/ycssmin/-/ycssmin-1.0.1.tgz
less-middleware@0.1.11 node_modules/less-middleware
├── mkdirp@0.3.5
└── less@1.3.3 (ycssmin@1.0.1)

So you can see that less-middleware pulls in less as well.
Now you need to require your new middleware and tell express to use it.
Add the following to your app.js in your root directory.

var lessMiddleware = require('less-middleware');

and within your function that you pass to app.configure, add the following.

app.use(lessMiddleware({
   src : __dirname + "/public",
   // If you want a different location for your destination style sheets, uncomment the next two lines.
   // dest: __dirname + "/public/css",
   // prefix: "/css",
   // if you're using a different src/dest directory, you MUST include the prefix, which matches the dest public directory
   // force true recompiles on every request... not the best for production, but fine in debug while working through changes. Uncomment to activate.
   // force: true
   compress : true,
   // I'm also using the debug option...
   debug: true
}));

Now you can just rename your css files to .less and less will compile to css for you.
Generally you’ll want to exclude the compiled styles (.css) from your source control.

The middleware is made to watch for any requests for a .css file and check if there is a corresponding .less file. If there is a less file it checks to see if it has been modified. To prevent re-parsing when not needed, the .less file is only reprocessed when changes have been made or there isn’t a matching .css file.
less-middleware documentation

Bootstrap

Twitter’s Bootstrap is also really helpful for getting up and running and comes with a lot of helpful components and ideas to get you kick-started.
Getting started.
Docs.

Bootstrap-for-jade

As I decided to use the Node Jade templating engine, Bootstrap-for-Jade also came in useful for getting started with ideas and helping me work out how things could fit together. In saying that, I came across some problems.

ReferenceError: home.jade:23

body is not defined
    at eval (eval at <anonymous> (MySite/node_modules/jade/lib/jade.js:171:8), <anonymous>:238:64)
    at MySite/node_modules/jade/lib/jade.js:172:35
    at Object.exports.render (MySite/node_modules/jade/lib/jade.js:206:14)
    at View.exports.renderFile [as engine] (MySite/node_modules/jade/lib/jade.js:233:13)
    at View.render (MySite/node_modules/express/lib/view.js:75:8)
    at Function.app.render (MySite/node_modules/express/lib/application.js:506:10)
    at ServerResponse.res.render (MySite/node_modules/express/lib/response.js:756:7)
    at exports.home (MySite/routes/index.js:19:7)
    at callbacks (MySite/node_modules/express/lib/router/index.js:161:37)
    at param (MySite/node_modules/express/lib/router/index.js:135:11)
GET /home 500 22ms

I found a fix and submitted a pull request. Details here.

I may make a follow-up post to this titled something like “Going Steady with Express on Node.js … and friends”

JavaScript Object Creation Patterns

July 6, 2013

What are the differences in creating an object by way of simple function invocation, vs using a constructor vs creating an object using the object literal notation vs function application?

To make sure we’re all on the same page, a quick refresher of what an object actually is in JavaScript…

What is an object in JavaScript?

  • An object is an unordered mutable keyed collection of properties. Each property is either a named data property, a named accessor property, or an internal property. I discussed JavaScript properties in depth here.
  • The ECMAScript language types are Undefined, Null, Boolean, Number, String and Object.
  • The simple types (primitives) of JavaScript are members of one of the following built-in types: Undefined, Null, Boolean (true and false), Number, and String.
  • All other values are objects. Function, String, Number, RegExp etc all indirectly inherit Object via their prototype property, which has a hidden link to Object.
  • All objects have a prototype. Very good explanation of JavaScript prototypes here by Angus Croll.
  • An object created from a function has a “prototype” property (an Object) (seen below in the red box) (whether invoked as a constructor or function). It has a property which is a constructor function (seen below in blue box) and a hidden property (a link) to the actual Object.prototype (seen below in the pink box).
  • An object created by means of an object literal inherits straight from (is linked to) Object.prototype. So the “prototype” property doesn’t exist, but there are other ways to access it. Problem is… how to access it varies from browser to browser.

internals of a function object


Every function is also created with two additional hidden properties: the function’s context and the code that implements the function’s behaviour.

Object creation

On invocation, every function receives 2 hidden parameters. this and the arguments array (which provides access to all the arguments that were supplied to the function on invocation). The value assigned to this is determined by how the function was invoked. We’ll look at this in the following sections.

Object creation via function invocation

Pros

Best suited for creation of one-time on-demand objects.

Cons

If a function is not the property of an object literal, when you invoke it, this will be bound to the global object. Often not what you’re expecting. A mistake in the design of the language. As you can see in the following code, the this of the local scope (kimsGlobalFunction) is bound to the global object.

A function in its own scope

Correct: this is global

Now here you can see when we invoke the local function, the this is bound to the global object. That’s a mistake in the language. When innerFunction is invoked, the this of that function is also bound to the global object.


Inside a function’s own scope

Line 18 above alerts undefined, because that doesn’t belong to the global object.


Not correct: this is global
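
A minimal sketch of the behaviour shown in the screenshots above (assuming non-strict code running in a browser; only kimsGlobalFunction comes from the original example, the rest is illustrative):

var kimsGlobalFunction = function () {
   var someLocalValue = 'only visible in this scope';

   // Plain function invocation: this is bound to the global object.
   console.log(this === window); // true

   var innerFunction = function () {
      // Also a plain invocation, so this is again the global object,
      // not the enclosing function's this.
      console.log(this === window);      // true
      console.log(this.someLocalValue);  // undefined: locals don't belong to the global object
   };

   innerFunction();
};

kimsGlobalFunction();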


Object creation via constructor

What is a constructor in JavaScript?

It’s a function, nothing more. It’s how it is invoked that determines it as a constructor.

What does it look like?

(function () {
   // Create a constructor function called MyFunc.
   var MyFunc = function (aString) {
      // The following variable is private.
      var privateString = aString;
      // Access it with a privileged method.
      this.publicString = function () {
         return privateString;
      }
   };

   // Prefixing with new means we're now using the function as a constructor.
   // So we use PascalCase rather than camelCase, so users of MyFunc don't invoke without the new prefix.
   var myFunc = new MyFunc('sponge bob');
   alert(myFunc.publicString());
}());

Pros

Great for re-use. Creating a constructor and assigning members to its prototype means that every time you create an object from the constructor using the new prefix, the new object uses the same prototype’s members. This can save big time on memory if you are creating many objects with the constructor.
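
A minimal sketch of that sharing (the names are illustrative):

var Person = function (name) {
   this.name = name;
};

// speak lives on the prototype once, so every instance shares the same function object.
Person.prototype.speak = function () {
   return 'Hi, I am ' + this.name;
};

var bob = new Person('Bob');
var sue = new Person('Sue');

console.log(bob.speak());             // Hi, I am Bob
console.log(bob.speak === sue.speak); // true: one function in memory, not one per instance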

Cons

Con 1

What happens when someone invokes a constructor function directly without the new prefix?
The this of the function will not be bound to the new object, but rather to the global object.
So instead of augmenting your new myFunc object, you will be clobbering the global object: myFunc’s this would refer to the global object.

What counter measures do we have at our disposal to make sure this doesn’t happen?
Naming conventions.
Enforcing that new is used with JavaScript constructor functions? Pages 45–46 of Stoyan Stefanov’s JavaScript Patterns book address this. Problem is, those patterns all have significant flaws, so you really need to weigh up the pros and cons. The maturity of your development team should also play a role in your decision here. It may be worth taking a safer route and employing an object literal if developers are likely to omit the new prefix on an intended constructor invocation.
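
As an example of the sort of counter measure being referred to, here’s a sketch of a self-correcting constructor (one of the patterns with the trade-offs mentioned above):

var MyFunc = function (aString) {
   // Guard: if invoked without new, invoke again correctly instead of clobbering the global object.
   if (!(this instanceof MyFunc)) {
      return new MyFunc(aString);
   }
   this.publicString = aString;
};

var withNew = new MyFunc('sponge bob');
var withoutNew = MyFunc('sponge bob'); // still gets a proper instance

console.log(withNew.publicString);    // sponge bob
console.log(withoutNew.publicString); // sponge bob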

Object creation via object literal

What does it look like?

In the code below, the return statement inside the immediately invoked function creates and returns an object with a property called publicString.

// In its simplest form:
var myObject = {};

(function () {
   var myFunc = (function () {
      // private members
      var privateString = 'Spong Bob';
      // implement the public part
      return {
         publicString: function () {
            return privateString;
         }
      };
   }());

   alert(myFunc.publicString());
}());

Pros

Pro 1

The this is bound to where you’d expect it to be (adherence to the Principle of Least Astonishment (POLA)).
When a function is stored as a property of an object literal, it’s a method. When a method is invoked with this pattern, this is bound to the object. You can see this in the below section of code, when execution reaches the alert inside myFunc.

(function kims() {

   var myObjectLiteral = {
      myProp: 'a property value',
      myFunc: function () {
         alert(this.myProp);
      }
   };
   myObjectLiteral.myFunc();
}());

Here you can see that the value of myFunc‘s this argument is in fact the myObjectLiteral.

this value

In which case, because a function is an object, and a variable is a property, then the same must apply to invoking a function of a function? No. A variable local to a function is not a property of that function object, so invoking it is just a plain function invocation. The vital word here is “literal”: as above, this is only bound to the object “when a function is stored as a property of an object literal” and invoked as a method.

Pro 2

Best suited for creation of one-time on-demand objects.

Cons

As with storing members in a function, be it a function you intend using new with (a constructor), or just by invoking the function, members will not be shared between instances of the prototype. This means that if you create 200’000 myObj object literals as I did in the test below, you will have 200’000 separate add functions in memory. The same goes for adding the add function to the constructor without adding it to the constructor’s prototype.

Object creation via function application

There is one more object creation pattern. Function application. I’ve discussed this in depth in the following three posts. Check them out.

http://blog.binarymist.net/2012/04/29/extending-currying-and-monkey-patching-part-1/

http://blog.binarymist.net/2012/05/14/extending-currying-and-monkey-patching-part-2/

http://blog.binarymist.net/2012/05/27/extending-currying-and-monkey-patching-part-3/

Speed Testing

Object creation is significantly slower using constructors with no prototype than it is using object literals. Now I did some more testing around this and got some surprising results. Using Chromium, V8 is doing some severe optimisation with object creation using constructors. The more members I added to my test constructor and object literal, the more it became noticeable.

var runTestTimes = function (iterations) {
   var test = function () {
      var constructorIterations = 1000000;
      var objLiteralIterations = 1000000;
      var constructorStart;
      var objLiteralStart;
      var constructorTime;
      var objLiteralTime;
      var MyFunc = function () {
         var simpleString = 'simple string';
         var add = function () {
            return 1+1;
         };
         add();
      };

      constructorStart = new Date();
      while (constructorIterations--) {
         var myFunc = new MyFunc();
      }
      constructorTime = new Date - constructorStart;

      objLiteralStart = new Date();
      while (objLiteralIterations--) {
         var myObj = {
            simpleString: 'simple string',
            add: function () {
               return 1+1;
            }
         };
         myObj.add();
      }
      objLiteralTime = new Date - objLiteralStart;

      console.log('constructor: ' + constructorTime + ' object literal: ' + objLiteralTime);
   };
   while (iterations--) {
      test();
   }
};
runTestTimes(10);

Yields the following results:

constructor: 32 object literal: 19
constructor: 25 object literal: 18
constructor: 25 object literal: 17
constructor: 25 object literal: 18
constructor: 26 object literal: 17
constructor: 25 object literal: 18
constructor: 25 object literal: 17
constructor: 25 object literal: 17
constructor: 25 object literal: 18
constructor: 25 object literal: 17

By simply adding another function to the constructor and the object literal, the results in chromium swung to favour the constructor.

// add this function to the constructor.
var subtract = function () {
   return 1-1;
};

// add this function to the object literal.
subtract: function () {
   return 1-1;
}

Reducing the number of iterations from 1’000’000 to 200’000, because the same code run in Firefox crashed it, yielded the following in Chromium:

constructor: 5 object literal: 6
constructor: 3 object literal: 5
constructor: 3 object literal: 5
constructor: 2 object literal: 5
constructor: 3 object literal: 4
constructor: 2 object literal: 5
constructor: 2 object literal: 5
constructor: 3 object literal: 4
constructor: 3 object literal: 5
constructor: 2 object literal: 5

and yielded the following using the Firefox JavaScript engine SpiderMonkey:

constructor: 701 object literal: 21
constructor: 729 object literal: 17
constructor: 705 object literal: 17
constructor: 721 object literal: 18
constructor: 727 object literal: 20
constructor: 723 object literal: 18
constructor: 726 object literal: 18
constructor: 727 object literal: 20
constructor: 728 object literal: 17
constructor: 736 object literal: 18

When I moved the add and subtract functions from the constructor to the constructor’s prototype, the speed results in Chromium didn’t yield any noticeable difference. In Firefox the average went from 854ms to 836ms. The change looked like the following:

var MyFunc = function () {
   var simpleString = 'simple string';
};

MyFunc.prototype.add = function () {
   return 1+1;
};

MyFunc.prototype.subtract = function () {
   return 1-1;
};

I decided to create some tests on jsperf to provide repeatable results. What’s interesting is you don’t have to change a lot to get completely different results, so if you’re concerned about performance, it really pays to test it. I think the tests on jsperf are probably a bit more truthful. Here they are.

Summary

There are many more pros and cons of each invocation pattern. I’ve listed the ones that I think are the most important to understand. There is no right or wrong pattern to use for everything. Consider your target audience and what the majority of them may be using in terms of browsers. Consider the maturity of your development team. Benchmark the different approaches, but don’t fall into the trap of micro optimisation, or optimising for a single browser or JavaScript engine unless that’s all your users are using. Choose the pattern that provides the most wins for the given situation. I didn’t test in I.E, but as you can see, the JavaScript engines I did test with do things very differently.

Tools like

  1. Google Page Speed
  2. Google Speed Tracer
  3. Chromiums Profiler

will help you focus on the areas that matter most.

There are of course many other areas to look at when it comes to “is your app delivering an acceptable user experience”.
Take a few steps back from your situation. You only have so much time. Spend it wisely.
There are many good wins to be had for little cost. Yahoo’s YSlow has a bunch.
Many books also address this in depth:

  1. Even Faster Web Sites by Steve Souders
  2. High Performance Web sites by Steve Souders
  3. High Performance JavaScript by Nicholas Zakas

There is also a good read on how V8's full and optimising JIT compilers optimise JavaScript.
I've found that most of it is intuitive, and if you're using good design and coding principles, in "most" cases you're safe, but it's still worth the read.

  1. As developers we try not to change classes on the fly. Deleting or adding properties to hot objects in JavaScript negatively affects the optimising compiler (see the sketch after this list).
  2. We don't use floats when we only need ints.
  3. In JavaScript, use Arrays when the property names are small sequential integers. Otherwise, use an object. JavaScript Arrays are not like arrays in most other languages. They are simply objects with some array-like characteristics.
  4. Assign your array elements as early as possible.
  5. Don't delete elements in an array, or leave elements empty.
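
The sketch below puts the points above into code. It's a rough illustration of the kind of object and array usage the optimising compiler is generally documented to prefer, not something measured in this article.

// 1. Keep object shapes stable: initialise properties up front and avoid
//    deleting or adding properties on hot objects later.
var point = { x: 0, y: 0 };
point.x = 5;                 // fine: the shape is unchanged
// delete point.x;           // avoid: changes the shape of a hot object

// 2. Prefer ints where ints will do.
var counter = 0;             // stays a small integer

// 3. Use Arrays for small sequential integer keys; otherwise use an object.
var scores = [10, 20, 30];               // dense array
var lookup = { alpha: 1, beta: 2 };      // non-sequential keys: plain object

// 4. Assign array elements as early as possible.
scores[3] = 40;              // appended immediately, keeping the array dense

// 5. Don't delete elements or leave holes.
scores[1] = 25;              // fine: overwrite in place
// delete scores[1];         // avoid: creates a sparse (holey) array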

JavaScript is a very dynamic language. Use its dynamic nature cautiously if you want performant code. Most importantly, favour read time convenience over write time. Your code is going to be read many more times than it’s written.

At this stage, V8 is way ahead of the other JavaScript engines in terms of performance. Node.js uses the V8 engine and enjoys the same incredible performance.

Inheritance

Prototypal inheritance is more object oriented than classical inheritance.
With prototypal inheritance, a child object only needs to inherit the parent object's properties that are pertinent to it.
With classical inheritance, a child object inherits all of the parent object's members, even the ones it should have no knowledge of.

Hopefully most of us already know to favour composition (aggregation) over inheritance, be it classical or prototypal. I've explained some techniques for how this can be done effectively in JavaScript in the above three posts.
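
As a small illustration of the opt-in nature of prototypal inheritance and of composition (a sketch only; the names here are made up and it isn't taken from the posts referenced above):

// Capabilities defined as small, focused objects.
var canWalk = {
   walk: function () {
      return this.name + ' is walking';
   }
};

var canSwim = {
   swim: function () {
      return this.name + ' is swimming';
   }
};

// Compose an object from only the capabilities it actually needs.
var createDuck = function (name) {
   var duck = Object.create(canWalk);   // delegate walk via the prototype chain
   duck.name = name;
   duck.swim = canSwim.swim;            // mix in swim; nothing else comes along
   return duck;
};

var daffy = createDuck('Daffy');
console.log(daffy.walk());   // 'Daffy is walking'
console.log(daffy.swim());   // 'Daffy is swimming'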

Angus Croll is a master in explaining these concepts, so be sure to check out his post here.

Even the Java creator James Gosling says he'd do away with classes, or classical inheritance, if he could write the language again.
Inheritance can be an anti-pattern as it's tight coupling. Subclasses inherit everything no matter what. Prototypal inheritance is opt-in.
One of the Fluent Conference talks by Eric Elliott on why we should steer away from classical inheritance says the following:

Classical Inheritance is Obsolete
“Those who are unaware they are walking in darkness will never seek the light.” —Bruce Lee
In “Design Patterns”, the Gang of Four recommend two important principles of object oriented design:
1) Program to an interface, not an implementation.
2) Favour object composition over class inheritance.
In a sense, the second principle could follow from the first, because inheritance exposes the parent class to all child classes. The child classes are all programming to an implementation, not an interface. Classical inheritance breaks the principle of encapsulation, and tightly couples the child classes to its ancestors.
Why is the seminal work on Object Oriented design so distinctly anti-inheritance? Because inheritance causes several problems:
Tight coupling. Inheritance is the tightest coupling available in OO design. Descendant classes have an intimate knowledge of their ancestor classes.
Inflexible hierarchies. Single parent hierarchies are rarely capable of describing all possible use cases. Eventually, all hierarchies are “wrong” for new uses—a problem that necessitates code duplication.
Complicated multiple inheritance. It’s often desirable to inherit from more than one parent. That process is inordinately complex and its implementation is inconsistent with the process for single inheritance, which makes it harder to read and understand.
Brittle architecture. Because of tight coupling, it’s often difficult to refactor a class with the “wrong” design, because much existing functionality depends on the existing design.
The Gorilla / Banana problem. Often there are parts of the parent that you don’t want to inherit. Subclassing allows you to override properties from the parent, but it doesn’t allow you to select which properties you want to inherit.

Additional Sources:
ECMA-262, Edition 5.1
JavaScript: The Good Parts by Douglas Crockford
JavaScript: The Definitive Guide by David Flanagan
Eric Elliott's talk at the Fluent Conference, May 28-30, 2013

Ideas for more effective meetings and presentations

June 22, 2013

I've been reading a book lately called Fearless Change. A very insightful friend of mine, Charles Bradley, introduced me to it.


It’s got some great ideas in Chapter 5 “Meetings and More” that I’m looking to adopt and use. There are a bunch of useful patterns that can be applied to all sorts of meetings, presentations etc. Here are some of the patterns discussed that stood out and my interpretation of them, mixed in with some of my own ideas and experiences.

Piggyback

Utilise existing resources, energy, practices, processes, events and/or momentum to host or leverage what you want to get across to the people receiving your idea. Market your idea as an add-on or improvement to a process or practice that already exists. Piggybacking your idea on top of something existing and possibly successful makes it far more likely to be accepted than if your idea is something entirely new, to be greeted with caution and suspicion. This is often useful for getting around an organisation's red tape.

Brown Bag

Most people will be familiar with this one. Most of us in the IT industry are very busy and struggle to find time to attend optional meetings where we may learn something. This is about holding your meeting in the middle of the day when people are often eating their lunch, which is often a good time to add an event where they can listen while they're eating. It's not always accepted though, so it's a good idea to test the water and check whether people will be interested in this type of event.

Do Food

Research has shown that we become fonder of people and things we experience while we are eating. Food does a great job of binding people together and increasing the feeling of group membership.
Eating together is something the human race has done since the beginning of time and is interpreted as a sign of friendship. Even mentioning that food and/or beverages will be available at an event will just about always draw more people. It also shows that someone is prepared to spend money on the food, which shows a giving attitude, i.e. putting your money where your mouth is.
Generally when people find out that there will be food at an event, the level of excitement increases. The food doesn't have to be elaborate or expensive either. Be aware of people that may be on diets or otherwise struggle with this idea, and be considerate. Offer healthy alternatives.

Right Time

Considering the time of day that you hold your event can determine how much people remember and how engaged they are. People are busy and have deadlines; investigate and try to work around these to obtain maximum attendance and interest. A good window is after projects have been delivered and the stress is off a little, before the next project ramps up. Of course you're never going to get the perfect time for everyone, so just find a slot that's most suitable to the greatest number of people likely wanting to attend. Try not to spring dates and times on people with little advance warning. Remind attendees several times as the meeting approaches.

Plant the Seeds

Bring reading material to your meeting. People like to have something to flick through while they are listening, and at least one more sense is engaged (touch as well as visual). The more senses we can engage, the higher the chance of remembrance. Leave the materials in an easily accessible place so people don't feel uncomfortable picking them up, or hand them out. You may bring some photocopies of the information you're intending to get across that people can take away with them. This will also increase the chance of adoption and remembrance. It'll also increase discussions after the meeting.
There is often a sense of obligation to somehow repay your kindness when given a token gesture. You could also bring a collection of books you have on the subject matter that people can flick through while you're presenting. People are also often persuaded by mass media materials; they provide validation of your idea and reinforce it.
If you have specific reading you want people to go through, don't just provide links (although links are important as they save people typing out printed URLs), but also provide the printed material in leaflet form.
Make sure your presentation is easily accessible. Make sure any such material that has your name on it is prominently displayed. Make sure your audience knows you're available afterwards for questions, and encourage them. What I thought was really interesting is that often seeds sown by reading materials will only stick with a small number of people, but amongst that small number may be key people for connecting with other like-minded individuals and propagating your idea.

External Validation

Associating supporting information from alternative sources validates what you're trying to get across. Bringing in sources from respected authorities or gurus that support your idea will significantly increase the chance of your idea being welcomed and accepted.
People love success stories. If you can associate your idea with that success, your chances are increased. Initially, external sources are the most powerful. If you can present in a venue that your colleagues associate with or respect, your chances will be improved.
Getting a guru that people respect to publicly affirm your idea will work wonders, as will getting a leader in your organisation who agrees with your idea to publicly voice their acceptance. If you can show a publication that references something you've done, you get kudos.

Next Steps

Often when we attend a training session we come away wondering how we're going to put into practice what we just heard.
Work out a plan of attack for what attendees can do once they've left the meeting. How are they going to apply your idea? Provide practical next steps, but don't overload them. What are they going to do when they get stuck?
Near the end of your meeting, invite everyone to join in a brainstorming session on how they can actually put into practice what's been learnt. When people actively join in and provide their own ideas on what they have learnt, they will create a loose plan that will help get them started, which they'll be able to take away.
Resist telling your attendees what they need to do. Rather, help lead them to create their own plan. They will know more about their specific areas of need than you do. Facilitate and empower the group to discover their own plan. Create a list of ideas and action items, just like you would if you were leading a retrospective, and post the list to all attendees as a reminder. Encourage your attendees to go over the information provided again within 24 hours and again within the next 7 days to help them remember.

e-Forum

Provide a bulletin board for people interested in the subject to communicate, share and bounce their ideas off each other. This is good for those that are not able to attend the event but still want to be part of what's happening, and it provides a flexible meeting place. You, as the person promoting the idea, can keep all interested parties up to date on what's happening and how things are progressing. Setting up a mailing list will help you understand who is on board and to what degree. Consider a Facebook event for short-lived discussions, or a page, or even encourage people to discuss the idea in the comments section of your blog. Maybe a Google+ page? You're going to have to maintain it of course.

Stay in Touch

Catch up with the people that show interest in your idea/s. Take a genuine interest in what they do, their related troubles and concerns. Unless you understand the problems they face on a regular basis, how are you going to help solve them, or at least die trying? Build relationships with those you intend to help. You must know them in order to provide useful answers. Maintain relationships with your key supporters and people that have lots of other related connections. Don't lose sight of the fact that the people you're attempting to help have feelings too and like to be heard.

Group Identity

Assigning an identity (a name or code name) to a project helps people and organisations realise its existence. Patterns are named… why? So that when the name is mentioned, those that are familiar with the pattern know exactly what you're talking about. A pattern can paint a thousand words, or lines of code for that matter.
Meetings held at a regular interval have their own identity; assigning a name makes them easier to reference in conversation. An identity also gives the appearance of effort being asserted, which helps build energy and momentum. Adding a mailing list or e-Forum helps generate a Group Identity.

Not mentioned in the book

Make things interesting and memorable. The more weird and out of the ordinary you can be with the ideas you’re trying to get across, the more likely your participants are to remember. For example, I did a presentation a while ago where I brought some carpentry tools along and some building materials. I asked for a volunteer to help with my demonstration and everyone was very interested in my demo. It’s these sorts of things that people remember.

I’ll probably update this as time goes on and I learn more and find things that work. If you’ve got other interesting ideas you’ve found that improve these sorts of events and/or thoughts about what’s been mentioned here, please comment.

By Kim Carter

Reassembly of the Eee PC 901

June 8, 2013

This is a follow-on from "Upgrade Linux Eee PC 901 4GB SSD".

As usual, this is just the previous section but in reverse. Hopefully you won’t have any screws left when it’s back together.

The reassembly is just as hard, if not harder. You have to be as careful as possible not to put your statically charged body parts all over the circuitry of the Eee PC.
I found that the screw holes on the new SSD didn't line up with the holes on the motherboard. So if you have a very small round file you can file them bigger, or with some very small side cutters, clip them bigger. That's what I did, as I didn't have a file small enough to get through the holes. Of course the screw heads then wouldn't hold the SSD down any more. Zip ties to the rescue.

Now when you put the motherboard on the plastic base, you need to plug the cable for the bottom row of LEDs back in, along with the CPU fan (you can see these in my previous post). Then put the top two screws back in.

The next step is to plug the touch pad (mouse) cable into its socket (highlighted by the red box below). This is really fiddly.

Plugging the touch pad in

Then plug the cable that takes the signal from the 4 top buttons above the keyboard into its socket.

Next is to plug the keyboard ribbon back into its socket. I found the method of least friction was to go through the top piece of sheet metal. What I did here was to hold the ribbon down flat, keep a little pressure on top of the ribbon directed into the socket, and with long nose pliers push on each side in sequence until the ribbon was all the way in. Again, this is quite fiddly.

Plugging the keyboard ribbon in

Put the top half of the plastic chassis back on in reverse to how it was removed… top first, then work your way down the sides. Put the six screws back into the panel under the keyboard. Flip the device, install the 16GB SSD and the RAM module. Put all the screws back in and that should be it.

