Archive for the ‘Scrum’ Category

Holistic Info-Sec for Web Developers

July 24, 2015

Quick update: Fascicle 0 is now considered Done. Available as an ebook on LeanPub and hard copy on Amazon.

Holistic InfoSec for Web Developers

 

Fascicle 1 is now content complete.

Most of my spare energy will be going into my new book for a while. I’m going to be tweeting as I write it, so please follow @binarymist. You can also keep up with my release notes at github, and discuss progress, or what you would find helpful as a web developer with a focus on information security, where it’s all happening.

HolisticInfoSecForWebDevelopers

I’ve split the book up into three fascicles to allow the content to be released sooner.

 

Automating Specification by Example for .NET Web Applications

February 22, 2014

If you or your organisation:

  1. are/is constrained to running your .NET tests (unit, acceptance) on-site rather than in the cloud
  2. would like some guidance on how to set up Continuous Integration

read on.

Introduction

Purpose

Remember, an acceptance test system as a tool is only as good as the specification provided by its humans. The most important ingredient, therefore, is the relationships between the people creating the tests and the interactions performed by those people. Or, as the Agile Manifesto states: value “Individuals and interactions over processes and tools”. In order for an acceptance test system to be successful, the relationships of the Developers creating the increment and the interactions between them and the stakeholders must be in good shape first. Once this is in order, you can take the next step and find some tools that will assist in creating working software that does what the stakeholders want it to do.

It’s my intention that the following details will help you to create a system that automates “Specification by Example”.

The purpose of providing an automated Specification by Example implementation, a.k.a. an Automated Acceptance Test System, is clearly explained here.

Do not fall into the trap of inverting the test triangle. Instead invest where it matters.

Scope

Create a system that can be triggered from

  1. Every developer’s workstation
  2. A build on the build machine, preferably from a best-of-breed build tool. TFS is not a best-of-breed build tool, and if you want to get serious about Continuous Integration (CI), nightly builds and continuous deployment, I’d recommend not going down the TFS path. Even Microsoft uses Git. Doesn’t that tell you something? Do you see TFS here? Last time I evaluated build tools, Jenkins (previously named Hudson) came out on top.

jenkins

The system will include

  1. An acceptance test framework that will run all the acceptance tests
  2. A Unit test framework. UI tests need to be run in parallel on a collection of VMs (see the section on supported browsers for why). There are three immediately obvious approaches we could take here.
    1. We could try and rely on a unit test framework to distribute the tests. MSTest 2012 doesn’t provide the ability to run tests in parallel, but 2010 does. In order to have 2012 run tests in parallel, you can force it to use the older 2010 test settings file. Only a maximum of 5 tests can be run concurrently though. Not a great option, considering it’s not going to be supported going forward.
    2. My ParallelBrowser. If this link is not active and you’re interested in this, contact me.
    3. PNUnit. An example of how this works is here under the “PNunit Framework for writing selenium test cases” heading. I wrote the ParallelBrowser before Selenium had good support for running the same tests on multiple supported browsers. Both my ParallelBrowser and this option are reasonable options, but I’d go for the latter now. This way someone else can maintain the parallel aspect, as unless people are interested in ParallelBrowser I won’t be doing any further work on it.
  3. A Web User Interface Test Framework that will be driven by the acceptance test framework. Selenium in this case.
  4. A set of tests that run Selenium tests. These will of course need to be thread-safe (see the sketch just after this list).
  5. As per the Supported Browsers section, a collection of VMs with our supported browsers installed.
    1. Each with a standalone Selenium Server set up with a role of webdriver. Details further on.
  6. A standalone Selenium Server set up with a role of hub
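To make point 4 concrete, here’s a minimal sketch (C#) of one way to keep a WebDriver instance per test thread, pointed at the grid hub. The hub address and the choice of Firefox capabilities are assumptions, not part of the original setup; substitute whatever your grid actually runs.

// Minimal sketch: one WebDriver per test thread, aimed at the grid hub.
using System;
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

public static class DriverPerThread {
    private static readonly ThreadLocal<IWebDriver> _driver =
        new ThreadLocal<IWebDriver>(() =>
            new RemoteWebDriver(
                new Uri("http://selenium-hub:4444/wd/hub"), // hypothetical hub address
                DesiredCapabilities.Firefox()));

    public static IWebDriver Current {
        get { return _driver.Value; }
    }

    public static void Quit() {
        if (_driver.IsValueCreated) {
            _driver.Value.Quit();
        }
    }
}

Each test (or PNUnit worker) grabs DriverPerThread.Current, so parallel tests never share a browser session.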

High Level Flow

Many organisations bound to .NET seem to be locked into using sub-standard tooling like TFS for their build. If you are in this predicament and cannot break free, I’d suggest that once all the unit and integration tests have run, you have the build kick off a psake script to:

  1. Clean out the existing target web app
  2. Deploy the newly built and tested web app
  3. Drop the database
  4. Create database by using latest DDL and DML scripts pulled from source control
  5. Apply any specific configurations
  6. Stop and start the target web server
  7. Run the acceptance tests which will include any Web UI tests.

If it’s within your power to choose a real CI Tool to run in-house, there are a handful of very solid contenders. A good proportion of which are free and open source.

Audience

Whoever is setting up the system; often a developer or two. It’s important to make sure more than one person knows how it all hangs together, otherwise you have a single point of failure.

Chosen Tools

Evaluation Criteria I used

  • Who is the creator? I favour teams rather than individuals, as individuals move on, often leaving projects stranded.
  • Does it do what you need it to do?
  • Does it suit the way you and your team want to work?
  • Does it integrate well with all of your other chosen components? This is based on communicating with those that have used the offerings more so than using Proof Of Concepts (POC).
  • Works with the versions of dependencies you currently use.
  • Cost in money. Is it free? Are there catches once you get further down the road? Usually open source projects are offered as is, with no catches.
  • Cost in time. Is the set-up painful? Customisation feedback? Upgrade feedback?
  • How well does it appear to be supported? What do the users say?
  • Documentation. Is there any / much? What is its quality?
  • Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have valid reasons)?
  • Release schedule. How often are releases being made? When was the last release?
  • Intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criteria.

Acceptance Test Framework

The following offerings are all free and open source.

If you’re not using User Stories and/or Test Conditions, the context/specification offerings provide greater flexibility than the xBehave style frameworks. As most Scrum teams use User Stories for their Product Backlog items and drive their acceptance tests with test conditions, the xBehave offerings are a great choice. In saying that, there is probably no reason why both couldn’t be used where it makes sense to do so. In this section I’ve provided the results of evaluating the current xSpec and xBehave offerings for .NET, ordered best first within each category.

xBehave (test conditions)

SpecFlow

specflow

  • Sourcecode: https://github.com/techtalk/SpecFlow/
  • Age: Over 4 years
  • Actively maintained: Yes
  • Large number of active committers
  • Community: Lively
  • Visual Studio plug-in has been downloaded 70 times as often as NBehave’s
  • Documentation: Excellent
  • Integrates well with Selenium (I’ve set up a couple of systems using SpecFlow and it’s been a joy to work with; a minimal binding sketch follows at the end of this section). The stakeholders loved the visibility it provided too. I discussed it here in a recent presentation.
NBehave
  • Not a lot of activity
  • Only two committers
StoryQ
  • Only two coordinators
  • Well established framework

xSpec (context/specification)

Machine.Specification (MSpec)
NSpec
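To give a feel for the xBehave style, here’s a minimal SpecFlow binding sketch. The scenario, URL and element name are hypothetical, and the driver comes from the thread-per-driver sketch earlier in this post; a real step class would use your own page objects and assertions.

// Hypothetical Gherkin scenario, shown as a comment for context:
//   Scenario: Customer searches the product catalogue
//     Given I am on the search page
//     When I search for "widgets"
//     Then I should see results for "widgets"
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps {
    private readonly IWebDriver _driver = DriverPerThread.Current; // from the earlier sketch

    [Given(@"I am on the search page")]
    public void GivenIAmOnTheSearchPage() {
        _driver.Navigate().GoToUrl("http://localhost/search"); // assumed URL
    }

    [When(@"I search for ""(.*)""")]
    public void WhenISearchFor(string term) {
        var box = _driver.FindElement(By.Name("q")); // assumed element name
        box.SendKeys(term);
        box.Submit();
    }

    [Then(@"I should see results for ""(.*)""")]
    public void ThenIShouldSeeResultsFor(string term) {
        // A real step would assert against the results list; kept deliberately trivial here.
        if (!_driver.PageSource.Contains(term)) {
            throw new System.Exception("Expected results for: " + term);
        }
    }
}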

Web User Interface Test Framework

selenium

For me, when I look at this category of tools for .NET, Selenium is always at the top and it just keeps getting better. If anyone has any questions around Selenium, feel free to contact me or leave a comment on this post. I can’t guarantee I’ll have the answer, but I’ll try. All the documentation can be found here. I would recommend installing the Selenium IDE for initially recording tests, and be sure to check out the IDE plug-ins. All the documentation you’ll need for the IDE is here. Once you get familiar with the code it generates, you will not use it much. I would recommend using the newer WebDriver API rather than the Selenium Server by itself. The user group is very active and looks like a good place to ask questions too, although I haven’t needed to, as there is a huge amount of documentation that’s great.

The tools I would use are detailed here. Specifically we would be using

  1. Selenium 2 (aka WebDriver)
  2. The IDE for recording tests initially
  3. Selenium Server, which is used by WebDriver and RC (now considered legacy), and now includes built-in grid capabilities.

Supported Browsers

What I’ve done in the past is spread our supported browser versions across a collection of VMs: each VM has all the vendors’ browsers installed, but only a single version of each, obviously.

Mid Level Flow

These are the same points listed above under “High Level Flow”.

1. Build Kicks off PSake Script

psake

The choice to use PSake over the likes of NAnt, Rake and the other build scripting languages is reasonably straightforward for me. PSake (a PowerShell build scripting language) gives us access to the full .NET environment. NAnt, with all its angle brackets, was never a very nice scripting language to use. Rake is excellent and a possible option if you have Ruby installed. If you don’t, why install it if you have .NET? There are many resources for PowerShell on the inter-webs. The wiki for PSake is good.

In the case where you may have a TFS build run, I would suggest that once all the unit tests and integration tests have run, the build kicks off a (possibly pre-build and post-build) psake script to perform the following operations. This is how you do this. Oh, before you try to actually run a PSake script, download and import the module, or install the NuGet package. So once you have your PSake scripts running, just start adding PowerShell scripts to do the following work. PSake is just syntactic sugar around PowerShell, so anything you can do with PS, you can do with PSake.

2. Clean out the existing target web application

Using your PSake script, use the Web Deploy cmdlets. You will find everything you need here for it. You can also install the NuGet package.

3. Deploy the newly built and unit tested web application

As above, just use the Web Deploy cmdlets.

4. Drop the database

As above, just use the Web Deploy cmdlets.

5. Create database by using latest DDL and DML scripts pulled from source control

Database update via Application

Kind of related, but not specific to CI.

Depending on your needs, there are quite a few ways you could do this.

One way of doing this is to have your application utilise a library that determines which version of the database the application needs and be able to update the database accordingly. This library would use similar or the same upgrade scripts that we would use in this test process.

Your applications should create the database (if it doesn’t exist) and update it on run. So all the DDL and DML code per database lives in a library. Each application that uses a specific database references that database’s DDL code library. Script all stored procedures, views, functions and triggers so they’re recreated as part of a deployment script.

When the application is deployed, and the database created or updated, anything that must be there for the application to run out of the box should be part of the scripts, and of course versioned. This includes the part of our data that is constant or configuration data, plus tables, stored procedures, views, functions and triggers. For the variable part of your data, you will need a synthetic data generation plan for testing.
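To make the idea a little more concrete, here’s a rough sketch of the kind of upgrade runner such a library might contain, assuming SQL Server, a hypothetical SchemaVersion table and a numbered-script naming convention. A real implementation (or an existing migration library) would also handle GO batch separators, transactions and error reporting.

// Rough sketch only: run any scripts newer than the database's current version, in order.
// Assumes a SchemaVersion table and scripts named 0001_CreateCustomer.sql, 0002_AddIndex.sql, ...
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class DatabaseUpgrader {
    public static void Upgrade(string connectionString, string scriptsDirectory) {
        using (var connection = new SqlConnection(connectionString)) {
            connection.Open();

            int currentVersion;
            using (var query = new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", connection)) {
                currentVersion = (int)query.ExecuteScalar();
            }

            IEnumerable<string> pending = Directory
                .GetFiles(scriptsDirectory, "*.sql")
                .OrderBy(path => path)
                .Where(path => ScriptVersion(path) > currentVersion);

            foreach (string path in pending) {
                // Note: SqlCommand does not understand the GO batch separator;
                // a real runner would split batches or use SMO.
                using (var upgrade = new SqlCommand(File.ReadAllText(path), connection)) {
                    upgrade.ExecuteNonQuery();
                }
                using (var record = new SqlCommand(
                    "INSERT INTO SchemaVersion (Version) VALUES (@version)", connection)) {
                    record.Parameters.AddWithValue("@version", ScriptVersion(path));
                    record.ExecuteNonQuery();
                }
            }
        }
    }

    private static int ScriptVersion(string path) {
        // The leading digits of the file name are treated as the script's version number.
        return int.Parse(Path.GetFileName(path).Split('_')[0]);
    }
}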

Database Process for Versioning

Also related, but not specific to CI.

DBA, Devs, Product Owner and consultants must be aware of the process.

When any schema, constant data, configuration data, test data is updated… the (version controlled) scripts must also be updated, else the updates will get overwritten.

As part of the nightly build, if you’re supporting multiple versions of your application, you could also hydrate the collection of database versions, then run the appropriate upgrade scripts against each one, to verify the upgrades work. If any don’t, the build fails.

Create a set of well-defined processes that:

  1. In most cases, looks after itself
  2. Upgrades existing databases if they are not on the latest version, to the latest version
  3. Creates databases for those applications that don’t have a database
  4. Informs the user on deployment if the database is corrupt, or cannot be upgraded
  5. Outlines who is responsible for, and who may update the DDL and DML scripts for your projects
  6. Clearly documents that any changes made to any databases by unauthorised personnel will more than likely be overwritten.

A User Story for this might look something like the following:

As the team, we need to create a set of well-defined processes that clearly outline what is required in regards to setting up the development team’s database versioning, creation and upgrade systems, processes and strategy for our organisation’s databases, so that all team personnel are aware of the benefits and dangers of making changes to the databases, and understand the change process.

Possibly useful tools

1. DB Ghost
2. http://www.red-gate.com/products/sql-development/sql-source-control/index-2
3. http://www.sqlaccessories.com/SQL_Data_Examiner/

6. Apply any specific configurations

As above, just use the Web Deploy cmdlets.

7. Stop and start the target web server

As above, just use the Web Deploy cmdlets.

8. Run the acceptance tests which will include any Web UI tests

As above, just use the Web Deploy cmdlets.

  1. Start each VM that hosts a set of browsers you want to use to farm your tests out to. From memory, you do not need to start each browser. There are of course many ways to do this. PowerShell provides the Start-VM and Stop-VM cmdlets; these would be my first options.
  2. Start the selenium standalone server. All details found here. Or just work through the “Distributed Testing with Selenium Grid” chapter until you get to the “Creating and executing Selenium script in parallel with TestNG” heading, at which point switch to this documentation to replace TestNG with PNUnit.

If I’ve failed to explain anything in enough detail for you, drop me a message below and I’ll do my best to help 🙂

Essentials for Creating and Maintaining a High Performance Development Team

January 25, 2014

How and Why Many Software Development Shops Fail

What I see a lot of is organisations hiring code monkeys rather than professionals. Either they hire:

  • the cheapest talent they can get their hands on. Now they want the best, but how much they have to pay the developer is the most important factor to them.

or

  • the person that completes feature implementations as fast as possible (sometimes known or thought of as a rock star). Often these are young developers without the experience that causes the more Professional Developers to slow down a bit and think tasks through a little more.

Now, both approaches are short sighted. They hire code monkeys rather than professionals. Code monkeys write code fast and incur technical debt that is hidden at first, but over time slows the Development Team down until it can barely move.

The scenario

Code Monkey finishes his task much faster than Professional Developer.

code monkey

Code Monkey is solely focused on completing his task as fast as possible. He cuts some code and declares the task done. Professional Developer thinks the problem through, does a little research to satisfy himself that his proposed approach is in fact the most appropriate approach for the problem. He organises a test condition workshop which solidifies requirements and drives out design defects via active stakeholder participation. He drives his low-level design with TDD. He makes sure he follows the coding standards, thus making future maintenance of his code easier, as it’s much easier to read. He asks for a pair to review his code, or perhaps requests a fellow team member to sit with him and pair programme for a bit on some complex areas of the code base. He makes sure his code is being run in the continuous integration suite, that his acceptance tests (which have been driving his feature) are passing and that the (security) regression tests are not regressing. He checks that his work complies with the Definition of Done. You do have a Definition of Done, right?

Now, what the Product Owner or software development manager often fails to understand is that it’s the slow (Professional) developer that is creating code that can be maintained and extended at a sustainable pace. Professional Developer is investing time and effort into creating a better quality of code than the developer (Code Monkey) that appears to be producing code faster. The Product Owner and/or manager don’t necessarily see this, in which case Code Monkey clearly looks to be the superior developer, right (the rock star)? What also often happens is that Code Monkey rides on Professional Developer’s quality and adds his spaghetti code on top, thus making Code Monkey look like a god.

The Product Owner sees output immediately by Code Monkey that “appears” to be working very fast. He doesn’t see the quality being created by Professional Developer that “appears” to be working slower.

Time goes by. Sprint Review rolls around. The stakeholders love the new features that have been implemented and now want some additions and refinements. They ask the Product Owner to add some more User Stories to the Product Backlog. The Development Team pull these stories into a Sprint and start work. New functionality is added on top of the code that Professional Developer wrote previously. New functionality is added on top of the code that Code Monkey wrote previously.

Sprint Review rolls around again and the stakeholders are happy with the new features that have been added on top of Professional Developer’s code. Of course they have no idea that the underlying code was created by Professional Developer. Now the stakeholders have been using the software that had the new features added on top of Code Monkey’s spaghetti code, and they are starting to notice other areas of the application that are no longer behaving the way they are supposed to. This continues to happen and the stakeholders are oblivious to the fact that it’s due to the code that Code Monkey is writing. They still think he’s a rock star because he appears to pump out code so fast.

So… while Professional Developer seems to be slowing the Team down and Code Monkey seems simply amazing because he delivers his features so much faster, the actual truth is exactly the opposite. Professional Developer is creating SOLID code and running at a pace that’s sustainable (a key principle of the agile manifesto).

The code that Professional Developer wrote is easier to modify and extend as its design is superior, due to being well thought out and driven by tests. His code satisfies the acceptance tests, so everyone knows it meets the living specification. It’s faster to add changes to his code because it’s easier to read and there are fewer surprises. If any other team member changes Professional Developer’s code in a way that no longer conforms to the specification, the Acceptance Tests around his code fail, thus providing instant feedback to the developer making the change.

It’s the practices of Professional Developer that:

  1. provide the entire Development Team assurance that the software satisfies the requirements of the specification at all times.
  2. allow The Development Team to run at a sustainable pace.
  3. provide confidence in ongoing future estimations due to fewer surprises.
  4. produce code that everyone wants to work with.
  5. produce less error prone software that does what it says it will do on the box.

So… next time you as a Product Owner, Manager, or person responsible for hiring are looking for talent, be very careful what you’re measuring. Don’t favour speed over excellent attitude. I created some ideas on what to look for in a Professional Developer here.

Scrum Teams can Fail Too

Velocity of the Development Team starts high then declines. Often it’s hard for people (including the Product Owner) to pin-point why this is happening. The Scrum Team may have started out delivering at a consistently high cadence. They appeared to be really on fire.

The code base is small but growing fast. As it starts to get larger, the Development Team starts to feel the weight of a lot of code that’s been hacked together in a rush. This causes the team’s ability to release software fast to wane. A Scrum Team can get to this point quite fast, as they are a high performance team. When you get to this point, almost every change to the code base is hard. Make one change and something else fails. Routines are hundreds of lines long. Developers have to understand hundreds of lines of code in order to make a small change. Names are not as meaningful as they should be. Routines have multiple levels of abstraction, so multiple levels of code need to be understood to make a single change. Inheritance is overused, thus creating unnecessarily tight coupling. There are many aspects of the code that have become terrible to work with.

How does this happen?

How does the Product Owner know that the quality of code being created is not good? The Product Owner isn’t generally a developer, so doesn’t know; and even if he is, he’s not in the code day in, day out. Also, code quality is not generally the most important concern of the Product Owner; getting new functionality out the door is, so that is what the Development Team are rewarded for. When they pass a Sprint, the Product Owner is happy and praises the Development Team. The Product Owner has no idea that the quality of code is not as good as it needs to be to sustain a code base that is easy to extend.

So the Development Team does whatever it needs right now, to make sure they deliver right now (the current Sprint). Quality becomes secondary, because no one is rewarding them for it. This is a lack of professionalism on the Development Team’s part. Bear in mind, though, that each developer is competing against every other to appear as though they have produced the most. After all, that’s what they get rewarded for. Often what this means is that they are working too fast and not thinking enough about what they are doing, so the quality of the code base is deteriorating, like in the example above with Code Monkey.

I’ve personally seen this on close to 100% of all non Scrum projects I’ve been involved with. Scrum Teams are sometimes better off because they have other practices in place that ensure quality remains high, but these practices are not prescribed by Scrum.

So… What do we do?

We not only reward the Development Team for delivering features fast but we also reward them for the sort of practices that Professional Developer (from our example above) performs.

How do we do this?

We add the practices that Professional Developer from above performs to our Definition of Done.

The Product Owner runs through the Acceptance Criteria, which should be included on every Product Backlog item, (preferably during the Sprint) indicating acceptance. He also runs through the Definition of Done, querying the Development Team on whether each point has in fact been done for the Backlog item in question. This of course should be done by the developers themselves first. This provides the Team with confidence that the Sprint Backlog item is actually complete. Essentially, the work is not Done until all the Acceptance Criteria points and Definition of Done points are checked off. This way the Development Team is being rewarded for delivering fast and also for delivering high quality features that do what the stakeholders expect them to do. No nasty surprises.

On top of what our solo Professional Developer did above, we should:

  1. Measure test speed and reward fast running tests
  2. Measure cyclomatic Complexity
  3. Run static code analysis and use productivity enhancing tools. This is not cheating; it’s allowing the developers to work faster and create cleaner code. This can even be set up as pre-commit hooks, etc., on source control.
  4. Code reviews need to be based on the coding standards and guidelines.
  5. Encourage developers to commit regularly, thus their code is being run against the entire test suite, providing confidence that their code plays nicely with everyone else’s code. Commit frequency can be measured.
  6. The Development Team should shame developers when they break the CI build. Report on how long builds stay broken for, and shame when the duration is longer than an agreed-on time.
  7. Most of these practices can be added to the Definition of Done; this way developers can and should be rewarded for doing these activities. Even better, you can automate most of these practices.

Software Engineer Interview Quick Question Set

May 11, 2013

Ice breakers

  • Tell us a little bit about yourself and what drives you?
  • Ask a question from their CV that is positive, ‘what was your greatest success in your current or last role’
  • What’s your ideal job?
  • Can you give us one thing you really enjoyed in your last job?
  • What about one thing that you didn’t enjoy as much?
    How did you solve that?

Testing

  • How can you implement unit testing when there are dependencies between a business layer and a data layer, or the presentation layer and the business layer?
  • The development team is getting near release date. They start saying things like, we’re going to need a sprint to test. What would your reaction be?

Maintenance

  • What measures have you taken to make your software products more easily maintainable?
  • What is the most expensive part of the SDLC?
    (hint: reading others code)

Design and architecture

  • Can you explain some design patterns, and where you have used them?

Scrum

  • Have you used scrum before? (If the answer is no, move on)
  • If you were taken on as a team member and the team was failing Sprint after Sprint, what would you do?
  • What would you do if you were part of a Scrum Team and your manager asked you to do a piece of work not in the Scrum Backlog?
    (hint: manager needs to consult PO. Something has to be removed from Sprint backlog in order for something to be added)

Construction

  • When do you use an abstract class and when do you use an interface?
  • How do you make sure that your code is both safe and fast?
  • Can you describe the process you use for writing a piece of code, from requirements to delivery?

Software engineering questions

  • What are the benefits and drawbacks of Object Oriented Design?
    (hint: polymorphism inheritance encapsulation)
  • What books have you read on software engineering that you thought were good?
  • Explain the terms YAGNI, DRY, SOLID?
    (hint: You Ain’t Gonna Need It. Build what you need as you need it, aggressively refactoring as you go along; don’t spend a lot of time planning for grandiose, unknown future scenarios. Good software can evolve into what it will ultimately become. Every piece of code is code we have to test. If the code is not needed, why are we spending time on it?)

Functional design questions

  • Which controls would you use when a user must select multiple items from a big list, in a minimal amount of space?
  • How would you design editing twenty fields for a list of 10 items? And editing 3 fields for a list of 1000 items?

Specific technical requirements

  • When, where and how do you optimize code?

Web questions

  • How would you mitigate SQL injection?
    (hint: looking for multi layered sanitisation. parameterised SQL. Least privileged account for data access)
  • Have you used XSS and can you provide us an example?
  • What JavaScript libraries have you used?
  • What are some of the irritating limitations of CSS?
  • How would you remove the ASP.NET_SessionId cookie from an MVC controller’s Response?
    (hint: Response.Cookies["ASP.NET_SessionId"].Expires = DateTime.Now;)

JavaScript

  • How does JavaScript implement inheritance?
    (hint: via Object’s prototype property)

Service Oriented

  • What are the 3 things a WCF end point must have, or what is the ABC of a WCF service?
    (hint:
    Address – where the WCF service is hosted.
    Binding – that specifies the protocol and its myriad of options.
    Contract – service contract defines what service operations are available to the client for consumption.
    )

C# / .Net questions

  • What’s the difference between public, private, protected and internal modifiers?
  • What are the main differences between the .NET 2.0 and 4.0 garbage collector?
    (hint: background GC was introduced)
  • Describe the different ways arguments can be passed in C#
    (hint: pass val by val, pass val by ref, pass ref by val, pass ref by ref)
  • We have a Base class, we have a child class that inherits BaseClass. Does the child class inherit the base class’s private members?
    (hint: this is normally good for a laugh)
  • Have you ever worked with a deadlock and how did it occur?
  • When should locks be used in concurrent programming?
    (hint:
    when synchronization cannot be performed in any other way. This is rare. With careful thought and planning, there is just about always a better way. There are many ways to synchronise without using locks. System.Threading.Interlocked class generally supported by the processor
    )
  • What are some of your favourite .NET features?

Finally, this question is from Google; can you quickly tell us something that we don’t know anything about? It can be anything.

Software Engineer Interview Process and Questions

April 27, 2013

A short time ago, I was tasked with finding the right software engineer/s for the organisation I was working for. I settled on a process, a set of background questions,  a set of practical programming exercises and a set of verbal questions. Later on I cut the set of verbal questions down to a quicker set. In this post, I’ll be going over the process and the full set of verbal questions. In a subsequent post I’ll go over the quicker set.

The Process

  1. We sent them an email with a series of questions.
    Technical and non-technical.
    They have two days to reply with answers.
    The programming exercises are not covered here.
    If they passed this…
  2. We would get them in for an interview.
    Technical and non-technical questions would be asked.
    They would be put on the spot and asked to speak to the development team about a technical subject that they were familiar with.
    The development team would quiz them on whatever comes to mind.
    Once the candidate had left, the development team would collaborate on what they thought of the candidate and whether or not they would be a good fit for the team.
    The team would take this feedback and discuss whether the candidate should be given a trial. 
    Step 2 could be broken into two parts, depending on how many questions, and of what intensity, you wanted to drill the candidate with.

The following set of tests will confirm whether the candidate satisfies the points we have asked for in the job description.

The non-functional (soft) qualities listed in the job ad would need to be kept in mind during the interview events.

Qualities such as:

  • Quality focus
  • Passion
  • Personality
  • Commitment to the organisation’s needs
  • A genuine sense of excitement about the technologies we work with

Email test

  1. Send Screening.pdf
  2. Send InterviewQuestions.doc

Now, with many of the following questions there is not necessarily a right or wrong answer. Many of them are just to gauge how the candidate thinks and whether or not they hold the right set of values.

Ice breakers

  • Would you like to be the team leader or team member?
  • Tell me about a conflict at a previous job and how you resolved it.
  • (Summary personality item: Think to yourself, “If we hire this person, would I want to spend four hours driving in a car with them?”)

Design and architecture

  • What’s the difference between TDD and BDD and why do they matter?
  • What is Technical Debt? How do you deal with it once you’re in it? How do you stay out of it?
  • How would you deal with a pair when reviewing their code, when they have not followed good design principles?
  • What would you do if a fellow team member reviewed your code and suggested you change something you had designed that followed good design principles, to something inferior?
  • Can you explain how the Composite pattern works and where you would use it?
  • Can you describe several class construction techniques?
    What are two design patterns that are focused on class construction, and how do they work?
    (hint: Builder, Factory Method).
  • How would you model the animal kingdom (with species and their behaviour) as a class system?
    (hint: GoF design pattern: Abstract Factory)
  • Can you name a number of non-functional (or quality) requirements?
  • What is your advice when a customer wants high performance, high usability and high security?
  • What is your advice when a customer wants high performance, Good design, Cheap?
    (hint: pick 2)
  • What do low coupling and high cohesion mean? What does the principle of encapsulation mean to you?
  • Can you think of some concurrency patterns?
    (hint: Asynchronous Results, Background Worker, Compare/Exchange pattern via Interlocked.CompareExchange)
  • How would you manage conflicts in a web application when different people are editing the same data?
  • Where would you use the Command pattern?
  • Do you know what a stateless business layer is? Where do long-running transactions fit into that picture?
    (hint: if you have long-running transactions, you are going to have to manage state somehow. How would you do this?)
  • What kinds of diagrams have you used in designing parts of an architecture, or a technical design?
  • Can you name the different tiers and responsibilities in an N-tier architecture?
    (hint: presentation, business, data)
  • Can you name different measures to guarantee correctness and robustness of data in an architecture?
    (hint: for example transactions, thread synchronisation)
  • What does the acronym ACID stand for in relation to transactions?
    (hint: atomicity, consistency, isolation, durability)
  • Can you name any differences between object-oriented design and component-based design?
    (hint: objects vs services or documents)
  • How would you model user authorization, user profiles and permissions in a database?
    (hint: Membership API)

Scrum questions

  • Have you used Scrum before? (If the answer is no, not much point in asking the rest of these questions).
  • If you were taken on as a team member and the team was failing Sprint after Sprint, what would you do?
  • What are the Scrum events and the purpose of them?
    (hint: Daily Scrum, Sprint Planning Meetings 1 & 2, Sprint Review and Sprint Retrospective)
  • What would you do if you were part of a Scrum Team and your manager asked you to do a piece of work not in the Scrum Backlog?
  • Who decides what Product Backlog Items should be pulled into a Sprint?
  • What is the DoD and what is it useful for?
  • Where and how do changing requirements fit into scrum?

Construction questions

  • How do you make sure that your code can handle different kinds of error situations?
    (hint: TDD, BDD, testing…)
  • How do you make sure that your code is both safe and fast?
  • When would you use polymorphism and when would you use delegates?
  • When would you use a class with static members and when would you use a Singleton class?
  • Can you name examples of anticipating changing requirements in your code?
  • Can you describe the process you use for writing a piece of code, from requirements to delivery?
  • Explain DI / IoC. Are there any differences between the two? If so, what are they?
    (hint: DI is one method of following the Dependency Inversion Principle (DIP) or IoC)
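If a whiteboard example helps with the DI question, a trivial constructor-injection sketch (hypothetical types) is usually enough to anchor the discussion:

// Constructor injection: the consumer depends on an abstraction, and the concrete
// implementation is supplied from outside (by hand here; typically by an IoC container).
public interface IMessageSender {
    void Send(string to, string body);
}

public class SmtpMessageSender : IMessageSender {
    public void Send(string to, string body) {
        // real SMTP work would go here
    }
}

public class OrderNotifier {
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender) { // the dependency is injected
        _sender = sender;
    }

    public void NotifyShipped(string customerEmail) {
        _sender.Send(customerEmail, "Your order has shipped.");
    }
}

// Composition root: var notifier = new OrderNotifier(new SmtpMessageSender());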

Software engineering skills

  • What is Object Oriented Design? What are the benefits and drawbacks?
    (hint: polymorphism inheritance encapsulation)
  • What is the role of interfaces in design?
  • What books have you read on software engineering that you thought were good?
  • What are important aspects of GUI design?
  • What Object Relational Mapping tools have you used?
  • What are the differences between Model-View-Controller, Model-View-Presenter and Model-View-ViewModel?
    Can you draw MVC and MVP?
    (hint: dotted lines are pub/sub)

MVC / M-V-VM

  • What is the difference between Mocks, Stubs, Fakes and Dummies?
    (hint:
    Mocks are objects pre-programmed with expectations which form a specification of the calls they are expected to receive. Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test.
    Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ‘sent’, or maybe only how many messages it ‘sent’.
    Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
    Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.)
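A tiny hand-rolled example of the email gateway stub mentioned in the hint (the class names, and the class under test, are hypothetical):

// A stub for a hypothetical email gateway: it records the messages it "sent"
// so a test can assert on them, but never talks to a real mail server.
using System.Collections.Generic;

public interface IEmailGateway {
    void Send(string to, string body);
}

public class EmailGatewayStub : IEmailGateway {
    public readonly List<string> SentTo = new List<string>();

    public void Send(string to, string body) {
        SentTo.Add(to); // remember who we "sent" to; no real email goes out
    }
}

// In a test (PasswordResetService is a hypothetical class under test):
// var gateway = new EmailGatewayStub();
// var sut = new PasswordResetService(gateway);
// sut.RequestReset("user@example.com");
// Assert.AreEqual(1, gateway.SentTo.Count);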
  • Describe the process you would take in setting up CI for our company?
  • What metrics, like cyclomatic complexity, do you think are important to track in code?

Relational Database

  • We’re going to design the new IMDB.
    On the whiteboard, what would the table that holds the movies look like?
    Every movie has actors, how would the Actors table look?
    Actors star in many movies, any adjustments?
    We need to track Characters also. Any adjustments to the schema?

Functional design questions

  • What are metaphors used for in functional design? Can you name some successful examples?
    (hint: Partial Function Application, Currying)
  • How can you reduce the user’s perception of waiting when some routines take a long time?
  • Which controls would you use when a user must select multiple items from a big list, in a minimal amount of space?
  • How would you design editing twenty fields for a list of 10 items? And editing 3 fields for a list of 1000 items?
  • Can you name some limitations of a web environment vs. a Windows environment?

Specific technical requirements

  • What software have you used for bug tracking and version control?
  • Which branching models have you used?
    (hint: No Branches, Release, Maintenance, Feature, Team)
  • What have you used for unit testing, integration testing, UA testing, UI testing?
  • What build tools are you familiar with?
    (hint: Nant, Make, Rake, PSake)

Web questions

  • Would you use a black list or white list? Why?
  • Can you explain XSS and how it works?
  • Can you explain CSRF? and how it works?
  • What is the difference between GET and POST in web forms? How do you decide which to use?
  • What do you know about HTTP?
    (hint: Application Layer of OSI model (layer 7), stateless)
  • What are the HTTP methods sometimes called verbs?
    (hint: there are 9 of them. HEAD, GET, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, PATCH)
  • How do you get the current users name from an MVC Controller?
    (hint: The controller has a User property which is of type IPrinciple which has an Identity property of type IIdentity, which has a Name property)
  • What JavaScript libraries have you used?
  • What is the advantage of using CSS?
  • What are some of the irritating limitations of CSS?

JavaScript questions

  • How does JavaScript implement inheritance?
    (hint: via Object’s prototype property)
  • What is the difference between "==" and "===", "!=" and "!=="?
    (hint: If the two operands are of the same type and have the same value, then “===” produces true and “!==” produces false. The evil twins do the right thing when the operands are of the same type, but if they are of different types, they attempt to coerce the values. The rules by which they do that are complicated and unmemorable.
    If you want to use "==", "!=" be sure you know how it works and test well.
    By default use “===” and “!==“. )
    These are some of the interesting cases:
'' == '0'          // false
0 == ''            // true
0 == '0'           // true
false == 'false'   // false
false == '0'       // true
false == undefined // false
false == null      // false
null == undefined  // true
' \t\r\n ' == 0    // true
  • On the whiteboard, could you show us how to create a function that takes an object and returns a child object?
if (typeof Object.create !== 'function') {
   Object.create = function (o) {
      var F = function () {};
      F.prototype = o;
      return new F();
   };
}
var child = Object.create(parent);
  • When is “this” bound to the global object?
    (hint: When the function being invoked is not the property of an object)
  • With the following code, how does myObject.pleaseSetValue set myObject.value?
var myObject = {
	value: 0
};

myObject.setValue = function () {
	var that = this; // don’t show this

	var pleaseSetValue = function () {
		that.value = 10; // don’t show this
	};
	pleaseSetValue ();
}
myObject.setValue();
document.writeln(myObject.value); // 10

Service Oriented questions

  • Can you think of any Advantages and Disadvantages in using SOA over an object oriented n-tier model?
  • What’s the simplest way to make a service call from within a web page and how many lines could you do this in?
  • What scales better, per-call services or per-session and why?
    (hint: maintaining service instances (maintaining state) in memory or any entities for that matter quickly blows out memory and other resources.)
  • What is REST’s primary objective?
  • How many ways can you create a WCF proxy?
    (hint:
    Add Service Reference via Visual Studio project
    Using svcutil.exe
    Create proxy on the fly with… new ChannelFactory<IMyContract>().CreateChannel();
    )
  • What do you need to turn on on the service in order to create a proxy?
    (hint: enable an HTTP-GET behaviour, or MEX endpoint)

C# / .Net questions

  • What’s the difference between public, private, protected and internal modifiers?
    Which ones can be used together?
  • What’s the difference between static and non-static methods?
  • What’s the most obvious difference in IL with static constructors?
    (hint: static method causes compiler to not mark type with beforefieldinit, thus giving lazy initialisation.)
  • How have you used Reflection?
  • What does the garbage collector clean up?
    (hint: managed resources, not unmanaged resources such as files, streams and handles)
  • Why would you implement the IDisposable interface?
    (hint: clean up resources deterministically. Clean up unmanaged resources.)
  • Where should the Dispose function be called from?
    (hint: the object’s finalizer)
  • Where is an object’s finalizer called from?
    (hint: the GC)
  • If you call an object’s Dispose method, what System method should you also make sure is called?
    (hint: System.GC.SuppressFinalize)
  • Why should System.GC.SuppressFinalize be called?
    (hint: finalization is expensive)
  • Are strings mutable or immutable?
    (hint: immutable)
  • What’s the most significant difference between structs and classes?
    (hint: struct : value type, class : reference type)
  • What are the other differences between structs and classes?
    (hint: structs don’t support inheritance (all value types are sealed) or finalizers)
    (hint: structs can have the same fields, methods, properties and operators)
    (hint: structs can implement interfaces)
  • Where are reference types stored? Where are value types stored?
    (hint:
    bit of a trick question. Ref on the heap, val on the stack (generally)
    The reference part of reference type local variables is stored on the stack.
    Value type local variables also on the stack.
    Content of reference type variables is stored on the heap.
    Member variables are stored on the heap.
    )
  • Where is the yield key word used?
    (hint: within an iterator)
  • What are some well known interfaces in the .NET library that iterators provide implementations for?
    (hint: IEnumerable<T> )
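A trivial iterator sketch, if the candidate wants something to point at (the Numbers class is purely illustrative):

// yield return builds the IEnumerable<int> lazily; the compiler generates the state machine.
using System.Collections.Generic;

public static class Numbers {
    public static IEnumerable<int> UpTo(int max) {
        for (int i = 1; i <= max; i++) {
            yield return i; // execution pauses here until the caller asks for the next item
        }
    }
}

// foreach (int n in Numbers.UpTo(3)) { ... } // yields 1, 2, 3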
  • Are static methods thread safe?
    (hint: a new stack frame is created with every method call. All local variables are safe… so long as they are not reference types being passed to another thread or being passed to another thread by ref.)
  • What is the TPL used for?
    (hint: a set of APIs in the System.Threading and System.Threading.Tasks namespaces, simplifying the process of adding parallelism and concurrency to applications.)
  • What rules would you consider when choosing a lock object?
    (hint: keep the scope as tight as possible (private), so other threads cannot change its value, thus causing the thread to block.
    Declare as readonly, as its value should not be changed.
    Must not be a value type.
    If the lock keyword is used on a value type, the compiler will report an error.
    If used with System.Threading.Monitor, an exception will occur at runtime, because Monitor.Exit receives a boxed copy of the original variable.
    Never lock on “this”.)
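A minimal sketch that follows those rules (the Account class and its balance are hypothetical):

// The lock object is private, readonly, a reference type, and never "this".
public class Account {
    private static readonly object _sync = new object();
    private static int _balance;

    public static void Deposit(int amount) {
        lock (_sync) {
            _balance += amount;
        }
    }
}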
  • Why would you declare a field as volatile?
    (hint: So that the order of the operations performed on the variable are not optimised to a different order.)
  • Are reads and writes to a long (System.Int64) atomic? Are reads and writes to a int (System.Int32) atomic?
    (hint: The runtime guarantees that a type whose size is no bigger than a native integer will not be read or written only partially. This is in the CLI spec and the C# 4.0 spec.)
  • When invoking a delegate instance, what’s a good way to make sure no other thread can set your delegate to null between when the null check is performed and when you invoke it?
    (hint:
    assigning the reference to the heap-allocated delegate to a stack-allocated local implements thread safety.
    Assign your delegate instance to a second local delegate variable.
    This ensures that if subscribers to your delegate instance are removed (by a different thread) between checking for null and firing the invocation, you won’t fire a NullReferenceException.)
void OnCheckChanged(EventArgs e) {
	// assign reference to heap allocated memory to
	// stack allocated implements thread safety

	// CheckChanged is a member declared as…  public event EventHandler CheckChanged;
	EventHandler threadSafeCheckChanged = CheckChanged;
	if (threadSafeCheckChanged != null)  {
		// fire the event off
		foreach(EventHandler handler in threadSafeCheckChanged.GetInvocationList()) {
			try {
				handler(this, e);
			}
			catch(Exception ex) {
				// handling code
			}
		}
	}
}
  • What is a deadlock and how does one occur? Can you draw it on the white board?
    (hint: two or more threads wait for each other to release a synchronization lock.
    Example:
    Thread A requests a lock on _sync1, and then later requests a lock on _sync2 before releasing the lock on _sync1.
    At the same time,
    Thread B requests a lock on _sync2, followed by a lock on _sync1, before releasing the lock on _sync2.
    )
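The hint’s scenario sketched in code (a deliberately broken example; the lock ordering is the bug):

// Each thread takes one lock, then waits forever for the lock the other thread holds.
using System.Threading;

public static class DeadlockDemo {
    private static readonly object _sync1 = new object();
    private static readonly object _sync2 = new object();

    public static void Run() {
        var threadA = new Thread(() => {
            lock (_sync1) {
                Thread.Sleep(100);   // give thread B time to take _sync2
                lock (_sync2) { }    // blocks forever
            }
        });

        var threadB = new Thread(() => {
            lock (_sync2) {
                Thread.Sleep(100);
                lock (_sync1) { }    // blocks forever
            }
        });

        threadA.Start();
        threadB.Start();
    }
}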
  • How many ways are there to implement an interface member, and what are they?
    (hint: two. Implicit and explicit member implementation)
  • How do I declare an explicit interface member?
    (hint: prefix the member name with the interface name)
public class MyClass : SomeBaseClass, IListable, IComparable {
    // …
    public int CompareTo(object obj) {
        // …
    }

    #region IListable Members
    string[] IListable.ColumnValues {

        get {
            // …
            return values;
        }
    }
    #endregion
}
  • Write the above on a whiteboard, then ask the following question. If I want to make a call to an explicit member implementation like the above, how do I do it?
string[] values;
MyClass obj1, obj2;

// ERROR: Unable to call ColumnValues directly on a MyClass instance
// values = obj1.ColumnValues;

// First cast to IListable.
values = ((IListable)obj2).ColumnValues;
  • What is wrong with the following snippet?
    (hint: possibility of race condition.
    If two threads in the program both call GetNext simultaneously, two threads might be given the same number. The reason is that _curr++ compiles into three separate steps:
    1. Read the current value from the shared _curr variable into a processor register.
    2. Increment that register.
    3. Write the register value back to the shared _curr variable.
    Two threads executing this same sequence can both read the same value from _curr locally (say, 42), increment it (to, say, 43), and publish the same resulting value. GetNext thus returns the same number for both threads, breaking the algorithm. Although the simple statement _curr++ appears to be atomic, this couldn’t be further from the truth.)
// Each call to GetNext should hand out a new unique number
static class Counter {
    internal static int _curr = 0;
    internal static int GetNext() {
        return _curr++;
    }
}
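One way to fix the snippet above is to make the read-modify-write a single atomic operation (renamed SafeCounter here; note the sequence now starts at 1 rather than 0):

// Interlocked.Increment performs the read, increment and write atomically.
using System.Threading;

static class SafeCounter {
    internal static int _curr = 0;
    internal static int GetNext() {
        return Interlocked.Increment(ref _curr); // no two threads can get the same number
    }
}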
  • What are some of your favourite .NET features?

Data structures

  • How would you implement the structure of the London underground in a computer’s memory?
    (hint: how about a graph. The set of vertices would represent the stations. The edges connecting them would be the tracks)
  • How would you store the value of a colour in a database, as efficiently as possible?
    (hint: assuming we are measuring efficiency in size and not retrieval or storage speed, and the colour is 16^6 (FFFFFF), store it as an int)
  • What is the difference between a queue and a stack?
  • What is the difference between storing data on the heap vs. on the stack?
  • What is the number 21 in binary format? And in hex?
    (hint: 10101, 15)
  • What is the last thing you learned about data structures from a book, magazine or web site?
  • Can you name some different text file formats for storing unicode characters?
  • How would you store a vector in N dimensions in a datatable?

Algorithms

  • What type of language do you prefer for writing complex algorithms?
  • How do you find out if a number is a power of 2? And how do you know if it is an odd number?
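One common bit-twiddling answer to the power-of-2 and odd-number question, for reference:

// A positive power of two has exactly one bit set, so n & (n - 1) clears it to zero.
static bool IsPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}

// The lowest bit tells you odd or even.
static bool IsOdd(int n) {
    return (n & 1) == 1;
}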
  • How do you find the middle item in a linked list?
  • How would you change the format of all the phone numbers in 10,000 static html web pages?
  • Can you name an example of a recursive solution that you created?
  • Which is faster: finding an item in a hashtable or in a sorted list?
  • What is the last thing you learned about algorithms from a book, magazine or web site?
  • How would you write a function to reverse a string? And can you do that without a temporary string?
  • In an array with integers between 1 and 1,000,000 one value is in the array twice. How do you determine which one?
  • Do you know about the Traveling Salesman Problem?

Testing questions

  • It’s Monday and we’ve just finished Sprint Planning. How would you organize testing?
  • How do you verify that new changes have not broken existing features?
    (hint: regression test)
  • What can you do to reduce the chance that a customer finds things that he doesn’t like during acceptance testing?
  • Can you tell me something that you have learned about testing and quality assurance in the last year?
  • What sort of information would you not want to be revealed via HTTP responses or error messages?
    (hint: critical info about the likes of server name, version, installed program versions, etc)
  • What would you make sure you turned off on an app or web server before deployment?
    (hint: directory listing?)

Maintenance questions

  • How do you find an error in a large file with code that you cannot step through?
  • How can you make sure that changes in code will not affect any other parts of the product?
  • How can you debug a system in a production environment, while it is being used?

Configuration management questions

  • Which items do you normally place under version control?
  • How would you manage changes to technical documentation, like the architecture of a product?

Project management

  • How many of the three variables scope, time and cost can be fixed by the customer?
  • Who should make estimates for the effort of a project? Who is allowed to set the deadline?
  • Which kind of diagrams do you use to track progress in a project?
  • What is the difference between an iteration and an increment?
  • Can you explain the practice of risk management? How should risks be managed?
  • What do you need to be able to determine if a project is on time and within budget?
    (hint: Product Backlog burn-down)
  • How do you agree on scope and time with the customer, when the customer wants too much?

Candidate displays how they communicate / present to a group of people about a technical topic they are passionate about and familiar with.

References I used

If any of these questions or answers are not clear, or you have other great ideas for questions, please leave comments.

How to Increase Software Developer Productivity

March 2, 2013

Is your organisation:

  • Wanting to get more out of your Software Developers?
  • Wanting to increase RoI?
  • Spending too much money fixing bugs?
  • Development team not releasing business value fast enough?
  • Maybe you’re a software developer and you want to lift your game to the next level?

If any of these points are of concern to you… read on.

There are many things we can do to lift a software developer’s productivity and thus the total output of the Development Team. I’m going to address some quick and cheap wins, followed by items that may take a little longer to implement, but nonetheless will in many cases provide even greater results.

Whatever it takes to remove friction and empower your software developers to work with the least amount of interruptions, do it.
Allow them to create a space that they love working in. I know when I work from home my days are far more productive than when working for a company that insists on cramming as many workers as possible into a small space around you. Chitter-chatter from behind, both sides and in front of you will not help one get their mind into a state of deep thought easily.

I have included thoughts from Nicholas C. Zakas’ post to reiterate the common fallacies uttered by non-engineers.

  • I don’t understand why this is such a big deal. Isn’t it just a few lines of code? (Technically, everything is a few lines of code. That doesn’t make it easy or simple.)
  • {insert name here} says it can be done in a couple of days. (That’s because {insert name here} already has perfect knowledge of the solution. I don’t, I need to learn it first.)
  • What can we do to make this go faster? Do you need more engineers? (Throwing more engineers at a problem frequently makes it worse. The only way to get something built faster is to build a smaller thing.)

Screen real estate

When writing code, a software developer’s work requires a lot of time spent deep in thought, holding multiple layers of complexity within immediately accessible memory.
One of the big wins I’ve found that helps with continuity, is maximising your screen real estate.
I’ve now moved up to 3 x 27″ 2560×1440 IPS flat panels. These are absolutely gorgeous to look at/work with.
Software development generally requires a large number of applications to be running at any one time.
For example in any average session for me, I generally have somewhere around 30 windows open.
The more screen real estate a developer has, the less he/she has to fossick around for what he/she needs and switch between them.
Also, the fewer brain cycles he/she has to spend locating that next running application, the more cycles there are available to do real work.
So, the less gap there is switching between say one code editor and another, the easier it is for a developer to keep the big picture in memory.
We’re looking at:

  1. physical screen size
  2. total pixel count

The greater the real estate available (physical screen size and pixel count), the more information you can have instant access to, which means:

  • less waiting
  • less memory loss
  • less time spent rebuilding structures in your head
  • greater continuity

Which then gives your organisation and developers:

  • greater productivity
  • greater RoI

These screens are cheaper than many realise. I set these up 4 months ago. They continue to drop in price.

  1. FSM-270YG 27″ PC Monitor LED S-IPS WIDE 2560×1440 16:9 WQHD DVI-D $470.98 NZD
  2. [QH270-IPSMS] Achieva ShiMian HDMI DVI D-Sub 27″ LG LED 2560×1440 $565.05 NZD
  3. [QH270-IPSMS] Achieva ShiMian HDMI DVI D-Sub 27″ LG LED 2560×1440 $565.05 NZD

It simply isn’t worth not upgrading to these types of panels.

korean monitors

In this setup, I’m running Linux Mint Maya. Besides the IPS panels, I’m using the following hardware.

  • Video card: 1 x Gigabyte GV-N650OC-2GI GTX 650 PCIE
  • PSU: 1200w Corsair AX1200 (Corsair AX means no more PSU troubles (7 yr warranty))
  • CPU: Intel Core i7 3820 3.60GHz (2011)
  • Mobo: Asus P9X79
  • HDD: 1TB Western Digital WD10EZEX Caviar Blue
  • RAM: Corsair 16GB (2x8GB) Vengeance Performance Memory Module DDR3 1600MHz

One of the ShiMian panels is using the VGA port on the video card as the FSM-270YG only supports DVI.
The other ShiMian and the FSM-270YG are hooked up to the 2 DVI-D (dual link) ports on the video card. The two panels fed by the dual link are obviously a lot clearer than the panel fed by the VGA. I can also reduce the text size considerably, which gives me greater clarity when reading and lets me fit a lot more information on the screens.

With this development box, I’m never left waiting for the machine to catch up with my thought process.
So don’t skimp on hardware. It just doesn’t make sense any way you look at it.

Machine Speed

The same goes for your machine speed. If you have to wait for your machine to do what you’ve commanded it to do, while at the same time trying to keep a complex application structure in your head, the likelihood of losing part of that picture increases. Your brain also has to work harder to hold the image in memory while you’re trying to maintain continuity of thought, again spending precious cycles on something that shouldn’t be required rather than on the essential work. When a developer loses part of this picture, they have to rebuild it when the machine finishes executing the last command given. This is re-work that should not be necessary.

An interesting observation from Joel Spolsky:

“The longer it takes to task switch, the bigger the penalty you pay for multitasking.
OK, back to the more interesting topic of managing humans, not CPUs. The trick here is that when you manage programmers, specifically, task switches take a really, really, really long time. That’s because programming is the kind of task where you have to keep a lot of things in your head at once. The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve.”

Many of my posts so far have been focused on productivity enhancements. Essentially increasing RoI. This list will continue to grow.

Coding Standards and Guidelines

Agreeing on a set of Coding Standards and Guidelines and policing them (generally by way of code reviews and check-in commit scripts) means software developers spend less time thinking about things they don’t need to, and can throw more time at the real problems.

For example:
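A minimal, hypothetical C# illustration of the kind of rule a team might agree on, in this case preferring guard clauses over nested conditionals (the Customer type and discount values are made up):

    // Hypothetical guideline: prefer guard clauses over nested conditionals.
    public class Customer { public bool IsGoldMember { get; set; } }

    public static class DiscountsBefore
    {
        public static decimal For(Customer customer)
        {
            if (customer != null)
            {
                if (customer.IsGoldMember)
                {
                    return 0.1m;
                }
            }
            return 0m;
        }
    }

    public static class DiscountsAfter
    {
        // Same behaviour; the early return keeps the nesting flat and the intent obvious.
        public static decimal For(Customer customer)
        {
            if (customer == null) return 0m;
            return customer.IsGoldMember ? 0.1m : 0m;
        }
    }

Whatever the specific rules are, the point is that they are agreed once, checked in review or by scripts, and then stop consuming developer thinking time.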

Better Tooling

Improving tool sets has huge gains in productivity. In most cases many of the best tools are free. Moving from the likes of non-distributed source control systems to best-of-breed distributed ones is a good example.

There are many more that should be considered.

Wiki

Implementing an excellent Wiki that is easy to use. I’ve put a few wikis in place now and have used even more. My current pick of the bunch would have to be Atlassian’s Confluence. I’ve installed this on a local server and also migrated the instance to their cloud. There are varying plans, all very reasonably priced, with excellent support. If the wiki you’re planning on using is not as intuitive as it could be, developers just won’t use it. So don’t settle for anything less.

Improving Processes

Code Reviews

Also a very important step in all successful development teams, and often a discipline that must be satisfied as part of Scrum’s Definition of Done (DoD). What this gives us is high-quality designs and code conforming to the coding standards. It reduces defects and duplicate code (DRY), and enforces easily readable code, as the reviewer has to understand it. It saves a lot of money in re-work.

Cost of Change

Scott Ambler’s Cost of Change curve

Definition of Done (DoD)

Get The Team together and decide on what it means to have each Product Backlog Item that’s pulled into the Sprint Done.
Here’s an example of a DoD that one of my previous Development Teams compiled:

Definition of Done

What does Done actually mean?

Come Sprint Review on the last day of the Sprint, everyone knows what it means to be done. There is no “well I thought it was Done because I’ve written the code for it, but it’s not tested yet”.

Continuous Integration (CI)

There are many tools and ways to implement CI. What does CI give you? Visibility of code quality, adherence to standards, reports on cyclomatic complexity, predictability and quite a number of other positive side effects. You’ll know as soon as the code fails to build and/or your fast running tests (unit tests) fail. This means The Development Team don’t keep writing code on top of faulty code, thus reducing technical debt by not having to undo changes on changes later down the track.
I’ve used a number of these tools and have carried out extensive research and evaluation spikes on a number of the most popular offerings. In order of preference, the following are my candidates.

  1. Jenkins (free and open source, with a great community)
  2. TeamCity
  3. Atlassian Bamboo

Release Plans

Make sure you have these. They reduce confusion and provide a clear definition of the steps involved in getting your software out the door, which reduces the likelihood of screwing up a release and needing re-work. You’ll definitely need one of these for the next item.

Here’s an example of a release notes guideline I wrote for one of the previous companies I worked for.

release notes

Continuous Deployment

If using Scrum, The Scrum Team will be forecasting a potentially releasable Increment (the sum of all the Product Backlog items completed during a Sprint and all previous Sprints).
You may decide to actually release this. When you do, you can look at the possibility of automating this deployment, thus reducing the workload of the release manager or whoever usually deploys (often The Development Team in a Scrum environment). This has the added benefits of consistency, predictability, reliability and of course happy customers. I’ve also been through this process of research and evaluation on the tools available and the techniques to implement.

Here’s a good podcast that got me started. I’ve got a collection of other resources if you need them and can offer you my experience in this process. Just leave a comment.

Implement Scrum (and not the Flaccid flavour)

I hope this goes without saying?
Implementing Scrum to provide ultimate visibility

Get maximum quality out of the least money spent

How to get the most out of your limited QA budget

Driving your designs with tests, thus creating maintainable code, thus reducing technical debt.

Hold Retrospectives

Scrum is big on continual inspection and adaptation, self-organisation and fostering innovation. The military have another term for inspection and adaptation. It’s called the OODA Loop.
The Retrospective is just one of the Scrum Events that enable The Scrum Team to continually inspect the way they are doing things and improve the way they develop and deliver business value.

Invest a little into your servant leaders

Empowering the servant leaders.

Context Switching

Don’t do it. This is a real killer.
This is hard. What you need to do is be aware of how much productivity is killed with each switch, then do everything in your power to make sure your Development Team is sheltered from as much of it as possible. There are many ways to do this. For starters, you’re going to need as much visibility as possible into how much this is currently happening. Track ad hoc requests and any other types of interruptions that steal the developers’ concentration. In the last Scrum Team that I was Scrum Master of, The Development Team decided to add another metric to the burn down chart that was on the middle of the wall, clearly visible to all. Every time one of the developers was interrupted during a Sprint, they would record the time, the reason and who interrupted them on the burn down chart. The Scrum Team would then address this during the Retrospective, empirically examine why it happened and work out how to stop it happening in future Sprints. Jeff Atwood has an informative post on why and how context-switching/multitasking kills productivity. Be sure to check it out.

As always, if anything I’ve mentioned isn’t completely clear, or you have any questions, please leave a comment 🙂

Moving to TDD

December 1, 2012

My last employer’s software development team recently took up the challenge of writing their tests before writing the functionality for which the tests were written. In software development, this is known as Test Driven Development or TDD.

TDD is a hard concept to get developers to embrace. It’s often as much of a paradigm shift as persuading a procedural programmer to start creating Object Oriented designs. Some never get it. Fortunately we had a very talented bunch of developers, and they’ve taken to it like fish to water.

The first thing to clear up is that TDD is not primarily about testing, but rather it forces the developer to write code that is testable (the fact the code has tests written for it and running regularly is a side effect, albeit a very positive one).

This is why there is often some confusion about TDD and the fact that it and its derivatives (BDD, ATDD, AAT, etc.) are primarily focused on creating well-designed software. Code that is testable must be modular, which provides good separation of concerns.

  • Testing is about measuring where the quality is currently at.
  • TDD and its derivatives are about building the quality in from the start.

red green refactor

TDD concentrates on writing a unit test for the routine we are about to create before it’s created. A developer writes code that acts as a low-level specification (the test) that will be run on the routine, to confirm that the routine does what we expect it will do.

To unit test a routine, we must be able to break out the routine’s dependencies and separate them. If we don’t do this, the hierarchy of calls often grows exponentially.

Thus:

  1. We end up testing far more than we want or need to.
  2. The complexity gets out of hand.
  3. The test takes longer to execute than it needs to.
  4. Thus, the tests don’t get run as often as they should because we developers have to wait, and we live in an instant society.

Breaking out the dependencies allows us to ignore how they behave and concentrate on a single routine. There are a number of concepts we can use to help with this.

We can use test doubles (dummies, fakes, stubs and mocks) and dependency injection to isolate the routine under test.
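As a minimal sketch of what this can look like, assuming NUnit and Moq, with a hypothetical OrderService and IPaymentGateway (the production code is shown only so the snippet compiles; under TDD the test would be written first and would fail until that code exists):

    using Moq;
    using NUnit.Framework;

    public interface IPaymentGateway
    {
        bool Charge(decimal amount);
    }

    public class OrderService
    {
        private readonly IPaymentGateway _gateway;

        public OrderService(IPaymentGateway gateway)
        {
            _gateway = gateway;
        }

        public bool PlaceOrder(decimal amount)
        {
            if (amount <= 0) return false;
            return _gateway.Charge(amount);
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void PlaceOrder_WithPositiveAmount_ChargesTheGateway()
        {
            // The gateway is a mock, so the test exercises OrderService alone:
            // no real payment provider, no network, no shared state.
            var gateway = new Mock<IPaymentGateway>();
            gateway.Setup(g => g.Charge(42m)).Returns(true);

            var service = new OrderService(gateway.Object);

            Assert.That(service.PlaceOrder(42m), Is.True);
            gateway.Verify(g => g.Charge(42m), Times.Once());
        }
    }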

Although TDD isn’t primarily about testing, its sole purpose is to create solid, well designed, extensible and scalable software. TDD encourages and in some cases forces us down the path of the SOLID principles, because to test each routine, each routine must be able to stand on its own.

SOLID principles

So what does SOLID give us? SOLID stands for:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Single Responsibility Principle

  • Each class should have one and only one reason to change.
  • Each class should do one thing and do it well.

Single Responsibility Principle

Just because you can, doesn’t mean you should.

Open Closed Principle

  • A class’s behaviour should be able to be extended without modifying it.
  • There are several ways to achieve this, including polymorphism via inheritance, aggregation and wrapping (see the sketch below).
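A minimal, hypothetical sketch of extension through polymorphism: new shapes can be added without touching the calculator that consumes them.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public abstract class Shape
    {
        public abstract double Area();
    }

    public class Rectangle : Shape
    {
        public double Width { get; set; }
        public double Height { get; set; }
        public override double Area() { return Width * Height; }
    }

    public class Circle : Shape
    {
        public double Radius { get; set; }
        public override double Area() { return Math.PI * Radius * Radius; }
    }

    public static class AreaCalculator
    {
        // Closed for modification; open for extension via new Shape subclasses.
        public static double TotalArea(IEnumerable<Shape> shapes)
        {
            return shapes.Sum(s => s.Area());
        }
    }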

Liskov Substitution Principle

  • Subtypes must be substitutable for their base types without altering the correctness of the program.

Interface Segregation Principle

  • When an interface consists of too many members, it should be split into smaller and more specific (to the client’s needs) interfaces, so that clients using the interface only use the members applicable to them.
  • A client should not have to know about all the extra interface members they don’t use.
  • This encourages systems to be decoupled and thus more easily re-factored, extended and scaled.
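A minimal, hypothetical sketch: instead of one fat device interface, each capability gets its own contract, so clients depend only on what they actually use.

    // Clients that only print depend on IPrinter; they never see Scan().
    public interface IPrinter
    {
        void Print(string document);
    }

    public interface IScanner
    {
        string Scan();
    }

    // A basic printer implements only the capability it actually has.
    public class SimplePrinter : IPrinter
    {
        public void Print(string document) { /* send to the device */ }
    }

    // A multifunction device opts into both contracts.
    public class MultiFunctionDevice : IPrinter, IScanner
    {
        public void Print(string document) { /* send to the device */ }
        public string Scan() { return "scanned content"; }
    }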

Dependency Inversion Principle

  • Often implemented in the form of the Dependency Injection pattern, a specific form of Inversion of Control (IoC). In some circles it’s known as the Hollywood Principle… Don’t call us, we’ll call you. A minimal sketch follows.
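A minimal constructor-injection sketch with hypothetical types; the high-level ReportService depends on an abstraction and never constructs its own dependency, which is the Hollywood part:

    // The high-level module depends on this abstraction...
    public interface IMessageSender
    {
        void Send(string message);
    }

    // ...and the low-level detail implements it.
    public class EmailSender : IMessageSender
    {
        public void Send(string message) { /* SMTP call would go here */ }
    }

    public class ReportService
    {
        private readonly IMessageSender _sender;

        // The dependency is injected; the service never news it up itself.
        public ReportService(IMessageSender sender)
        {
            _sender = sender;
        }

        public void SendDailyReport()
        {
            _sender.Send("Daily report");
        }
    }

    // Composition root (in real projects this wiring is usually done by an IoC container):
    // var service = new ReportService(new EmailSender());

In a test, an in-memory fake IMessageSender can be injected instead, which is exactly what makes the test-first approach described above workable.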

TDD assists in the monitoring of technical debt and streamlines the path to quality design.

Additional info on optimizing your team’s testing effort can be found here.

Guidance on Running Retrospectives

July 28, 2012

Following are the five steps we use to run our Retrospectives.
I’ve purposely made these as terse as possible,
so they can be used as a checklist as the Retrospective progresses.
Below the five steps I’ve added some extra info and tips.

What’s a Retrospective?

  • A Retrospective is a planned event where a team leader
    (or in the world of Scrum, a Scrum Master)
    guides the team through a process of looking inward.
    In the world of Scrum, we hold a Retrospective at the end of every Sprint.
    What’s Scrum?
    I made a post a while back outlining why an organisation aiming to deliver products that had complex elements, would use Scrum.
    Check it out here.
  • Locating impediments and working out what to do in order to remove them.
  • Move the team along the path of…
    Forming -> Storming -> Norming -> Performing.
  • Make the team a more fun place to be for all members.
  • Implement Kaizen.
  • Increases operational efficiencies for the stake holders.
  • Another opportunity to inspect and adapt.

Structure

  1. Set the stage
  2. Gather data
  3. Generate insights
  4. Decide what to do
  5. Close the retrospective

1. Set the stage

Time expected (time box)
  • Ask everyone in room to speak a word or two about what’s going on / how they’re feeling.
    This encourages everyone to have a voice and speak early.
    If anyone chooses to remain silent, they must remain silent for the duration of the Retrospective.
  • Request for amendments to our working agreements?
    These belong to the team.
    They are the team’s responsibility.
    Social contract (> 10 points is too many).
    Check whether the Definition of Done (DoD) needs any modifications.
  • Establish environment where people can bring up difficult topics and have challenging conversations.
    Confirm (and establish if not already) the goal of this Retrospective.
    Remind the team that the Social contract applies during the Retrospective as it does at any other time.
    The team’s personal Social contract should not contain abstract statements,
    but working statements and agreements that help the team talk about emotional, tough issues.
  • If someone is doing too much talking, just say “Let’s hear from someone else”.
    Some Product Owners can have this tendency.
  • Review Action Points taken from last Retrospective.

2. Gather data

Time expected (time box)
  • Hard data: events, metrics, features or PBIs completed
  • Soft data: feelings
    Rather than asking directly about how people felt, you can get the same info in other ways:
    When were you excited to come to work?
    When was coming to work “just a job”?
    When did you dread coming to work?
    What were the high points?
    What were the low points?
    How was it to be in this iteration?
    When were you mad, sad, surprised?

3. Generate insights

Time expected (time box)
  • Question why, and encourage the team to start thinking about what to do differently.
  • Lead the team to examine the conditions, interactions, surprises and patterns that contributed to the Sprint outcome.
  • Record all insights on the whiteboard or a wall.
    Insights are potential experiments and improvements taken from the gathered data.

4. Decide what to do

Time expected (time box)
  • Team picks the top 2 – 3 insights.
    These become the action points.
    Make sure each action point is assigned to someone and dated.
    The best way to make sure these happen is to include them in the next Sprint’s Backlog as PBIs.

5. Close the Retrospective

Time expected (time box)
  • Make mention of the Sprint report, and that all should read through it at least once to keep the decisions made fresh in their minds.
  • The learnings belong to the team. Not the CEO and not the SM.
  • Show appreciation for the hard work everyone did during the Sprint and the Retrospective.
  • Perform Retrospective on Retrospective (a few minutes).
    It pays to inspect and adapt Retrospectives too.
    Or as the military call it, OODA loop.
    Observe -> Orient -> Decide -> Act

That’s basically it.

Additional Retrospective info and tips

The Retrospective is generally the last event in a Scrum Sprint.
The official Scrum Guide has a terse section on the Retrospective.

Time boxing

Scrum values time boxes.
Generally time boxed to 1.5 hours for a 2 week Sprint.
Proportionally shorter / longer for shorter / longer Sprints.
A general guideline for the 5 steps are:

  1. Set the stage 5%
  2. Gather data 30-50%
  3. Generate insights 20-30%
  4. Decide what to do 15-20%
  5. Close the retrospective 10%

Activities

I’m finding it useful to build up a collection of activities to drive the Retrospectives.
Have an activity pre-defined for each of the five steps, and potentially a fall back activity also.
It pays to spend some time up front before the event,
preparing what you want the stake holders and the Team to get out of it (a goal).
Good activities to use, should include at least the following traits:

  1. Encourage all team members to actively participate.
  2. Help team members to keep discussions focused on the goal.
  3. Assist in producing creative thinking, and looking at things from different angles.

Don’t use the same activities every Retrospective.
If you and / or the Team are getting bored with the current activity, it’ll become less effective.

Breaks

If you’re running a Retrospective longer than approximately 2 hours,
you should think about factoring in breaks.
Often 10 minutes is all the team will need.
You, as the Retrospective leader / Scrum Master, will also benefit from a short break,
especially if you’re feeling stressed or under tension.
Shake the tension out of your limbs and get the blood moving to the brain again.
Take a few good breaths.

Closing

I’ve found the book “Agile Retrospectives” by Esther Derby and Diana Larsen very useful.
Check it out for lots of additional info and ideas.

I wanted to keep the five steps really terse (a check list).
This way you can take them into the Retrospective and glance at them while you’re leading the event, to make sure you and the team are on track.

Comments very welcome.

How to optimise your testing effort

March 24, 2012

I recently wrote a post for the company I currently work for around the joys of doing TDD.
You can check it out here.

What is your current approach to testing?
How can you spend the little time you have on the most important areas?

I thought I’d share some thoughts around where I see the optimal areas to invest your test effort.
I got to thinking about this last night, and even while I was asleep.
We are putting too much effort into our UI, UA and system tests.
We are writing too many of them, thus creating a top-heavy test structure that will sooner or later topple.
These tests have their sweet spot, but they are slow, fragile and time-consuming to write.

We should have a small handful for each user story to provide some UA, and the rest should be without the UI and database (the slow and fragile bits).
We need to get our mind sets lower down the test triangle.

test triangle

I’ll try and explain why we should have the fewest manual tests, followed by GUI tests, then UA tests, then integration tests, with unit tests being the most numerous.

Try not to test the UI with the lower architectural layers included in the tests.
UI tests should have the lower layers mocked and / or stubbed.
Check out Dummy vs Fake vs Stub vs Mock
Full end to end system tests are not required to validate UI field constraints.
Dependency injection really helps us here.
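A minimal sketch of what this can look like in ASP.NET MVC, assuming NUnit and hypothetical BookController / IBookRepository types; the repository is a hand-rolled stub, so no database or web server is involved:

    using System.Web.Mvc;
    using NUnit.Framework;

    public interface IBookRepository
    {
        bool IsInStock(int bookId);
    }

    public class BookController : Controller
    {
        private readonly IBookRepository _books;

        public BookController(IBookRepository books)
        {
            _books = books;
        }

        public ActionResult Purchase(int bookId)
        {
            if (!_books.IsInStock(bookId))
                return View("OutOfStock");
            return View("Confirmation");
        }
    }

    // Stubbing the repository keeps the database out of the test.
    public class StubBookRepository : IBookRepository
    {
        public bool InStock;
        public bool IsInStock(int bookId) { return InStock; }
    }

    [TestFixture]
    public class BookControllerTests
    {
        [Test]
        public void Purchase_WhenBookIsOutOfStock_ShowsOutOfStockView()
        {
            var controller = new BookController(new StubBookRepository { InStock = false });

            var result = (ViewResult)controller.Purchase(1);

            Assert.That(result.ViewName, Is.EqualTo("OutOfStock"));
        }
    }

The same controller gets its real repository injected in production, so the design that makes it testable is the same design that keeps the layers decoupled.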

When you are explicitly testing the upper levels of the test triangle, the lower / immediate lower layers are implicitly being tested.
So you might think, cool, if we invest in the upper layers, we implicitly cover the lower layers.
That’s right, but the disadvantages of the higher level tests outweigh the advantages.
UI tests, and especially ones that go from end to end, should be avoided or kept very few in number,
as they are fragile and incur high maintenance costs.
If we create too many of these, confidence in their value diminishes.
Read on and you’ll find out why.

Let’s look at cost vs value to the business.

Some tests cost a lot to create and modify.
Some cost little to create and modify.
Some yield high value.
Some yield low value.
We only have so much time for testing,
so lets use it in the areas that provide the greatest value to the business.
Greatest value of course, will be measured differently for each feature.
There is no stock standard answer here, only guidelines.
What we’re aiming for is to spend the minimum effort (cost) and get the maximum benefit (value).
Not the other way around…
With the following set of scales, we’ve spent too much in the wrong areas, yielding suboptimal value.

cost versus business value

It’s worth the effort to get under the UI layer and do the required setup, including mocking the layers below.
It’s also not too hard to get around the likes of the HttpContext hierarchy of classes (HttpRequest, HttpResponse, and so on) encountered in ASP.NET Web Forms and MVC.
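For instance, a minimal sketch assuming Moq (the helper name is made up): HttpContextBase and its relatives can be mocked and attached to the controller under test, so no real request pipeline is needed.

    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;
    using Moq;

    public static class ControllerTestHelper
    {
        // Attaches a mocked HttpContextBase to any controller under test.
        public static void SetFakeContext(Controller controller, bool isAuthenticated)
        {
            var request = new Mock<HttpRequestBase>();
            request.Setup(r => r.IsAuthenticated).Returns(isAuthenticated);

            var httpContext = new Mock<HttpContextBase>();
            httpContext.Setup(c => c.Request).Returns(request.Object);

            controller.ControllerContext =
                new ControllerContext(httpContext.Object, new RouteData(), controller);
        }
    }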

Beware

  • The higher-level tests get progressively more expensive to create and maintain.
  • They are slower to run, which means they don’t run as part of CI, but maybe the nightly build.
    This means there is more latency in the development cycle,
    and developers are less likely to run them manually.
  • When they break, it takes longer to locate the fault, as you have all the layers below to go through.

Unreliable tests are a major cause for teams ignoring or losing confidence in automated tests.
UI and acceptance tests, followed by integration tests, are usually the culprits for causing this.
Once confidence is lost, the value initially invested in the automated tests is significantly reduced.
Fixing failing tests and resolving issues associated with brittle tests should be a priority to remove false positives.

Planning the test effort

This is usually the first step we do when starting work on a user story,
or any new feature.
We usually create a set of Test Conditions (Given/When/Then) for Product Backlog items where there are enough use cases for it to be worth doing. For example:

Given: There are no items in the shopping cart.
When: Customer clicks the “Purchase” button for a book which is in stock.
Then: 1 x book is added to the shopping cart. The book is held, preventing it being sold twice.

When: Customer clicks the “Purchase” button for a book which is not in stock.
Then: A dialog with an “Out of stock” message is displayed, offering the customer the option of putting the book on back order.

Where we don’t create Test Conditions, we have a Test Condition workshop.
In the workshop we look at the What, How, Who and Why in that order.
The test quadrant (pictured below) assists us in this.
In the workshop, we write the previously recorded Acceptance Criteria on a board (the What) and discuss the most effective way to verify that the conditions are met (the How).
With the how we look at the test triangle and the test quadrant and decide where our time is most effectively spent.

Test condition workshop

With the test condition workshop,
when we start on a user story (generally a feature in the sprint backlog),
we plan where we are going to spend our test resource.
Think about What, and sometimes Who, but not How.
The How comes last.

Unit tests are the developer’s bread and butter.
They are cheap to create and modify,
and consistently yield not only good value to the developers,
but implicitly good value to most / all other areas.
This is why they sit at the bottom of the test triangle.
This is why TDD is as strong as it is today.
test quadrant

The hierarchy of criteria that we use to help us

  1. Release Criteria
    Ultimately controlled by the Product Owner or release manager.
  2. Acceptance Criteria
    Also owned by the Product Owner.
    Attached to each user story, or more correctly… product backlog item.
    The Development team must meet these in order to fulfill the Definition of Done.
  3. Test Conditions
    When executable, confirm the development team have satisfied the requirements of the product backlog item.

Write your tests first

TDD is not about testing; it’s about creating better designs.
It forces us to design better software: testable, modular, with separated concerns and single responsibilities.
It forces us down the path of the SOLID principles.

red green refactor

  1. Write a unit test
    Run it and watch it fail (because the production code is not yet written)
  2. Write just enough production code to make the test pass
  3. Re-run the test and watch it pass
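A tiny illustration of the cycle, assuming NUnit and a hypothetical ShoppingCart class: write the test, watch it fail (it won’t even compile until ShoppingCart exists), then write just enough code to make it pass.

    using NUnit.Framework;

    [TestFixture]
    public class ShoppingCartTests
    {
        [Test]
        public void NewCart_IsEmpty()
        {
            // Red: this fails to compile / run until ShoppingCart exists.
            var cart = new ShoppingCart();
            Assert.That(cart.ItemCount, Is.EqualTo(0));
        }
    }

    // Green: just enough production code to make the test pass.
    public class ShoppingCart
    {
        public int ItemCount { get { return 0; } }
    }

    // Refactor: with the test green, the implementation can be cleaned up safely.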

This podcast around TDD has lots of good info.

Continuous Integration

Realise the importance of setting up CI and nightly builds.
The benefits of having your unit (fast running) tests automatically executed regularly are great.
You get rapid feedback, which is crucial to an agile team completing features on time.
Tests that are not being run regularly carry the risk that they may already be failing.
The sooner you find a failing test, the easier it is to fix the code.
The longer it’s left unattended, the more technical debt you accrue and the more effort is required to hunt down the fault.
Make the effort to get your tests running on each commit or push.

Nightly Builds

The slower-running tests (that’s all the automated tests above unit tests on the triangle) need to be run as part of a nightly build.
We can’t have these running as part of the CI because they are just too slow.
If something gets in the way of a developer’s workflow, it won’t get done.

Pair Review

Don’t forget to pair review all code written.
In my current position we’ve been requesting reviews verbally and responding via email or with comments on paper.
This is not ideal, and we’re currently evaluating review software, of which there are many offerings.

Professional Scrum Master

March 23, 2012

Hi all.

Looking forward to attending the PSM course on Monday 26/03.
Shortly after I’ll be going for the exam.

I’ve been mostly working in a Scrum environment since around 2007.
Now I’m looking at solidifying some of that experience and knowledge, and hopefully gaining a little more.

Here’s the outline.

Scrum.org has designed the Professional Scrum Master (PSM) program to have the utmost rigor. The program’s courses, assessments, and certifications give participants the knowledge they need to use Scrum effectively and the credentials they need to communicate this ability in the marketplace.

Audience

The audience of the PSM course includes those that help lead the software development process in an organization. PSM is specifically targeted at the role of the Scrum Master, but the lessons are applicable to anyone in a role that supports a software development team’s efficiency, effectiveness, and continual improvement.

The Course

The Professional Scrum Master course is the first significant update of the Certified ScrumMaster (CSM) course that Ken Schwaber first created in 2002. This course covers Scrum basics, including the framework, mechanics, and roles of Scrum. But it also teaches how to use Scrum to optimize value, productivity, and the total cost of ownership of software products. Students learn through instruction and team-based exercises, and they are challenged to think on their feet to better understand what to do when they return to their workplaces.

Scrum.org maintains a defined curriculum for the Professional Scrum Master courses and selects only the most qualified instructors to deliver them. Each instructor brings his or her individual experiences and areas of expertise to bear, but all students learn the same core course content. This improves their ability to pass the Professional Scrum Master assessments and apply Scrum in their workplaces.

The Professional Scrum Master course was previously known as the Scrum In Depth course.

The course curriculum covers:

  • Scrum Basics. What is Scrum and how has it evolved?
  • Scrum Theory. Why does Scrum work and what are its core principles? How are the Scrum principles different from those of more traditional software development approaches, and what is the impact?
  • Scrum Framework and Meetings. How Scrum theory is implemented using time-boxes, roles, rules, and artifacts. How can these be used most effectively and how can they fall apart?
  • Scrum and Change. Scrum is different: what does this mean to my project and my organization? How do I best adopt Scrum given the change that is expected?
  • Scrum and Total Cost of Ownership. A system isn’t just developed, it is also sustained, maintained and enhanced. How is the Total Cost of Ownership (TCO) of our systems or products measured and optimized?
  • Scrum Teams. Scrum Teams are self-organizing and cross-functional; this is different from traditional development groups. How do we start with Scrum teams and how do we ensure their success?
  • Scrum Planning. Plan a project and estimate its cost and completion date.
  • Predictability, Risk Management, and Reporting. Scrum is empirical. How can predictions be made, risk be controlled, and progress be tracked using Scrum?
  • Scaling Scrum. Scrum works great with one team. It also works better than anything else for projects or product releases that involve hundreds or thousands of globally dispersed team members. How is scaling best accomplished using Scrum?

Prerequisites

The Professional Scrum Master course is primarily targeted at those responsible for the successful use and/or rollout of Scrum in a project or enterprise. Attendees will be able to make the most of the class if they:

  • Have attended the Professional Scrum Foundations course
  • Understand the basics of project management.
  • Understand requirements and requirements decomposition.
  • Have been on or closely involved with a project that builds or enhances a product.
  • Have studied the Scrum Guide.
  • Have read one of the Scrum books.
  • Want to know more about how Scrum works, how to use it, and how to implement it in an organization.

Assessment and Certification

As a matter of principle, Scrum.org feels that certification should be available to all those who possess a particular level of knowledge — not only to those who have taken a class. As a result, they offer the option of Professional Scrum Master I and II assessments to the public — not only to those who have taken the Professional Scrum Master course. The Professional Scrum Master program features two assessments and two levels of certification.