Posts Tagged ‘tdd’

Consuming Free and Open Source

October 29, 2015

Risks

[Risk rating: exploitability average, prevalence widespread, detectability difficult, technical impact moderate]

This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than we ever have before. Much of the code we are pulling into our projects is never intentionally used, yet it still adds attack surface. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying
    on developers we do not know much about not to have introduced defects. Most developers are more focused on building than breaking; they often do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right, many of the packages we are consuming are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs who can and do introduce vulnerabilities. Anyone can write code and publish it to an open source repository. Much of this code ends up in the package management repositories we consume from.
  • Does not undergo the same requirements analysis, scope definition, acceptance criteria, test conditions and sign-off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more; it provides the opportunity for many vulnerabilities to sit waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls' talk on consuming all the open source.

Running install scripts, or any scripts, from non-local sources without first downloading and inspecting them can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or enable any number of other criminal activities.

Countermeasures

[Countermeasures rating: prevention easy]

Process

Dibbe Edwards discusses some excellent initiatives on how they do it at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, with manual and automated code review
  • Maintain a list of all the libraries that have been approved for use. If a library is not on the list, a request should be made and it should go through the same process.
  • Once the libraries are in your product, they should be treated as part of your own code, so that they get the same rigour as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing outside the approved list has been included (a rough sketch of such a check follows this list)
  • Consider automating some of the suggested tooling options below
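
As a rough illustration of that automated check, something like the following could run as part of a CI build. This is only a minimal sketch: it assumes a Node.js project with a package.json, that jq is available, and that the approved libraries live in a plain text file called approved-libraries.txt (a file name made up for this example).

#!/bin/bash
# Sketch only: fail the build if any declared dependency is not on the approved list.
# Assumes: package.json in the current directory, jq installed, and
# approved-libraries.txt (hypothetical) containing one approved package name per line.

APPROVED_LIST="approved-libraries.txt"
STATUS=0

for pkg in $(jq -r '(.dependencies // {}) + (.devDependencies // {}) | keys[]' package.json); do
  if ! grep -Fqx "$pkg" "$APPROVED_LIST"; then
    echo "NOT APPROVED: $pkg"
    STATUS=1
  fi
done

exit $STATUS

The same idea extends to whatever package manifest your stack uses; the point is that the check runs on every build, rather than relying on human memory.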

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing in regards to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this as-is: wget or curl the script first, then make sure what you have just downloaded is not malicious.

Inspect the code before you run it, because:

1. The repository could have been tampered with.
2. The transmission from the repository to you could have been intercepted and modified.

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl or fetch in any way and pipe what you think is an installer or any script to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option (sketched below) is to:

  1. Verify the source that you are about to download; if all good
  2. Download it
  3. Check it again; if all good
  4. Only then run it
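
A minimal sketch of that flow, reusing the generic install.sh example from above (the URL is the same placeholder as earlier, and the checksum value is a placeholder too):

# Download to a file instead of piping straight into your shell.
curl -fsSL -o install.sh https://raw.github.com/account/repo/install.sh

# Inspect what you actually received.
less install.sh

# If the project publishes a checksum or signature, verify it as well.
echo "<published-sha256-checksum>  install.sh" | sha256sum -c -

# Only once you are satisfied it is not malicious:
sh install.sh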

npm install

As part of an npm install, package creators, maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check to see if any package has hooks (before installation) that will run scripts by issuing the following command:
npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download; if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it and inspect it. You can download it from
    http://registry.npmjs.org/[module-you-want-to-install]/-/[module-you-want-to-install]-VERSION.tgz
    

The most important step here is downloading and inspecting before you run.
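
For example, a rough sketch of that inspection, where some-module is a placeholder package name and npm pack is assumed to be available in your npm client:

npm show some-module scripts     # does it define any install/postinstall hooks?
npm pack some-module             # fetches some-module-<version>.tgz without installing it
tar -tzf some-module-*.tgz       # list the tarball contents
tar -xzf some-module-*.tgz       # extracts to ./package
less package/package.json        # inspect before you decide to npm install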

Doppelganger Packages

Similarly to doppelganger domains, people often mistype what they want to install. If you were someone who wanted to do something malicious, such as have consumers of your package destroy or modify their systems, send sensitive information to you, or carry out any number of other criminal activities (ideally identified in the Identify Risks section; if not already there, add it), doppelganger packages are an excellent avenue for raising the likelihood that someone will install your malicious package: they mistype the name they intended and get yours instead, because it has a very similar name. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.
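
If in doubt, a quick look at the package metadata before installing can help confirm you have the one you intended (some-module is a placeholder name):

npm view some-module
# Check that the name, description, homepage and maintainers match the project you expect.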

Tooling

For NodeJS developers: keep your eye on the Node Security advisories. Identified security issues can be submitted via the Node Security report page.

RetireJS is useful to help you find JavaScript libraries with known vulnerabilities. RetireJS has the following components:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        public/plugins/jquery/jquery-1.4.4.min.js
        ↳ jquery 1.4.4.min has known vulnerabilities:
        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
        http://research.insecurelabs.org/jquery/test/
        http://bugs.jquery.com/ticket/11290
        
    • To install RetireJS locally in your project and run it as a git pre-commit hook,
      there is an NPM package that can help us with this, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project's repository. This will allow us to run any scripts immediately before a commit is issued.
      Install both packages locally and save them to the devDependencies of your package.json. This will make sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev
      

      If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the RetireJS documentation for options.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         }
      }
      

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run. validate runs our retire command.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         },
         "pre-commit": ["lint", "validate"]
      }
      

      Keep in mind that pre-commit hooks can be very useful for all sorts of checks immediately before your code is committed, for example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. Online tool: simply enter your web application's URL and the resource will be analysed

requireSafe

provides “intentful auditing as a stream of intel for bithound”. I guess watch this space; in speaking with Adam Baldwin, there doesn't appear to be much happening here yet.

bithound

In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages. Some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions
  3. Many packages are still consuming the vulnerable, unpatched versions of the packages that have published patched versions
    • So although we could have removed a much larger number of vulnerable packages, we have not, because so many packages persist in depending on unpatched versions. I think this mostly comes down to lack of visibility, awareness and education. This is exactly what I'm trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured not to analyse some files. Very large repositories are not analysed, due to performance issues at scale.

It analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure. Assuming this is based on the known vulnerabilities (41 node advisories at the time of writing this)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your projects' packages and your global packages, and check that none of them are in the advisories, but this would be more work, and who is going to remember to do that all the time?
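
For what it is worth, listing them is the easy part; checking them against the advisories is what nobody remembers to do:

npm ls --depth=0      # top-level packages of the current project
npm ls -g --depth=0   # globally installed packages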

For .Net developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are the tests testing that something cannot happen? That is where the likes of RetireJS comes in.

Process

There is a danger of implementing too much manual process and thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so that developers have as little as possible to think about, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), if they have to context switch while a legal review and/or manual code review takes place, this will cause friction and reduce the team's performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member having a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside of the team who does not have skin in the game or the localised understanding that the people working on the project do.

Maintaining the list of approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.

Tooling

Using the likes of pre-commit hooks and the other tooling options detailed in the Countermeasures section, and creating scripts to do most of the work for us, is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers' way. A good way to do this is to ask the developers how it should be done. They know what will get in their way. In order for the process to be a success, the person(s) mandating it will need to get solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, product owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone's life harder than it already is. Everyone has their own agendas. Rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.

Attributions

Additional Resources


Automating Specification by Example for .NET Web Applications

February 22, 2014

If you or your organisation:

  1. are/is constrained to running your .NET tests (unit, acceptance) on-site rather than in the cloud
  2. would like some guidance on how to set-up Continuous Integration

read on.

Introduction

Purpose

Remember, an acceptance test system as a tool is only as good as the specification provided by its humans. The most important ingredients, therefore, are the relationships between the people creating the tests and the interactions performed by those people. Or, as the Agile Manifesto states: value “Individuals and interactions over processes and tools”. In order for an acceptance test system to be successful, the relationships of the Developers creating the increment and the interactions between them and the stakeholders must be in good shape first. Once this is in order, you can take the next step and find some tools that will assist in creating working software that does what the stakeholders want it to do.

It’s my intention that the following details will help you to create a system that automates “Specification by Example”.

The purpose of providing an automated Specification by Example Implementation, A.K.A Automated Acceptance Test System, is clearly explained here.

Do not fall into the trap of inverting the test triangle. Instead invest where it matters.

Scope

Create a system that can be triggered from

  1. Every developer's workstation
  2. A build on the build machine, preferably from a best-of-breed build tool. TFS is not a best-of-breed build tool, and if you want to get serious about Continuous Integration (CI), nightly builds and continuous deployment, I'd recommend not going down the path of TFS. Even Microsoft uses Git. Doesn't that tell you something? Do you see TFS here? Last time I evaluated build tools, Jenkins (previously named Hudson) came out on top.


The system will include

  1. An acceptance test framework that will run all the acceptance tests
  2. A unit test framework. UI tests need to be run in parallel on a collection of VMs (see the section on supported browsers for why). There are three immediately obvious approaches we could take here.
    1. We could try to rely on a unit test framework to distribute the tests. MSTest 2012 doesn't provide the ability to run tests in parallel, but 2010 does. In order to have 2012 run tests in parallel, you can force it to use the older 2010-style test settings file. Only a maximum of 5 tests can be run concurrently though. Not a great option, considering it's not going to be supported going forward.
    2. My ParallelBrowser. If this link is not active and you're interested in this, contact me.
    3. PNUnit. An example of how this works is here under the “PNunit Framework for writing selenium test cases” heading. I wrote the ParallelBrowser before Selenium had good support for running the same tests on multiple supported browsers. Both my ParallelBrowser and this option are reasonable, but I'd go for the latter now, as this way someone else can maintain the parallel aspect. Unless people are interested in ParallelBrowser, I won't be doing any further work on it.
  3. A Web User Interface Test Framework that will be driven by the acceptance test framework. Selenium in this case.
  4. A set of tests that run Selenium tests. These will of course need to be thread-safe.
  5. As per the Supported Browsers section, a collection of VMs with our supported browsers installed.
    1. Each with a standalone Selenium server set up with a role of webdriver. Details further on.
  6. A standalone Selenium server set up with a role of hub

High Level Flow

Many organisations bound to .NET seem to be locked into using sub-standard tooling like TFS for their build. If you are in this predicament and cannot break free, I'd suggest that once all the unit tests and integration tests have run, you have the build kick off a psake script to:

  1. Clean out the existing target web app
  2. Deploy the newly built and tested web app
  3. Drop the database
  4. Create database by using latest DDL and DML scripts pulled from source control
  5. Apply any specific configurations
  6. Stop and start the target web server
  7. Run the acceptance tests which will include any Web UI tests.

If it’s within your power to choose a real CI Tool to run in-house, there are a handful of very solid contenders. A good proportion of which are free and open source.

Audience

Whoever is setting up the system. Often a developer or two. It's important to make sure more than one person knows how it all hangs together, otherwise you have a single point of failure.

Chosen Tools

Evaluation Criteria I used

  • Who is the creator? I favour teams rather than individuals, as individuals move on, often leaving projects stranded.
  • Does it do what you need it to do?
  • Does it suit the way you and your team want to work?
  • Does it integrate well with all of your other chosen components? This is based on communicating with those that have used the offerings, more so than on Proof of Concepts (POCs).
  • Works with the versions of dependencies you currently use.
  • Cost in money. Is it free? Are there catches once you get further down the road? Usually open source projects are marketed as-is. No catches.
  • Cost in time. Is the set-up painful? Customisation feedback? Upgrade feedback?
  • How well does it appear to be supported? What do the users say?
  • Documentation. Is there any / much? What is its quality?
  • Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have valid reasons)?
  • Release schedule. How often are releases being made? When was the last release?
  • Intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criteria.

Acceptance Test Framework

The following offerings are all free and open source.

If you're not using User Stories and/or Test Conditions, the context/specification offerings provide greater flexibility than the xBehave style frameworks. As most Scrum teams use User Stories for their Product Backlog items and drive their acceptance tests with test conditions, xBehave offerings are a great choice. In saying that, there is probably no reason why both couldn't be used where it makes sense to do so. In this section I've provided the results of evaluating the current xSpec and xBehave offerings for .NET, ordered best-first within each category.

xBehave (test conditions)

SpecFlow


  • Sourcecode: https://github.com/techtalk/SpecFlow/
  • Age: Over 4 years
  • Actively maintained: Yes
  • Large number of active committers
  • Community: Lively
  • Its Visual Studio plug-in has been downloaded 70 times as often as NBehave's
  • Documentation: Excellent
  • Integrates well with Selenium (I've set up a couple of systems using SpecFlow and it's been a joy to work with). The stakeholders loved the visibility it provided too. I discussed it here in a recent presentation.

NBehave

  • Not a lot of activity
  • Only two committers

StoryQ

  • Only two coordinators
  • Well established framework

xSpec (context/specification)

Machine.Specification (MSpec)
NSpec

Web User Interface Test Framework


For me, when I look at this category of tools for .NET, Selenium is always at the top, and it just keeps getting better. If anyone has any questions around Selenium, feel free to contact me or leave a comment on this post. I can't guarantee I'll have the answer, but I'll try. All the documentation can be found here. I would recommend installing the Selenium IDE for initially recording tests, and be sure to check out the IDE plug-ins. All the documentation you'll need for the IDE is here. Once you get familiar with the code it generates, you will not use it much. I would recommend using the newer WebDrivers rather than the Selenium server by itself. The user group is very active and looks like a good place to ask questions as well, although I haven't needed to, as there is a huge amount of documentation and it's great.

The tools I would use are detailed here. Specifically, we would be using:

  1. Selenium 2 (aka WebDriver)
  2. The IDE for recording tests initially
  3. Selenium Server which is used by WebDriver and RC (now considered legacy) now includes built-in grid capabilities.

Supported Browsers

What I've done in the past is install each of our supported versions from each supported browser vendor on its own VM. So each VM has all the vendors' browsers installed, but just a single version of each.

Mid Level Flow

These are the same points listed above under “High Level Flow”.

1. Build Kicks off PSake Script


The choice to use PSake over the likes of NAnt, Rake and the other build scripting languages is reasonably straightforward for me. PSake (a PowerShell build scripting language) gives us access to the full .NET environment. NAnt, with all its angle brackets, was never a very nice scripting language to use. Rake is excellent and a possible option if you have Ruby installed. If you don't, why install it if you have .NET? There are many resources for PowerShell on the inter-webs. The wiki for PSake is good.

In the case where you have a TFS build run, I would suggest that once all the unit tests and integration tests have run, the build kicks off a (possibly pre-build and post-build) psake script to perform the following operations. This is how you do this. Oh, and before you try to actually run a PSake script, download and import the module, or install the NuGet package. Once you have your PSake scripts running, just start adding PowerShell scripts to do the following work. PSake is just syntactic sugar around PowerShell, so anything you can do with PS, you can do with PSake.

2. Clean out the existing target web application

Using your PSake script, use the Web Deploy cmdlets. You will find everything you need for this here. You can also install the NuGet package.

3. Deploy the newly built and unit tested web application

As above, just use the Web Deploy cmdlets.

4. Drop the database

As above, just use the Web Deploy cmdlets.

5. Create database by using latest DDL and DML scripts pulled from source control

Database update via Application

Kind of related, but not specific to CI.

Depending on your needs, there are quite a few ways you could do this.

One way of doing this is to have your application utilise a library that determines which version of the database the application needs and be able to update the database accordingly. This library would use similar or the same upgrade scripts that we would use in this test process.

Your applications should create the database (if it does not exist) and update it on run. So all the DDL and DML code per database lives in a library. Each application that uses a specific database references that database's DDL code library. Script all stored procedures, views, functions and triggers, so they're recreated as part of a deployment script.

When the application is deployed, and the database created or updated, anything that must be there for the application to run out of the box should be part of the scripts, and of course versioned. This includes the part of our data that is constant or configuration data, as well as the tables, stored procedures, views, functions and triggers. For the variable part of your data, you will need a synthetic data generation plan for testing.

Database Process for Versioning

Also related, but not specific to CI.

DBA, Devs, Product Owner and consultants must be aware of the process.

When any schema, constant data, configuration data, test data is updated… the (version controlled) scripts must also be updated, else the updates will get overwritten.

As part of the nightly build, if you're supporting multiple versions of your application, you could also hydrate the collection of database versions, then run the appropriate upgrade scripts against each one, to verify the upgrades work. If any don't, the build fails.

Create set of well defined processes that:

  1. In most cases, looks after itself
  2. Upgrades existing databases if they are not on the latest version, to the latest version
  3. Creates databases for those applications that don’t have a database
  4. Informs the user on deployment if the database is corrupt or cannot be upgraded
  5. Outlines who is responsible for, and who may update the DDL and DML scripts for your projects
  6. Clearly documents that any changes made to any databases by unauthorised personnel will more than likely be overwritten.

A User Story for this might look something like the following:

As the team, we need to create a set of well defined processes that clearly outline what is required in regards to setting up the development team's database versioning, creation and upgrade systems and process strategy for our organisation's databases, so that all team personnel are aware of the benefits and dangers of making changes to the databases, and understand the change process.

Possibly useful tools

1. DB Ghost
2. http://www.red-gate.com/products/sql-development/sql-source-control/index-2
3. http://www.sqlaccessories.com/SQL_Data_Examiner/

6. Apply any specific configurations

As above, just use the Web Deploy cmdlets.

7. Stop and start the target web server

As above, just use the Web Deploy cmdlets.

8. Run the acceptance tests which will include any Web UI tests

As above, just use the Web Deploy cmdlets.

  1. Start each VM that hosts a set of browsers you want to farm your tests out to. From memory, you do not need to start each browser. There are of course many ways to do this. PS provides the Start-VM and Stop-VM cmdlets. These would be my first options.
  2. Start the Selenium standalone server(s). All details are found here. Or just work through the “Distributed Testing with Selenium Grid” chapter until you get to the “Creating and executing Selenium script in parallel with TestNG” heading, at which point switch to this documentation to replace TestNG with PNUnit. A rough sketch of the hub and node start-up commands follows.
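
For reference, starting the grid by hand looks roughly like this. The jar file name, version and host names below are placeholders; the exact options are covered in the Selenium Grid documentation linked above.

# On the hub machine:
java -jar selenium-server-standalone-2.x.x.jar -role hub

# On each browser VM, register a node with the hub (hub address and port are placeholders):
java -jar selenium-server-standalone-2.x.x.jar -role webdriver -hub http://hub-host:4444/grid/register -port 5555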

If I’ve failed to explain anything in enough detail for you, drop me a message below and I’ll do my best to help 🙂

Moving to TDD

December 1, 2012

My last employer's software development team recently took up the challenge of writing their tests before writing the functionality for which the tests were written. In software development, this is known as Test Driven Development, or TDD.

TDD is a hard concept to get developers to embrace. It’s often as much of a paradigm shift as persuading a procedural programmer to start creating Object Oriented designs. Some never get it. Fortunately we had a very talented bunch of developers, and they’ve taken to it like fish to water.

The first thing to clear up is that TDD is not primarily about testing, but rather it forces the developer to write code that is testable (the fact the code has tests written for it and running regularly is a side effect, albeit a very positive one).

This is why there is often some confusion about TDD: it and its derivatives (BDD, ATDD, AAT, etc.) are primarily focused on creating well designed software. Code that is testable must be modular, which provides good separation of concerns.

  • Testing is about measuring where the quality is currently at.
  • TDD and its derivatives are about building the quality in from the start.

[Figure: the red-green-refactor cycle]

TDD concentrates on writing a unit test for the routine we are about to create before it’s created. A developer writes code that acts as a low-level specification (the test) that will be run on the routine, to confirm that the routine does what we expect it will do.

To unit test a routine, we must be able to break out the routine's dependencies and separate them. If we don't do this, the hierarchy of calls often grows exponentially.

Thus:

  1. We end up testing far more than we want or need to.
  2. The complexity gets out of hand.
  3. The test takes longer to execute than it needs to.
  4. Thus, the tests don’t get run as often as they should because we developers have to wait, and we live in an instant society.

Separating out the dependencies allows us to ignore how they behave and concentrate on a single routine. There are a number of concepts we can use to help with this.

We can use test doubles (dummies, fakes, stubs and mocks), injected via techniques such as dependency injection, to stand in for those dependencies.

Although TDD isn't primarily about testing, its real purpose is to create solid, well designed, extensible and scalable software. TDD encourages, and in some cases forces, us down the path of the SOLID principles, because to test each routine, each routine must be able to stand on its own.

SOLID principles

So what does SOLID give us? SOLID stands for:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Single Responsibility Principle

  • Each class should have one and only one reason to change.
  • Each class should do one thing and do it well.


Just because you can, doesn’t mean you should.

Open Closed Principle

  • A class’s behaviour should be able to be extended without modifying it.
  • There are several ways to achieve this, some of which are polymorphism via inheritance, aggregation and wrapping.

Liskov Substitution Principle

  • Subtypes should be substitutable for their base types without altering the correctness of the program.

Interface Segregation Principle

  • When an interface consists of too many members, it should be split into smaller and more specific (to the client’s needs) interfaces, so that clients using the interface only use the members applicable to them.
  • A client should not have to know about all the extra interface members they don’t use.
  • This encourages systems to be decoupled and thus more easily re-factored, extended and scaled.

Dependency Inversion Principle

  • Often implemented in the form of the Dependency Injection Pattern via the more specific Inversion of Control Principle (IoC). In some circles, known as the Hollywood Principle… Don’t call us, we’ll call you.

TDD assists in the monitoring of technical debt and streamlines the path to quality design.

Additional info on optimizing your team’s testing effort can be found here.