
Consuming Free and Open Source

October 29, 2015

Risks

Exploitability: average | Prevalence: widespread | Detectability: difficult | Impact: moderate

This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than ever before. Much of the code we pull into our projects is never intentionally used, yet it still adds attack surface. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying
    on developers we do not know much about not to have introduced defects. Most developers are more focused on building than breaking; they often do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right: many of the packages we consume are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs who can, and do, introduce vulnerabilities. Anyone can write code and publish it to an open source repository. Much of this code ends up in the package management repositories we consume from.
  • Does not undergo the same requirements analysis, scope definition, acceptance criteria, test conditions and sign-off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more; it is an opportunity for many vulnerabilities to sit waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls' talk on consuming all of the open source.

Running install scripts, or any scripts, from non-local sources without first downloading and inspecting them can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or carry out any number of other criminal activities.

Countermeasures

Prevention: easy

Process

Dibbe Edwards discusses some excellent initiatives on how they do it at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, manual and automated code review
  • Maintain a list of all the libraries that have been approved for use. If a library is not on the list, make a request and put it through the same process.
  • Once the libraries are in your product they should be treated as part of your own code, so that they get the same rigour applied to them as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing outside the approved list has been included (a minimal sketch follows this list)
  • Consider automating some of the suggested tooling options below
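
As a starting point for that automated check, the following is a minimal sketch (the approved-packages.txt and installed-packages.txt file names are assumptions; approved-packages.txt holds one approved package name per line) that fails if any of a project's direct NPM dependencies is not on the approved list:

#!/usr/bin/env bash
# Sketch only: fail the build if any direct dependency is not on the approved list.
# Scoped packages and transitive dependencies would need extra handling.
set -e
npm ls --depth=0 --parseable 2>/dev/null | tail -n +2 | xargs -n1 basename | sort -u > installed-packages.txt
unapproved=$(comm -23 installed-packages.txt <(sort -u approved-packages.txt))
if [ -n "$unapproved" ]; then
  echo "Unapproved dependencies found:"
  echo "$unapproved"
  exit 1
fi
echo "All direct dependencies are on the approved list."

A check like this can run on the build server, or locally as one of the pre-commit hooks discussed under Tooling.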

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing with regard to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this. wget or curl the script first, then make sure what you have just downloaded is not malicious before you run it.

Inspect the code before you run it, because:

1. The repository could have been tampered with
2. The transmission from the repository to you could have been intercepted and modified

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl or otherwise fetch what you think is an installer or any other script and pipe it to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option is to:

  1. Verify the source that you are about to download, if all good
  2. Download it
  3. Check it again, if all good
  4. Only then should you run it
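
As a concrete sketch of those four steps applied to the NodeJS example above (the local file name setup_4.x.sh is arbitrary):

# 1. Verify the source you are about to download (a manual step).
# 2. Download the script to a local file instead of piping it straight to a shell.
curl -sL -o setup_4.x.sh https://deb.nodesource.com/setup_4.x
# 3. Inspect what you actually downloaded.
less setup_4.x.sh
# 4. Only once you are satisfied it is not malicious, run it.
sudo -E bash setup_4.x.sh
sudo apt-get install -y nodejs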

npm install

As part of an npm install, package creators, maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check to see if any package has hooks (before installation) that will run scripts by issuing the following command:
npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download, if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it and inspect it. You can download it from
    http://registry.npmjs.org/[module-you-want-to-install]/-/[module-you-want-to-install]-VERSION.tgz
    

The most important step here is downloading and inspecting before you run.
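
Put together, the inspection steps look something like the following, using a hypothetical package name (some-module) and version (1.0.0); substitute the real name and version:

# See which lifecycle scripts (preinstall, install, postinstall, and so on) would run on install.
npm show some-module scripts
# Download the tarball without installing it, using the registry URL pattern above.
curl -O http://registry.npmjs.org/some-module/-/some-module-1.0.0.tgz
# npm tarballs unpack into a directory named "package"; inspect before deciding to install.
tar -xzf some-module-1.0.0.tgz
less package/package.json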

Doppelganger Packages

Similarly to Doppelganger Domains, people often mistype the name of the package they want to install. If you were someone who wanted to do something malicious, such as have consumers of your package destroy or modify their systems, send sensitive information to you, or carry out any number of other criminal activities (ideally identified in the Identify Risks section; if not already there, add it), doppelganger packages are an excellent avenue: they raise the likelihood that someone will install your malicious package by mistyping its name, because it is very similar to the name of another, legitimate package. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.

Tooling

For NodeJS developers: keep your eye on the Node Security advisories. Security issues you identify can be reported via the Node Security report page.

RetireJS is useful for finding JavaScript libraries with known vulnerabilities in your projects. RetireJS has the following components:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        public/plugins/jquery/jquery-1.4.4.min.js
        ↳ jquery 1.4.4.min has known vulnerabilities:
        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
        http://research.insecurelabs.org/jquery/test/
        http://bugs.jquery.com/ticket/11290
        
    • To install RetireJS locally in your project and run it as a git pre-commit hook,
      there is an NPM package that can help us with this, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project's repository. This allows us to run any scripts immediately before a commit is issued.
      Install both packages locally and save them to the devDependencies of your package.json. This makes sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev
      

      If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the RetireJS documentation for options.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         }
      }
      

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run. validate runs our retire command.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         },
         "pre-commit": ["lint", "validate"]
      }
      

      Keep in mind that pre-commit hooks can be very useful for all sorts of checking of things immediately before your code is committed. For example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. Online tool: you can simply enter your web application's URL and the resource will be analysed

requireSafe

requireSafe provides “intentful auditing as a stream of intel for bithound“. Watch this space; in speaking with Adam Baldwin, there does not appear to be much happening here yet.

bithound

In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages. Some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions
  3. Many packages are still consuming the vulnerable, unpatched versions of the packages that have published patched versions
    • So although we could have removed a much larger number of vulnerable packages, we have not, because they persist in depending on the unpatched versions. I think this mostly comes down to a lack of visibility, awareness and education. This is exactly what I am trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured not to analyse some files. Very large repositories are not analysed, due to the performance issues they would cause at that scale.

bithound analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure, presumably based on the known vulnerabilities (41 Node advisories at the time of writing)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your project and global packages and check that none of them appear in the advisories, but that is more work, and who is going to remember to do it all the time?
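
For reference, listing them looks something like this:

# Globally installed packages (top level only).
npm ls -g --depth=0
# A project's direct dependencies, run from within the project directory.
npm ls --depth=0

The advisories would still have to be checked by hand, which is why the automated tooling above is the better option.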

For .Net developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are the tests verifying that something cannot happen? That is where the likes of RetireJS come in.

Process

There is a danger of implementing too much manual process, thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so that developers have as little extra as possible to think about, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), if they have to context switch while a legal review and/or manual code review takes place, this will cause friction and reduce the team's performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member doing a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside the team who does not have skin in the game or the localised understanding that the people working on the project do.

Maintaining a list of the approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.

Tooling

Using the likes of pre-commit hooks, the other tooling options detailed in the Countermeasures section and creating scripts to do most of the work for us is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers way. A good way to do this is to ask the developers how it should be done. They know what will get in their way. In order for the process to be a success, the person(s) mandating it will need to get solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, product owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone's life harder than it already is. Everyone has their own agendas; rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.


How to optimise your testing effort

March 24, 2012

I recently wrote a post for the company I currently work for around the joys of doing TDD.
You can check it out here.

What is your current approach to testing?
How can you spend the little time you have on the most important areas?

I thought I’d share some thoughts around where I see the optimal areas to invest your test effort.
I got to thinking about this last night, and even while I was asleep.
We are putting too much effort into our UI, UA and system tests.
We are writing too many of them, thus creating a top-heavy test structure that will sooner or later topple.
These tests have their sweet spot, but they are slow, fragile and time consuming to write.

We should have a small handful for each user story to provide some UA, and the rest should be without the UI and database (the slow and fragile bits).
We need to get our mindsets lower down the test triangle.

test triangle

I’ll try and explain why we should be doing less Manual tests, followed by GUI tests, followed by UA tests, followed by integration tests, followed by Unit tests.

Try not to test the UI with the lower architectural layers included in the tests.
UI tests should have the lower layers mocked and / or stubbed.
Check out Dummy vs Fake vs Stub vs Mock
Full end to end system tests are not required to validate UI field constraints.
Dependency injection really helps us here.

When you are explicitly testing the upper levels of the test triangle, the lower / immediate lower layers are implicitly being tested.
So you might think, cool, if we invest in the upper layers, we implicitly cover the lower layers.
That’s right, but the disadvantages of the higher level tests outweigh the advantages.
UI tests, and especially ones that go from end to end, should be avoided, or kept very few in number,
as they are fragile and incur high maintenance costs.
If we create too many of these, confidence in their value diminishes.
Read on and you’ll find out why.

Let's look at cost versus value to the business.

Some tests cost a lot to create and modify.
Some cost little to create and modify.
Some yield high value.
Some yield low value.
We only have so much time for testing,
so let's use it in the areas that provide the greatest value to the business.
Greatest value of course, will be measured differently for each feature.
There is no stock standard answer here, only guidelines.
What we’re aiming for is to spend the minimum effort (cost) and get the maximum benefit (value).
Not the other way around…
With the following set of scales, we've spent too much in the wrong areas, yielding suboptimal value.

cost versus business value

It’s worth the effort to get under the UI layer and do the required setup incl mocking the layers below.
It’s also not to hard to get around the likes of the HttpContext hierarchy of classes (HttpRequest, HttpResponse, and so on) encountered in ASP.NET Web Forms and MVC.

Beware

  • The higher level tests get progressively more expensive to create and maintain.
  • They are slower to run, which means they don't run as part of CI, but maybe the nightly build.
    This means there is more latency in the development cycle,
    and developers are less likely to run them manually.
  • When they break, it takes longer to locate the fault, as you have all the layers below to go through.

Unreliable tests are a major cause for teams ignoring or losing confidence in automated tests.
UI, Acceptance, followed by integration tests are usually the culprits for causing this.
Once confidence is lost, the value initially invested in the automated tests is significantly reduced.
Fixing failing tests and resolving issues associated with brittle tests should be a priority to remove false positives.

Planning the test effort

This is usually the first step we do when starting work on a user story,
or any new feature.
We usually create a set of Test Conditions (Given/When/Then), like the following, for Product Backlog items where there are enough use cases for it to be worth doing.

Given: There are no items in the shopping cart.
When: Customer clicks the “Purchase” button for a book which is in stock.
Then: 1 x book is added to the shopping cart. The book is held, preventing it being sold twice.

Given: There are no items in the shopping cart.
When: Customer clicks the “Purchase” button for a book which is not in stock.
Then: A dialog with an “Out of stock” message is displayed, offering the customer the option of putting the book on back order.

Where we don't create Test Conditions, we have a Test Condition workshop.
In the workshop we look at the What, How, Who and Why in that order.
The test quadrant (pictured below) assists us in this.
In the workshop, we write the previously recorded Acceptance Criteria on a board (the What) and discuss the most effective way to verify that the conditions are met (the How).
With the How, we look at the test triangle and the test quadrant and decide where our time is most effectively spent.

Test condition workshop

With the test condition workshop,
when we start on a user story (generally a feature in the sprint backlog),
we plan where we are going to spend our test resource.
Think about What, and sometimes Who, but not How.
The How comes last.

Unit tests are the developers' bread and butter.
They are cheap to create and modify,
and consistently yield not only good value to the developers,
but implicitly good value to most / all other areas.
This is why they sit at the bottom of the test triangle.
This is why TDD is as strong as it is today.
test quadrant

The hierarchy of criteria that we use to help us:

  1. Release Criteria
    Ultimately controlled by the Product Owner or release manager.
  2. Acceptance Criteria
    Also owned by the Product Owner.
    Attached to each user story, or more correctly… product backlog item.
    The Development team must meet these in order to fulfill the Definition of Done.
  3. Test Conditions
    When executable, confirm the development team have satisfied the requirements of the product backlog item.

Write your tests first

TDD is not about testing; it's about creating better designs.
It forces us to design better software: testable, modular, with separated concerns and single responsibilities.
It forces us down the path of the SOLID principles.

red green refactor

  1. Write a unit test
    Run it and watch it fail (because the production code is not yet written)
  2. Write just enough production code to make the test pass
  3. Re-run the test and watch it pass
  4. Refactor the code while keeping the tests green, then repeat the cycle

This podcast around TDD has lots of good info.

Continuous Integration

Realise the importance of setting up CI and nightly builds.
The benefit of having your unit (fast running) tests automatically executed regularly is great.
You get rapid feedback, which is crucial to an agile team completing features on time.
Tests that are not being run regularly risk failing without anyone noticing.
The sooner you find a failing test, the easier it is to fix the code.
The longer it’s left unattended, the more technical debt you accrue and the more effort is required to hunt down the fault.
Make the effort to get your tests running on each commit or push.

Nightly Builds

The slower running tests (that's all the automated tests above unit tests on the triangle) need to be run as part of a nightly build.
We can't have these running as part of CI because they are just too slow.
If something gets in the way of a developer's workflow, it won't get done.

Pair Review

Don’t forget to pair review all code written.
In my current position we've been requesting reviews verbally and responding with emails or comments on paper.
This is not ideal and we’re currently evaluating review software, of which there are many offerings.