
Captcha Considerations

December 31, 2015

Risks

Exploiting Captcha

A lack of captchas is a risk, but so are captchas themselves…

Let’s look at the problem here. What are we trying to stop with captchas?

Bots submitting. Whatever it is, whether:

  • Advertising
  • Creating an unfair advantage over real humans
  • Link creation in attempt to increase SEO
  • Malicious code insertion

You are more than likely not interested in accepting it.

What do we not want to block?

People submitting genuinely innocent input. If a person is prepared to fill out a form manually, even if it is spam, then a human can view the submission and very quickly delete the validated, filtered and possibly sanitised message.

Countermeasures

Prevention: VERY EASY

Types

Text Recognition

recaptcha uses this technique. See below for details.

Image Recognition

Uses images which users have to perform certain operations on, like dragging them to another image. For example: “Please drag all cat images to the cat mat.”, or “Please select all images of things that dogs eat.” sweetcaptcha is an example of this type of captcha. This type completely rules out visually impaired users.

Friend Recognition

Pioneered by… you guessed it: Facebook. This type of captcha focuses on human hackers; the idea being that they will not know who your friends are.

Instead of showing you a traditional captcha on Facebook, one of the ways we may help verify your identity is through social authentication. We will show you a few pictures of your friends and ask you to name the person in those photos. Hackers halfway across the world might know your password, but they don’t know who your friends are.

I disagree with that statement. A determined hacker will usually be able to find out who your friends are. There is another problem: do you know who all of your friends are? Every acquaintance? I am terrible with names, and so are many people. This is supposed to be used to authenticate you, so you have to be able to answer the questions before you can log in.

Logic Questions

This is what textcaptcha uses. Simple logic questions designed for the intelligence of a seven-year-old child. These are more accessible than image and textual image recognition, but they can take longer than image recognition to answer, unless the user is visually impaired. The questions are also usually language specific, typically targeting English.

User Interaction

This is a little like image recognition. Users have to perform actions that artificial intelligence can not work out… yet, like dragging a slider a certain number of notches.
If an offering gets popular, creating some code to perform the action may not be that hard, and would definitely be worth the effort for bot creators.
This is obviously not going to work for the visually impaired or for people with impaired motor skills.


In NPM land, as usual, there are many options to choose from. The following are the offerings I evaluated, none of which really felt like a good fit:

Offerings

  • total-captcha. Depends on node-canvas. Have to install cairo first, but why? No explanation. Very little of anything here. Move on. How does this work? Do not know. What type is it? Presume text recognition.
  • easy-captcha is a text recognition offering generating images
  • simple-captcha looks like another text recognition offering. I really do not want to be writing image files to my server.
  • node-captcha Depends on canvas. By the look of the package this is another text recognition in a generated image.
  • re-captcha was one of the first captcha offerings, created at Carnegie Mellon University by Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum, who coined the term captcha. Google acquired it in September 2009. recaptcha is a text recognition captcha that uses scanned text that optical character recognition (OCR) technology has failed to interpret, which has the added benefit of helping to digitise text for The New York Times and Google Books.
  • sweetcaptcha uses the sweetcaptcha cloud service of which you must abide by their terms and conditions, requires another node package, and requires some integration work. sweetcaptcha is an image recognition type of captcha.
  • textcaptcha is a logic question captcha relying on an external service for the questions and MD5 hashes of the correct lower-cased answers. This looks pretty simple to set up, but again it expects your users to use their brains on things they should not have to.


After some additional research I worked out why the above types and offerings didn’t feel like a good fit. It pretty much came down to user experience.

Why should genuine users/customers of your web application be disadvantaged by having to jump through hoops because you have decided you want to stop bots spamming you? Would it not make more sense to make life harder for the bots rather than for your genuine users?

I had some other considerations. Ideally I wanted a simple solution requiring few, or ideally no, external dependencies; no JavaScript required; no reliance on the browser or anything out of my control; no images; and it definitely should not cost any money.

Alternative Approaches

  • Services like Disqus can be good for commenting. Obviously the comments are all stored somewhere in the cloud, out of your control, and this is an external dependency. For simple text input, this is probably not what you want. Similar services, such as all the social media authentication services, can take things a bit too far, I think; they remove freedoms from your users. Why should your users be disadvantaged for leaving a comment or posting a message on your web application? Disqus tracks users’ activities from hosting website to hosting website, whether you have an account, are logged in, or not. Any information it collects, such as IP address, web browser details, installed add-ons, referring pages and exit links, may be disclosed to any third party. When this data is aggregated, it is useful for de-anonymising users. If users choose to block the Disqus script, the comments are not visible. Disqus has also published its registered users’ entire commenting histories, along with a list of connected blogs and services, on publicly viewable user profile pages. Disqus also engages in ad targeting and blackhat SEO techniques from the websites on which its script is installed.
  • Services like Akismet and Mollom take user input and analyse it for spam signatures. Mollom sometimes presents a captcha if it is unsure. These two services learn from their mistakes if they mark something as spam and you unmark it, but of course you are going to have to be watching for that. Matt Mullenweg created Akismet so that his mother could blog in safety. “His first attempt was a JavaScript plugin which modified the comment form and hid fields, but within hours of launching it, spammers downloaded it, figured out how it worked, and bypassed it. This is a common pitfall for anti-spam plugins: once they get traction”. My advice is not to use a common plugin, but to create something custom. I discuss this soon.

The above solutions are excellent targets for creating exploits that will have a large pay off due to the fact that so many websites are using them. There are exploits discovered for these services regularly.

Still not cutting it

Given the fact that many clients count on conversions to make money, not receiving 3.2% of those conversions could put a dent in sales. Personally, I would rather sort through a few SPAM conversions instead of losing out on possible income.

Casey Henry: Captchas’ Effect on Conversion Rates

Spam is not the user’s problem; it is the problem of the business that is providing the website. It is arrogant and lazy to try and push the problem onto a website’s visitors.

Tim Kadlec: Death to Captchas

User Time Expenditure

Another technique: record how long it takes from form fetch to form submit. For example, if the time span is under five seconds, it is more than likely a bot, so handle the message accordingly.
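A minimal sketch of this technique, assuming an express app with session middleware (for example express-session) already wired up; all names here are hypothetical:

var MINIMUM_HUMAN_TIME_MS = 5000;

function serveForm(req, res) {
   // Record when the form was fetched.
   req.session.formFetchedAt = Date.now();
   res.render('contact', { title: 'Contact' });
}

function receiveForm(req, res) {
   var elapsed = Date.now() - (req.session.formFetchedAt || 0);
   if (elapsed < MINIMUM_HUMAN_TIME_MS) {
      // More than likely a bot. Handle the message accordingly.
      return res.redirect('/');
   }
   // Otherwise process the submission as genuine.
}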

Bot Pot

Spamming bots operating on custom mechanisms will in most cases just try, then move on. If you decide to use one of the common offerings from above, exploits will be more common, depending on how widespread the offering is. This is one of the cases where going custom is a better option. The worst case is that you get some spam and you have to modify your technique, but you get to keep things simple and tailored to your web application and your users’ needs, with no external dependencies and no monthly fees. This is also the simplest technique and requires very little work to implement.

Spam bots:

  • Love to populate form fields
  • Usually ignore CSS. For example, if you have some CSS that hides a form field, and especially if the CSS is not inline on the same page, they will usually fail to realise that the field is not supposed to be visible.

So what we do is create a field that is not visible to humans and is supposed to be kept empty. On the server once the form is submitted, we check that it is still empty. If it is not, then we assume a bot has been at it.

This is so simple, it does not get in the way of your users, and yet it is very effective at filtering bot spam.

Client side:

form .bot-pot {
   display: none;
}
<form>
   <!--...-->
   <div>
      <input type="text" name="bot-pot" class="bot-pot">
   </div>
   <!--...-->
</form>

Server side:

I show the validation middleware of the route in the code below; the validation is performed by the validate function, which is wired into the /contact route.

var form = require('express-form');
var fieldToValidate = form.field;
//...

function home(req, res) {
   res.redirect('/');
}

function index(req, res) {
   res.render('home', { title: 'Home', id: 'home', brand: 'your brand' });
}

function validate() {
   return form(
      // Bots love to populate everything.
      fieldToValidate('bot-pot').maxLength(0)
   );
}

function contact(req, res) {
   if (req.form.isValid) {
      // We know the bot-pot is of zero length. So no bots.
      //...
   }
}

module.exports = function (app) {
   app.get('/', index);
   app.get('/home', home);
   app.post('/contact', validate(), contact);
};

So as you can see, a very simple solution. You could even consider combining the above two techniques.

Lack of Visibility in Web Applications

November 26, 2015

Risks

I see this as an indirect risk to the asset of web application ownership (That’s the assumption that you will always own your web application).

Not being able to introspect your application at any given time, or to know its health status, is not a comfortable place to be, and there is no reason you should be there.

Insufficient Logging and Monitoring

Exploitability: AVERAGE | Prevalence: WIDESPREAD | Detectability: VERY EASY | Impact: MODERATE

Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy, for example circumventing client-side input sanitisation

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?

Countermeasures

As Bruce Schneier said: “Detection works where prevention fails and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging

PreventionAVERAGE

When it comes to logging in NodeJS, you can’t really go past winston. It has a lot of functionality and what it does not have is either provided by extensions, or you can create your own. It is fully featured, reliable and easy to configure like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

{
   ...
   "dependencies": {
      ...,
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",
      ...
   }
}

winston-email also depends on nodemailer.

Opening UDP port

Opening a UDP port with winston-syslog seems to be what a lot of people are doing. I think it may be because winston-syslog is the first package that worked well with winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

$ModLoad imudp
# Listen on all network addresses. This is the default.
$UDPServerAddress 0.0.0.0
# Listen on localhost.
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Or the new style configuration.
module(load="imudp")
input(type="imudp" address="<IP>" port="<port>")
# Logging for your app.
local0.* /var/log/yourapp.log

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use posix syscalls, as they are faster and safer. The nmap output below shows the open syslog port.

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
PORT STATE SERVICE REASON VERSION
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the nmap output below that the syslog port is no longer open:

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
PORT STATE SERVICE REASON VERSION
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslog transport (syslogPosixTransportOptions) in the default.js configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog compatible protocol severities.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, //5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress 127.0.0.1 // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
}

In development I have chosen not to use syslog; you can see this in the devbox1-development.js override below, where syslogPosixTransportOptions is set to null. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file, or modify it to be similar to the above, and then add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comment in the default.js config file above.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
}

In production we log to syslog, and because of that we do not need the file transport configured in the default.js configuration file above, so we set fileTransportOptions to null in the prodbox-production.js file below.

I have gone into more depth about how we handle syslogs here, where all of our logs, including these ones, get streamed to an off-site syslog server. This provides easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real time, with the ability to go back in time and visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.
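If you do take the approach of logging to local syslog and shipping off-site from there, the shipping side can be a single forwarding rule in /etc/rsyslog.conf. A minimal sketch, with a hypothetical off-site host:

# Forward all facilities and severities to the off-site syslog server.
# A single @ forwards over UDP, @@ over TCP.
*.* @@syslog.example.com:514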

// prodbox-production.js
module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
}
// local.js (created by the build)
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
}

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;
require('winston-syslog-posix').SyslogPosix;
require('winston-email').Email;

winston.emitErrs = true;

var logger = new winston.Logger({
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); there are many ways
   // to do the same thing with winston.
   colors: loggerConfig.colours,
   // Alternatively: set levels to winston.config.syslog.levels.
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function(config) {
   logger.add (winston.transports.Console, config);
   return this;
};

logger.addFile = function(config) {
   logger.add (winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function(config) {
   logger.add (winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function(config) {
   logger.add (winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      JSON.stringify(
         'error-code:' + err.code + '. '
         + 'error-message:' + err.message + '. '
         + 'error-response:' + err.response + '. logger-level:'
         + err.transport.level + '. transport:' + err.transport.name
      )
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile( loggerConfig.fileTransportOptions );
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole( loggerConfig.consoleTransportOptions );
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog( loggerConfig.syslogPosixTransportOptions );
   if (loggerConfig.emailTransportOptions)
      logger.addEmail( loggerConfig.emailTransportOptions );
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger by calling logger.init, as shown below.

//...
var http = require('http');
var path = require('path');
var express = require('express');
var morganLogger = require('morgan');
var errorHandler = require('errorhandler');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var app = express();
//...
logger.init();
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
//...
// In order to utilise connect/express logger module in our third party logger,
// Pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
//...
app.use(express.static(path.join(__dirname, 'public')));
//...
require('./routes')(app);

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
   //...
}
if ('production' == app.get('env')) {
   app.use(errorHandler());
   //...
}

http.createServer(app).listen(app.get('port'), function(){
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

  • You can also optionally log JSON metadata
  • You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message

Here are some examples of how you can use the logger. The logger.log('<level>', ...) calls can be replaced with logger.<level>(...), where level is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross-cutting concerns like logging using Aspect Oriented Programming (AOP), as in the sketch below.
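A minimal sketch of the idea, assuming the logger module from above; withLogging and the wrapped function names are hypothetical:

function withLogging(name, fn) {
   // Wrap a function once, rather than sprinkling logging statements
   // through the business logic.
   return function () {
      logger.debug('entering ' + name);
      var result = fn.apply(this, arguments);
      logger.debug('leaving ' + name);
      return result;
   };
}

// Usage: the route handler stays free of logging noise.
var contact = withLogging('contact', function (req, res) {
   // Business logic only.
});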

Insufficient Monitoring

Prevention: EASY

There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It is easy to configure and has excellent, short documentation that is easy to understand, and the configuration file has lots of examples commented out, ready for you to take as is and modify to suit your environment. I have personally had excellent success with Monit.
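As a taste of the dark cockpit style, a Monit check for an app like the one above might look something like the following sketch in /etc/monit/monitrc; the paths, port and pidfile are hypothetical:

set daemon 30 # Poll at 30 second intervals.
check process yourapp with pidfile /var/run/yourapp.pid
   start program = "/etc/init.d/yourapp start"
   stop program = "/etc/init.d/yourapp stop"
   # Monit stays quiet unless something like the following fails.
   if failed host 127.0.0.1 port 3000 protocol http then restart
   if 5 restarts within 5 cycles then timeout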


Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the newfound information you now have. There will be no more blissful ignorance, if there was before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging, such as wrapping libraries, initialising, and adding logging statements or hiding them using AOP.


Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days’ work in order to have a really good logging system through your code base that is going to show you errors fast in development, and then show errors in the places your DevOps need to see them in production.

The same goes for monitoring. Find a tool that is a pleasure to work with. There are just about always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, the free and open source tools can be excellent for keeping ongoing costs down, especially mature tools that are also well maintained, like Monit.

Additional Resources

Consuming Free and Open Source

October 29, 2015

Risks

Exploitability: AVERAGE | Prevalence: WIDESPREAD | Detectability: DIFFICULT | Impact: MODERATE

This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than we ever have before. Much of the code we pull into our projects is never intentionally used, but it still adds attack surface. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying on developers we do not know a lot about to have not introduced defects. Most developers are more focused on building than breaking; they do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right, many of the packages we are consuming are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs that could and do include vulnerabilities. Anyone can write code and publish it to an open source repository. Much of this code ends up in the package management repositories from which we consume.
  • Does not undergo the same requirement analysis, scope definition, acceptance criteria, test conditions and sign-off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more; it provides the opportunity for many vulnerabilities to sit waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls’ talk on consuming all the open source.

Running install scripts, or any scripts, from non-local sources without first downloading and inspecting them can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or enable any number of other criminal activities.

Countermeasures

Prevention: EASY

Process

Dibbe Edwards discusses some excellent initiatives on how they handle the consumption of free and open source libraries at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, manual and automated code review
  • Maintain a list containing all the libraries that have been approved for use. If a library is not on the list, make a request, and it should go through the same process.
  • Once the libraries are in your product, they should be treated as part of your own code, so that they get the same rigour as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing is included that is not on the approved list
  • Consider automating some of the suggested tooling options below

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing in regards to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this. wget or curl first, then make sure what you have just downloaded is not malicious.

Inspect the code before you run it.

1. The repository could have been tampered with
2. The transmission from the repository to you could have been intercepted and modified.

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl or fetch in any way and pipe what you think is an installer or any script to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option is to:

  1. Verify the source that you are about to download, if all good
  2. Download it
  3. Check it again, if all good
  4. Only then should you run it

npm install

As part of an npm install, package creators, maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check to see if any package has hooks (before installation) that will run scripts by issuing the following command:
npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download, if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it and inspect it. You can download it from
    http://registry.npmjs.org/[module-you-want-to-install]/-/[module-you-want-to-install]-VERSION.tgz
    

The most important step here is downloading and inspecting before you run.
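Putting the procedure together as a minimal sketch, with a hypothetical module name and version:

npm show some-module scripts   # 1. Does it define any hook scripts?
npm pack some-module           # 2. Fetch the tarball without installing it.
tar -tzf some-module-1.0.0.tgz # List its contents (version will vary).
tar -xzf some-module-1.0.0.tgz
less package/package.json      # 3. Inspect the code and any hook scripts.
npm install some-module        # 4. Only once you are happy with what you read.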

Doppelganger Packages

Similarly to doppelganger domains, people often mistype what they want to install. If you were someone who wanted to do something malicious, like have consumers of your package destroy or modify their systems, send sensitive information to you, or carry out any number of other criminal activities (ideally identified in the Identify Risks section; if not already there, add it), doppelganger packages are an excellent avenue for raising the likelihood that someone will install your malicious package: give it a name very similar to that of another package, and wait for mistyped installs. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.

Tooling

For NodeJS developers: keep your eye on the nodesecurity advisories. Newly identified security issues can be posted to the NodeSecurity report page.

RetireJS is useful for helping you find JavaScript libraries with known vulnerabilities. RetireJS has the following:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        public/plugins/jquery/jquery-1.4.4.min.js
        ↳ jquery 1.4.4.min has known vulnerabilities:
        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
        http://research.insecurelabs.org/jquery/test/
        http://bugs.jquery.com/ticket/11290
        
    • To install RetireJS locally to your project and run it as a git pre-commit hook, there is an NPM package that can help us, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project’s repository. This will allow us to run any scripts immediately before a commit is issued.
      Install both packages locally and save them to the devDependencies of your package.json. This will make sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev
      

If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the precommit-hook documentation for options.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         }
      }
      

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run. validate runs our retire command.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         },
         "pre-commit": ["lint", "validate"]
      }
      

      Keep in mind that pre-commit hooks can be very useful for all sorts of checking of things immediately before your code is committed. For example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. An on-line tool where you can simply enter your web application’s URL and have the resource analysed

requireSafe

Provides “intentful auditing as a stream of intel for bithound”. I guess watch this space, as in speaking with Adam Baldwin, there doesn’t appear to be much happening here yet.

bithound

In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages. Some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions
  3. Many packages are still consuming the vulnerable unpatched versions of the packages that have published patched versions
    • So although we could have removed a much larger number of vulnerable packages, due to their persistence in depending on unpatched packages, we have not. I think this mostly comes down to a lack of visibility, awareness and education. This is exactly what I’m trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured to not analyse some files. Very large repositories are prevented from being analysed due to large scale performance issues.

bithound analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure. Assuming this is based on the known vulnerabilities (41 node advisories at the time of writing this)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your projects and global packages and check that none of them are in the advisories, but this would be more work, and who is going to remember to do that all the time?
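If you did want to do the manual check, listing what you currently depend on is straightforward. A minimal sketch:

npm ls --depth=0     # direct dependencies of the current project
npm ls -g --depth=0  # globally installed packages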

For .Net developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are the tests testing that something can not happen? That is where the likes of RetireJS come in.

Process

There is a danger of implementing too much manual process, thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so developers have less they need to think about, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), if they have to context switch while a legal review and/or manual code review takes place, then this will cause friction and reduce the team’s performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member doing a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside of the team who does not have skin in the game or the localised understanding that the people working on the project have.

Maintaining a list of the approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.

Tooling

Using the likes of pre-commit hooks, the other tooling options detailed in the Countermeasures section and creating scripts to do most of the work for us is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers way. A good way to do this is to ask the developers how it should be done. They know what will get in their way. In order for the process to be a success, the person(s) mandating it will need to get solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, product owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone’s life harder than it already is. Everyone has their own agendas; rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.

Attributions

Additional Resources


Risks and Countermeasures to the Management of Application Secrets

September 17, 2015

Risks

  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise

Exploitability

The reason I’ve tagged this as moderate is because if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison’s entire business relied on its commitment to keep its clients’ (37 million of them) data secret and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security, but ironically, it still proudly displays a graphic with the phrase “trusted security award” on its homepage.

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users’ passwords cracked in days (why so fast? because they used simple hashing, specifically SHA-1), and eBay, with 145 million active buyers. Many others come to light regularly.

Are you using well-salted, strong key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them of what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords, as in the sketch below.
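A minimal sketch of such a check in NodeJS; the password list file name and the length threshold are hypothetical (lists of the most commonly used passwords are freely available):

var fs = require('fs');

var commonPasswords = new Set(
   fs.readFileSync('./10k-most-common-passwords.txt', 'utf8')
      .split('\n')
      .map(function (line) { return line.trim(); })
);

function isAcceptablePassword(candidate) {
   // Reject short passwords (threshold is arbitrary here) and
   // anything on the common-passwords list.
   return candidate.length >= 14 && !commonPasswords.has(candidate);
}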

Countermeasures

Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make compromise harder, like many of the layers already discussed in my book.

Find out how secret the supposedly secret data being sent over the network actually is, and consider your internal network to be just as malicious as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.


You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. Then when it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it; essentially just making discovery a little more inconvenient, like running SSH on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files

Prevention

Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files, use different configuration files for different deployments, and consider keeping them out of source control.

Here are some examples using the node-config module.

node-config

node-config is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install, run the following from the command line within the root directory of your NodeJS application (note that the package is named config on NPM, even though the project is known as node-config, as you can see in the dependencies and require calls above):

npm install config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

var path = require('path');

// Due to bug in node-config the if statement is required before config is required
// https://github.com/lorenwest/node-config/issues/202
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Wherever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order which you specify by file naming convention, which provides a lot of flexibility and which caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to, for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. They are to be managed by external configuration management tools, build scripts, etc., thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

OurApp/
|
+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
|
+-- routes
| |
| +-- home.js
| |
| +-- ...
|
+-- app.js (entry point)
|
+-- ...

The contents of the above example configuration files may look like the following:

// default.js
module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // supported-services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
}
// devbox1-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
}
// devbox2-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
}
// stagingbox-staging.js
{
}
// prodbox-production.js
{
}
// local.js (created by the build)
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
}

node-config also:

  • Provides command line overrides, thus allowing you to override configuration values at application start from the command line
  • Allows for the overriding of environment variables with custom environment variables from a custom-environment-variables.json file, as sketched below
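As a sketch of the latter, a custom-environment-variables.json file maps configuration paths to environment variable names; here using the enquiries example from above, with a hypothetical variable name:

{
   "enquiries": {
      "pass": "ENQUIRIES_PASS"
   }
}

With this in place, starting the app with ENQUIRIES_PASS set will override config.enquiries.pass, keeping the secret out of the config files altogether.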

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that.
There are different answers for different platforms, none of which provide complete security, if there is such a thing, but instead focus on different levels of obscurity.

Windows

Store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential. Use a SqlServer connection string with Trusted_Connection=yes

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut down. The hashes can be sniffed from the wire in transit. The hashes can be pulled from a running machine’s memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash; trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

Encrypt sections of web, executable, machine-level and application-level configuration files with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (uses DPAPI) or RSAProtectedConfigurationProvider (uses RSA). The -pd switch is used to decrypt, or programmatically:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ToString();
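For example, the encrypt and decrypt commands might look like the following sketch; the application path is hypothetical, and the provider name must match one configured on your machine (the default RSA provider is registered as RsaProtectedConfigurationProvider):

aspnet_regiis.exe -pe "connectionStrings" -app "/YourApp" -prov "RsaProtectedConfigurationProvider"
aspnet_regiis.exe -pd "connectionStrings" -app "/YourApp"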

Of course there is a problem with this also. DPAPI uses LSASS, from whose memory an attacker can again extract the hash. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways using PSCredential and SecureString. They all seem to rely on DPAPI which as mentioned uses LSASS which is open for exploitation.

Credential Guard and Device Guard leverage virtualisation-based security. By the look of it, they are still using LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better: tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run directly on physical hardware; it can not run on virtual machines. This alone rules out many deployments.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.

Bromium

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

These seem like bold claims.

Also worth considering is that Microsoft’s new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.

Linux

Containers also help to provide some form of isolation, allowing you to have only the user accounts needed to do what is necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web applications files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run and a very limited set of users have any sort of access to the machine. If you are using Linux Containers then you can reduce this even more if it is not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like those from all of the closed source OS vendors trying to pimp their latest money spinning products. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time, if it has not happened already.

Least Privilege

Prevention

An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data store, create that connection with a user’s credentials that are only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions to the application to run stored queries. This way, if or when an attacker is able to compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot anyway.

Location

Prevention

Put your services, like data-stores, on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible, and with the least privileges possible.

Data-store Compromise

Prevention

As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • On all subsequent logins, users should be instructed to change their passwords

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the following key derivation functions (KDFs) for all sensitive data, not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths. Encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow, used in a process commonly known as key stretching. How long the key stretching takes can be tuned by increasing or decreasing the number of cycles used, often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users’ credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats’ hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received, along with a unique salt. The salt should be stored with the hashed data.

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged with one of the mentioned KDFs. Why? Because the hashing speed can not be slowed as hardware continues to get faster. Many organisations that have had their data-stores stolen, and they continue to be stolen on a weekly basis, could avoid their secrets being compromised simply by using a decent KDF with salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 or the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user-replaceable parts like PBKDF2; its PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain an understanding of at least all three KDFs. PBKDF2 is the oldest, so it is the most battle tested, but lessons have also been learnt from it that have been taken into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt, from the Blowfish cipher, to be very slow to initiate, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts, as often found in the GPUs of the day (1999).
The hashing functions that PBKDF2 uses were a lot easier to gain speed increases on, due to their ease of parallelisation, as opposed to the Eksblowfish cipher’s attributes: far greater memory required for each hash, and small, frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware.

The sensitive data stored within a data-store should be the output of one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs.
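As a minimal sketch using Node's built-in crypto module (the iteration count, key length and digest here are illustrative assumptions; tune the iterations so that verification takes as long as your defenders can tolerate):

var crypto = require('crypto');

// A unique salt per credential; stored alongside the derived key, it is not secret.
var salt = crypto.randomBytes(16);

// 100000 iterations, a 64 byte derived key and SHA-512 as the PRF: all assumptions to tune.
crypto.pbkdf2('data to protect', salt, 100000, 64, 'sha512', function (err, key) {
  if (err) throw err;
  // Persist both pieces together.
  console.log(salt.toString('hex') + ':' + key.toString('hex'));
});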

bcrypt brute-forcing

With hardware sporting high FPGA counts, brute-forcing bcrypt is becoming feasible. (Well ordered rainbow tables do not help against bcrypt, as each credential is salted; raw guessing speed is what counts.)

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

When moving secrets from source code to configuration files, there is a possibility that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the old secrets are still in source control history.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, etc. There is a risk that you do not use that potential power and flexibility to your best advantage. Learn the ins and outs of whatever system it is you are using and leverage its features to do the best job of obscuring your secrets and, if possible, securing them.

node-config

Is an excellent configuration package with lots of great features. There is no security provided with node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure the surrounding layers have beefed up security.
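As a minimal sketch of the layering node-config gives you (the file names follow node-config's conventions; the keys and values are hypothetical):

// config/default.json: safe defaults, fine to commit
// { "db": { "host": "localhost", "secret": "" } }

// config/production.json: production overrides, deployed out of band
// { "db": { "host": "db.internal", "secret": "set-outside-source-control" } }

// app.js: node-config merges the files based on NODE_ENV
var config = require('config');
var dbHost = config.get('db.host'); // "db.internal" when NODE_ENV=production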

Windows

As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around the Microsoft so-called security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

Bromium

Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

There is a risk that people will believe this.

Linux

As with Microsoft's “virtualisation-based security”, Linux containers may slow system compromise down, but a determined attacker will find ways around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker.
Even when technical security is very good, an experienced attacker will use other mediums to gain what they want: social engineering, physical compromise, both, or some other attack vector. Defence in depth is crucial to achieving good security: concentrate on the lowest-hanging fruit first and work your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. It is probably not that hard to do, but it does require breadth of thought and time, and some of the areas discussed could be missed. Having more than one person working on the task is often effective, as each person can bounce ideas off the other, and the other person is likely to notice areas that you may have missed, and vice versa.

Location

Segmentation is useful, and a common technique for building resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but once implemented it could increase complacency/over-confidence. Always be on your guard. Always expect that, although you have done a lot to increase your security stance, a determined and experienced attacker is going to push buttons you may have never realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post-compromise. Create a process (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways of obtaining the missing pieces of information they need. Think about paper shredders and the associated competitions in which shredded documents have been painstakingly reassembled: with patience, most puzzles can be cracked. If the compromise is an opportunistic type of attack, the attacker will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over-confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs that force us to focus on the adjacent layers. This is never a bad thing though; it helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.

Windows

You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developer's job more cumbersome. Some of the options provided may be worthwhile doing.

Linux

Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Is something you should be at least considering and probably doing in every case. It is one of those considerations that is worthwhile applying to most layers.

Location

Segmenting resources is a common and effective measure for at least slowing down attacks, and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here should go without saying, although many organisations do not do them well, if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any sensitive data in your data-store(s).

Evaluation of Host Intrusion Detection Systems (HIDS)

May 30, 2015

Followed up with a test deployment and drive.

The best time to install a HIDS is on a fresh install, before you open the host up to the internet, or even to your LAN if it's corporate. Of course, if you don't have that luxury, there are a bunch of tools that can help you determine if you're already owned. Be sure to run one or more over your target system before your HIDS bench-marks it.

The reason I chose Stealth and OSSEC to take further into evaluation was that they rose to the top of a preliminary evaluation I performed during a recent Debian web server hardening process, where I looked at several other HIDS as well.

OSSEC

ossec-hids

Source

ossec-hids on github

Who’s Behind Ossec

  • Many developers, contributors, managers, reviewers, translators (in fact the OSSEC team looks almost as large as the Stealth user base)

Documentation

Lots of documentation. Not always the easiest to navigate. Lots of buzz on the inter-webs.
Several books.

Community / Communication

IRC channel #ossec at irc.freenode.org, although it's not very active.

Components

  • Manager (sometimes called server): does most of the work monitoring the Agents. It stores the file integrity checking databases, the logs, events and system auditing entries, rules, decoders, major configuration options.
  • Agents: small collections of programs installed on the machines we are interested in monitoring. Agents collect information and forward it to the manager for analysis and correlation.

There are quite a few other ancillary components also.

Architecture

You can also go the agent-less route which may allow the Manager to perform file integrity checks using agent-less scripts. As with Stealth, you’ve still got the issue of needing to be root in order to read some of the files.

Agents can be installed on VMware ESX but from what I’ve read it’s quite a bit of work.

Features (What does it do)

  • File integrity checking
  • Rootkit detection
  • Real-time log file monitoring and analysis (you may already have something else doing this)
  • Intrusion Prevention System (IPS) features as well: blocking attacks in real-time
  • Alerts can go to a database (MySQL or PostgreSQL) or other types of outputs
  • There is a PHP web UI that runs on Apache if you would rather look at pretty outputs vs log files.

What I like

To me, the ability to scan in real-time offsets the fact that the agents need binaries installed. This hinders the attacker from covering their tracks.

Can be configured to scan systems in real-time based on inotify events.

Backed by a large company: Trend Micro.

Options. Install options for starters. You’ve got the options of:

  • Agent-less installation as described above
  • Local installation: Used to secure and protect a single host
  • Agent installation: Used to secure and protect hosts while reporting back to a
    central OSSEC server
  • Server installation: Used to aggregate information

You can install a web UI on the manager, for which you need Apache, PHP and MySQL.

What I don’t like

  • Unlike Stealth, something has to be installed on the agents
  • The packages are not in the standard repositories. The downloads, PGP keys and directions are here.
  • I think OSSEC may be doing too much, and if you don't like the way it does one thing, you may be stuck with it. Personally I really like the idea of a tool doing one thing, doing it well, and providing plenty of configuration options to change the way it does its one thing. This provides huge flexibility and minimises your dependency on a suite of tools and/or libraries
  • Information overload. There seems to be a lot to get your head around to get it set up. There are a lot of install options documented (books, interwebs, official docs). It takes a bit to work out exactly the best procedure for your environment. Following are some of the resources I used:
    1. Some official documentation
    2. Perving in the repository
    3. User take one install
    4. User take two install
    5. User take three install

Stealth

Stealth file integrity checker

Source

sourceforge

Who’s Behind Stealth

Author: Frank B. Brokken. An admirable job for one person. Frank is not a fly-by-nighter though; Stealth was first presented at a congress in 2003, is still actively maintained, and is used by a few. It's one of GNU/Linux's dirty little secrets, I think. It's a great idea: it makes a tricky job simple and does it in an elegant way.

Documentation

  • 3.00.00 (2014-08-29)
  • 2.11.03 (2013-06-18)
    • Once you install Stealth, all the documentation can be found by sudo updatedb && locate stealth. I most commonly used the HTML docs (/usr/share/doc/stealth-doc/manual/html/) and the PDF (/usr/share/doc/stealth-doc/manual/pdf/stealth.pdf) for easy searching across the docs
    • man page (/usr/share/doc/stealth/stealthman.html)
    • Examples: (/usr/share/doc/stealth/examples/)
  • More covered in my demo set-up below

Binaries

  • 3.00.00 (2014-08-29) is in the repository for Debian Jessie
  • 2.11.03-2 (2013-06-18): this is as recent as you can go for Linux Mint Qiana (17) and Rebecca (17.1) within the repositories, unless you want to go out of band. There are quite a few dependencies, as ldd will show you:
    libbobcat.so.3 => /usr/lib/libbobcat.so.3 (0xb765d000)
    libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xb7641000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb754e000)
    libm.so.6 => /lib/i386-linux-gnu/i686/cmov/libm.so.6 (0xb7508000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74eb000)
    libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb7340000)
    libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xb71ee000)
    libcrypto.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7022000)
    libreadline.so.6 => /lib/i386-linux-gnu/libreadline.so.6 (0xb6fdc000)
    libmilter.so.1.0.1 => /usr/lib/i386-linux-gnu/libmilter.so.1.0.1 (0xb6fcb000)
    /lib/ld-linux.so.2 (0xb7748000)
    libxcb.so.1 => /usr/lib/i386-linux-gnu/libxcb.so.1 (0xb6fa5000)
    libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xb6fa0000)
    libtinfo.so.5 => /lib/i386-linux-gnu/libtinfo.so.5 (0xb6f7c000)
    libXau.so.6 => /usr/lib/i386-linux-gnu/libXau.so.6 (0xb6f78000)
    libXdmcp.so.6 => /usr/lib/i386-linux-gnu/libXdmcp.so.6 (0xb6f72000)
    
  • 2.10.01 (2012-10-04) is in the repository for Debian Wheezy

Community / Communication

No community really. I see it as one of those dirty little secrets that I'm surprised many diligent sysadmins haven't jumped on. The author is happy to answer emails. The author doesn't market.

Components

Controller

The computer initiating the scan.

Needs two kinds of outgoing services:

  1. ssh to reach the clients
  2. mail transport agent (MTA) (sendmail, postfix, etc.)

Considerations for the controller:

  1. No public access.
  2. All inbound services should be denied.
  3. Access only via its console.
  4. Physically secure location (one would think goes without saying, but you may be surprised).
  5. Sensitive information about the clients is stored on the controller.
  6. Password-less access to the clients for anyone who gains controller root access, unless ssh-cron is used, which appears to have been around since 2014-05.

Client

The computer(s) being scanned. I don't see any reason why a Stealth solution couldn't be set up to look after many clients.

Architecture

The controller stores one to many policy files, each of which is specific to a single client and contains use directives and commands. The recommended policy is to copy the client programmes used extensively during the integrity scans (such as the hashing programme sha1sum, find and others) to the controller and take bench-mark hashes of them. Subsequent runs do the same and compare with the initial hashes stored.

Features (What does it do)

File integrity tests leaving virtually no sediments on the tested client.

Stealth subscribes to the “dark cockpit” approach, i.e. no mail is sent when no changes were detected. If you have an MTA, Stealth can be configured to send emails on changes it finds.

What I like

  • Its simplicity. There is one package to install on the controller and nothing to install on the client machines; a client just needs to have the controller's SSH public key. You will need a Mail Transfer Agent on your controller if you don't already have one. My test machine (Linux Mint) didn't have one.
  • An attacker can't simply modify the likes of sha1sum on the clients that Stealth uses to perform its integrity checks. Stealth would somehow have to be fooled into thinking that the changed hash of the sha1sum it's just copied to the controller is the same as the previously recorded hash. If the previously recorded hash is removed or does not match the current hash, then Stealth will fire off an alert.
  • It's in the Debian repositories, which is a little surprising considering I couldn't find any test suite results.
  • The whole idea behind it. Systems being monitored give little appearance that they're being monitored, other than, I think, the presence of a single SSH login in the auth.log when Stealth first starts. That could actually be months ago, as the connection remains active for the life of Stealth, and the login could be from a user doing anything on the client. It's very discreet.
  • Unpredictability of Stealth runs is offered through Stealth's --random-interval and --repeat options. E.g., --repeat 60 --random-interval 30 results in new Stealth runs on average every 75 seconds.
  • Subscribes to the Unix philosophy: “do one thing and do it well”
  • Stealth's author is very approachable and open. After talking with Frank and suggesting some ideas to promote Stealth and its community, Frank started a discussion list.

What I don’t like

  • Lack of visible code reviews and testing. Yes, it's in Debian, but so were OpenSSL and Bash
  • One man band. Support is provided by one person alone, via email. Compare that with the likes of OSSEC, which has …
  • Lack of use cases. I don't see anyone using/abusing it, although Frank did send me some contacts of other people that are using it, so again, a very helpful author. I can't find much on the interwebs. The documentation has clear signs that it was written for, and targets, people already familiar with the tool. This is understandable, as the author has been working on this project for nine years and could possibly be disconnected from what's involved for someone completely new to the project to dive in and start using it. In saying that, that's what I did and so far it has worked out well.
  • This tells me that either very few are using it, or it has no bugs and the install and configuration is stupidly straightforward, or both.
  • Small user base. This is how many people are happy to reveal they are using Stealth.
  • Reading through the user guide, the following put me off a little: “preferably the public ssh-identity key of the controller should be placed in the client’s root .ssh/authorized_keys file, granting the controller root access to the client. Root access is normally needed to gain access to all directories and files of the client’s file system.” I never allow SSH root access to servers, so I'm not about to start. What's worse is that Stealth SSHing from server to client with a key-pair can only do so automatically if the pass-phrase is blank. If it's not blank then someone has to drive Stealth, and the whole idea of Stealth (as far as I can tell) is to be an automatic file integrity checker.
    There are however some work-arounds to this. ssh-cron can run scheduled Stealth jobs, but needs to acquire the pass-phrase from the user once. It does this via ssh-askpass, which is an X11-based pass-phrase input dialog. If you're running ssh-cron and Stealth from a server (which would be a large number of potential users, I would think) you won't have X11, so in that case ssh-cron is out of the question. At least that's how I understand it from the man page and emails from Frank. Frank mentioned in an email: “Stealth itself could perfectly well be started ‘by hand’ setting up the connection using, e.g., an existing ssh private-key, which could thereafter completely be removed from the system running Stealth. The running Stealth process would then continue to use the established connection for as long as it’s running. This may be a very long time if the --daemon option is used.” I've used Monit to do any checks on the client that need root access. This way Stealth can run as a low privileged user.
  • In order to use an SSH key-pair with a passphrase and have your controller resume scans on reboot, you're going to need ssh-cron. Some distros like Mint and Ubuntu only have very old versions of libbobcat (3.19.01 (December 2013)) in their repositories. You could re-build if you fancy the dependency issues it may bring. Failing that, use Debian, which is way more up to date, or just fire Stealth off manually each time you reboot your controller machine and run it as a daemon with arguments such as --keep-alive (or --daemon if running stealth >= 4.00.00) and --repeat. This will cause Stealth to keep running (sleeping) most of the time, then doing its checks at the intervals you define.

Outcomes

In making all of my considerations, I changed my mind quite a few times on which offerings were most suited to which environments. I think this is actually a good thing, as I think it means my evaluations were based on the real merits of each offering rather than any biases.

If you already have real-time logging to an off-site syslog server set up, with alerting, OSSEC would provide redundant features.

If you don’t already have real-time logging to an off-site syslog server, then OSSEC can help here.

The simplicity of Stealth, its flatter learning curve and its overall philosophy are what won me over.

Stealth Up and Running

I tested this out on a Mint installation.

Installed stealth and stealth-doc (2.11.03-2) via the synaptic package manager, then just did a locate for stealth to find the docs and other example files. The following are the files I used for documentation, how I used them, and the tab order that made sense to me:

  1. The main documentation index: file:///usr/share/doc/stealth-doc/manual/html/stealth.html
  2. Chapter one introduction: file:///usr/share/doc/stealth-doc/manual/html/stealth01.html
  3. Chapter three to help build up a policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth03.html
  4. Chapter five for running Stealth and building up policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth05.html
  5. Chapter six for running Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth06.html
  6. Chapter seven for arguments to pass to Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth07.html
  7. Chapter eight for error messages: file:///usr/share/doc/stealth-doc/manual/html/stealth08.html
  8. The Man page: file:///usr/share/doc/stealth/stealthman.html
  9. Policy file examples: file:///usr/share/doc/stealth/examples/
  10. Useful scripts to use with Stealth: file:///usr/share/doc/stealth/scripts/usr/bin/
  11. All of the documentation in simple text format (good for searching across chapters for strings): file:///usr/share/doc/stealth-doc/manual/text/stealth.txt

Files I would need to copy and modify were:

  • /usr/share/doc/stealth/scripts/usr/bin/stealthcleanup.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthcron.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthmail.gz

Files I used for reference to build up a policy file:

  • /usr/share/doc/stealth/examples/demo.pol.gz
  • /usr/share/doc/stealth/examples/localhost.pol.gz
  • /usr/share/doc/stealth/examples/simple.pol.gz

As mentioned above, providing you have a working MTA, Stealth will just do its thing when you run it. The next step is to schedule its runs, which can also be done (as mentioned above) with a pseudo-random interval.
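Something along these lines should do it (client.pol is a hypothetical policy file name; the options are the ones discussed earlier):

# Run Stealth as a daemon, re-scanning on average every 75 seconds:
stealth --keep-alive --repeat 60 --random-interval 30 client.pol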

Feel free to leave a comment if you need help setting Stealth up. It did take a bit of fiddling, but it does what it says on the box very well.

Web Server Log Management

April 25, 2015

As part of the ongoing work around preparing a Debian web server to host applications accessible from the WWW, I performed some research and analysis, made decisions along the way, and implemented a first-stage logging strategy. I've done similar set-ups many times before, but thought it worth sharing my experience for all to learn something from it and/or provide input, recommendations and corrections to the process, so we all get to improve.

The main system loggers I looked into

  • GNU syslogd, which I don't think is being developed anymore? Correct me if I'm wrong. Most Linux distributions no longer ship with this. Only supports UDP. It's also a bit lacking in features, and from what I gather is single-threaded. I didn't spend long looking at this as there wasn't much point; the following two offerings are the main players.
  • rsyslog: ships with Debian and most other Linux distros now, I believe. I like to do as little as possible, and rsyslog fits this description for me. The documentation seems pretty good. Rainer Gerhards wrote rsyslog and his blog provides some good insights. Supports UDP and TCP, and can send over TLS. There is also the Reliable Event Logging Protocol (RELP), which Rainer created.
    rsyslog is great at gathering, transporting, storing log messages and includes some really neat functionality for dividing the logs. It’s not designed to alert on logs. That’s where the likes of Simple Event Correlator (SEC) comes in. Rainer discusses why TCP isn’t as reliable as many think here.
  • syslog-ng: I didn't spend too long here, as I didn't see any features that I needed that were better than the defaults of rsyslog. Can correlate log messages, both real-time and off-line. Supports reliable and encrypted transport using TCP and TLS. Provides message filtering, sorting, pre-processing and log normalisation.

There are a few comparisons around. Most of the ones I've seen are a bit biased and often out of date.

Aims

  • Record events and have them securely transferred to another syslog server in real-time, or as close to it as possible, so that potential attackers don’t have time to modify them on the local system before they’re replicated to another location
  • Reliability (resilience / ability to recover connectivity)
  • Extensibility: ability to add more machines and be able to aggregate events from many sources on many machines
  • Receive notifications from the upstream syslog server of specific events. No HIDS is going to remove the need to reinstall your system if you are not notified in time and an attacker plants and activates their root-kit.
  • Receive notifications from the upstream syslog server of lack of events. The network is down for example.

Environmental Considerations

A couple of servers in the mix:

FreeNAS File Server

Recent versions can send their syslog events to a syslog server. With some work, it looks like FreeNAS can be set up to act as a syslog server.

pfSense Router

Can send log events, but only by UDP by the look of it.

Following are the two strategies that emerged. You can see by the detail that I went down the path of the first one initially; it was the path of least resistance / quickest to set up. I'm going to be moving away from papertrail toward strategy two, mainly because I've had a few issues where messages have been getting lost that have been very hard to track down (I've spent over a week on it). As the sender, you have no insight into what papertrail is doing. The support team don't provide a lot of insight into their service when you have to trouble-shoot things. They have been as helpful as they can be, but I've expressed concern around them being unable to trouble-shoot their own services.

Outcomes

Strategy One

Rsyslog, TCP, local queuing, TLS, papertrail for your syslog server (PT doesn't support RELP, but says that's because their clients haven't seen any issues with reliability in using plain TCP over TLS with local queuing). My guess is they haven't looked hard enough. I must be the first then. Beware!

As I was setting this up and watching both ends, we had an internet outage of just over an hour. At that stage we had very few events being generated, so it was trivial to verify both ends. I noticed that once the ISP's router was back on-line and the events from the queue moved to papertrail, there was in fact one missing.

Why did Rainer Gerhards create RELP if TCP with queues was good enough? That was a question that was playing on me for a while. In the end, it was obvious that TCP without RELP isn't good enough.
At this stage it looks like the queues may lose messages. Rainer says things like “In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that the action is not ready, which means the remote server is off-line. This can be detected with plain TCP syslog and RELP”, but it can be detected without RELP.

You can aggregate log files with rsyslog or by using papertrail's remote_syslog daemon.

Alerting is available, including for inactivity of events.

Papertrail's documentation is good and support is reasonable, although due to the huge amounts of traffic they have to deal with, they are unable to trouble-shoot any issues you may have. If you still want to go down the papertrail path, to get started, work through this, which sets up your rsyslog to use UDP (specified in /etc/rsyslog.conf by a single @ in front of the target syslog server). I want something more reliable than that, so I use @@, which specifies TCP.
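For clarity, the difference in /etc/rsyslog.conf looks like this (the host and port are placeholders for the ones papertrail assigns you):

*.* @logsN.papertrailapp.com:XXXXX    # single @ forwards over UDP
*.* @@logsN.papertrailapp.com:XXXXX   # double @@ forwards over TCP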

As we're going to be sending our logs over the internet for now, we need TLS. Check papertrail's CA server bundle for integrity:

curl https://papertrailapp.com/tools/papertrail-bundle.pem | md5sum

Should be: c75ce425e553e416bde4e412439e3d09

If all is good, throw the contents of that URL into a file called papertrail-bundle.pem.
Then scp papertrail-bundle.pem into the web server's /etc dir. The command for that will depend on whether you're already on the web server and want to pull, or whether you're somewhere else and want to push. Then make sure the ownership is correct on the pem file.

chown root:root papertrail-bundle.pem

Install rsyslog-gnutls:

apt-get install rsyslog-gnutls

Add the TLS config

$DefaultNetstreamDriverCAFile /etc/papertrail-bundle.pem # trust these CAs
$ActionSendStreamDriver gtls # use gtls netstream driver
$ActionSendStreamDriverMode 1 # require TLS
$ActionSendStreamDriverAuthMode x509/name # authenticate by host-name
$ActionSendStreamDriverPermittedPeer *.papertrailapp.com

to your /etc/rsyslog.conf. Create an egress rule on your router to let traffic out to destination port 39871.

sudo service rsyslog restart

To generate a log message that uses your system syslogd config /etc/rsyslog.conf, run:

logger "hi"

This should log “hi” to /var/log/messages and also to papertrail, but in my case it wasn't reaching papertrail.

# Show a live update of the last 10 lines (by default) of /var/log/messages
sudo tail -f [-n <number of lines to tail>] /var/log/messages

OK, so let's run rsyslog in config checking mode:

/usr/sbin/rsyslogd -f /etc/rsyslog.conf -N1

If all is good, the output looks like:

rsyslogd: version <the version number>, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.

Trouble-shooting

  1. https://www.loggly.com/docs/troubleshooting-rsyslog/
  2. http://help.papertrailapp.com/
  3. http://help.papertrailapp.com/kb/configuration/troubleshooting-remote-syslog-reachability/
  4. /usr/sbin/rsyslogd -version will provide the installed version and supported features.

These didn't help a lot, as I don't have telnet installed, I can't ping from the DMZ (ICMP is not allowed out), and I'm not going to install tcpdump or strace on a production server. The more you have running, the more surface area you have, and the greater the opportunities to exploit.

So how do we tell if rsyslogd is actually running if it doesn’t appear to be doing anything useful?

pidof rsyslogd

or

/etc/init.d/rsyslog status

Showing which files rsyslogd has open can be useful:

lsof -p <rsyslogd pid>

or just combine the results of pidof rsyslogd

sudo lsof -p $(pidof rsyslogd)

To start with I had a line like:

rsyslogd 3426 root 8u IPv4 9636 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (SYN_SENT)

which obviously showed rsyslogd's SYN packets were not getting through. I've had some discussion with Troy from PT support around the reliability of plain TCP over TLS without RELP. I think if the server is business critical, then strategy two may be the better option. Troy has assured me that they've never had any issues with logs being lost due to lack of reliability without RELP. Troy also pointed me to their recommended local queue options. After adding the queue tweaks and an rsyslogd restart, it resulted in:

rsyslogd 3615 root 8u IPv4 9766 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (ESTABLISHED)

I could now see events in the papertrail web UI in real-time.
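For reference, the local queue tweaks were along these lines (a sketch based on rsyslog's documented queue directives; tune the values to your own disk and memory budget):

$ActionQueueType LinkedList       # in-memory linked-list queue
$ActionQueueFileName papertrail   # enables disk assistance when memory fills
$ActionQueueMaxDiskSpace 1g       # cap the on-disk spool
$ActionQueueSaveOnShutdown on     # persist queued messages across restarts
$ActionResumeRetryCount -1        # retry indefinitely if the remote end is down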

Socket Statistics (ss) (the better netstat) should also show the established connection.

By default papertrail accepts TCP over TLS (TLS encryption check-box on, plain text check-box off) and UDP. So if your TLS isn't set up properly, your events won't be accepted by papertrail. I later confirmed this to be true.

Check that our Logs are Commuting over TLS

Now, the constraints: without installing anything on the web server or router, without physically touching the server sending packets to papertrail or the router, using a switch (ubiquitous) rather than a hub, with no wire tap or multi-network-interfaced computer, and with no switch monitoring port (only available on expensive enterprise grade switches, along with the much needed access), we're basically down to the two approaches I can think of, and I really couldn't be bothered getting up out of my chair.

  1. MAC flooding with the help of macof, which is a utility from the dsniff suite. This essentially causes your switch to go into a “failopen mode” where it acts like a hub and broadcasts its packets to every port.

    MAC Flooding

    Or…
  2. Man in the Middle (MiTM) with some help from ARP spoofing or poisoning. I decided to choose the second option, as it’s a little more elegant.

    ARP Spoofing

On our MitM box, I set a static IP address, netmask and gateway in /etc/network/interfaces, and added domain, search and nameserver entries to /etc/resolv.conf.

Follow that up with a service network-manager restart

On the web server run:

ifconfig -a

to get its MAC: <web server MAC>. On the MitM box run the same command to get its MAC: <MitM box MAC>.
On web server run:

ip neighbour

to find the MACs associated with IPs (the local ARP table). The router was: <router MAC>.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> REACHABLE
<router IP> dev eth0 lladdr <router MAC> REACHABLE

Now you need to turn your MitM box into a router temporarily. On the MitM box run

cat /proc/sys/net/ipv4/ip_forward

You’ll see a ‘1’ if forwarding is on. If it’s not, throw a ‘1’ into the file:

echo 1 > /proc/sys/net/ipv4/ip_forward

and check again to make sure. Now on the MitM box run

arpspoof -t <web server IP> <router IP>

This will continue to notify <web server IP> that our (MitM box) MAC address belongs to <router IP>. Essentially, we (the MitM box) are <router IP> to the <web server IP> box, but our IP address doesn't change. Now on the web server you can see that its ARP table has been updated, and because arpspoof keeps running, it keeps telling <web server IP> that our MitM box is the router.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> STALE
<router IP> dev eth0 lladdr <MitM box MAC> REACHABLE

Now on our MitM box, while arpspoof continues to run, we start Wireshark listening on our eth0 interface (or whatever interface you're using), and you can see that all packets that the web server is sending, we are intercepting and forwarding (routing) on to the gateway.

Wireshark clearly showed that the data was encrypted. I commented out the five TLS config lines in /etc/rsyslog.conf -> saved -> restarted rsyslog -> turned on “Plain text” in papertrail and could now see the messages in clear text. When I then turned off “Plain text”, papertrail would no longer accept the syslog events. Excellent!

One of the nice things about arpspoof is that it re-applies the original ARP entries once it's done.

You can also tell arpspoof to poison the router's ARP table. This way any traffic going to the web server via the router, not originating from the web server, will be routed through our MitM box also.

Don’t forget to revert the change to /proc/sys/net/ipv4/ip_forward.

Exporting Wireshark Capture

You can use the File->Save As… option here for a collection of output types, or the way I usually do it is:

  1. First completely expand all the frames you want visible in your capture file
  2. File->Export Packet Dissections->as “Plain Text” file…
  3. Check the “All packets” check-box
  4. Check the “Packet summary line” check-box
  5. Check the “Packet details:” check-box and select “As displayed”
  6. OK

Trouble-shooting messages that papertrail never shows

To run rsyslogd in debug

Check to see which arguments get passed into rsyslogd to run as a daemon, in /etc/init.d/rsyslog and /etc/default/rsyslog. You'll probably see a RSYSLOGD_OPTIONS=""; there may be some arguments between the quotes.

sudo service rsyslog stop
sudo /usr/sbin/rsyslogd [your options here] -dn >> ~/rsyslog-debug.log

The debug log can be quite useful for trouble-shooting. Also keep your eye on stderr, as you can see if it's writing anything out (most system start-up scripts throw this away).
Once you've finished collecting the log:
ctrl+C

sudo service rsyslog start

To see if rsyslog is running

pidof rsyslogd
# or
/etc/init.d/rsyslog status

Turn on the impstats module

The stats it produces show when you run into errors with an output, and also the state of the queues.
You can also run impstats on the receiving machine if it’s in your control. Papertrail obviously is not.
Put the following into your rsyslog.conf file at the top and restart rsyslog:

# Turn on some internal counters to trouble-shoot missing messages
module(load="impstats"
       interval="600"
       severity="7"
       log.syslog="off" # need to turn log stream logging off
       log.file="/var/log/rsyslog-stats.log")
# End turn on some internal counters to trouble-shoot missing messages

Now if you get an error like:

rsyslogd-2039: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]

You can just change /dev/xconsole to /dev/console.
xconsole is still in the config file for legacy reasons; it should have been cleaned up by the package maintainers.

GnuTLS error in rsyslog-debug.log

By running rsyslogd manually in debug mode, I found an error when the message failed to send:

unexpected GnuTLS error -53 in nsd_gtls.c:1571

Standard Error when running rsyslogd manually produces:

GnuTLS error: Error in the push function

With some help from the GnuTLS mailing list:

“That means that send() returned -1 for some reason. You can enable more output by adding an environment variable GNUTLS_DEBUG_LEVEL=9 prior to running the application, and that should at least provide you with the errno.” This didn't actually provide any more detail to stderr. However, thanks to Rainer, we now have a debug.gnutls parameter in the rsyslog code: if you specify this global variable in rsyslog.conf and assign it a value between 0-10, you'll have gnutls debug output going to rsyslog's debug log.

Strategy Two

Rsyslog, TCP, local queuing, TLS, RELP, SEC, and a syslog server on the local network. Notification for inactivity of events could be performed by cron and SEC?
LogAnalyzer, also created by Rainer Gerhards (the rsyslog author), is more work to set up than an on-line service you don't have to set up, but in saying that, you get greater control and security, which for me is the big win here.
It also looks like Rainer has his finger in the log normalisation pie.

In theory, adding RELP to TCP with local queues is a step up in terms of reliability. Others have said the reliability of TCP over TLS with local queues is excellent anyway; I've yet to confirm its excellence. At the time of writing this post, I'm seriously considering moving toward RELP to help solve my reliability issues.
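A sketch of what the RELP pieces look like in rsyslog configuration (the rsyslog-relp package provides the modules; the host name and port here are placeholders):

# Sending side (web server):
module(load="omrelp")
action(type="omrelp" target="syslog.local" port="2514")

# Receiving side (syslog server on the local network):
module(load="imrelp")
input(type="imrelp" port="2514")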

Additional Resource

gentoo rsyslog wiki

Up and Running with Kali Linux and Friends

March 29, 2014

When it comes to measuring the security posture of an application or network, the best defence against an attacker is offence. What does that mean? It means your best defence is to have someone with your best interests at heart (generally employed by you) assess the vulnerabilities of your asset and attempt to exploit them.

In the words of Offensive Security (Creators of Kali Linux), Kali Linux is an advanced Penetration Testing and Security Auditing Linux distribution. For those that are familiar with BackTrack, basically Kali is a new creation based on Debian rather than Ubuntu, with significant improvements over BackTrack.

When it comes to actually getting Kali on some hardware, there is a multitude of options available.

All externally listening services are disabled by default, but are very easy to turn on if/when required; the idea is to reduce the chances of Kali's presence being detected.

I’ve found the Kali Linux documentation to be of a high standard and plentiful.

In this article I'll go over getting Kali Linux installed and set up. I'll cover a few of the packages that come out of the box, in a low level of detail (due to the sheer number of them). On top of that I'll also go over a few programmes I like to install separately. In a subsequent article I'd like to continue with additional programmes that come with Kali Linux, as there are just too many to cover in one go.

System Requirements

  1. Minimum of 8 GB disk space is required for the Kali install
  2. Minimum RAM 512 MB
  3. CD/DVD Drive or USB boot support

Supported Hardware

Officially supported architectures

i386, amd64, ARM (armel and armhf)

Unofficial (but maintained) images

You can download Kali Linux images for the following; these are maintained on a best-effort basis by Offensive Security.

  • VMware (pre-made vm with VMware tools installed)

ARM images

  • rk3306 mk/ss808: dual-core 1.6 GHz A9 CPU, 1 GB RAM
  • Raspberry Pi
  • ODROID U2: quad-core 1.7 GHz CPU, 2 GB RAM, 10/100 Mbps Ethernet
  • ODROID X2: quad-core Cortex-A9 MPCore CPU, 2 GB RAM, 6 USB 2 ports, 10/100 Mbps Ethernet
  • MK802/MK802 II
  • Samsung Chromebook
  • Galaxy Note 10.1
  • CuBox
  • Efika MX
  • BeagleBone Black

Create a Customised Kali Image

Kali also provides a simple way to create your own ISO image from the latest source. You can include the packages you want and exclude the ones you don’t. You can customise the kernel. The options are virtually limitless.

The default desktop environment is Gnome, but Kali also provides an easy way to configure which desktop environment you use before building your custom ISO image.

The alternative options provided are: KDE, LXDE, XFCE, I3WM and MATE.

Kali has really embraced the Debian ethos of being able to be run on pretty well any hardware with extreme flexibility. This is great to see.

Installation

You should find most if not all of what you need here. Just follow the links specific to your requirements.

As with BackTrack, the default user is “root” (without the quotes). If you're installing, make sure you use a decent password: not a dictionary word or similar. It's generally a good idea to use a mix of upper-case and lower-case characters, numbers and special characters, and a decent length.

I’m not going to repeat what’s already documented on the Kali site, as I think they’ve done a pretty good job of it already, but I will go over some things that I think may not be 100% clear at first attempt. Also just to be clear, I’ve done this on a Linux box.

Now once you have downloaded the image that suits your target platform,

you're going to want to check its validity by verifying the SHA1 checksums. This is where the instructions can be a little confusing. You'll need to make sure that the SHA1SUMS file containing the specific checksum you're going to use to verify the image you downloaded is, in fact, the authentic SHA1SUMS file. The instructions say: “When you download an image, be sure to download the SHA1SUMS and SHA1SUMS.gpg files that are next to the downloaded image (i.e. in the same directory on the server).” You've got to read between the lines a bit here. A little further down the page is the key to where these files are: it's buried in a wget command, plus you have to add another directory to find them. The location was here. Now that you've got these two files downloaded in the same directory, verify the SHA1SUMS.gpg signature as follows:

$ gpg --verify SHA1SUMS.gpg SHA1SUMS
gpg: Signature made Thu 25 Jul 2013 08:05:16 NZST using RSA key ID 7D8D0BF6
gpg: Good signature from "Kali Linux Repository <devel@kali.org>"

You’ll also get a warning about the key not being certified with a trusted signature.

Now verify the checksum of the image you downloaded against the checksum within the (authentic) SHA1SUMS file.

Compare the output of the following two commands. They should be the same.

# Calculate the checksum of your downloaded image file.
$ sha1sum [name of your downloaded image file]
# Print the checksum from the SHA1SUMS file for your specific downloaded image file name.
$ grep [name of your downloaded image file] SHA1SUMS

Kali also has a live USB install, including persistence on your USB drive.

Community

IRC: #kali-linux on FreeNode. Stick to the rules.

What’s Included

More than 300 security programmes come packaged with the operating system.

Before installation you can view the tools included in the Kali repository.

Or once installed by issuing the following command:

# prints complete list of installed packages.
dpkg --get-selections | less

To find out a little more about the application:

dpkg-query -l '*[some text you think may exist in the package name]*'

Or if you know the package name you're after:

dpkg -l [package name]

Want more info still?

man [package name]

Some of the notable applications installed by default

Metasploit

Framework that provides the infrastructure to create, re-use and automate a wide variety of exploitation tasks.

If you require database support for Metasploit, start the postgresql service.

# I like to see the ports that get opened, so I run ss -ant before and after starting the services.
ss -ant
service postgresql start
ss -ant

ss, or “socket statistics”, is a newer replacement programme for the old netstat command. ss gets its information from kernel space via Netlink.

Start the Metasploit service:

ss -ant
service metasploit start
ss -ant

When you start the metasploit service, it will create a database and user, both with the name msf3, providing you have your database service started. Now you can run msfconsole.

Start msfconsole:

msfconsole

The following is an image of terminator, where I use the top pane for stopping/starting services, the middle pane for checking which ports are opened/closed, and the bottom pane for running msfconsole. terminator is not installed by default; it's as simple as apt-get install terminator

metasploit

You can find full details of setting up Metasploits database and start/stopping the services here.

You can also find the Metasploit framework's database commands simply by typing help database at the msf prompt.

# Print the switches that you can run msfconsole with.
msfconsole -h

Once you're in msf, type help at the prompt to get yourself started.

There is also a really easy to navigate all encompassing set of documentation provided for msfconsole here.

You can also set up PostgreSQL and Metasploit to launch on start-up like this:

update-rc.d postgresql enable
update-rc.d metasploit enable

Offensive Security also has a Metasploit online course here.

Armitage

Just as it was included in BackTrack (which is no longer supported), you'll also find Armitage comes installed out of the box in version 1.0.4 of Kali Linux. Armitage is a GUI to assist with Metasploit visualisation. You can find the official documentation here. Offensive Security have also done a good job of providing their own documentation for Armitage over here. To get started with Armitage, just make sure you've got the postgresql service running; Armitage will start the metasploit service for you if it's not already running. Armitage allows your red team to collaborate by using a single instance of Metasploit. There is also a commercial offering called Cobalt Strike, developed by Raphael Mudge's company “Strategic Cyber LLC”, which also created Armitage. Cobalt Strike currently costs $2500 per user per year, though there is a 21 day trial. Cobalt Strike offers a bunch of great features; check them out here. Armitage can connect to an existing instance of Metasploit on another host.

NMap

Its target use is network discovery and auditing. It provides host information for anything it can access from a network, and now also has a scripting engine that can execute arbitrary custom tasks.

I'm guessing we've probably all used NMap? ZenMap, which Kali Linux also provides out of the box, is a GUI for NMap. This was also included in BackTrack.
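A couple of hedged examples (the address is a placeholder for a host you are authorised to scan):

# Service/version detection plus OS fingerprinting:
nmap -sV -O 192.168.1.10
# Run the default scripting engine scripts:
nmap -sC 192.168.1.10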

Intercepting Web Proxies

Burp Suite

I use burp quite regularly and have a few blog posts where I've detailed some of its use. In fact I've used it to reverse engineer the comms between VMware vSphere and ESXi to create a UPS solution that deals with not only the virtual hosts but also the clients.

WebScarab

I haven't really found out what webscarab's sweet spot is, if it has one. I'd love to know what it does better than burp, zap and w3af combined. There is also a next generation version which, according to the google code repository, hasn't had any work done on it since March 2011, whereas the classic version is still receiving fixes. The documentation has always seemed fairly minimalistic also.

In terms of web proxy/interceptors, I've also used fiddler, which relies on the .NET framework; as mono is not installed out of the box on Kali, neither is fiddler.

OWASP Zed Attack Proxy (ZAP)

Is an OWASP flagship project, so it's free and open source. Cross platform. It was forked from the Paros Proxy project, which is no longer supported. Includes automated, passive, brute force and port scanners. Traditional and AJAX spiders. Can even find unlinked files. Provides fuzzing and port scanning. Can be run without the UI in headless mode and can be accessed via a REST API. Supports anti-CSRF tokens. The Script Console add-on supports any language that JSR (Java Specification Requests) 223 supports: languages such as JavaScript, Groovy, Python, Ruby and many more. There is plenty of info on the add-ons here. OWASP also provide directions on how to write your own extensions, and they provide some sample templates. Following is the list of current extensions, which can also be managed from within Zap: “Manage Add-ons” menu → Marketplace tab. Select and click “Install Selected”.

OWASP Zap

The idea is to first set Zap up as a proxy for your browser and fetch some web pages (build history). Zap will create a history of URLs. You then right-click the item of interest, click Attack->[one of the spider options], then click the play button and watch the progress bar while Zap crawls all the pages you have access to according to your permissions. Then, under the Analyse menu → Scan Policy…, set up your scan policy so you're only scanning what you want to scan, and hit Scan to assess your target application. Out of the box, you've got many scan options. Zap does a lot for you. I'm really loving this tool, OWASP!

As usual with OWASP, zap has a wealth of documentation. If zap doesn't provide enough out of the box, extend it. OWASP also provide an API for zap.
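For example (a sketch assuming ZAP is listening on localhost:8080 with its API enabled; the endpoint names follow ZAP's JSON API convention and the target URL is a placeholder):

# Kick off a spider of the target:
curl 'http://localhost:8080/JSON/spider/action/scan/?url=http://192.168.1.10/'
# Pull back the alerts ZAP has raised so far:
curl 'http://localhost:8080/JSON/core/view/alerts/'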

You can find the user group here (also accessible from the ZAP ‘Online’ menu), which is good for getting help if the help file (which can also be found via ZAP itself) fails to yield. There is also a getting started guide, which is a work in progress, and the ZAP Blog.

FoxyProxy

Although it has nothing to do with Kali Linux and could possibly be in the IceWeasel add-ons section below, I've added FoxyProxy here instead, as it really reduces friction with web proxy interception. FoxyProxy is a very handy add-on for both firefox and chromium, although it seems to have more options for firefox, or at least they are more easily accessible. It allows you to set up a list of proxies and then switch between them as you need. When I run chromium as a non-root user I can't change the proxy settings once the browser is running; I have to set the proxy to my intermediary before run time, like this:

chromium-browser --temp-profile --proxy-server=localhost:3001

Firefox is a little easier, but neither browser allows you to build up lists of proxies and then switch between them in mid flight. FoxyProxy provides a menu button, so with two clicks you can disable the add-on completely to revert to your previous settings, or select any of your predefined proxies. This is a real time saver.

Vulnerability Scanners

Open Vulnerability Assessment System (OpenVAS)

Forked from the last free version of Nessus (closed in 2005). OpenVAS plugins are written in the same language that Nessus uses. OpenVAS looks for known misconfigurations and vulnerabilities common in out-of-date software. In fact it covers the following OWASP Top 10 items:

  • No.5 Security Misconfiguration
  • No.7 Missing Function Level Access Control (formerly known as “failure to restrict URL access”)
  • No.9 Using Components with Known Vulnerabilities.

OpenVAS also has some SQLi and other probes to test application input, but its primary purpose is to scan networks of machines with out-of-date software and bad configurations.

Tests continue to be added; there are currently 32413 Network Vulnerability Tests (NVTs). Details here.

OpenVAS

Greenbone Security Desktop (gsd) is a package providing a GUI that uses the Greenbone Security Manager, OpenVAS Manager or any other service that offers the OpenVAS Management Protocol (omp). It is currently at version 1.2.2 and licensed under the GPLv2. The Greenbone Security Assistant (gsad) is currently at version 4.0.0. The German government also sponsors OpenVAS.

From the menu: Kali Linux → Vulnerability Analysis → OpenVAS, we have a couple of short-cuts visible. openvas-gsd is actually just the gsd package, and openvas-setup is the set-up script.

Before you run openvas-gsd, you can either:

  1. Run openvas-setup, which will do all the set-up (I think this is already done on Kali). At the end of this, you will be prompted to add a password for a user in the Admin role. The password you add here is for a new user called “admin” (of course it doesn't say that, so it can be a little confusing as to what the password is for).
  2. Or you can just run the following command, which is much quicker because you don’t run the set-up procedure:
openvasad -c 'add_user' -n [a new administrative username of your choosing] -r Admin

You’ll be prompted to add a new password. Make sure you remember it.

Check out the man page for further options. For example, the -c switch is a shortened --command, and the man page lists a selection of commands you can use.

I think -n is for --name, although it is not listed in the man page. The -r switch is --role: either User or Admin.

The user you’ve just added is used to connect the gsd to the:

  1. openvasmd (OpenVAS Manager daemon) which listens on port 9390
  2. openvassd (OpenVAS Scanner daemon) which listens on port 9391
  3. gsad (Greenbone Security Assistant daemon) which listens on port 9392. This is a web app, which also listens on port 443
  4. openvasad (OpenVAS Administrator daemon) which listens on 9393

The core functionality is provided by the scanner and the manager. The manager handles and organises scan results. The gsad or assistant connects to the manager and administrator to provide a fully featured user interface. There is also a CLI (omp) but I haven’t been able to get this going on Kali Linux yet. You’ll also find that the previous link has links to all the man pages for OpenVAS. You can read more about the architecture and how the different components fit together.

I've also found that sometimes the daemons don't automatically start when gsd starts, so you have to start them manually.

openvasmd && openvassd && gsad && openvasad

You can also use the web app https://127.0.0.1/omp

Then try logging in to the openvasmd. When you're finished with gsd you can kill the running daemons if you like. I like to keep an eye on the listening ports when I'm done, to keep things as quiet as possible.

Check the ports.

ss -anp

Optional to see the processes running, but not necessary.

ps -e
kill -9 <PID of openvasad> <PID of gsad> <PID of openvassd> <PID of openvasmd>

There are also plenty of options when it comes to the report. It can be output in HTML, PDF, XML, emailed, and quite a few other formats. The reports are colour coded and you can choose what gets put in them. The vulnerabilities are classified by risk: High, Medium, Low. OpenVAS can take quite a while to scan, as it runs so many tests.

This is how to get started with gsd.

Web Vulnerability Scanners

These are the generally accepted criteria for a tool to be considered a Web Application Security Scanner.

SkipFish

A high performance active reconnaissance tool written in C. From the documentation: “Multiplexing single-thread, fully asynchronous network I/O and data processing model that eliminates memory management, scheduling, and IPC inefficiencies present in some multi-threaded clients.”. OK. So it’s fast.

It prepares an interactive sitemap by carrying out a recursive crawl and probes based on existing dictionaries or ones you build up yourself. Further details in the documentation linked below.

SkipFish doesn’t conform to most of the criteria outlined in the Web Application Security Scanner criteria above.

SkipFish v2.05 is the current version packaged with Kali Linux.

SkipFish v2.10b (released Dec 2012)

Free and you can view the source code. Apache license 2.0

Performs a similar role to w3af.

Project details can be found here.

You can find the tests here.

How do you use it though? This is a good place to start. Instead of reading through the non-existent doc/dictionaries.txt, I think you can do as well by reading through /usr/share/skipfish/dictionaries/README-FIRST.

The other two documentation sources are the man page and skipfish with the -h option.
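
To give you an idea, a first run looks something like this (a sketch only: the output directory and target are placeholders, and dictionary names can differ between versions, so check the README-FIRST mentioned above):

skipfish -o ~/skipfish-results -S /usr/share/skipfish/dictionaries/minimal.wl http://192.168.56.101/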

Web Application Attack and Audit Framework (w3af)

Andres Riancho has created a masterpiece. The main behaviour of this application is to assess and identify vulnerabilities in a web application by sending customised HTTP requests. Results can be output in quite a few formats, including email. It can also proxy, but Burp Suite is more focused on this role and does it well.

Can be run with a GUI: w3af_gui, or from the terminal: w3af_console. Written in Python and runs on Linux, BSD or Mac. Older versions used to work on Windows, but it’s not currently being tested on Windows. Open source on GitHub and released under the GPLv2 license.

You can write your own plug-ins, but check first to make sure one doesn’t already exist. The plug-ins are listed within the application and on the w3af.org web site, along with links to their source code, unit tests and descriptions. If it doesn’t appear that the plug-in you want exists, contact Andres Riancho to make sure, write it and submit a pull request. It also looks like Andres Riancho is driving the development TDD style, which means he’s obviously serious about creating quality software. Well done Andres!

w3af provides the ability to inject your payloads into almost every part of the HTTP request by way of its fuzzing engine, including: query string, POST data, headers, cookie values, content of form files, URL file-names and paths.

There’s a good set of documentation found here and you can watch the training videos. I’m really looking forward to using this in anger.
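
To give you a feel for the console, a minimal scan session looks something like the following (a sketch from memory, so verify against the documentation; the target is a placeholder):

w3af>>> plugins
w3af/plugins>>> audit sqli xss
w3af/plugins>>> back
w3af>>> target
w3af/config:target>>> set target http://192.168.56.101/
w3af/config:target>>> back
w3af>>> start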

w3af

Nikto

Is a web server scanner that’s not overly stealthy. It’s built on “Rain Forest Puppy’s” LibWhisker2, which has a BSD license.

Nikto is free and open source with a GPLv3 license. It can be run on any platform that runs a Perl interpreter. Its source can be found here. The first release of Nikto was in December of 2001 and it is still under active development. Pull requests encouraged.

Supports SSL. Supports HTTP proxies, so you can see what Nikto is actually sending. Host authentication. Attack encoding. Updates local databases and plugins via the -update argument. Checks for server configuration items like multiple index files and HTTP server options. Attempts to identify installed web servers and software.

Looks like the LibWhisker web site no longer exists. Last release of LibWhisker was at the beginning of 2010.

Nikto v2.1.4 (Released Feb 20 2011) is the current version packaged with Kali Linux. Tests for multiple items, including > 6400 potentially dangerous files/CGIs. Outdated versions of > 1200 servers. Insecurities of specific versions of > 270 servers.

Nikto v2.1.5 (released Sep 16 2012) is the latest version. Tests for multiple items, including > 6500 potentially dangerous files/CGIs. Outdated versions of > 1250 servers. Insecurities of specific versions of > 270 servers.

Just spoke with the Kali developers about the old version. They are now building a package of 2.1.5 as I write this. So should be an apt-get update && apt-get upgrade away by the time you read this all going well. Actually I can see it in the repo now. Man those guys are responsive!

Most of the info you will need can be found here.
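
A typical first scan looks something like this (the host and output file are placeholders of mine; the switches are covered in the documentation):

nikto -h http://192.168.56.101 -o nikto-results.html -Format htm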

SQLNinja

sqlninja: Targets Microsoft SQL Servers. It uses SQL injection vulnerabilities in a web app. Focuses on popping remote shells on the target database server and uses them to gain a foothold over the target network. You can set up graphical access via a VNC server injection. It can upload executables using HTTP requests via vbscript or debug.exe. Supports direct and reverse bindshell, plus quite a few other methods of obtaining access. Documentation here.
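
If you want to see what an invocation looks like, the pattern is a mode plus a configuration file, something like the following (an assumption from the documentation rather than a run I’ve verified here; the config path is a placeholder):

sqlninja -m test -f /path/to/sqlninja.conf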

Text Editors

  1. Vim. Shouldn’t need much explanation.
  2. Leafpad. This is a very basic graphical text editor. A bit like Windows Notepad.
  3. Gvim. This is the graphical version of Vim. I’ve mostly used Sublime Text 2 & 3 and gedit on Linux, but Gvim is really quite powerful too.

Note Keeping

  1. KeepNote. Supported on Linux, Windows and Mac OS X. Easy to transport notes by zipping or copying a folder. Notes stored in HTML and XML.
  2. Zim Desktop Wiki.

Other Notable Features

  • Offensive Security’s Kali Linux is free and always will be. It’s also completely open (as it’s based on Debian) to modification of its OS or programs.
  • FHS compliant. That means the file system complies with the Linux Filesystem Hierarchy Standard.
  • Wireless device support is vast. Including USB devices.
  • Forensics Mode. As with BackTrack 5, the Kali ISO also has an option to boot into the forensic mode. No drives are written to (including swap). No drives will be auto mounted upon insertion.

Customising installed Kali

Wireless Card

I had a little trouble with my laptop wireless card not being activated. It turned out to be me just not realising that an external Wi-Fi switch had to be turned on. I had wireless enabled in the BIOS. The following were the steps I took to resolve it:

Read the Kali Linux documentation on Troubleshooting Wireless Drivers and found the card listed with lspci. Opened /var/log/dmesg with vi. Searched for the name of the card:

#From command mode to make search case insensitive:
:set ic
#From command mode to search
/[name of my wireless card]

There were no errors. So I ran iwconfig (similar to ifconfig, but dedicated to wireless interfaces). I noticed that the card was definitely present but the Tx-Power was off. I then thought I’d give rfkill a spin, and its output made me realise I must have missed a hardware switch somewhere.

rfkill
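
On my machine the listing made the problem obvious. The output of rfkill list looks something like this (illustrative only; adapter names will differ):

0: phy0: Wireless LAN
	Soft blocked: no
	Hard blocked: yes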

Found the hard switch and turned it on and we now have wireless.

Adding Shortcuts to your Panel

[Alt]+[right click]->[Add to Panel…]

Or if your Kali install is on VirtualBox:

[Windows]+[Alt]+[right click]->[Add to Panel…]

Caching Debian Packages

If you want to:

  1. save on bandwidth
  2. have a large number of your packages delivered at your network speed rather than your internet speed
  3. have several debian based machines on your network

I’d recommend using apt-cacher-ng. If not already set up, you’ll have to install this on a server and add the following file to each of your Debian based machines.

/etc/apt/apt.conf with the following contents, and set its permissions to be the same as your sources.list:

Acquire::http::Proxy "http://[ip address of your apt-cacher server]:3142";
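
You can match the permissions in one step with GNU chmod’s --reference switch:

chmod --reference=/etc/apt/sources.list /etc/apt/apt.conf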

IceWeasel add-ons

  • Firebug
  • NoScript
  • Web Developer
  • FoxyProxy (more details mentioned above)
  • HackBar. Somewhat useful for (en/de)coding (Base64, Hex, MD5, SHA-(1/256), etc), manipulating and splitting URLs

SQL Inject Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. SQL Inject Me is a component of the Exploit-Me suite. It allows you to test all or any number of input fields on all or any of a page’s forms. You just fill in the fields with valid data, then test with all the tool’s attacks, or with the top ones that you’ve defined in the options menu. It then looks for database errors which are rendered into the returned HTML as a result of sending escape strings, so it doesn’t cater for blind injection. You can also add/remove the escape strings and the resulting error strings that SQL Inject Me should look for in the response. The order in which each escape string is tried can also be changed. All you need to know can be found here.

XSS Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. XSS Me is also a component of the Exploit-Me suite. This tool’s behaviour is very similar to SQL Inject Me (it follows the POLA), which makes using the tools very easy. Both these add-ons have next to no learning curve. The level of entry is very low, and I think they are exactly what web developers who make excuses for not testing their own security need. The other thing is that it helps developers understand how these attacks can be carried out. XSS Me currently only tests for reflected XSS. It doesn’t attempt to compromise the security of the target system. Both XSS Me and SQL Inject Me are reconnaissance tools, where the information is the vulnerabilities found. XSS Me doesn’t support stored XSS, or user supplied data from sources such as cookies, links, or HTTP headers. How effective XSS Me is in finding vulnerabilities is also determined by the list of attack strings the tool has available. Out of the box, the list of XSS attack strings is derived from RSnake’s collection, which was donated to OWASP, who now maintain it as one of their cheat sheets. Multiple encodings are not yet supported, but are planned for the future. You can help to keep the collection up to date by submitting new attack strings.

Chromium

Because it’s got great developer tools that I’m used to using. In order to run this under the root account, you’ll need to add the following parameter to /etc/chromium/default, between the quotes of CHROMIUM_FLAGS=""

--user-data-dir
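
So the line in /etc/chromium/default ends up looking like this:

CHROMIUM_FLAGS="--user-data-dir"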

I like to install the following extensions: Cookies, ScriptSafe

Terminator

Because I like a more powerful console than the default. Terminator adds split screen on top of multiple tabs. If you live at the command line, you owe it to yourself to get the best console you can find. So far Terminator still fits this bill for me.

KeePass

The password database app. Because I like passwords to be long, complex, unique for everything and as secure as possible.

Exploits

I was going to go over a few exploits we could carry out with the Kali Linux set-up, but I ran out of time and page space. In fact there are still many tools I wanted to review, but there just isn’t enough time or room in this article. Feel free to subscribe to my blog and you’ll get an update when I make posts. I’d like to extend on this by reviewing more of the tools offered in Kali Linux.

Input Sanitisation

This has been one of my pet topics for a while. Why? Because the lack of it is so often abused. In fact this is one of the primary techniques for No.1 (Injection) and No.3 (XSS) of this year’s OWASP Top 10 List (unchanged from 2010). I’d encourage any serious web developer to look at my “Sanitising User Input From Browser. Part 1” and “Part 2”.

Part 1 deals with the client side (untrusted) code.

Part 2 deals with the server side (trusted) code.

I provide source code, sources and discuss the following topics:

  1. Minimising the attack surface
  2. Defining maximum field lengths (validation)
  3. Determining a white list of allowable characters (validation)
  4. Escaping untrusted data
  5. External libraries, cheat sheets, useful code and sites I used. I also discuss the less useful resources and why.
  6. The point of validating client side when the server side is going to do it again anyway
  7. Full set of server side tests to test the sanitisation is doing what is expected

Sanitising User Input from Browser. Part 2

November 16, 2012

Untrusted data (data entered by a user) should always be treated as though it contains attack code.
This data should not be sent anywhere without taking the necessary steps to detect and neutralise the malicious code.
With applications becoming more interconnected, attacks buried in user input and decoded and/or executed by a downstream interpreter are becoming all the more common.
Input validation, that is, restricting user input to allow only certain whitelisted characters, and restricting field lengths, are only two forms of defence.
Any decent attacker can get around client side validation, so you need to employ defence in depth:
validation and escaping also need to be performed on the server side.

Leveraging existing libraries

  1. Microsoft’s AntiXSS is not extensible;
    it doesn’t allow the user to define their own whitelist.
    It didn’t allow me to add behaviour to the routines.
    I want to know how many instances of HTML encoded values there were.
    There was certainly a lot of code in there, but I didn’t find it very useful.
  2. The OWASP encoding project (Reform) (as mentioned in part 1 of this series).
    This is quite a useful set of projects for different technologies.
  3. System.Net.WebUtility from System.dll.
    Now this did most of what I needed, other than provide me with fine grained information about what had been tampered with.
    So I took it and extended it slightly.
    We hadn’t employed AOP at this stage and it wasn’t considered important enough to invest the time to do so.
    So it was a matter of copy, paste, modify.

What’s the point in client side validation if the server has to do it again anyway?

Now there are arguments both ways for this.
My current take on this for the project in question was:
If you only have server side validation, the client side is less responsive and user friendly.
If you only have client side validation, it’s out of our control.
This also gives fuel to the argument of using JavaScript on the client and server side (with the likes of node.js).
So the same code can be used both sides without having to code the same validation in two different languages.
Personally I find writing validation code easier using JavaScript than C#.
This may be just because I’ve been writing considerably more JavaScript than C# lately though.

The code

I drew a sequence diagram of how this should work, but it got lost in a move.
So I wasn’t keen on doing it again, as the code had already been done.
In saying that, the code has reasonably good documentation (I think).
Code is king, providing it has been written to be read.
If you notice any of the escaping isn’t quite making sense, it could be the blogging engine either doing what it’s meant to, or not doing what it’s meant to.
I’ve been over the code a few times, but I may have missed something.
Shout out if anything’s not clear.

First up, we’ll look at the custom exceptions as we’ll need those soon.

using System;

namespace Common.WcfHelpers.ErrorHandling.Exceptions
{
    public abstract class WcfException : Exception
    {
        /// <summary>
        /// In order to set the message for the client, set it here, or via the property directly in order to override the default value.
        /// </summary>
        /// <param name="message">The message to be assigned to the Exception's Message.</param>
        /// <param name="innerException">The exception to be assigned to the Exception's InnerException.</param>
        /// <param name="messageForClient">The client friendly message. This parameter is optional, but should be set.</param>
        public WcfException(string message, Exception innerException = null, string messageForClient = null) : base(message, innerException)
        {
            MessageForClient = messageForClient;
        }

        /// <summary>
        /// This is the message that the service's client will see.
        /// Make sure it is set in the constructor. Or here.
        /// </summary>
	    public string MessageForClient
        {
            get { return string.IsNullOrEmpty(_messageForClient) ? "The MessageForClient property of WcfException was not set" : _messageForClient; }
            set { _messageForClient = value; }
        }
        private string _messageForClient;
    }
}

And the more specific SanitisationWcfException

using System;
using System.Configuration;

namespace Common.WcfHelpers.ErrorHandling.Exceptions
{
    /// <summary>
    /// Exception class that is used when the user input sanitisation fails, and the user needs to be informed.
    /// </summary>
    public class SanitisationWcfException : WcfException
    {
        private const string _defaultMessageForClient = "Answers were NOT saved. User input validation was unsuccessful.";
        public string UnsanitisedAnswer { get; private set; }

        /// <summary>
        /// In order to set the message for the client, set it here, or via the property directly in order to override the default value.
        /// </summary>
        /// <param name="message">The message to be assigned to the Exception's Message.</param>
        /// <param name="innerException">The Exception to be assigned to the base class instance's inner exception. This parameter is optional.</param>
        /// <param name="messageForClient">The client friendly message. This parameter is optional, but should be set.</param>
        /// <param name="unsanitisedAnswer">The user input string before service side sanitisation is performed.</param>
        public SanitisationWcfException
        (
            string message,
            Exception innerException = null,
            string messageForClient = _defaultMessageForClient,
            string unsanitisedAnswer = null
        )
            : base(
                message,
                innerException,
                messageForClient + " If this continues to happen, please contact " + ConfigurationManager.AppSettings["SupportEmail"] + Environment.NewLine
                )
        {
            UnsanitisedAnswer = unsanitisedAnswer;
        }
    }
}

Now, as we define whether our requirements are satisfied by way of executable requirements (unit tests, in their rawest form),
let’s write some executable specifications.

using NUnit.Framework;
using Common.Security.Sanitisation;

namespace Common.Security.Encoding.UnitTest
{
    [TestFixture]
    public class ExtensionsTest
    {

        private readonly string _inNeedOfEscaping = @"One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.";
        private readonly string _noNeedForEscaping = @"One x2F two amp three x27 four lt five quot six gt       .";

        [Test]
        public void SingleDecodeDoubleEncodedHtml_ShouldSingleDecodeDoubleEncodedHtml()
        {
            string doubleEncodedHtml = @"";               // Between the ""'s we have a string of Html with double escaped values like &amp;#x27; user entered text &amp;#x2F;.
            string singleEncodedHtmlShouldLookLike = @""; // Between the ""'s we have a string of Html with single escaped values like &#x27; user entered text &#x2F;.
            // In the above, the blogging engine was escaping the single escaped entity encoding, so all you'd see was the entity itself,
            // but it should look like the double encoded entity encodings without the leading &amp;.


            string singleEncodedHtml = doubleEncodedHtml.SingleDecodeDoubleEncodedHtml();
            
            Assert.That(singleEncodedHtml, Is.EqualTo(singleEncodedHtmlShouldLookLike));
        }

        [Test]
        public void Extensions_CompliesWithWhitelist_ShouldNotComply()
        {
            Assert.That(_inNeedOfEscaping.CompliesWithWhitelist(whiteList: @"^[\w\s\.,]+$"), Is.False);
        }

        [Test]
        public void Extensions_CompliesWithWhitelist_ShouldComply()
        {
            Assert.That(_noNeedForEscaping.CompliesWithWhitelist(whiteList: @"^[\w\s\.,]+$"), Is.True);
            Assert.That(_inNeedOfEscaping.CompliesWithWhitelist(whiteList: @"^[\w\s\.,#/&'<"">]+$"), Is.True);
        }
    }
}

Now the code that satisfies the above executable specifications, and more.

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Text.RegularExpressions;

namespace Common.Security.Sanitisation
{
    /// <summary>
    /// Provides a series of extension methods that perform sanitisation.
    /// Escaping, unescaping, etc.
    /// Usually targeted at user input, to help defend against the likes of XSS and other injection attacks.
    /// </summary>
    public static class Extensions
    {

        private const int CharacterIndexNotFound = -1;

        /// <summary>
        /// Returns a new string in which all occurrences of a double escaped html character (that's an html entity immediately prefixed with another html entity)
        /// in the current instance are replaced with the single escaped character.
        /// </summary>
        /// <param name="source">The target text used to strip one layer of Html entity encoding.</param>
        /// <returns>The singly escaped text.</returns>
        public static string SingleDecodeDoubleEncodedHtml(this string source)
        {
            return source.Replace("&amp;#x", "&#x");
        }
        /// <summary>
        /// Filter a text against a regular expression whitelist of specified characters.
        /// </summary>
        /// <param name="target">The text that is filtered using the whitelist.</param>
        /// <param name="alternativeTarget">The text to filter instead, when the target is null or empty.</param>
        /// <param name="whiteList">Needs to be assigned a valid whitelist pattern. Note: an empty pattern matches everything, so everything would comply.</param>
        public static bool CompliesWithWhitelist(this string target, string alternativeTarget = "", string whiteList = "")
        {
            if (string.IsNullOrEmpty(target))
                target = alternativeTarget;
            
            return Regex.IsMatch(target, whiteList);
        }
        /// <summary>
        /// Takes a string and returns another with a single layer of Html entity encoding replaced with its Html entity literals.
        /// </summary>
        /// <param name="encodedUserInput">The text to perform the operation on.</param>
        /// <param name="numberOfEscapes">The number of Html entity encodings that were replaced.</param>
        /// <returns>The text that's had a single layer of Html entity encoding replaced with its Html entity literals.</returns>
        public static string HtmlDecode(this string encodedUserInput, ref int numberOfEscapes)
        {
            const int NotFound = -1;

            if (string.IsNullOrEmpty(encodedUserInput))
                return string.Empty;

            StringWriter output = new StringWriter(CultureInfo.InvariantCulture);
            
            if (encodedUserInput.IndexOf('&') == NotFound)
            {
                output.Write(encodedUserInput);
            }
            else
            {
                int length = encodedUserInput.Length;
                for (int index1 = 0; index1 < length; ++index1)
                {
                    char ch1 = encodedUserInput[index1];
                    if (ch1 == 38) // '&'
                    {
                        int index2 = encodedUserInput.IndexOfAny(_htmlEntityEndingChars, index1 + 1);
                        if (index2 > 0 && encodedUserInput[index2] == 59) // ';'
                        {
                            string entity = encodedUserInput.Substring(index1 + 1, index2 - index1 - 1);
                            if (entity.Length > 1 && entity[0] == 35) // '#', so a numeric character reference
                            {
                                ushort result;
                                if (entity[1] == 120 || entity[1] == 88) // 'x' or 'X', so hexadecimal
                                    ushort.TryParse(entity.Substring(2), NumberStyles.AllowHexSpecifier, NumberFormatInfo.InvariantInfo, out result);
                                else
                                    ushort.TryParse(entity.Substring(1), NumberStyles.AllowLeadingWhite | NumberStyles.AllowTrailingWhite | NumberStyles.AllowLeadingSign, NumberFormatInfo.InvariantInfo, out result);
                                if (result != 0)
                                {
                                    ch1 = (char)result;
                                    numberOfEscapes++;
                                    index1 = index2;
                                }
                            }
                            else
                            {
                                index1 = index2;
                                char ch2 = HtmlEntities.Lookup(entity);
                                if ((int)ch2 != 0)
                                {
                                    ch1 = ch2;
                                    numberOfEscapes++;
                                }
                                else
                                {
                                    output.Write('&');
                                    output.Write(entity);
                                    output.Write(';');
                                    continue;
                                }
                            }
                        }
                    }
                    output.Write(ch1);
                }
            }
            string decodedHtml = output.ToString();
            output.Dispose();
            return decodedHtml;
        }
        /// <summary>
        /// Escapes all character entity references (double escaping where necessary).
        /// Why? The XmlTextReader that is setup in XmlDocument.LoadXml on the service considers the character entity references (&#xxxx;) to be the character they represent.
        /// All XML is converted to unicode on reading and any such entities are removed in favor of the unicode character they represent.
        /// </summary>
        /// <param name="unencodedUserInput">The string that needs to be escaped.</param>
        /// <param name="numberOfEscapes">The number of escapes applied.</param>
        /// <returns>The escaped text.</returns>
        public static unsafe string HtmlEncode(this string unencodedUserInput, ref int numberOfEscapes)
        {
            if (string.IsNullOrEmpty(unencodedUserInput))
                return string.Empty;

            StringWriter output = new StringWriter(CultureInfo.InvariantCulture);
            
            if (output == null)
                throw new ArgumentNullException("output");
            int num1 = IndexOfHtmlEncodingChars(unencodedUserInput);
            if (num1 == -1)
            {
                output.Write(unencodedUserInput);
            }
            else
            {
                int num2 = unencodedUserInput.Length - num1;
                fixed (char* chPtr1 = unencodedUserInput)
                {
                    char* chPtr2 = chPtr1;
                    while (num1-- > 0)
                        output.Write(*chPtr2++);
                    while (num2-- > 0)
                    {
                        char ch = *chPtr2++;
                        if (ch <= 62)
                        {
                            switch (ch)
                            {
                                case '"':
                                    output.Write("&quot;");
                                    numberOfEscapes++;
                                    continue;
                                case '&':
                                    output.Write("&amp;");
                                    numberOfEscapes++;
                                    continue;
                                case '\'':
                                    output.Write("&amp;#x27;");
                                    numberOfEscapes = numberOfEscapes + 2;
                                    continue;
                                case '<':
                                    output.Write("&lt;");
                                    numberOfEscapes++;
                                    continue;
                                case '>':
                                    output.Write("&gt;");
                                    numberOfEscapes++;
                                    continue;
                                case '/':
                                    output.Write("&amp;#x2F;");
                                    numberOfEscapes = numberOfEscapes + 2;
                                    continue;
                                default:
                                    output.Write(ch);
                                    continue;
                            }
                        }
                        if (ch >= 160 && ch < 256)
                        {
                            output.Write("&#");
                            output.Write(((int)ch).ToString(NumberFormatInfo.InvariantInfo));
                            output.Write(';');
                            numberOfEscapes++;
                        }
                        else
                            output.Write(ch);
                    }
                }
            }
            string encodedHtml = output.ToString();
            output.Dispose();
            return encodedHtml;
        }

 

        private static unsafe int IndexOfHtmlEncodingChars(string searchString)
        {
            int num = searchString.Length;
            fixed (char* chPtr1 = searchString)
            {
                char* chPtr2 = (char*)((UIntPtr)chPtr1);
                for (; num > 0; --num)
                {
                    char ch = *chPtr2;
                    if (ch <= 62)
                    {
                        switch (ch)
                        {
                            case '"':
                            case '&':
                            case '\'':
                            case '<':
                            case '>':
                            case '/':
                                return searchString.Length - num;
                        }
                    }
                    else if (ch >= 160 && ch < 256)
                        return searchString.Length - num;
                    ++chPtr2;
                }
            }
            return CharacterIndexNotFound;
        }

        private static char[] _htmlEntityEndingChars = new char[2]
        {
            ';',
            '&'
        };
        private static class HtmlEntities
        {
            private static string[] _entitiesList = new string[253]
            {
                "\"-quot",
                "&-amp",
                "'-apos",
                "<-lt",
                ">-gt",
                " -nbsp",
                "¡-iexcl",
                "¢-cent",
                "£-pound",
                "¤-curren",
                "¥-yen",
                "¦-brvbar",
                "§-sect",
                "¨-uml",
                "©-copy",
                "ª-ordf",
                "«-laquo",
                "¬-not",
                "\x00AD-shy",
                "®-reg",
                "¯-macr",
                "°-deg",
                "±-plusmn",
                "\x00B2-sup2",
                "\x00B3-sup3",
                "´-acute",
                "µ-micro",
                "¶-para",
                "·-middot",
                "¸-cedil",
                "\x00B9-sup1",
                "º-ordm",
                "»-raquo",
                "\x00BC-frac14",
                "\x00BD-frac12",
                "\x00BE-frac34",
                "¿-iquest",
                "À-Agrave",
                "Á-Aacute",
                "Â-Acirc",
                "Ã-Atilde",
                "Ä-Auml",
                "Å-Aring",
                "Æ-AElig",
                "Ç-Ccedil",
                "È-Egrave",
                "É-Eacute",
                "Ê-Ecirc",
                "Ë-Euml",
                "Ì-Igrave",
                "Í-Iacute",
                "Î-Icirc",
                "Ï-Iuml",
                "Ð-ETH",
                "Ñ-Ntilde",
                "Ò-Ograve",
                "Ó-Oacute",
                "Ô-Ocirc",
                "Õ-Otilde",
                "Ö-Ouml",
                "×-times",
                "Ø-Oslash",
                "Ù-Ugrave",
                "Ú-Uacute",
                "Û-Ucirc",
                "Ü-Uuml",
                "Ý-Yacute",
                "Þ-THORN",
                "ß-szlig",
                "à-agrave",
                "á-aacute",
                "â-acirc",
                "ã-atilde",
                "ä-auml",
                "å-aring",
                "æ-aelig",
                "ç-ccedil",
                "è-egrave",
                "é-eacute",
                "ê-ecirc",
                "ë-euml",
                "ì-igrave",
                "í-iacute",
                "î-icirc",
                "ï-iuml",
                "ð-eth",
                "ñ-ntilde",
                "ò-ograve",
                "ó-oacute",
                "ô-ocirc",
                "õ-otilde",
                "ö-ouml",
                "÷-divide",
                "ø-oslash",
                "ù-ugrave",
                "ú-uacute",
                "û-ucirc",
                "ü-uuml",
                "ý-yacute",
                "þ-thorn",
                "ÿ-yuml",
                "Œ-OElig",
                "œ-oelig",
                "Š-Scaron",
                "š-scaron",
                "Ÿ-Yuml",
                "ƒ-fnof",
                "\x02C6-circ",
                "˜-tilde",
                "Α-Alpha",
                "Β-Beta",
                "Γ-Gamma",
                "Δ-Delta",
                "Ε-Epsilon",
                "Ζ-Zeta",
                "Η-Eta",
                "Θ-Theta",
                "Ι-Iota",
                "Κ-Kappa",
                "Λ-Lambda",
                "Μ-Mu",
                "Ν-Nu",
                "Ξ-Xi",
                "Ο-Omicron",
                "Π-Pi",
                "Ρ-Rho",
                "Σ-Sigma",
                "Τ-Tau",
                "Υ-Upsilon",
                "Φ-Phi",
                "Χ-Chi",
                "Ψ-Psi",
                "Ω-Omega",
                "α-alpha",
                "β-beta",
                "γ-gamma",
                "δ-delta",
                "ε-epsilon",
                "ζ-zeta",
                "η-eta",
                "θ-theta",
                "ι-iota",
                "κ-kappa",
                "λ-lambda",
                "μ-mu",
                "ν-nu",
                "ξ-xi",
                "ο-omicron",
                "π-pi",
                "ρ-rho",
                "ς-sigmaf",
                "σ-sigma",
                "τ-tau",
                "υ-upsilon",
                "φ-phi",
                "χ-chi",
                "ψ-psi",
                "ω-omega",
                "ϑ-thetasym",
                "ϒ-upsih",
                "ϖ-piv",
                " -ensp",
                " -emsp",
                " -thinsp",
                "\x200C-zwnj",
                "\x200D-zwj",
                "\x200E-lrm",
                "\x200F-rlm",
                "–-ndash",
                "—-mdash",
                "‘-lsquo",
                "’-rsquo",
                "‚-sbquo",
                "“-ldquo",
                "”-rdquo",
                "„-bdquo",
                "†-dagger",
                "‡-Dagger",
                "•-bull",
                "…-hellip",
                "‰-permil",
                "′-prime",
                "″-Prime",
                "‹-lsaquo",
                "›-rsaquo",
                "‾-oline",
                "⁄-frasl",
                "€-euro",
                "ℑ-image",
                "℘-weierp",
                "ℜ-real",
                "™-trade",
                "ℵ-alefsym",
                "←-larr",
                "↑-uarr",
                "→-rarr",
                "↓-darr",
                "↔-harr",
                "↵-crarr",
                "⇐-lArr",
                "⇑-uArr",
                "⇒-rArr",
                "⇓-dArr",
                "⇔-hArr",
                "∀-forall",
                "∂-part",
                "∃-exist",
                "∅-empty",
                "∇-nabla",
                "∈-isin",
                "∉-notin",
                "∋-ni",
                "∏-prod",
                "∑-sum",
                "−-minus",
                "∗-lowast",
                "√-radic",
                "∝-prop",
                "∞-infin",
                "∠-ang",
                "∧-and",
                "∨-or",
                "∩-cap",
                "∪-cup",
                "∫-int",
                "∴-there4",
                "∼-sim",
                "≅-cong",
                "≈-asymp",
                "≠-ne",
                "≡-equiv",
                "≤-le",
                "≥-ge",
                "⊂-sub",
                "⊃-sup",
                "⊄-nsub",
                "⊆-sube",
                "⊇-supe",
                "⊕-oplus",
                "⊗-otimes",
                "⊥-perp",
                "⋅-sdot",
                "⌈-lceil",
                "⌉-rceil",
                "⌊-lfloor",
                "⌋-rfloor",
                "〈-lang",
                "〉-rang",
                "◊-loz",
                "♠-spades",
                "♣-clubs",
                "♥-hearts",
                "♦-diams"
            };
            private static Dictionary<string, char> _lookupTable = GenerateLookupTable();

            private static Dictionary<string, char> GenerateLookupTable()
            {
                Dictionary<string, char> dictionary = new Dictionary<string, char>(StringComparer.Ordinal);
                foreach (string str in _entitiesList)
                    dictionary.Add(str.Substring(2), str[0]);
                return dictionary;
            }

            public static char Lookup(string entity)
            {
                char ch;
                _lookupTable.TryGetValue(entity, out ch);
                return ch;
            }
        }
    }
}
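
Here’s a quick sketch of how these extensions behave (my own example, not part of the project code; the expected values in the comments follow the escaping rules above):

using System;
using Common.Security.Sanitisation;

class ExtensionsExample
{
    static void Main()
    {
        int escapes = 0;
        // ' and / are double escaped (2 each); ", &, < and > are single escaped (1 each).
        string encoded = "O'Reilly & Co <script>".HtmlEncode(ref escapes);
        Console.WriteLine(encoded);  // O&amp;#x27;Reilly &amp; Co &lt;script&gt;
        Console.WriteLine(escapes);  // 5

        int decodes = 0;
        // Two passes take the double escaped entities all the way back to their literals.
        string decoded = encoded.HtmlDecode(ref decodes).HtmlDecode(ref decodes);
        Console.WriteLine(decoded);  // O'Reilly & Co <script>
        Console.WriteLine(decodes);  // 5 (4 entities on the first pass, 1 on the second)
    }
}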

You may also notice that I’ve mocked the OperationContext.
Thanks to WCFMock, a mocking framework for WCF services.
I won’t include this code, but you can get it here.
I’ve used the popular NUnit test framework and RhinoMocks for the stubbing and mocking.
Both pulled into the solution using NuGet.
Most useful documentation for RhinoMocks:
http://ayende.com/Wiki/Rhino+Mocks+3.5.ashx
http://ayende.com/wiki/Rhino+Mocks.ashx

For this project I used NLog and wrapped it.
Now you start to get an idea of how to use the sanitisation.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using NUnit.Framework;
using System.Configuration;
using Rhino.Mocks;
using Common.Wrapper.Log;
using MockedOperationContext = System.ServiceModel.Web.MockedOperationContext;
using Common.WcfHelpers.ErrorHandling.Exceptions;

namespace Sanitisation.UnitTest
{
    [TestFixture]
    public class SanitiseTest
    {
        private const string _myTestIpv4Address = "My.Test.Ipv4.Address";
        private readonly int _maxLengthHtmlEncodedUserInput = int.Parse(ConfigurationManager.AppSettings["MaxLengthHtmlEncodedUserInput"]);
        private readonly int _maxLengthHtmlDecodedUserInput = int.Parse(ConfigurationManager.AppSettings["MaxLengthHtmlDecodedUserInput"]);
        private readonly string _encodedUserInput_thatsMaxDecodedLength = @"One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.
One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.
One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.
One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.
One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.
One #x2F &amp;#x2F; two amp &amp; three #x27 &amp;#x27; four lt &lt; five quot &quot; six gt &gt;.";
        private readonly string _decodedUserInput_thatsMaxLength = @"One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.
One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.
One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.
One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.
One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.
One #x2F / two amp & three #x27 ' four lt < five quot "" six gt >.";

        [Test]
        public void Sanitise_UserInput_WhenGivenNull_ShouldReturnEmptyString()
        {
            Assert.That(new Sanitise().UserInput(null), Is.EqualTo(string.Empty));
        }

        [Test]
        public void Sanitise_UserInput_WhenGivenEmptyString_ShouldReturnEmptyString()
        {
            Assert.That(new Sanitise().UserInput(string.Empty), Is.EqualTo(string.Empty));
        }

        [Test]
        public void Sanitise_UserInput_WhenGivenSanitisedString_ShouldReturnSanitisedString()
        {
            // Open the whitelist up in order to test the encoding without restriction.
            Assert.That(new Sanitise(whiteList: @"^[\w\s\.,#/&'<"">]+$").UserInput(_encodedUserInput_thatsMaxDecodedLength), Is.EqualTo(_encodedUserInput_thatsMaxDecodedLength));
        }
        [Test]
        [ExpectedException(typeof(SanitisationWcfException))]
        public void Sanitise_UserInput_ShouldThrowExceptionIfEscapedInputToLong()
        {
            string fourThousandAndOneCharacters = "Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. 
Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand characters. Four thousand character";
            string expectedError = "The un-modified string received from the client with the following IP address: " +
                   '"' + _myTestIpv4Address + "\" " +
                   "exceeded the allowed maximum length of an escaped Html user input string. " +
                   "The maximum length allowed is: " +
                   _maxLengthHtmlEncodedUserInput +
                   ". The length was: " +
                   (_maxLengthHtmlEncodedUserInput+1) + ".";

            using(new MockedOperationContext(StubbedOperationContext))
            {
                try
                {
                    new Sanitise().UserInput(fourThousandAndOneCharacters);
                }
                catch(SanitisationWcfException e)
                {
                    Assert.That(e.Message, Is.EqualTo(expectedError));
                    Assert.That(e.UnsanitisedAnswer, Is.EqualTo(fourThousandAndOneCharacters));
                    throw;
                }
            }
        }
        [Test]
        [ExpectedException(typeof(SanitisationWcfException))]
        public void Sanitise_UserInput_DecodedUserInputShouldThrowException_WhenMaxLengthHtmlDecodedUserInputIsExceeded()
        {
            char oneCharOverTheLimit = '.';
            string expectedError =
                           "The string received from the client with the following IP address: " +
                           "\"" + _myTestIpv4Address + "\" " +
                           "after Html decoding exceeded the allowed maximum length of an un-escaped Html user input string." +
                           Environment.NewLine +
                           "The maximum length allowed is: " + _maxLengthHtmlDecodedUserInput + ". The length was: " +
                           (_decodedUserInput_thatsMaxLength + oneCharOverTheLimit).Length + oneCharOverTheLimit;

            using(new MockedOperationContext(StubbedOperationContext))
            {
                try
                {
                    new Sanitise().UserInput(_encodedUserInput_thatsMaxDecodedLength + oneCharOverTheLimit);
                }
                catch(SanitisationWcfException e)
                {
                    Assert.That(e.Message, Is.EqualTo(expectedError));
                    Assert.That(e.UnsanitisedAnswer, Is.EqualTo(_encodedUserInput_thatsMaxDecodedLength + oneCharOverTheLimit));
                    throw;
                }
            }
        }
        [Test]
        public void Sanitise_UserInput_ShouldLogAndSendEmail_IfNumberOfDecodedHtmlEntitiesDoesNotMatchNumberOfEscapes()
        {
            string encodedUserInput_with6HtmlEntitiesNotEscaped = _encodedUserInput_thatsMaxDecodedLength.Replace("&amp;#x2F;", "/");
            string errorWeAreExpecting =
                "It appears as if someone has circumvented the client side Html entity encoding." + Environment.NewLine +
                "The requesting IP address was: " +
                "\"" + _myTestIpv4Address + "\" " +
                "The sanitised input we receive from the client was the following:" + Environment.NewLine +
                "\"" + encodedUserInput_with6HtmlEntitiesNotEscaped + "\"" + Environment.NewLine +
                "The same input after decoding and re-escaping on the server side was the following:" + Environment.NewLine +
                "\"" + _encodedUserInput_thatsMaxDecodedLength + "\"";
            string sanitised;
            // setup _logger
            ILogger logger = MockRepository.GenerateMock<ILogger>();
            logger.Expect(lgr => lgr.logError(errorWeAreExpecting));

            Sanitise sanitise = new Sanitise(@"^[\w\s\.,#/&'<"">]+$", logger);

            using (new MockedOperationContext(StubbedOperationContext))
            {
                // Open the whitelist up in order to test the encoding etc.
                sanitised = sanitise.UserInput(encodedUserInput_with6HtmlEntitiesNotEscaped);
            }

            Assert.That(sanitised, Is.EqualTo(_encodedUserInput_thatsMaxDecodedLength));
            logger.VerifyAllExpectations();
        }        

        private static IOperationContext StubbedOperationContext
        {
            get
            {
                IOperationContext operationContext = MockRepository.GenerateStub<IOperationContext>();
                int port = 80;
                RemoteEndpointMessageProperty remoteEndpointMessageProperty = new RemoteEndpointMessageProperty(_myTestIpv4Address, port);
                operationContext.Stub(oc => oc.IncomingMessageProperties[RemoteEndpointMessageProperty.Name]).Return(remoteEndpointMessageProperty);
                return operationContext;
            }
        }
    }
}

Now the API code that we can use to do our sanitisation.

using System;
using System.Configuration;
// Todo : KC We need time to implement DI. Should be using something like ninject.extensions.wcf.
using OperationContext = System.ServiceModel.Web.MockedOperationContext;
using System.ServiceModel.Channels;
using Common.Security.Sanitisation;
using Common.WcfHelpers.ErrorHandling.Exceptions;
using Common.Wrapper.Log;

namespace Sanitisation
{

    public class Sanitise
    {
        private readonly string _whiteList;
        private readonly ILogger _logger;
        

        private string RequestingIpAddress
        {
            get
            {
                RemoteEndpointMessageProperty remoteEndpointMessageProperty = OperationContext.Current.IncomingMessageProperties[RemoteEndpointMessageProperty.Name] as RemoteEndpointMessageProperty;
                return ((remoteEndpointMessageProperty != null) ? remoteEndpointMessageProperty.Address : string.Empty);
            }
        }
        /// <summary>
        /// Provides server side escaping of Html entities, and runs the supplied whitelist character filter over the user input string.
        /// </summary>
        /// <param name="whiteList">Should be provided by DI from the ResourceFile.</param>
        /// <param name="logger">Should be provided by DI. Needs to be an asynchronous logger.</param>
        /// <example>
        /// The whitelist can be obtained from a ResourceFile like so...
        /// <code>
        /// private Resource _resource;
        /// _resource.GetString("WhiteList");
        /// </code>
        /// </example>
        public Sanitise(string whiteList = "", ILogger logger = null)
        {
            _whiteList = whiteList;
            _logger = logger ?? new Logger();
        }
        /// <summary>
        /// 1) Check field lengths.         Client side validation may have been negated.
        /// 2) Check against white list.	Client side validation may have been negated.
        /// 3) Check Html escaping.         Client side validation may have been negated.

        /// Generic Fail actions:	Drop the payload. No point in trying to massage and save, as it won't be what the user was expecting,
        ///                         Add full error to a WCFException Message and throw.
        ///                         WCF interception reads the WCFException.MessageForClient, and sends it to the user. 
        ///                         On return, log the WCFException's Message.
        ///                         
        /// Escape Fail actions:	Asynchronously Log and email full error to support.


        /// 1) BA confirmed 50 for text, and 400 for textarea.
        ///     As we don't know the field type, we'll have to go for 400.
        ///
        ///     First we need to check that we haven't been sent some huge string.
        ///     So we check that the string isn't longer than 400 * 10 = 4000.
        ///     10 is the length of our double escaped character references.
        ///     Or, we ask the business for a number.
        ///     If we fail here, perform Generic Fail actions and don't complete the following steps.
        /// 
        ///     Convert all Html Entity Encodings back to their equivalent characters, and count how many occurrences.
        ///
        ///     If the string is longer than 400, perform Generic Fail actions and don't complete the following steps.
        /// 
        /// 2) check all characters against the white list
        ///     If any don't match, perform Generic Fail actions and don't complete the following steps.
        /// 
        /// 3) re html escape (as we did in JavaScript), and count how many escapes.
        ///     If count is greater than the count of Html Entity Encodings back to their equivalent characters,
        ///     Perform Escape Fail actions. Return sanitised string.
        /// 
        ///     If we haven't returned, return sanitised string.
        
        
        /// Performs checking on the text passed in, to verify that client side escaping and whitelist validation has already been performed.
        /// Performs decoding, and re-encodes. Counts that the number of escapes was the same, otherwise we log and send email with the details to support.
        /// Throws exception if the client side validation failed to restrict the number of characters in the escaped string we received.
        ///     This needs to be intercepted at the service.
        ///     The exceptions default message for client needs to be passed back to the user.
        ///     On return, the interception needs to log the exception's message.
        /// </summary>
        /// <param name="sanitiseMe"></param>
        /// <returns></returns>
        public string UserInput(string sanitiseMe)
        {
            if (string.IsNullOrEmpty(sanitiseMe))
                return string.Empty;

            ThrowExceptionIfEscapedInputToLong(sanitiseMe);

            int numberOfDecodedHtmlEntities = 0;
            string decodedUserInput = HtmlDecodeUserInput(sanitiseMe, ref numberOfDecodedHtmlEntities);

            if(!decodedUserInput.CompliesWithWhitelist(whiteList: _whiteList))
            {
                string error = "The answer received from client with the following IP address: " +
                    "\"" + RequestingIpAddress + "\" " +
                    "had characters that failed to match the whitelist.";
                throw new SanitisationWcfException(error);
            }

            int numberOfEscapes = 0;
            string sanitisedUserInput = decodedUserInput.HtmlEncode(ref numberOfEscapes);

            if(numberOfEscapes != numberOfDecodedHtmlEntities)
            {
                AsyncLogAndEmail(sanitiseMe, sanitisedUserInput);
            }

            return sanitisedUserInput;
        }
        /// <note>
        /// Make sure the logger is setup to log asynchronously
        /// </note>
        private void AsyncLogAndEmail(string sanitiseMe, string sanitisedUserInput)
        {
            // no need for SanitisationException

            _logger.logError(
                "It appears as if someone has circumvented the client side Html entity encoding." + Environment.NewLine +
                "The requesting IP address was: " +
                "\"" + RequestingIpAddress + "\" " +
                "The sanitised input we receive from the client was the following:" + Environment.NewLine +
                "\"" + sanitiseMe + "\"" + Environment.NewLine +
                "The same input after decoding and re-escaping on the server side was the following:" + Environment.NewLine +
                "\"" + sanitisedUserInput + "\""
                );
        }

        /// <summary>
        /// This procedure may throw a SanitisationWcfException.
        /// If it does, ErrorHandlerBehaviorAttribute will need to pass the "messageForClient" back to the client from within the IErrorHandler.ProvideFault procedure.
        /// Once execution is returned, the IErrorHandler.HandleError procedure of ErrorHandlerBehaviorAttribute
        /// will continue to process the exception that was thrown in the way of logging sensitive info.
        /// </summary>
        /// <param name="toSanitise"></param>
        private void ThrowExceptionIfEscapedInputToLong(string toSanitise)
        {
            int maxLengthHtmlEncodedUserInput = int.Parse(ConfigurationManager.AppSettings["MaxLengthHtmlEncodedUserInput"]);
            if (toSanitise.Length > maxLengthHtmlEncodedUserInput)
            {
                string error = "The un-modified string received from the client with the following IP address: " +
                    "\"" + RequestingIpAddress + "\" " +
                    "exceeded the allowed maximum length of an escaped Html user input string. " +
                    "The maximum length allowed is: " +
                    maxLengthHtmlEncodedUserInput +
                    ". The length was: " +
                    toSanitise.Length + ".";
                throw new SanitisationWcfException(error, unsanitisedAnswer: toSanitise);
            }
        }

        private string HtmlDecodeUserInput(string doubleEncodedUserInput, ref int numberOfDecodedHtmlEntities)
        {
            string decodedUserInput = doubleEncodedUserInput.HtmlDecode(ref numberOfDecodedHtmlEntities).HtmlDecode(ref numberOfDecodedHtmlEntities) ?? string.Empty;
            
            // if the decoded string is longer than MaxLengthHtmlDecodedUserInput throw
            int maxLengthHtmlDecodedUserInput = int.Parse(ConfigurationManager.AppSettings["MaxLengthHtmlDecodedUserInput"]);
            if(decodedUserInput.Length > maxLengthHtmlDecodedUserInput)
            {
                throw new SanitisationWcfException(
                    "The string received from the client with the following IP address: " +
                    "\"" + RequestingIpAddress + "\" " +
                    "after Html decoding exceeded the allowed maximum length of an un-escaped Html user input string." +
                    Environment.NewLine +
                    "The maximum length allowed is: " + maxLengthHtmlDecodedUserInput + ". The length was: " +
                    decodedUserInput.Length + ".",
                    unsanitisedAnswer: doubleEncodedUserInput
                    );
            }
            return decodedUserInput;
        }
    }
}
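
To round this out, a hypothetical WCF operation using the class might look like the following (the service contract, whitelist and persistence step are all placeholders of mine, not project code):

using System.ServiceModel;

namespace Sanitisation.Example
{
    [ServiceContract]
    public interface IAnswerService
    {
        [OperationContract]
        void SaveAnswer(string answer);
    }

    public class AnswerService : IAnswerService
    {
        public void SaveAnswer(string answer)
        {
            // The whitelist would normally come from a ResourceFile via DI, as noted above.
            Sanitise sanitise = new Sanitise(whiteList: @"^[\w\s\.,]+$");

            // Throws SanitisationWcfException on length or whitelist failures;
            // the WCF error handling interception passes MessageForClient back to the user.
            string sanitisedAnswer = sanitise.UserInput(answer);

            // Persist sanitisedAnswer here (repository call omitted).
        }
    }
}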

As you can see, there’s a lot more work in the server side sanitisation than the client side.