Holistic Info-Sec for Web Developers

July 24, 2015

Quick update: Fascicle 0 is now considered Done. Available as an ebook on LeanPub and hard copy on Amazon.

Holistic InfoSec for Web Developers

 

Fascicle 1 is now content complete.

Most of my spare energy will be going into my new book for a while. I’ll be tweeting as I write it, so please follow @binarymist. You can also keep up with my release notes on github, where it’s all happening, and discuss progress or even what you would find helpful as a web developer with a focus on information security.

HolisticInfoSecForWebDevelopers

I’ve split the book up into three fascicles to allow the content to be released sooner.

 

New Blog -> binarymist.io/blog

January 6, 2018

Hi All.

After 104 posts over 8 years, I’ve finally managed to move my blog away from the BinaryMist WordPress.com platform to a new platform that will serve the community (us) better going forward.

Head on over to binarymist.io/blog for the blog, and binarymist.io for the BinaryMist business site.

I’ll be explaining the ins and outs (in the usual technical detail you’ve come to expect) of the migration in a coming post.

I’ve set up an email subscription for all of my loyal followers. If you want to be notified of my blog posts on the new platform in the future, you will need to hit the big purple Subscribe button. Subscribers will receive an email each time I publish a new post, which will include an unsubscribe link for anyone who gets bored of my content.

I look forward to producing informative, educational content and participating in discussions going forward at binarymist.io/blog

-Kim

The Cloud Shared Responsibility Model

October 2, 2017

Risks

The shared responsibility model is one that many have not grasped or understood well. Let’s look at the responsibilities of the parties.

CSP Responsibility

The CSP takes care of the infrastructure, not the customer-specific configuration of it. Due to the sheer scale of what they are building, CSPs are able to build in good security controls, in contrast to the average system administrator, who just does not have the resources or ability to focus on security to the same degree.

Due to that sheer scale, the average CSP has a concentrated group of good security professionals, versus a business whose core business is often not closely related to security. So CSPs do provide good security mechanisms, but the customer has to know and care enough to use them.

CSPs creating the infrastructural architecture and building the components, frameworks, hardware and platform software are, in most cases, taking security seriously and doing a reasonable job.

CSP Customer Responsibility

CSP customers are expected to take care of their own security in terms of:

  1. Their people working with the technology
  2. Application security, ultimately leading back to shortcomings in people: Lack of skills, experience, engagement, etc.
  3. Configuring the infrastructure and/or platform components: Again leading back to people defects

but all too often the customer’s responsibility is neglected, which renders The Cloud no better for the customer in terms of security.

The primary problem with The Cloud is that customers have the misconception that someone else is taking care of all of their security. That is not how the shared responsibility model works though. Yes, the CSP is probably taking care of the infrastructure security, but the other forms of security I just listed above are even more important than before the shift to The Cloud, because those items are now the lowest hanging fruit for the attacker.

The following are a set of questions (verbatim) I have been asked recently, and that I hear similar versions of frequently:

  • As a software engineer, do I really care about physical network security and network logging?
  • Surely “as a software engineer”, I can just use TLS and that is the end of it?
  • Well if the machine is compromised, then we give up on security, we aren’t responsible for the network
  • What is the difference between application security and network security? Aren’t they just two aspects of the same thing?
  • If I have implemented TLS for communication, have I fixed all of the network security problems?

Countermeasures

The following responsibilities are those that you need to have a good understanding of in order to establish a good level of security when operating in The Cloud.

CSP Responsibility

There is not a lot you can do about this; just be aware of what you are buying into before you do so. AWS, for example, states: “Customers retain control of what security they choose to implement to protect their own content, platform, applications, systems and networks, no differently than they would for applications in an on-site datacenter.”

CSP Customer Responsibility

If you leverage The Cloud, make sure the following aspects of security are all at an excellent level:

  1. People security: Discussed in Fascicle 0 under the People chapter
  2. Application security: Discussed in the Web Applications chapter. The move to application security was also discussed in the VPS chapter as a consequence of using Docker containers
  3. Configuring the infrastructure and/or platform components: Usually CSP specific, but I cover some aspects in this chapter

The following is in response to the set of frequently asked questions under the risks subsection of CSP Customer Responsibility:

  • (Q): As a software engineer, do I really care about physical network security and network logging?
    (A): In the past, many aspects of network security were the responsibility of the Network Administrators; with the move to The Cloud, this has to a large degree changed. The networks established (intentionally or not) between the components we are leveraging and creating in The Cloud are a result of Infrastructure and Configuration Management, often (and rightly so) expressed as code: Infrastructure as Code (IaC). As discussed in the Network Security subsection, this is now the responsibility of the Software Engineer
  • (Q): Surely “as a software engineer”, I can just use TLS and that is the end of it?
    (A): TLS is one very small area of network security. Its implementation as HTTPS and the PKI model is effectively broken. If TLS is your only saviour, putting it bluntly, you are without hope. The Network chapter covers the tip of the network security iceberg; network security is a huge topic, and one that has many books and other resources providing more in-depth coverage than I can as part of a holistic view of security for Software Engineers. Software Engineers must come to grips with the fact that they need to implement defence in depth
  • (Q): Well if the machine is compromised, then we give up on security, we aren’t responsible for the network
    (A): For this statement, please refer to the VPS chapter for your responsibilities as a Software Engineer in regards to “the machine”. In regards to “the network”, please refer to the Network Security subsection
  • (Q): What is the difference between application security and network security? Aren’t they just two aspects of the same thing?
    (A): No, for application security, see the Web Applications chapter. For network security, see the Network chapter. Again, as Software Engineers, you are now responsible for all aspects of information security
  • (Q): If I have implemented TLS for communication, have I fixed all of the network security problems?
    (A): If you are still reading this, I’m pretty sure you know the answer. Please share it with other Developers and Engineers when you receive the same questions

Holistic Info-Sec for Web Developers F1: Content Complete

September 12, 2017

2017-09-11

Fascicle 1 is now content complete
Weighing in at approximately 550 pages, including Additional Resources and Attributions

  • Added links to Network Security Interview between Kim Carter and Haroon Meer on Software Engineering Radio … to be released in a day or two
  • Updated threat tags
  • Code formatting changes
  • Punctuation modifications

Cloud

Ready for technical review
Strong focus on AWS, although other CSPs discussed
50 Pages of content added

  • Shared Responsibility Model: CSP Responsibility, CSP Customer Responsibility
  • CSP Evaluation
  • Cloud Service Provider vs In-house
    • Skills
    • EULA
    • Giving up Secrets
    • Location of Data
    • Vendor lock-in
    • Possible Single Points of Failure
  • People Sec
  • App Sec
  • Net Sec
  • Violations of Least Privilege
  • Storage of Secrets
    • Private Key Abuse: SSH, TLS
    • Credentials and Other Secrets
      • Entered by People
      • Entered by Software: HashiCorp Vault, Docker secrets, Ansible Vault, AWS Key Management Service and Parameter Store
  • Serverless
    • Third Party Services
    • Perimeterless
    • Functions
    • DoS of Lambda Functions
  • Infrastructure and Configuration Management

Web Applications

  • Updated OWASP Top 10 resources to 2017
  • Added AWS WAF

Additional Resources

  • Getting Secrets out of Docker images
  • Password Managers For Business Use
  • Many tooling options covered

Attributions

  • Thinkst tools (Canary tools and tokens)
  • DropboxC2C for Data Exfiltration, Infiltration
  • Hosting providers forced to give up customer secrets
  • Software Engineering Radio show on Network Security with host: Kim Carter, guest: Haroon Meer
  • Docker Image layers
  • AWS Lambda

Many other attributions added

Holistic Info-Sec for Web Developers (F1)(VPS, Network, Cloud, Web Applications)

Github

Holistic Info-Sec for Web Developers F1 Large update to VPS chapter

October 7, 2016

Holistic Info-Sec for Web Developers (F1)(VPS, Network, Cloud, Web Applications)

Git Changeset

Large number of image updates due to finding that many were not up to scratch when Fascicle 0 went to print.
Swapped text images for real images.

Many large additions to the VPS chapter and fewer to the Network chapter, such as:
* The pitfalls of logging within networks and some ideas and implementations on how to overcome them
* Disabling, removing and hardening the services of a VPS
* Granular OS partitioning and locking down the mounting of partitions
* Caching apt packages for all VPS
* Reviewing VPS password strategies and making the most suitable modifications to achieve enough security for you
* Disabling root logins on as many of the consoles as possible
* SSH, Symmetric and Asymmetric crypto-systems and their place in SSH
* The ciphers used in SSH, pros, cons, some history
* Hashing and its application in SSH
* How the SSH connection procedure works
* Hardening SSH
* Configuring which hosts may access your server
* SSH Key-pair authentication
* Techniques for tunneling SSH
* Understanding enough about NFS to produce exports that will suit your environment's security concerns
* Some quick commands to provide visibility as to who is doing what and when on your servers
* VPS logging and alerting: We look at a large number of options available and the merits of them
* Managing your logs effectively, so that they will be around when you need them and not tampered with. We work through transferring them off-site in real-time. We address reliability, resilience, integrity, connectivity of the proposed solutions. Verifying that the logs being transferred are in-fact encrypted.
* Proactive server monitoring: we discuss goals and the evaluation criteria for the offerings that were evaluated
* Implementation of proactive server monitoring, what works well, what doesn’t
* Keeping your (NodeJS) applications not just running, but healthy
* We discuss the best of breed HIDS/HIPS, then go on to implement the chosen solution
* Made a start on Docker insecurities and mitigations.
* Quick discussion around host firewalls
* Preparing DMZ and your VPS for the DMZ
* Additional Web Server preparation
* Deployment options
* Post DMZ deployment considerations

Captcha Considerations

December 31, 2015

Risks

Exploiting Captcha

A lack of captchas is a risk, but so are captchas themselves…

Let’s look at the problem here. What are we trying to stop with captchas?

Bots submitting. Whatever it is, whether:

  • Advertising
  • Creating an unfair advantage over real humans
  • Link creation in an attempt to increase SEO
  • Malicious code insertion

You are more than likely not interested in accepting it.

What do we not want to block?

People submitting genuinely innocent input. If a person is prepared to fill out a form manually, even if it is spam, then a person can view the submission and very quickly delete the validated, filtered and possibly sanitised message.

Countermeasures

Prevention: Very Easy

Types

Text Recognition

recaptcha uses this technique. See below for details.

Image Recognition

Uses images which users have to perform certain operations on, like dragging them to another image. For example: “Please drag all cat images to the cat mat.”, or “Please select all images of things that dogs eat.” sweetcaptcha is an example of this type of captcha. This type completely rules out visually impaired users.

Friend Recognition

Pioneered by… you guessed it: Facebook. This type of captcha focuses on human hackers, the idea being that they will not know who your friends are.

Instead of showing you a traditional captcha on Facebook, one of the ways we may help verify your identity is through social authentication. We will show you a few pictures of your friends and ask you to name the person in those photos. Hackers halfway across the world might know your password, but they don’t know who your friends are.

I disagree with that statement. A determined hacker will usually be able to find out who your friends are. There is another problem: do you know who all of your friends are? Every acquaintance? I am terrible with names and so are many people. This is supposed to be used to authenticate you, so you have to be able to answer the questions before you can log in.

Logic Questions

This is what textcaptcha uses. Simple logic questions designed for the intelligence of a seven year old child. These are more accessible than image and textual image recognition, but they can take longer than image recognition to answer, unless the user is visually impaired. The questions are also usually language specific, typically targeting English.

User Interaction

This is a little like image recognition. Users have to perform actions that virtual intelligence can not work out… yet. Like dragging a slider a certain number of notches.
If an offering gets popular, creating some code to perform the action may not be that hard and would definitely be worth the effort for bot creators.
This is obviously not going to work for the visually impaired or for people with impaired motor skills.

 

In NPM land, as usual, there are many options to choose from. The following were the offerings I evaluated, none of which really felt like a good fit:

Offerings

  • total-captcha. Depends on node-canvas. Have to install cairo first, but why? No explanation. Very little of anything here. Move on. How does this work? Do not know. What type is it? Presume text recognition.
  • easy-captcha is a text recognition offering generating images
  • simple-captcha looks like another text recognition offering. I really do not want to be writing image files to my server.
  • node-captcha depends on canvas. By the look of the package this is another text recognition type in a generated image.
  • re-captcha was one of the first captcha offerings, created at the Carnegie Mellon University by Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum who invented the term captcha. Google later acquired it in September 2009. recaptcha is a text recognition captcha that uses scanned text that optical character recognition (OCR) technology has failed to interpret, which has the added benefit of helping to digitise text for The New York Times and Google Books.
    recaptcha
  • sweetcaptcha uses the sweetcaptcha cloud service, for which you must abide by their terms and conditions; it requires another node package and some integration work. sweetcaptcha is an image recognition type of captcha.
    sweetcaptcha
  • textcaptcha is a logic question captcha relying on an external service for the questions and md5 hashes of the correct lower cased answers. This looks pretty simple to set up, but again expects your users to use their brain on things they should not have to.

 

After some additional research I worked out why the above types and offerings didn’t feel like a good fit. It pretty much came down to user experience.

Why should genuine users/customers of your web application be disadvantaged by having to jump through hoops because you have decided you want to stop bots spamming you? Would it not make more sense to make life harder for the bots rather than for your genuine users?

I had some other considerations too. Ideally I wanted a simple solution requiring few or ideally no external dependencies, no JavaScript, no reliance on the browser or anything else out of my control, no images, and it definitely should not cost any money.

Alternative Approaches

  • Services like Disqus can be good for commenting. Obviously the comments are all stored somewhere in the cloud out of your control, and this is an external dependency. For simple text input, this is probably not what you want. Similar services, such as all the social media authentication services, can take things a bit too far I think; they remove freedoms from your users. Why should your users be disadvantaged by leaving a comment or posting a message on your web application? Disqus tracks users’ activities from hosting website to website whether you have an account, are logged in or not. Any information they collect, such as IP address, web browser details, installed add-ons, referring pages and exit links, may be disclosed to any third party. When this data is aggregated it is useful for de-anonymising users. If users choose to block the Disqus script, the comments are not visible. Disqus has also published its registered users’ entire commenting histories, along with a list of connected blogs and services, on publicly viewable user profile pages. Disqus also engages in ad targeting and blackhat SEO techniques from the websites on which their script is installed.
  • Services like Akismet and Mollom take user input and analyse it for spam signatures. Mollom sometimes presents a captcha if it is unsure. These two services learn from their mistakes if they mark something as spam and you unmark it, but of course you are going to have to be watching for that. Matt Mullenweg created Akismet so that his mother could blog in safety. “His first attempt was a JavaScript plugin which modified the comment form and hid fields, but within hours of launching it, spammers downloaded it, figured out how it worked, and bypassed it. This is a common pitfall for anti-spam plugins: once they get traction”. My advice here is not to use a common plugin, but to create something custom. I discuss this soon.

The above solutions are excellent targets for creating exploits that will have a large pay off due to the fact that so many websites are using them. There are exploits discovered for these services regularly.

Still not cutting it

Given the fact that many clients count on conversions to make money, not receiving 3.2% of those conversions could put a dent in sales. Personally, I would rather sort through a few SPAM conversions instead of losing out on possible income.

Casey Henry: Captchas’ Effect on Conversion Rates

Spam is not the user’s problem; it is the problem of the business that is providing the website. It is arrogant and lazy to try and push the problem onto a website’s visitors.

Tim Kadlec: Death to Captchas

User Time Expenditure

This is another technique, in which the time is measured from when the form is fetched to when it is submitted. For example, if the time span is under five seconds, it is more than likely a bot, so handle the message accordingly. A minimal sketch of the idea follows.
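The following is a minimal sketch of the idea using Express. The express-session middleware, the five second threshold and the route names are all example assumptions rather than anything prescribed above.

var express = require('express');
var session = require('express-session');

var app = express();
app.use(session({ secret: 'change me', resave: false, saveUninitialized: true }));

// Record when the form was fetched.
app.get('/contact', function (req, res) {
   req.session.formFetchedAt = Date.now();
   res.send('<form method="POST" action="/contact"><input name="message"><button>Send</button></form>');
});

// Reject submissions that arrive implausibly fast.
function rejectFastSubmissions(req, res, next) {
   if (!req.session.formFetchedAt || Date.now() - req.session.formFetchedAt < 5000) {
      // Almost certainly a bot. Silently discard, or better, log it for visibility.
      return res.redirect('/');
   }
   next();
}

app.post('/contact', rejectFastSubmissions, function (req, res) {
   res.send('Thanks for your message.');
});

app.listen(3000);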

Bot Pot

Spamming bots operating on custom mechanisms will in most cases just try, then move on. If you decide to use one of the common offerings from above, exploits will be more common, depending on how widespread the offering is. This is one of the cases where going custom is a better option. Worst case is you get some spam and you can modify your technique, but you get to keep things simple, tailored to your web application and your users’ needs, with no external dependencies and no monthly fees. This is also the simplest technique and requires very little work to implement.

Spam bots:

  • Love to populate form fields
  • Usually ignore CSS. For example, if you have some CSS that hides a form field, and especially if the CSS is not inline on the same page, they will usually fail to realise that the field is not supposed to be visible.

So what we do is create a field that is not visible to humans and is supposed to be kept empty. On the server once the form is submitted, we check that it is still empty. If it is not, then we assume a bot has been at it.

This is so simple, does not get in the way of your users, yet is very effective at filtering bot spam.

Client side:

form .bot-pot {
   display: none;
}
<form>
   <!--...-->
   <div>
      <input type="text" name="bot-pot" class="bot-pot">
   </div>
   <!--...-->
</form>

Server side:

I show the validation middleware being wired into the contact route, and the validation itself in the validate function, in the code below.

var form = require('express-form');
var fieldToValidate = form.field;
//...

function home(req, res) {
   res.redirect('/');
}

function index(req, res) {
   res.render('home', { title: 'Home', id: 'home', brand: 'your brand' });
}

function validate() {
   return form(
      // Bots love to populate everything.
      fieldToValidate('bot-pot').maxLength(0)
   );
}

function contact(req, res) {
   if (req.form.isValid) {
      // We know the bot-pot is of zero length. So no bots.
      //...
   }
}

module.exports = function (app) {
   app.get('/', index);
   app.get('/home', home);
   app.post('/contact', validate(), contact);
};

So as you can see, a very simple solution. You could even consider combining the above two techniques.

Lack of Visibility in Web Applications

November 26, 2015

Risks

I see this as an indirect risk to the asset of web application ownership (That’s the assumption that you will always own your web application).

Not being able to introspect your application at any given time, or to know what its health status is, is not a comfortable place to be in, and there is no reason you should be there.

Insufficient Logging and Monitoring

Exploitability: Average | Prevalence: Widespread | Detectability: Very Easy | Impact: Moderate

Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy. For example circumventing client side input sanitisation.

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?

Countermeasures

As Bruce Schneier said: “Detection works where prevention fails and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging

Prevention: Average

When it comes to logging in NodeJS, you can’t really go past winston. It has a lot of functionality, and what it does not have is either provided by extensions or you can create your own. It is fully featured, reliable and easy to configure, like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

{
   ...
   "dependencies": {
      ...,
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",
      ...
   }
}

winston-email also depends on nodemailer.

Opening UDP port

with winston-syslog seems to be what a lot of people are using. I think it may be due to the fact that winston-syslog is the first package that works well for winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

$ModLoad imudp
# Listen on all network addresses. This is the default.
$UDPServerAddress 0.0.0.0
# Listen on localhost.
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Or the new style configuration.
Address <IP>
Port <port>
# Logging for your app.
local0.* /var/log/yourapp.log

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use posix syscalls, as they are faster and safer. The nmap output below shows the open port.

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
PORT STATE SERVICE REASON VERSION
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the nmap output below that the syslog port is no longer open:

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
PORT STATE SERVICE REASON VERSION
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslog transport (syslogPosixTransportOptions) in the configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog compatible protocol severities.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, //5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress 127.0.0.1 // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
}

In development I have chosen not to use syslog. You can see this in the devbox1-development.js file below. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file, or modify it to be similar to the above and add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comments of the syslogPosixTransportOptions in the default.js config file above.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
}

In production we log to syslog, and because of that we do not need the file transport you can see configured in the default.js configuration file above, so we set it to null, as seen in the prodbox-production.js file below.

I have gone into more depth about how we handle syslogs here, where all of our logs, including these ones, get streamed to an off-site syslog server. This provides easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real-time, and also lets them easily go back in time to visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.

module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
}
// Build creates this file.
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
}

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;
require('winston-syslog-posix').SyslogPosix;
require('winston-email').Email;

winston.emitErrs = true;

var logger = new winston.Logger({
   // Alternatively: set to winston.config.syslog.levels
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); There are many ways
   // to do the same thing with winston
   colors: loggerConfig.colours,
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function(config) {
   logger.add (winston.transports.Console, config);
   return this;
};

logger.addFile = function(config) {
   logger.add (winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function(config) {
   logger.add (winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function(config) {
   logger.add (winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      JSON.stringify(
         'error-code:' + err.code + '. '
         + 'error-message:' + err.message + '. '
         + 'error-response:' + err.response + '. logger-level:'
         + err.transport.level + '. transport:' + err.transport.name
      )
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile( loggerConfig.fileTransportOptions );
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole( loggerConfig.consoleTransportOptions );
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog( loggerConfig.syslogPosixTransportOptions );
   if (loggerConfig.emailTransportOptions)
      logger.addEmail( loggerConfig.emailTransportOptions );
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger (the logger.init() call below).

//...
var express = require('express');
var morganLogger = require('morgan');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var app = express();
//...
logger.init();
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
//...
// In order to utilise connect/express logger module in our third party logger,
// Pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
//...
app.use(express.static(path.join(__dirname, 'public')));
//...
require('./routes')(app);

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
   //...
}
if ('production' == app.get('env')) {
   app.use(errorHandler());
   //...
}

http.createServer(app).listen(app.get('port'), function(){
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

* You can also optionally log JSON metadata
* You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message.

Here are some examples of how you can use the logger. The logger.log('<level>', …) calls can be replaced with logger.<level>(…), where level is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross cutting concerns like logging using Aspect Oriented Programming (AOP); a rough sketch of the idea follows.
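The following is a rough sketch of that idea rather than a full AOP framework: a higher-order function wraps an existing function so that entry, exit and failures are logged, keeping the business code free of logging statements. The withLogging helper and the createOrder example are hypothetical.

var logger = require('./util/logger'); // The winston wrapper shown above.

// Wrap any function so calls and failures are logged as a cross cutting concern.
function withLogging(name, fn) {
   return function () {
      var args = Array.prototype.slice.call(arguments);
      logger.info('entering ' + name);
      try {
         var result = fn.apply(this, args);
         logger.info('leaving ' + name);
         return result;
      } catch (err) {
         logger.error(name + ' threw: ' + err.message);
         throw err;
      }
   };
}

// Usage: the business logic stays free of logging statements.
var createOrder = withLogging('createOrder', function (order) {
   // ...do the real work here...
   return order;
});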

Insufficient Monitoring

Prevention: Easy

There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It’s easy to configure, has excellent short documentation that is easy to understand, and its configuration file has lots of examples commented out, ready for you to take as is and modify to suit your environment. I’ve personally had excellent success with Monit.
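As a sketch of how your NodeJS application can co-operate with a tool like Monit, you can expose a simple health endpoint for the monitoring tool to poll with its HTTP check; the /health route and the checks inside it are assumptions for illustration, not something Monit mandates.

var express = require('express');

var app = express();

// A monitoring tool (Monit for example) can poll this endpoint and alert,
// or restart the process, when it stops answering or reports unhealthy.
app.get('/health', function (req, res) {
   var healthy = true; // Replace with real checks: data-store reachable, queues draining, etc.
   if (healthy) return res.status(200).send('OK');
   res.status(503).send('Unhealthy');
});

app.listen(3000);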

 

Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the new-found information you now have. There will be no more blissful ignorance, if there was any before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging, such as wrapping libraries, initialising the logger, and adding logging statements or hiding them using AOP.

 

Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days’ work in order to have a really good logging system throughout your code base that is going to show you errors fast in development, and then show you different errors in the places your DevOps need to see them in production.

The same goes for monitoring. Find a tool that you find a pleasure to work with. There are almost always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, the free and open source tools can be excellent for keeping ongoing costs down, especially mature tools that are also well maintained, like Monit.

Additional Resources

Consuming Free and Open Source

October 29, 2015

Risks

Exploitability: Average | Prevalence: Widespread | Detectability: Difficult | Impact: Moderate

This is where A9 (Using Components with Known Vulnerabilities) of the 2013 OWASP Top 10 comes in.
We are consuming far more free and open source libraries than we ever have before. Much of the code we are pulling into our projects is never intentionally used, but it still adds attack surface. Much of it:

  • Is not thoroughly tested (for what it should do and what it should not do). We are often relying on developers we do not know much about not to have introduced defects. Most developers are more focused on building than breaking; they do not even see the defects they are introducing.
  • Is not reviewed or evaluated. That is right, many of the packages we are consuming are created by solo developers with a single focus on creating, and little to no focus on how their creations can be exploited. Even some teams with a security hero are not doing a lot better.
  • Is created by amateurs who could and do include vulnerabilities. Anyone can write code and publish it to an open source repository. Much of this code ends up in the package management repositories which we consume.
  • Does not undergo the same requirement analysis, defining the scope, acceptance criteria, test conditions and sign off by a development team and product owner that our commercial software does.

Many vulnerabilities can hide in these external dependencies. It is not just one attack vector any more; it provides the opportunity for many vulnerabilities to sit waiting to be exploited. If you do not find and deal with them, I can assure you, someone else will. See Justin Searls’ talk on consuming all the open source.

Running install scripts, or any other scripts from non-local sources, without first downloading and inspecting them, can destroy or modify your systems and any other reachable systems, send sensitive information to an attacker, or perform any number of other criminal activities.

Countermeasures

Prevention: Easy

Process

Dibbe Edwards discusses some excellent initiatives on how they do it at IBM. I will attempt to paraphrase some of them here:

  • Implement process and governance around which open source libraries you can use
  • Legal review: checking licenses
  • Scanning the code for vulnerabilities, manual and automated code review
  • Maintain a list containing all the libraries that have been approved for use. If a library is not on the list, it should go through the same request and review process.
  • Once the libraries are in your product, they should be treated as part of your own code, so that they get the same rigour applied to them as any of your other code written in-house
  • There needs to be an automated process that runs over the code base to check that nothing that is not on the approved list is included (a minimal sketch of such a check follows this list)
  • Consider automating some of the suggested tooling options below
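The following is a minimal sketch of the automated approved-list check mentioned above, suitable for a CI build or a pre-commit hook. The approved-libraries.json file name and its format are assumptions; adapt them to however you maintain your approved list.

var fs = require('fs');

// approved-libraries.json is assumed to look like: { "approved": ["express", "config", "winston"] }
var approved = JSON.parse(fs.readFileSync('approved-libraries.json', 'utf8')).approved;
var packageJson = JSON.parse(fs.readFileSync('package.json', 'utf8'));

var declared = Object.keys(packageJson.dependencies || {})
   .concat(Object.keys(packageJson.devDependencies || {}));

var notApproved = declared.filter(function (name) {
   return approved.indexOf(name) === -1;
});

if (notApproved.length) {
   console.error('The following dependencies are not on the approved list: ' + notApproved.join(', '));
   process.exit(1); // Fail the build.
}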

There is an excellent paper by the SANS Institute on Security Concerns in Using Open Source Software for Enterprise Requirements that is well worth a read. It confirms what the likes of IBM are doing in regards to their consumption of free and open source libraries.

Consumption is Your Responsibility

As a developer, you are responsible for what you install and consume. Malicious NodeJS packages do end up on NPM from time to time. The same goes for any source or binary you download and run. The following commands are often encountered as being “the way” to install things:

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(wget https://raw.github.com/account/repo/install.sh -O -)"
# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
sh -c "$(curl -fsSL https://raw.github.com/account/repo/install.sh)"

Below is the official way to install NodeJS. Do not do this: wget or curl the script first, then make sure what you have just downloaded is not malicious before you run it.

Inspect the code before you run it.

1. The repository could have been tampered with
2. The transmission from the repository to you could have been intercepted and modified.

# Fetching install.sh and running immediately in your shell.
# Do not do this. Download first -> Check and verify good -> run if good.
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install -y nodejs

Keeping Safe

wget, curl, etc

Please do not wget, curl or fetch in any way and pipe what you think is an installer or any script to your shell without first verifying that what you are about to run is not malicious. Do not download and run in the same command.

The better option is to:

  1. Verify the source that you are about to download, if all good
  2. Download it
  3. Check it again, if all good
  4. Only then should you run it

npm install

As part of an npm install, package creators and maintainers (or even a malicious entity intercepting and modifying your request on the wire) can define scripts to be run on specific NPM hooks. You can check whether a package has hooks (before installation) that will run scripts by issuing the following command:
npm show [module-you-want-to-install] scripts

Recommended procedure:

  1. Verify the source that you are about to download, if all good
  2. npm show [module-you-want-to-install] scripts
  3. Download the module without installing it and inspect it. You can download it from
    http://registry.npmjs.org/[module-you-want-to-install]/-/[module-you-want-to-install]-VERSION.tgz
    

The most important step here is downloading and inspecting before you run.
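If you would rather script the check from step 2, the following is a minimal sketch that queries the public npm registry and prints any scripts declared by the latest version of a package, so you can review them before installing. Only the registry URL is a given; the package name and output format are just examples.

var https = require('https');

var packageName = process.argv[2] || 'express'; // e.g. node check-scripts.js some-module

https.get('https://registry.npmjs.org/' + packageName, function (res) {
   var body = '';
   res.on('data', function (chunk) { body += chunk; });
   res.on('end', function () {
      var metadata = JSON.parse(body);
      var latest = metadata['dist-tags'].latest;
      var scripts = metadata.versions[latest].scripts || {};
      console.log(packageName + '@' + latest + ' declares these scripts:');
      console.log(JSON.stringify(scripts, null, 3));
   });
}).on('error', function (err) {
   console.error('Could not fetch metadata: ' + err.message);
});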

Doppelganger Packages

Similarly to Doppelganger Domains, people often mistype what they want to install. If you were someone who wanted to do something malicious, like have consumers of your package destroy or modify their systems, send sensitive information to you, or any number of other criminal activities (ideally identified in the Identify Risks section; if not already, add it), doppelganger packages are an excellent avenue for raising the likelihood that someone will install your malicious package by mistyping its name, which you have made very similar to the name of another package. I covered this in my “0wn1ng The Web” presentation, with demos.

Make sure you are typing the correct package name. Copy -> Pasting works.

Tooling

For NodeJS developers: keep your eye on the nodesecurity advisories. Identified security issues can be reported via the NodeSecurity report page.

RetireJS is useful to help you find JavaScript libraries with known vulnerabilities. RetireJS has the following:

  1. Command line scanner
    • Excellent for CI builds. Include it in one of your build definitions and let it do the work for you.
      • To install globally:
        npm i -g retire
      • To run it over your project:
        retire my-project
        Results like the following may be generated:

        public/plugins/jquery/jquery-1.4.4.min.js
        ↳ jquery 1.4.4.min has known vulnerabilities:
        http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-4969
        http://research.insecurelabs.org/jquery/test/
        http://bugs.jquery.com/ticket/11290
        
    • To install RetireJS locally to your project and run it as a git pre-commit hook:
      There is an NPM package that can help us with this, called precommit-hook, which installs the git pre-commit hook into the usual .git/hooks/pre-commit file of your project's repository. This will allow us to run any scripts immediately before a commit is issued.
      Install both packages locally and save to devDependencies of your package.json. This will make sure that when other team members fetch your code, the same retire script will be run on their pre-commit action.

      npm install precommit-hook --save-dev
      npm install retire --save-dev
      

      If you do not configure the hook via the package.json to run specific scripts, it will run lint, validate and test by default. See the RetireJS documentation for options.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         }
      }
      

      Adding the pre-commit property allows you to specify which scripts you want run before a successful commit is performed. The following package.json defines that the lint and validate scripts will be run. validate runs our retire command.

      {
         "name": "my-project",
         "description": "my project does wiz bang",
         "devDependencies": {
            "retire": "~0.3.2",
            "precommit-hook": "~1.0.7"
         },
         "scripts": {
            "validate": "retire -n -j",
            "test": "a-test-script",
            "lint": "jshint --with --different-options"
         },
         "pre-commit": ["lint", "validate"]
      }
      

      Keep in mind that pre-commit hooks can be very useful for all sorts of checking of things immediately before your code is committed. For example running security tests using the OWASP ZAP API.

  2. Chrome extension
  3. Firefox extension
  4. Grunt plugin
  5. Gulp task
  6. Burp and OWASP ZAP plugin
  7. On-line tool: you simply enter your web application's URL and the resource will be analysed

requireSafe

provides “intentful auditing as a stream of intel for bithound”. I guess watch this space; in speaking with Adam Baldwin, there does not appear to be much happening here yet.

bithound

In regards to NPM packages, we know the following things:

  1. We know about a small collection of vulnerable NPM packages, some of which have high fan-in (many packages depend on them).
  2. The vulnerable packages have published patched versions
  3. Many packages are still consuming the vulnerable unpatched versions of the packages that have published patched versions
    • So although a much larger number of vulnerable packages could have been removed, they have not been, because so many packages persist in depending on the unpatched versions. I think this mostly comes down to lack of visibility, awareness and education. This is exactly what I’m trying to change.

bithound supports:

  • JavaScript, TypeScript and JSX (back-end and front-end)
  • In terms of version control systems, only git is supported
  • Opening of bitbucket and github issues
  • Providing statistics on code quality, maintainability and stability. I queried Adam on this, but not a lot of information was forthcoming.

bithound can be configured to not analyse some files. Very large repositories are prevented from being analysed due to large scale performance issues.

bithound analyses both NPM and Bower dependencies and notifies you if any are:

  • Out of date
  • Insecure. Assuming this is based on the known vulnerabilities (41 node advisories at the time of writing this)
  • Unused

Analysis of open source projects is free.

You could of course just list all of your projects and global packages and check that none are in the advisories, but this would be more work, and who is going to remember to do that all the time?

For .Net developers, there is the likes of OWASP SafeNuGet.

Risks that Solution Causes

Some of the packages we consume may have good test coverage, but are the tests testing the right things? Are the tests testing that something can not happen? That is where the likes of RetireJS comes in.

Process

There is a danger of implementing too much manual process, thus slowing development down more than necessary. The way the process is implemented will have a lot to do with its level of success. For example, automating as much as possible, so that developers have as little as possible to think about, is going to make for more productive, focused and happier developers.

For example, when a Development Team needs to pull a library into their project, which often happens in the middle of working on a product backlog item (not planned at the beginning of the Sprint), if they have to context switch while a legal review and/or manual code review takes place, then this will cause friction and reduce the team's performance, even though it may be out of their hands.
In this case, the Development Team really needs a dedicated resource to perform the legal review. The manual review could be done by another team member, or even by themselves, with perhaps another team member doing a quicker review after the fact. These sorts of decisions need to be made by the Development Team, not mandated by someone outside of the team who doesn't have skin in the game or the localised understanding that the people working on the project have.

Maintaining a list of the approved libraries really needs to be a process that does not take a lot of human interaction. However you work out your process, make sure it does not require a lot of extra developer effort on an ongoing basis. Some effort up front to automate as much as possible will facilitate this.

Tooling

Using the likes of pre-commit hooks, the other tooling options detailed in the Countermeasures section and creating scripts to do most of the work for us is probably going to be a good option to start with.

Costs and Trade-offs

The process has to be streamlined so that it does not get in the developers' way. A good way to do this is to ask the developers how it should be done; they know what will get in their way. In order for the process to be a success, the person(s) mandating it will need solid buy-in from the people using it (the developers).
The idea of setting up a process that notifies at least the Development Team if a library they want to use has known security defects needs to be pitched to all stakeholders (developers, product owner, even external stakeholders) the right way. It needs to provide obvious benefit and not make anyone's life harder than it already is. Everyone has their own agendas; rather than fighting against them, include consideration for them in your pitch. I think this sort of pitch is actually reasonably easy if you keep these factors in mind.

Attributions

Additional Resources

 

Risks and Countermeasures to the Management of Application Secrets

September 17, 2015

Risks

  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise

Exploitability: Moderate

The reason I’ve tagged this as moderate is that if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison’s entire business relied on its commitment to keep its clients’ data (37 million of them) secret, and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security but ironically, still proudly displays a graphic with the phrase “trusted security award” on its homepage

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users’ passwords cracked in days (why so fast? because they used simple hashing, specifically SHA-1), and EBay, with 145 million active buyers. Many others come to light regularly.

Are you using well salted, high quality, strong key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords collected; a minimal sketch of both ideas follows.
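As a minimal sketch of both of those ideas using only Node's built-in crypto module: reject any password that appears on a list of commonly used passwords, then store a salted hash derived with PBKDF2 rather than a simple hash. The common-passwords.txt file, iteration count and key length are illustrative assumptions; choose values (and ideally a memory-hard KDF) to suit your own threat model.

var crypto = require('crypto');
var fs = require('fs');

// One common password per line, e.g. from a published "most used passwords" list.
var commonPasswords = new Set(
   fs.readFileSync('common-passwords.txt', 'utf8').split('\n')
);

function createPasswordRecord(password) {
   if (commonPasswords.has(password)) {
      throw new Error('Please choose a less common password.');
   }
   var salt = crypto.randomBytes(16);
   // PBKDF2: a deliberately slow, salted key derivation function.
   var derived = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha512');
   return { salt: salt.toString('hex'), hash: derived.toString('hex') };
}

function verifyPassword(password, record) {
   var derived = crypto.pbkdf2Sync(
      password, Buffer.from(record.salt, 'hex'), 100000, 64, 'sha512'
   );
   // Consider a constant-time comparison here.
   return derived.toString('hex') === record.hash;
}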

Countermeasures

Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make compromise harder, like many of the layers already discussed in my book.

Find out how secret the data that is supposed to be secret and is being sent over the network actually is, and consider your internal network to be just as malicious as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.

defence in depth

You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. Then, when it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it, essentially just making discovery a little more inconvenient, like SSHing on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files

Prevention

Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files, use different configuration files for different deployments, and consider keeping them out of source control.

Here are some examples using the node-config module.

node-config

is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install: From the command line within the root directory of your NodeJS application, run:

npm install node-config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

var path = require('path');

// Due to bug in node-config the if statement is required before config is required
// https://github.com/lorenwest/node-config/issues/202
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Wherever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order which you specify by file naming convention, which provides a lot of flexibility and which caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. These files are to be managed by external configuration management tools, build scripts, etc. Thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

OurApp/
|
+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
|
+-- routes
| |
| +-- home.js
| |
| +-- ...
|
+-- app.js (entry point)
|
+-- ...

The contents of the above example configuration files may look like the following:

// default.js
module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // The supported services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool-proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
}

// devbox1-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
}

// devbox2-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
}

// stagingbox-staging.js
{
}

// prodbox-production.js
{
}

// local.js
// Build creates this file.
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
}

node-config also:

  • Provides command line overrides, allowing you to override configuration values at application start from the command line
  • Allows configuration values to be overridden by environment variables, via a custom-environment-variables.json mapping file (a minimal sketch of this follows below)
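
For example, a config/custom-environment-variables.json file along the following lines (a sketch only; the ENQUIRIES_PASS variable name is an assumption for illustration) would let the enquiries password be supplied from an environment variable rather than from a file:

{
   "enquiries": {
      "pass": "ENQUIRIES_PASS"
   }
}

Values supplied this way, or via a command line override such as node app.js --NODE_CONFIG='{"enquiries":{"pass":"example"}}', take precedence over the values in the configuration files.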

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that.
There are different answers for different platforms, none of which provide complete security, if there is such a thing; instead they focus on different levels of obscurity.

Windows

Store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential. Use a SQL Server connection string with Trusted_Connection=yes.

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut down. The hashes can be sniffed from the wire in transit. The hashes can be pulled from a running machine's memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash. Trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

Encrypt sections of web, executable, machine-level or application-level configuration files with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (uses DPAPI) or RSAProtectedConfigurationProvider (uses RSA). The -pd switch is used to decrypt, or programmatically:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ToString();

Of course there is a problem with this also. DPAPI uses LSASS, from whose memory an attacker can again extract the hash. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways using PSCredential and SecureString. They all seem to rely on DPAPI, which as mentioned uses LSASS, which is open to exploitation.

Credential Guard and Device Guard leverage virtualisation-based security, by the look of it still using LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better. Tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run directly on physical hardware; it cannot run on virtual machines. This alone rules out many deployments.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.

Bromium

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and
a joy to use

Bromium

These seem like bold claims.

Also worth considering is that Microsoft's new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.

Linux

Containers also help to provide some form of isolation, allowing you to have only the user accounts needed to do what is necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web application's files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run and a very limited set of users have any sort of access to the machine. If you are using Linux Containers then you can reduce this even further if you have not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like all of the closed source OS vendors trying to pimp their latest money spinning product. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time if it hasn't happened already.

Least Privilege

Prevention

An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data-store, create that connection with a user's credentials that are only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions to the application to run stored queries. This way, if/when an attacker is able to compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot anyway.
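
As a minimal sketch of the idea (the config keys, database name and the use of the pg module here are assumptions for illustration, not a prescription), separate credentials can be wired up for read-only and read-write work:

var config = require('config');
var pg = require('pg');

// Read-only account: used by anything that only queries data.
var readOnlyPool = new pg.Pool({
   host: config.db.host,
   database: config.db.name,
   user: config.db.readOnlyUser, // Only granted read (or execute on read-only stored queries).
   password: config.db.readOnlyPass
});

// Read-write account: used only by the few code paths that must modify data.
var readWritePool = new pg.Pool({
   host: config.db.host,
   database: config.db.name,
   user: config.db.readWriteUser,
   password: config.db.readWritePass
});

If the read-only credentials are stolen, the attacker is still constrained by what that account is allowed to do on the data-store.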

Location

Prevention

Put your services like data-stores on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible, and with the least privileges possible.

Data-store Compromise

Prevention

As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • All following logins should be instructed to change passwords
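
A minimal sketch of what such an automatic measure could look like (the compromise flag, user properties and Express-style middleware shape are all assumptions for illustration):

// Hypothetical Express middleware: once a data-store compromise has been flagged by
// your detection tooling, force every authenticated user to set a new password.
var breachState = require('./breach-state'); // Hypothetical module updated by detection/alerting.

function forcePasswordChangeAfterBreach(req, res, next) {
   if (!breachState.compromiseDetected)
      return next();
   if (req.user && !req.user.passwordChangedSinceBreach)
      return res.redirect('/change-password');
   next();
}

module.exports = forcePasswordChangeAfterBreach;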

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the key derivation functions (KDFs) discussed below for all sensitive data, not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths. Encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow, used in a process commonly known as key stretching. How long the key stretching takes can be tuned by increasing or decreasing the number of cycles used, often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users’ credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats’ hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received along with a unique salt. The salt should be stored with the hashed data.

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged with one of the mentioned KDFs. Why? Because the hashing speed can not be slowed as hardware continues to get faster. Many organisations that have had their data-stores stolen, and they continue to be stolen on a weekly basis, could have avoided their secrets being compromised simply by using a decent KDF with salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 and the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user replaceable parts like PBKDF2. The PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain an understanding of at least all three KDFs. PBKDF2 is the oldest, so it is the most battle tested, but there have also been lessons learnt from it that have been carried into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt from the Blowfish cipher to be very slow to initiate, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts, often found in the GPUs of the day (1999).
The hashing functions that PBKDF2 uses were a lot easier to speed up due to ease of parallelisation, as opposed to the Eksblowfish cipher attributes: far greater memory required for each hash, and small, frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware.
The sensitive data stored within a data-store should be the output of one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs.
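
As an example, here is a minimal sketch using Node's built-in crypto module (the iteration count, key length and digest are illustrative values only, not a recommendation, and the function name is an assumption):

var crypto = require('crypto');

// Derive a protected form of a password (or any other secret) using PBKDF2.
function protectSecret(secret, callback) {
   // A unique random salt per secret, stored alongside the derived key.
   var salt = crypto.randomBytes(16);
   var iterations = 100000; // Tune so verification stays acceptably slow for an attacker.
   crypto.pbkdf2(secret, salt, iterations, 64, 'sha256', function (error, derivedKey) {
      if (error)
         return callback(error);
      callback(null, {
         salt: salt.toString('hex'),
         iterations: iterations,
         hash: derivedKey.toString('hex')
      });
   });
}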

bcrypt brute-forcing

With well ordered rainbow tables and hardware with high FPGA counts, brute-forcing bcrypt is now feasible.

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

With moving any secrets from source code to configuration files, there is a possibility that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the secrets are still in source control.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, etc. There is a risk that you do not use that potential power and flexibility to your best advantage. Learn the ins and outs of whatever system it is you are using and leverage its features to do the best job of obscuring your secrets and, if possible, securing them.

node-config

Is an excellent configuration package with lots of great features. There is no security provided with node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure surrounding layers have beefed up security.

Windows

As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around the Microsoft so-called security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

Bromium

Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

There is a risk that people will believe this.

Linux

As with Microsoft's “virtualisation-based security”, Linux containers may slow system compromise down, but a determined attacker will find other ways to get around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker.
Even when technical security is very good, an experienced attacker will use other mediums to gain what they want, like social engineering, physical compromise, both, or some other attack vector. Defence in depth is crucial in achieving good security: concentrate on the lowest hanging fruit first and work your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. It is probably not that hard to do, but it does require breadth of thought and time. Some of the areas discussed could be missed. Having more than one person working on the task is often effective, as people can bounce ideas off each other and are likely to notice areas the other may have missed, and vice versa.

Location

Segmentation is useful, and a common technique for helping to build resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but once implemented it could increase complacency/over-confidence. Always be on your guard. Always expect that, although you have done a lot to increase your security stance, a determined and experienced attacker is going to push buttons you may have never realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post compromise. Create processes (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways of obtaining the missing pieces of information they need. Think about paper shredders and the associated reconstruction competitions. With patience, most puzzles can be cracked. If the compromise is an opportunistic type of attack, the attacker will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs here that force us to focus on the adjacent layers. This is never a bad thing though. It helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.

Windows

You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developers job more cumbersome. Some of the options provided may be worthwhile doing.

Linux

Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Is something you should be at least considering and probably doing in every case. It is one of those considerations that is worthwhile applying to most layers.

Location

Segmenting resources is a common and effective measure for at least slowing down attacks, and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here go without saying, although many organisations do not do them well if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any data that is sensitive in your data-store(s).

TL-WN722N on Kali VM on Linux Host

September 3, 2015

The following is the process I found to set up pass-through of the very common USB TP-LINK TL-WN722N Wifi adapter (which is known to work well with Linux) to a virtualised Kali Linux 1.1.0 guest (same process for 2.0), bypassing the Linux Mint 17.1 (Rebecca) host.

Virtualisation

VirtualBox 4.3.18_r96516

Wifi adapter

TP-LINK TL-WN722N Version 1.10

  • chip-set: Atheros ar9271
  • Vendor ID: 0cf3
  • Product ID: 9271
  • Module (driver): ath9k_htc

TL-WN722N

Useful commands

  • iwconfig
  • ifconfig
  • sudo lshw -C network
  • iwlist scan
  • lsusb
  • dmesg | grep -e wlan -e ath9
  • contents of /var/log/syslog
  • lsmod
  • Release DHCP assigned IP. Similar to Windows ipconfig /release
    dhclient -r [interface-name]
  • Renew DHCP assigned IP. Similar to Windows ipconfig /renew
    dhclient [interface-name]

Why?

I want to be able to access the internet on my laptop at the same time that I’m penetration testing a client network. I use my phone as a wireless hot-spot to access the internet. The easiest way to do this is to use the laptop’s on-board wireless interface to connect to the phone’s wireless hot-spot and pass the USB Wifi adapter straight through to the guest.

Taking the following statement: “The preferred way to get Internet over wlan into a VM is to use the WLAN adapter on the host and using normal NAT for the VM. Passing USB WLAN adapters to the guest is almost untested.” from here, I like to think of it as more of a challenge than anything else. It is, however, something to keep in mind. If you’re prepared to persevere, you’ll get it working.

How

Reconnaissance

When you plug the Wifi adapter into your laptop and run lsusb, you should see a line that looks like:

ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n

The first four hex digits are the Vendor ID and the second four hex digits are the Product ID.

If you have a look from the bottom up of the /var/log/syslog file, you’ll see similar output to the following:

kernel: [ 98.212097] usb 2-2: USB disconnect, device number 3
kernel: [ 102.654780] usb 1-1: new high-speed USB device number 2 using ehci_hcd
kernel: [ 103.279004] usb 1-1: New USB device found, idVendor=0cf3, idProduct=7015
kernel: [ 103.279014] usb 1-1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
kernel: [ 103.279020] usb 1-1: Product: UB95
kernel: [ 103.279025] usb 1-1: Manufacturer: ATHEROS
kernel: [ 103.279030] usb 1-1: SerialNumber: 12345
kernel: [ 103.597849] usb 1-1: ath9k_htc: Transferred FW: htc_7010.fw, size: 72992
kernel: [ 104.596310] ath9k_htc 1-1:1.0: ath9k_htc: Target is unresponsive
kernel: [ 104.596328] Failed to initialize the device
kernel: [ 104.605694] ath9k_htc: probe of 1-1:1.0 failed with error -22

Provide USB privileges to guest

First of all you need to add the user that controls the guest to the vboxusers group on the host so that VMs can control USB devices, then log out of and back in to the host.

Provide USB recognition to guest

Install the particular VirtualBox Extension Pack on to the host as discussed here. These packs can be found here. If you have an older version of VirtualBox, you can find them here. Don’t forget to checksum the pack before you add the extension.

  1. apt-get update
  2. apt-get upgrade
  3. apt-get dist-upgrade
  4. apt-get install linux-headers-$(uname -r)
  5. Shutdown Linux guest OS
  6. Apply extension to VirtualBox in the host at: File -> Preferences -> Extensions

Blacklist Wifi Module on Host

Unload the ath9k_htc module so the change takes effect immediately, and blacklist it so that it doesn’t load on boot. The module needs to be blacklisted on the host in order for the guest to be able to load it. Now we need to check whether the module is loaded on the host with the following command:

lsmod | grep -e ath

We’re looking for ath9k_htc. If it is visible in the output produced by the previous command, unload it with the following command:

modprobe -r ath9k_htc

Now you’ll need to create a blacklist file in /etc/modprobe.d/. Create /etc/modprobe.d/blacklist-ath9k.conf and add the following text into it and save:

blacklist ath9k_htc

Now go into the settings of your VM -> USB -> and add a Device Filter. I name this tl-wn722n and add the Vendor and Product IDs we discovered with lsusb. Make sure the “Enable USB 2.0 (EHCI) Controller” option is enabled also.

USBDeviceFilter

Upgrade Driver on Guest

Start the VM.

Install the latest firmware-atheros package

On the guest, check to see which version of firmware-atheros is installed:

dpkg-query -l '*atheros*'

Will probably be 0.44kali whether you’re on Kali Linux 1.0.0 or 2.

aptitude show firmware-atheros

Will provide lots more information if you’re interested. So now we need to remove this old package:

apt-get remove --purge firmware-atheros

Add the jessie-backports (that’s Debian 8.0) repository to your /etc/apt/sources.list in the following form:

deb http://ftp.nz.debian.org/debian jessie-backports main contrib non-free

Change the country prefix to your country if you like and follow it up with an update:

apt-get update

Then install the later package from the new repository we just added:

apt-get install -t jessie-backports firmware-atheros

Now if you run the dpkg-query -l '*atheros*' command again, your package should be on version 0.44~bp8+1

Test

Plug your Wifi adapter into your laptop.

In the Devices menu of your guest -> USB Devices, you should be able to select the “ATHEROS USB2.0 WLAN” adapter.

Run dmesg | grep htc and you should see something similar to the following printed:

[ 4.648701] usb 2-1: ath9k_htc: Firmware htc_9271.fw requested
[ 4.648805] usbcore: registered new interface driver ath9k_htc
[ 4.649951] usb 2-1: firmware: direct-loading firmware htc_9271.fw
[ 4.966479] usb 2-1: ath9k_htc: Transferred FW: htc_9271.fw, size: 50980
[ 5.217395] ath9k_htc 2-1:1.0: ath9k_htc: HTC initialized with 33 credits
[ 5.860808] ath9k_htc 2-1:1.0: ath9k_htc: FW Version: 1.3

You should now be able to select the phone’s wireless hot-spot you want to connect to in network manager.

Additional Resources

  1. ath9k_htc Debian Module
  2. VirtualBox information around setting up the TL-WN722N
  3. TP-LINK TL-WN722N wiki
  4. Loading and unloading Linux Kernel Modules
  5. Kernel Module Blacklisting