Archive for the ‘GNU/Linux’ Category

Lack of Visibility in Web Applications

November 26, 2015

Risks

I see this as an indirect risk to the asset of web application ownership (the assumption being that you will always retain ownership and control of your web application).

Not being able to introspect your application at any given time, or to know what its health status is, is not a comfortable place to be, and there is no reason you should be there.

Insufficient Logging and Monitoring

[Risk rating: exploitability average, prevalence widespread, detectability very easy, impact moderate]

Can you tell at any point in time if someone or something is:

  • Using your application in a way that it was not intended to be used
  • Violating policy, for example circumventing client-side input sanitisation.

How easy is it for you to notice:

  • Poor performance and potential DoS?
  • Abnormal application behaviour or unexpected logic threads?
  • Logic edge cases and blind spots that stakeholders, Product Owners and Developers have missed?

Countermeasures

As Bruce Schneier said: “Detection works where prevention fails and detection is of no use without response”. This leads us to application logging.

With good visibility we should be able to see anticipated and unanticipated exploitation of vulnerabilities as they occur and also be able to go back and review the events.

Insufficient Logging

Prevention: AVERAGE

When it comes to logging in NodeJS, you can’t really go past winston. It has a lot of functionality and what it does not have is either provided by extensions, or you can create your own. It is fully featured, reliable and easy to configure like NLog in the .NET world.

I also looked at express-winston, but could not see why it needed to exist.

{
   ...
   "dependencies": {
      ...,
      "config": "^1.15.0",
      "express": "^4.13.3",
      "morgan": "^1.6.1",
      "//": "nodemailer not strictly necessary for this example,",
      "//": "but used later under the node-config section.",
      "nodemailer": "^1.4.0",
      "//": "What we use for logging.",
      "winston": "^1.0.1",
      "winston-email": "0.0.10",
      "winston-syslog-posix": "^0.1.5",
      ...
   }
}

winston-email also depends on nodemailer.

Opening UDP port

with winston-syslog seems to be what a lot of people are doing. I think that may be due to the fact that winston-syslog was the first package that worked well with winston and syslog.

If going this route, you will need the following in your /etc/rsyslog.conf:

# Load the UDP input module.
$ModLoad imudp
# Listen on all network addresses (the default) ...
#$UDPServerAddress 0.0.0.0
# ... or listen on localhost only.
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Or the equivalent new style (RainerScript) configuration:
#module(load="imudp")
#input(type="imudp" port="514" address="127.0.0.1")
# Logging for your app.
local0.* /var/log/yourapp.log

I also looked at winston-rsyslog2 and winston-syslogudp, but they did not measure up for me.

If you do not need to push syslog events to another machine, then it does not make much sense to push them through a local network interface when you can use posix syscalls, as they are faster and safer. The nmap output below shows the open port (514/udp open|filtered).

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0015s latency).
PORT STATE SERVICE REASON VERSION
514/udp open|filtered syslog no-response
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Using Posix

The winston-syslog-posix package was inspired by blargh. winston-syslog-posix uses node-posix.

If going this route, you will need the following in your /etc/rsyslog.conf instead of the above:

# Logging for your app.
local0.* /var/log/yourapp.log

Now you can see in the nmap output below that the syslog port is no longer open (514/udp closed):

root@kali:~# nmap -p514 -sU -sV <target IP> --reason

Starting Nmap 6.47 ( http://nmap.org )
Nmap scan report for kali (<target IP>)
Host is up, received arp-response (0.0014s latency).
PORT STATE SERVICE REASON VERSION
514/udp closed syslog port-unreach
MAC Address: 34:25:C9:96:AC:E0 (My Computer)

Logging configuration should not be in the application startup file. It should be in the configuration files. This is discussed further under the Store Configuration in Configuration files section.

Notice the syslog transport (syslogPosixTransportOptions) in the default.js configuration below.

module.exports = {
   logger: {
      colours: {
         debug: 'white',
         info: 'green',
         notice: 'blue',
         warning: 'yellow',
         error: 'yellow',
         crit: 'red',
         alert: 'red',
         emerg: 'red'
      },
      // Syslog compatible protocol severities.
      levels: {
         debug: 0,
         info: 1,
         notice: 2,
         warning: 3,
         error: 4,
         crit: 5,
         alert: 6,
         emerg: 7
      },
      consoleTransportOptions: {
         level: 'debug',
         handleExceptions: true,
         json: false,
         colorize: true
      },
      fileTransportOptions: {
         level: 'debug',
         filename: './yourapp.log',
         handleExceptions: true,
         json: true,
         maxsize: 5242880, //5MB
         maxFiles: 5,
         colorize: false
      },
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'debug',
         identity: 'yourapp_winston'
         //facility: 'local0' // default
            // /etc/rsyslog.conf also needs: local0.* /var/log/yourapp.log
            // If non posix syslog is used, then /etc/rsyslog.conf or one
            // of the files in /etc/rsyslog.d/ also needs the following
            // two settings:
            // $ModLoad imudp // Load the udp module.
            // $UDPServerRun 514 // Open the standard syslog port.
            // $UDPServerAddress 127.0.0.1 // Interface to bind to.
      },
      emailTransportOptions: {
         handleExceptions: true,
         level: 'crit',
         from: 'yourusername_alerts@fastmail.com',
         to: 'yourusername_alerts@fastmail.com',
         service: 'FastMail',
         auth: {
            user: "yourusername_alerts",
            pass: null // App specific password.
         },
         tags: ['yourapp']
      }
   }
}

In development I have chosen not to use syslog. You can see this in the devbox1-development.js override below, where syslogPosixTransportOptions is set to null. If you want to test syslog in development, you can either remove the logger object override from the devbox1-development.js file or modify it to be similar to the above, then add one line to the /etc/rsyslog.conf file to turn it on, as mentioned in the comment within syslogPosixTransportOptions in the default.js config file above.

module.exports = {
   logger: {
      syslogPosixTransportOptions: null
   }
}

In production we log to syslog, and because of that we do not need the file transport configured in the default.js configuration file above (fileTransportOptions), so we set it to null in the prodbox-production.js file below.

I have gone into more depth about how we handle syslogs here, where all of our logs, including these ones, get streamed to an off-site syslog server, thus providing easy aggregation of all system logs into one user interface that DevOps can watch on their monitoring panels in real-time, and also easily go back in time to visit past events. This provides excellent visibility as one layer of defence.

There were also some other options for those using Papertrail as their off-site syslog and aggregation PaaS, but the solutions were not as clean as simply logging to local syslog from your applications and then sending off-site from there.

module.exports = {
   logger: {
      consoleTransportOptions: {
         level: {},
      },
      fileTransportOptions: null,
      syslogPosixTransportOptions: {
         handleExceptions: true,
         level: 'info',
         identity: 'yourapp_winston'
      }
   }
}
// Build creates this file.
module.exports = {
   logger: {
      emailTransportOptions: {
         auth: {
            pass: 'Z-o?(7GnCQsnrx/!-G=LP]-ib' // App specific password.
         }
      }
   }
}

The logger.js file wraps and hides extra features and transports applied to the logging package we are consuming.

var winston = require('winston');
var loggerConfig = require('config').logger;
require('winston-syslog-posix').SyslogPosix;
require('winston-email').Email;

winston.emitErrs = true;

var logger = new winston.Logger({
   // Alternatively: set to winston.config.syslog.levels
   exitOnError: false,
   // Alternatively use winston.addColors(customColours); There are many ways
   // to do the same thing with winston
   colors: loggerConfig.colours,
   levels: loggerConfig.levels
});

// Add transports. There are plenty of options provided and you can add your own.

logger.addConsole = function(config) {
   logger.add (winston.transports.Console, config);
   return this;
};

logger.addFile = function(config) {
   logger.add (winston.transports.File, config);
   return this;
};

logger.addPosixSyslog = function(config) {
   logger.add (winston.transports.SyslogPosix, config);
   return this;
};

logger.addEmail = function(config) {
   logger.add (winston.transports.Email, config);
   return this;
};

logger.emailLoggerFailure = function (err /*level, msg, meta*/) {
   // If called with an error, then only the err param is supplied.
   // If not called with an error, level, msg and meta are supplied.
   if (err) logger.alert(
      JSON.stringify(
         'error-code:' + err.code + '. '
         + 'error-message:' + err.message + '. '
         + 'error-response:' + err.response + '. logger-level:'
         + err.transport.level + '. transport:' + err.transport.name
      )
   );
};

logger.init = function () {
   if (loggerConfig.fileTransportOptions)
      logger.addFile( loggerConfig.fileTransportOptions );
   if (loggerConfig.consoleTransportOptions)
      logger.addConsole( loggerConfig.consoleTransportOptions );
   if (loggerConfig.syslogPosixTransportOptions)
      logger.addPosixSyslog( loggerConfig.syslogPosixTransportOptions );
   if (loggerConfig.emailTransportOptions)
      logger.addEmail( loggerConfig.emailTransportOptions );
};

module.exports = logger;
module.exports.stream = {
   write: function (message, encoding) {
      logger.info(message);
   }
};

When the app first starts, it initialises the logger with logger.init(), as shown below.

//...
var express = require('express');
var morganLogger = require('morgan');
var logger = require('./util/logger'); // Or use requireFrom module so no relative paths.
var app = express();
//...
logger.init();
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
//...
// In order to utilise connect/express logger module in our third party logger,
// Pipe the messages through.
app.use(morganLogger('combined', {stream: logger.stream}));
//...
app.use(express.static(path.join(__dirname, 'public')));
//...
require('./routes')(app);

if ('development' == app.get('env')) {
   app.use(errorHandler({ dumpExceptions: true, showStack: true }));
   //...
}
if ('production' == app.get('env')) {
   app.use(errorHandler());
   //...
}

http.createServer(app).listen(app.get('port'), function(){
   logger.info(
      "Express server listening on port " + app.get('port') + ' in '
      + process.env.NODE_ENV + ' mode'
   );
});

  • You can also optionally log JSON metadata.
  • You can provide an optional callback to do any work required, which will be called once all transports have logged the specified message.

Here are some examples of how you can use the logger. The logger.log('<level>', ...) calls can be replaced with logger.<level>(...), where level is any of the levels defined in the default.js configuration file above:

// With string interpolation also.
logger.log('info', 'test message %s', 'my string');
logger.log('info', 'test message %d', 123);
logger.log('info', 'test message %j', {aPropertyName: 'Some message details'}, {});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'});
logger.log('info', 'test message %s, %s', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);
logger.log('info', 'test message', 'first', 'second', {aPropertyName: 'Some message details'}, logger.emailLoggerFailure);

Also consider hiding cross cutting concerns like logging using Aspect Oriented Programming (AOP), as sketched below.
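
A minimal sketch of the idea, using a plain higher-order function rather than a full AOP framework. withLogging and createQuote are illustrative names only; logger is the wrapper module from above:

// Wrap a function with logging advice rather than scattering logging
// statements through the business logic.
var logger = require('./util/logger');

function withLogging(name, fn) {
   return function () {
      logger.debug('entering ' + name);
      try {
         var result = fn.apply(this, arguments);
         logger.debug('leaving ' + name);
         return result;
      } catch (err) {
         logger.error(name + ' threw: ' + err.message);
         throw err;
      }
   };
}

// Usage: the calling code stays oblivious to the logging concern.
var createQuote = withLogging('createQuote', function (customer) {
   // ...business logic only...
   return { customer: customer, total: 0 };
});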

Insufficient Monitoring

Prevention: EASY

There are a couple of ways of approaching monitoring. You may want to see the health of your application even if it is all fine, or only to be notified if it is not fine (sometimes called the dark cockpit approach).

Monit is an excellent tool for the dark cockpit approach. It’s easy to configure, has excellent short documentation that is easy to understand, and its configuration file has lots of examples commented out, ready for you to take as is and modify to suit your environment. I’ve personally had excellent success with Monit.

 

Risks that Solution Causes

Lack of Visibility

With the added visibility, you will have to make decisions based on the newfound information you now have. There will be no more blissful ignorance, if there was before.

Insufficient Logging and Monitoring

There will be learning and work to be done to become familiar with libraries and tooling. Code will have to be written around logging, such as wrapping libraries, initialising the logger, and adding logging statements or hiding them using AOP.

 

Costs and Trade-offs

Insufficient Logging and Monitoring

You can do a lot for little cost here. I would rather trade off a few days’ work in order to have a really good logging system through the code base, one that is going to show you errors fast in development and then surface errors in the places your DevOps people need to see them in production.

The same goes for monitoring. Find a tool that is a pleasure to work with. There are just about always free and open source alternatives to every commercial tool. If you are working with a start-up or young business, the free and open source tools can be excellent for keeping ongoing costs down, especially mature tools that are also well maintained, like Monit.


Risks and Countermeasures to the Management of Application Secrets

September 17, 2015

Risks

  • Passwords and other secrets for things like data-stores, syslog servers, monitoring services, email accounts and so on can be useful to an attacker to compromise data-stores, obtain further secrets from email accounts, file servers, system logs, services being monitored, etc, and may even provide credentials to continue moving through the network compromising other machines.
  • Passwords and/or their hashes travelling over the network.

Data-store Compromise

Exploitability

The reason I’ve tagged this as moderate is because if you take the countermeasures, it doesn’t have to be a disaster.

There are many examples of this happening on a daily basis to millions of users. The Ashley Madison debacle is a good example. Ashley Madison’s entire business relied on its commitment to keep its clients’ data (37 million of them) secret, and to provide discretion and anonymity.

Before the breach, the company boasted about airtight data security but ironically, still proudly displays a graphic with the phrase “trusted security award” on its homepage

We worked hard to make a fully undetectable attack, then got in and found nothing to bypass…. Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the internet to VPN to root on all servers.

Any CEO who isn’t vigilantly protecting his or her company’s assets with systems designed to track user behavior and identify malicious activity is acting negligently and putting the entire organization at risk. And as we’ve seen in the case of Ashley Madison, leadership all the way up to the CEO may very well be forced out when security isn’t prioritized as a core tenet of an organization.

Dark Reading

Other notable data-store compromises were LinkedIn, with 6.5 million user accounts compromised and 95% of the users’ passwords cracked in days. Why so fast? Because they used simple hashing, specifically SHA-1. Then eBay, with 145 million active buyers. Many others come to light regularly.

Are you using well-salted, high-quality key derivation functions (KDFs) for all of your sensitive data? Are you making sure you are notifying your customers about using high quality passwords? Are you informing them what a high quality password is? Consider checking new user credentials against a list of the most frequently used and insecure passwords, as sketched below.
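
A minimal sketch of that check at registration time. common-passwords.txt is an assumed file containing one password per line, sourced from one of the publicly available most-used-password lists, and the minimum length is just a placeholder policy:

var fs = require('fs');

// Load the list once at start-up into a simple lookup table.
var commonPasswords = Object.create(null);
fs.readFileSync('./common-passwords.txt', 'utf8')
   .split('\n')
   .forEach(function (line) {
      commonPasswords[line.trim().toLowerCase()] = true;
   });

function isPasswordAcceptable(candidate) {
   // Encourage long passwords; do not cap the maximum length.
   if (candidate.length < 12) return false;
   // Reject anything that appears in the frequently used passwords list.
   if (commonPasswords[candidate.toLowerCase()]) return false;
   return true;
}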

Countermeasures

Secure password management within applications is a case of doing what you can, often relying on obscurity and leaning on other layers of defence to make compromise harder. Like many of the layers already discussed in my book.

Find out how secret the data being sent over the network (that is supposed to be secret) actually is, and consider your internal network to be just as hostile as the internet. Then you will be starting to get the idea of what defence in depth is about. That way, when one defence breaks down, you will still be in good standing.


You may read in many places that having data-store passwords and other types of secrets in configuration files in clear text is an insecurity that must be addressed. Then when it comes to mitigation, there seem to be a few techniques for helping, but most of them are based around obscuring the secret rather than securing it; essentially just making discovery a little more inconvenient, like SSHing on an alternative port rather than the default of 22. Perhaps surprisingly though, obscurity does significantly reduce the number of opportunistic attacks from bots and script kiddies.

Store Configuration in Configuration files

Prevention

Do not hard code passwords in source files for all developers to see. Doing so also means the code has to be patched when services are breached. At the very least, store them in configuration files and use different configuration files for different deployments and consider keeping them out of source control.

Here are some examples using the node-config module.

node-config

is a fully featured, well maintained configuration package that I have used on a good number of projects.

To install, from the command line within the root directory of your NodeJS application, run the following (note that the npm package for node-config is simply named config, matching the require('config') calls and the package.json entry above):

npm install config --save

Now you are ready to start using node-config. An example of the relevant section of an app.js file may look like the following:

var path = require('path');

// Due to a bug in node-config, the if statement is required before config is required:
// https://github.com/lorenwest/node-config/issues/202
if (process.env.NODE_ENV === 'production')
   process.env.NODE_CONFIG_DIR = path.join(__dirname, 'config');

Wherever you use node-config, in your routes for example:

var config = require('config');
var nodemailer = require('nodemailer');
var enquiriesEmail = config.enquiries.email;

// Setting up email transport.
var transporter = nodemailer.createTransport({
   service: config.enquiries.service,
   auth: {
      user: config.enquiries.user,
      pass: config.enquiries.pass // App specific password.
   }
});

A good collection of different formats can be used for the config files: .json, .json5, .hjson, .yaml, .js, .coffee, .cson, .properties, .toml

There is a specific file loading order which you specify by file naming convention, which provides a lot of flexibility and which caters for:

  • Having multiple instances of the same application running on the same machine
  • The use of short and full host names to mitigate machine naming collisions
  • The type of deployment. This can be anything you set the $NODE_ENV environment variable to for example: development, production, staging, whatever.
  • Using and creating config files which stay out of source control. These config files have a prefix of local. These files are to be managed by external configuration management tools, build scripts, etc. Thus providing even more flexibility about where your sensitive configuration values come from.

The config files for the required attributes used above may take the following directory structure:

OurApp/
|
+-- config/
| |
| +-- default.js (usually has the most in it)
| |
| +-- devbox1-development.js
| |
| +-- devbox2-development.js
| |
| +-- stagingbox-staging.js
| |
| +-- prodbox-production.js
| |
| +-- local.js (created by build)
|
+-- routes
| |
| +-- home.js
| |
| +-- ...
|
+-- app.js (entry point)
|
+-- ...

The contents of the above example configuration files may look like the following:

// config/default.js
module.exports = {
   enquiries: {
      // Supported services:
      // https://github.com/andris9/nodemailer-wellknown#supported-services
      // supported-services actually use the best security settings by default.
      // I tested this with a wire capture, because it is always the most fool proof way.
      service: 'FastMail',
      email: 'yourusername@fastmail.com',
      user: 'yourusername',
      pass: null
   }
   // Lots of other settings.
   // ...
}
// config/devbox1-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox1
      pass: 'D[6F9,4fM6?%2ULnirPVTk#Q*7Z+5n' // App specific password.
   }
}
// config/devbox2-development.js
module.exports = {
   enquiries: {
      // Test password for developer on devbox2
      pass: 'eUoxK=S9<,`@m0T1=^(EZ#61^5H;.H' // App specific password.
   }
}
// config/stagingbox-staging.js
module.exports = {}
// config/prodbox-production.js
module.exports = {}
// config/local.js - the build creates this file.
module.exports = {
   enquiries: {
      // Password created by the build.
      pass: '10lQu$4YC&x~)}lUF>3pm]Tk>@+{N]' // App specific password.
   }
}

node-config also:

  • Provides command line overrides, thus allowing you to override configuration values when the application starts, from the command line (via the NODE_CONFIG environment variable or the --NODE_CONFIG argument)
  • Allows configuration values to be overridden by custom environment variables, via mappings defined in a custom-environment-variables.json file (see the sketch below)
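
As a minimal sketch (the environment variable name ENQUIRIES_EMAIL_PASS is just an illustration), a config/custom-environment-variables.json file maps configuration paths to environment variables, so the secret itself never has to live in a file:

{
   "enquiries": {
      "pass": "ENQUIRIES_EMAIL_PASS"
   }
}

With that in place, starting the application with ENQUIRIES_EMAIL_PASS set in the environment populates config.enquiries.pass, overriding the values from the files above.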

Encrypting/decrypting credentials in code may provide some obscurity, but not much more than that. There are different answers for different platforms, none of which provide complete security (if there is such a thing); instead they focus on different levels of obscurity.

Windows

Store database credentials as a Local Security Authority (LSA) secret and create a DSN with the stored credential. Use a SqlServer connection string with Trusted_Connection=yes

The hashed credentials are stored in the SAM file and the registry. If an attacker has physical access to the storage, they can easily copy the hashes if the machine is not running or can be shut-down. The hashes can be sniffed from the wire in transit. The hashes can be pulled from the running machines memory (specifically the Local Security Authority Subsystem Service (LSASS.exe)) using tools such as Mimikatz, WCE, hashdump or fgdump. An attacker generally only needs the hash. Trusted tools like psexec take care of this for us. All discussed in my “0wn1ng The Web” presentation.

Encrypt sections of web, executable, machine-level or application-level configuration files with aspnet_regiis.exe, using the -pe option, the name of the configuration element to encrypt, and the configuration provider you want to use: either DataProtectionConfigurationProvider (uses DPAPI) or RSAProtectedConfigurationProvider (uses RSA). The -pd switch is used to decrypt; or, read the value programmatically and decryption happens transparently:

string connStr = ConfigurationManager.ConnectionStrings["MyDbConn1"].ConnectionString;

Of course there is a problem with this also. DPAPI uses LSASS, from whose memory an attacker can again extract the hash. If the RSAProtectedConfigurationProvider has been used, a key container is required. Mimikatz will force an export from the key container to a .pvk file, which can then be read using OpenSSL or tools from the Mono.Security assembly.

I have looked at a few other ways using PSCredential and SecureString. They all seem to rely on DPAPI which as mentioned uses LSASS which is open for exploitation.

Credential Guard and Device Guard leverage virtualisation-based security, by the look of it still using LSASS. Bromium have partnered with Microsoft and coined it Micro-virtualization. The idea is that every user task is isolated into its own micro-VM. There seems to be some confusion as to how this is any better: tasks still need to communicate outside of their VM, so what is to stop malicious code doing the same? I have seen lots of questions but no compelling answers yet. Credential Guard must run directly on physical hardware; it cannot run in virtual machines. This alone rules out many deployments.

Bromium vSentry transforms information and infrastructure protection with a revolutionary new architecture that isolates and defeats advanced threats targeting the endpoint through web, email and documents

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

This is marketing talk. Please don’t take this literally.

vSentry empowers users to access whatever information they need from any network, application or website, without risk to the enterprise

Traditional security solutions rely on detection and often fail to block targeted attacks which use unknown “zero day” exploits. Bromium uses hardware enforced isolation to stop even “undetectable” attacks without disrupting the user.

Bromium

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and
a joy to use

Bromium

These seem like bold claims.

Also worth considering is that Microsoft’s new virtualisation-based security also relies on UEFI Secure Boot, which has been proven insecure.

Linux

Containers also help to provide some form of isolation, allowing you to have only the user accounts needed to do what is necessary for the application.

I usually use a deployment tool that also changes the permissions and ownership of the files involved with the running web application to a single system user, so unprivileged users can not access the web applications files at all. The deployment script is executed over SSH in a remote shell. Only specific commands on the server are allowed to run and a very limited set of users have any sort of access to the machine. If you are using Linux Containers then you can reduce this even more if it is not already.

One of the beauties of GNU/Linux is that you can have as much or as little security as you decide. No one has made that decision for you already and locked you out of the source. You are not fed lies like all of the closed source OS vendors trying to pimp their latest money spinning product. GNU/Linux is a dirty little secret that requires no marketing hype. It just provides complete control if you want it. If you do not know what you want, then someone else will probably take that control from you. It is just a matter of time if it hasn’t happened already.

Least Privilege

Prevention

An application should have the least privileges possible in order to carry out what it needs to do. Consider creating accounts for each trust distinction. For example, where you only need to read from a data store, create that connection with a user’s credentials that is only allowed to read, and so on for other privileges. This way the attack surface is minimised, adhering to the principle of least privilege. Also consider removing table access completely from the application and only providing permissions for the application to execute stored procedures. This way, if/when an attacker is able to compromise the machine and retrieve the password for an action on the data-store, they will not be able to do a lot anyway. A sketch of the separate-accounts idea follows.
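
A minimal sketch of separate connections per trust level, assuming a MySQL data-store accessed via the mysql npm module; the user names and config keys here are illustrative only:

// Separate connections for separate trust levels.
// Hand the read-only connection to the code paths that only ever read.
var mysql = require('mysql');
var config = require('config');

// A data-store user that has been granted SELECT only.
var readOnlyDb = mysql.createConnection({
   host: config.db.host,
   user: config.db.readOnlyUser, // e.g. 'yourapp_ro' (illustrative)
   password: config.db.readOnlyPass,
   database: config.db.name
});

// A data-store user that may only execute the stored procedures the application needs.
var writeDb = mysql.createConnection({
   host: config.db.host,
   user: config.db.writeUser, // e.g. 'yourapp_rw' (illustrative)
   password: config.db.writePass,
   database: config.db.name
});

module.exports = { readOnlyDb: readOnlyDb, writeDb: writeDb };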

Location

Prevention

Put your services like data-stores on network segments that are as sheltered as possible and only contain similar services.

Maintain as few user accounts on the servers in question as possible, and with the fewest privileges possible.

Data-store Compromise

Prevention

As part of your defence in depth strategy, you should expect that your data-store is going to get stolen, but hope that it does not. What assets within the data-store are sensitive? How are you going to stop an attacker that has gained access to the data-store from making sense of the sensitive data?

As part of developing the application that uses the data-store, a strategy also needs to be developed and implemented to carry on business as usual when this happens. For example, when your detection mechanisms realise that someone unauthorised has been on the machine(s) that host your data-store, as well as the usual alerts being fired off to the people that are going to investigate and audit, your application should take some automatic measures like:

  • All following logins should be instructed to change passwords

If you follow the recommendations below, data-store theft will be an inconvenience, but not a disaster.

Consider what sensitive information you really need to store. Consider using the following key derivation functions (KDFs) for all sensitive data. Not just passwords. Also continue to remind your customers to always use unique passwords that are made up of alphanumeric, upper-case, lower-case and special characters. It is also worth considering pushing the use of high quality password vaults. Do not limit password lengths. Encourage long passwords.

PBKDF2, bcrypt and scrypt are KDFs that are designed to be slow, used in a process commonly known as key stretching. How long the key stretching takes can be tuned by increasing or decreasing the number of cycles used, often 1000 cycles or more for passwords. “The function used to protect stored credentials should balance attacker and defender verification. The defender needs an acceptable response time for verification of users’ credentials during peak use. However, the time required to map <credential> -> <protected form> must remain beyond threats’ hardware (GPU, FPGA) and technique (dictionary-based, brute force, etc) capabilities.”

OWASP Password Storage

PBKDF2, bcrypt and the newer scrypt apply a Pseudorandom Function (PRF) such as a cryptographic hash, cipher or HMAC to the data being received, along with a unique salt. The salt should be stored with the hashed data.

Do not use MD5, SHA-1 or the SHA-2 family of cryptographic one-way hashing functions by themselves for cryptographic purposes like hashing your sensitive data. In fact, do not use hashing functions at all for this unless they are leveraged within one of the mentioned KDFs. Why? Because hashing speed cannot be slowed as hardware continues to get faster. Many organisations that have had their data-stores stolen (and they continue to be, on a weekly basis) could have avoided their secrets being compromised simply by using a decent KDF with a salt and a decent number of iterations. “Using four AMD Radeon HD6990 graphics cards, I am able to make about 15.5 billion guesses per second using the SHA-1 algorithm.”

Per Thorsheim

In saying that, PBKDF2 can use MD5, SHA-1 and the SHA-2 family of hashing functions. Bcrypt uses the Blowfish (more specifically the Eksblowfish) cipher. Scrypt does not have user replaceable parts like PBKDF2. The PRF can not be changed from SHA-256 to something else.

Which KDF To Use?

This depends on many considerations. I am not going to tell you which is best, because there is no best; which to use depends on many things. You are going to have to gain an understanding of at least all three KDFs. PBKDF2 is the oldest, so it is the most battle tested, but there have also been lessons learnt from it that have been taken into the latter two. The next oldest is bcrypt, which uses the Eksblowfish cipher. Eksblowfish was designed specifically for bcrypt from the Blowfish cipher, to be very slow to initiate, thus boosting protection against dictionary attacks, which were often run on custom Application-specific Integrated Circuits (ASICs) with low gate counts, often found in the GPUs of the day (1999). The hashing functions that PBKDF2 uses were a lot easier to speed up due to ease of parallelisation, as opposed to the Eksblowfish cipher attributes: far greater memory required for each hash, and small, frequent pseudo-random memory accesses, making it harder to cache the data into faster memory. Now, with hardware utilising large Field-programmable Gate Arrays (FPGAs), bcrypt brute-forcing is becoming more accessible due to easily obtainable cheap hardware.

The sensitive data stored within a data-store should be the output of using one of the three key derivation functions we have just discussed, fed with the data you want protected and a salt. All good frameworks will have at least PBKDF2 and bcrypt APIs; a sketch using NodeJS’s built-in crypto module follows.
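
A minimal sketch, assuming a Node version where crypto.pbkdf2 accepts a digest parameter. The iteration count and key length are placeholders you would tune so that verification is as slow as you can tolerate at peak load:

var crypto = require('crypto');

// Illustrative parameters: tune the iteration count for your own hardware
// and acceptable response time.
var ITERATIONS = 100000;
var KEY_LENGTH = 64; // Bytes.
var DIGEST = 'sha512';

function hashSecret(secret, callback) {
   // A unique random salt per secret, stored alongside the derived key.
   crypto.randomBytes(16, function (err, salt) {
      if (err) return callback(err);
      crypto.pbkdf2(secret, salt, ITERATIONS, KEY_LENGTH, DIGEST, function (err, derivedKey) {
         if (err) return callback(err);
         callback(null, {
            salt: salt.toString('hex'),
            hash: derivedKey.toString('hex'),
            iterations: ITERATIONS
         });
      });
   });
}

function verifySecret(secret, stored, callback) {
   crypto.pbkdf2(secret, new Buffer(stored.salt, 'hex'), stored.iterations, KEY_LENGTH, DIGEST,
      function (err, derivedKey) {
         if (err) return callback(err);
         // Ideally use a constant time comparison here.
         callback(null, derivedKey.toString('hex') === stored.hash);
      });
}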

bcrypt brute-forcing

With well ordered rainbow tables and hardware with high FPGA counts, brute-forcing bcrypt is now feasible.

Risks that Solution Causes

Reliance on adjacent layers of defence means those layers have to actually be up to scratch. There is a possibility that they will not be.

Possibility of missing secrets being sent over the wire.

Possible reliance on obscurity with many of the strategies I have seen proposed. Just be aware that obscurity may slow an attacker down a little, but it will not stop them.

Store Configuration in Configuration files

With moving any secrets from source code to configuration files, there is a possibility that the secrets will not be changed at the same time. If they are not changed, then you have not really helped much, as the secrets are still in source control.

With good configuration tools like node-config, you are provided with plenty of options for splitting up meta-data, creating overrides, storing different parts in different places, etc. There is a risk that you do not use this potential power and flexibility to your best advantage. Learn the ins and outs of whatever system it is you are using and leverage its features to do the best job of obscuring your secrets and, if possible, securing them.

node-config

Is an excellent configuration package with lots of great features. There is no security provided with node-config, just some potential obscurity. Just be aware of that, and as discussed previously, make sure surrounding layers have beefed up security.

Windows

As is often the case with Microsoft solutions, their marketing often leads people to believe that they have secure solutions to problems when that is not the case. As discussed previously, there are plenty of ways to get around the Microsoft so-called security features. As with anything else in this space, they may provide some obscurity, but do not depend on them being secure.

Statements like the following have the potential for producing over confidence:

vSentry protects desktops without requiring patches or updates, defeating and automatically discarding all known and unknown malware, and eliminating the need for costly remediation.

Bromium

Please keep your systems patched and updated.

With Bromium micro-virtualization, we now have an answer: A desktop that is utterly secure and a joy to use

Bromium

There is a risk that people will believe this.

Linux

As with Microsoft’s “virtualisation-based security”, Linux containers may slow system compromise down, but a determined attacker will find other ways to get around container isolation. Maintaining a small set of user accounts is a worthwhile practice, but that alone will not be enough to stop a highly skilled and determined attacker from moving forward.
Even when technical security is very good, an experienced attacker will use other mediums to gain what they want, like social engineering, physical compromise, both, or some other attack vectors. Defence in depth is crucial in achieving good security: concentrate on the lowest hanging fruit first and work your way up the tree.

Locking file permissions and ownership down is good, but that alone will not save you.

Least Privilege

Applying least privilege to everything can take quite a bit of work. It is probably not that hard to do, but it does require a breadth of thought and time. Some of the areas discussed could be missed. Having more than one person working on the task is often effective, as each person can bounce ideas off the other, and the other person is likely to notice areas that you may have missed, and vice versa.

Location

Segmentation is useful, and a common technique for helping to build resistance against attacks. It does introduce some complexity though, and with complexity comes the added likelihood of introducing a fault.

Data-store Compromise

If you follow the advice in the countermeasures section, you will be doing more than most other organisations in this area. It is not hard, but once implemented it could increase complacency/over-confidence. Always be on your guard. Always expect that, although you have done a lot to improve your security stance, a determined and experienced attacker is going to push buttons you may have never realised you had. If they want something enough and have the resources and determination to get it, they probably will. This is where you need strategies in place to deal with post-compromise: create processes (ideally partly automated) to deal with theft.

Also consider that once an attacker has made off with your data-store, even if it is currently infeasible to brute-force the secrets, there may be other ways around obtaining the missing pieces of information they need. Think about the paper shredders and the associated competitions. With patience, most puzzles can be cracked. If the compromise is an opportunistic type of attack, they will most likely just give up and seek an easier target. If it is a targeted attack by determined and experienced attackers, they will probably try other attack vectors until they get what they want.

Do not let over confidence be your weakness. An attacker will search out the weak link. Do your best to remove weak links.

Costs and Trade-offs

There is potential for hidden costs here, as adjacent layers will need to be hardened. There could be trade-offs here that force us to focus on the adjacent layers. This is never a bad thing though. It helps us to step back and take a holistic view of our security.

Store Configuration in Configuration files

There should be little cost in moving secrets out of source code and into configuration files.

Windows

You will need to weigh up whether the effort to obfuscate secrets is worth it or not. It can also make the developers job more cumbersome. Some of the options provided may be worthwhile doing.

Linux

Containers have many other advantages and you may already be using them for making your deployment processes easier and less likely to have dependency issues. They also help with scaling and load balancing, so they have multiple benefits.

Least Privilege

Is something you should be at least considering, and probably doing, in every case. It is one of those considerations that is worthwhile applying to most layers.

Location

Segmenting of resources is a common and effective measure to take for at least slowing down attacks and a cost well worth considering if you have not already.

Data-store Compromise

The countermeasures discussed here go without saying, although many organisations do not do them well if at all. It is up to you whether you want to be one of the statistics that has all of their secrets revealed. Following the countermeasures here is something that just needs to be done if you have any data that is sensitive in your data-store(s).

TL-WN722N on Kali VM on Linux Host

September 3, 2015

The following is the process I found to set up pass-through of the very common USB TP-LINK TL-WN722N Wifi adapter (which is known to work well with Linux) to a virtualised Kali Linux 1.1.0 guest (same process for 2.0), bypassing the Linux Mint 17.1 (Rebecca) host.

Virtualisation

VirtualBox 4.3.18_r96516

Wifi adapter

TP-LINK TL-WN722N Version 1.10

  • chip-set: Atheros ar9271
  • Vendor ID: 0cf3
  • Product ID: 9271
  • Module (driver): ath9k_htc

TL-WN722N

Useful commands

  • iwconfig
  • ifconfig
  • sudo lshw -C network
  • iwlist scan
  • lsusb
  • dmesg | grep -e wlan -e ath9
  • contents of /var/log/syslog
  • lsmod
  • Release DHCP assigned IP. Similar to Windows ipconfig /release
    dhclient -r [interface-name]
  • Renew DHCP assigned IP. Similar to Windows ipconfig /renew
    dhclient [interface-name]

Why?

I want to be able to access the internet on my laptop at the same time that I’m penetration testing a client network. I use my phone as a wireless hot-spot to access the internet. The easiest way to do this is to use the laptop’s on-board wireless interface to connect to the phone’s wireless hot-spot and pass the USB Wifi adapter straight through to the guest.

Taking the following statement: “The preferred way to get Internet over wlan into a VM is to use the WLAN adapter on the host and using normal NAT for the VM. Passing USB WLAN adapters to the guest is almost untested.” from here, I like to think of it as more of a challenge than anything else. It is, however, something to keep in mind. If you’re prepared to persevere, you’ll get it working.

How

Reconnaissance

When you plug the Wifi adapter into your laptop and run lsusb, you should see a line that looks like:

ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n

The first four hex digits are the Vendor ID and the second four hex digits are the Product ID.

If you have a look from the bottom up of the /var/log/syslog file, you’ll see similar output to the following:

kernel: [ 98.212097] usb 2-2: USB disconnect, device number 3
kernel: [ 102.654780] usb 1-1: new high-speed USB device number 2 using ehci_hcd
kernel: [ 103.279004] usb 1-1: New USB device found, idVendor=0cf3, idProduct=7015
kernel: [ 103.279014] usb 1-1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
kernel: [ 103.279020] usb 1-1: Product: UB95
kernel: [ 103.279025] usb 1-1: Manufacturer: ATHEROS
kernel: [ 103.279030] usb 1-1: SerialNumber: 12345
kernel: [ 103.597849] usb 1-1: ath9k_htc: Transferred FW: htc_7010.fw, size: 72992
kernel: [ 104.596310] ath9k_htc 1-1:1.0: ath9k_htc: Target is unresponsive
kernel: [ 104.596328] Failed to initialize the device
kernel: [ 104.605694] ath9k_htc: probe of 1-1:1.0 failed with error -22

Provide USB privileges to guest

First of all, you need to add the user that controls the guest to the vboxusers group on the host so that VMs can control USB devices (for example with usermod -a -G vboxusers <username>), then log out of and back in to the host.

Provide USB recognition to guest

Install the particular VirtualBox Extension Pack on to the host as discussed here. These packs can be found here. If you have an older version of VirtualBox, you can find them here. Don’t forget to checksum the pack before you add the extension.

  1. apt-get update
  2. apt-get upgrade
  3. apt-get dist-upgrade
  4. apt-get install linux-headers-$(uname -r)
  5. Shutdown Linux guest OS
  6. Apply extension to VirtualBox in the host at: File -> Preferences -> Extensions

Blacklist Wifi Module on Host

Unload the ath9k_htc module so the change takes effect immediately, and blacklist it so that it doesn’t load on boot. The module needs to be blacklisted on the host in order for the guest to be able to load it. First, check to see if the module is loaded on the host with the following command:

lsmod | grep -e ath

We’re looking for ath9k_htc. If it is visible in the output produced from previous command, unload it with the following command:

modprobe -r ath9k_htc

Now you’ll need to create a blacklist file in /etc/modprobe.d/. Create /etc/modprobe.d/blacklist-ath9k.conf and add the following text into it and save:

blacklist ath9k_htc

Now go into the settings of your VM -> USB -> and add a Device Filter. I name this tl-wn722n and add the Vendor and Product IDs we discovered with lsusb. Make sure the “Enable USB 2.0 (EHCI) Controller” option is enabled also.

[Screenshot: VirtualBox USB Device Filter settings]

Upgrade Driver on Guest

Start the VM.

Install the latest firmware-atheros package

On the guest, check to see which version of firmware-atheros is installed:

dpkg-query -l '*atheros*'

Will probably be 0.44kali whether you’re on Kali Linux 1.0.0 or 2.

aptitude show firmware-atheros

Will provide lots more information if you’re interested. So now we need to remove this old package:

apt-get remove --purge firmware-atheros

Add the jessie-backports (that’s Debian 8.0) repository to your /etc/apt/sources.list in the following form:

deb http://ftp.nz.debian.org/debian jessie-backports main contrib non-free

Change the country prefix to your country if you like and follow it up with an update:

apt-get update

Then install the later package from the new repository we just added:

apt-get install -t jessie-backports firmware-atheros

Now if you run the dpkg-query -l '*atheros*' command again, your package should be on version 0.44~bp8+1

Test

Plug your Wifi adapter into your laptop.

In the Devices menu of your guest -> USB Devices, you should be able to select the “ATHEROS USB2.0 WLAN” adapter.

Run dmesg | grep htc and you should see something similar to the following printed:

[ 4.648701] usb 2-1: ath9k_htc: Firmware htc_9271.fw requested
[ 4.648805] usbcore: registered new interface driver ath9k_htc
[ 4.649951] usb 2-1: firmware: direct-loading firmware htc_9271.fw
[ 4.966479] usb 2-1: ath9k_htc: Transferred FW: htc_9271.fw, size: 50980
[ 5.217395] ath9k_htc 2-1:1.0: ath9k_htc: HTC initialized with 33 credits
[ 5.860808] ath9k_htc 2-1:1.0: ath9k_htc: FW Version: 1.3

You should now be able to select the phone’s wireless hot-spot you want to connect to in network manager.

Additional Resources

  1. ath9k_htc Debian Module
  2. VirtualBox information around setting up the TL-WN722N
  3. TP-LINK TL-WN722N wiki
  4. Loading and unloading Linux Kernel Modules
  5. Kernel Module Blacklisting

Keeping Your NodeJS Web App Running on Production Linux

June 27, 2015

All the following offerings that I’ve evaluated target different scenarios. I’ve listed the pros and cons for each of them and where I think they fit into a potential solution to monitor your web applications (I’m leaning toward NodeJS) and make sure they keep running. I’ve listed the goals I was looking to satisfy.

For me I have to have a good knowledge of the landscape before I commit to a decision and stand behind it. I like to know I’ve made the best decision based on all the facts that are publicly available. Therefore, as always, it’s my responsibility to make sure I’ve done my research in order to make an informed and ideally… best decision possible. I’m pretty sure my evaluation was un-biased, as I hadn’t used any of the offerings other than forever before.

I looked at quite a few more than what I’ve detailed below, but the following candidates I felt were worth spending some time on.

Keep in mind, that everyone’s requirements will be different, so rather than tell you which to use because I don’t know your situation, I’ve listed the attributes (positive, negative and neutral) that I think are worth considering when making this choice. After my evaluation I make some decisions and start the configuration.

Evaluation criterion

  1. Who is the creator? I favour teams rather than individuals, as individuals move on; then where does that leave the product?
  2. Does it do what we need it to do? Goals address this.
  3. Do I foresee any integration problems with other required components?
  4. Cost in money. Is it free? I usually gravitate toward free software. It’s usually an easier sell to clients and management. Are there catches once you get further down the road? Usually open source projects are marketed as is.
  5. Cost in time. Is the set-up painful?
  6. How well does it appear to be supported? What do the users say?
  7. Documentation. Is there any / much? What is its quality?
  8. Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have a valid reason)?
  9. Release schedule. How often are releases being made? When was the last release?
  10. Gut feeling, intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criterion.

Goals

  1. Application should start automatically on system boot
  2. Application should be re-started if it dies or becomes un-responsive
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy (Nginx, node-http-proxy, Tinyproxy, Squid, Varnish, etc)
    2. Clustering and providing load balancing for your single threaded application
    3. Visibility of application statistics.
  4. Enough documentation to feel comfortable consuming the offering
  5. The offering should be production ready. This means: mature with a security conscious architecture.

Sysvinit, Upstart, systemd & Runit

You’ll have one of these running on your Linux box.

These are system and service managers for Linux. Upstart and the later systemd were developed as replacements for the traditional init daemon (Sysvinit). They all depend on the init package, an essential package that pulls in the default init system. In Debian, starting with Jessie, systemd is your default system and service manager.

There’s some quite helpful info on the differences between Sysvinit and systemd here.

systemd

As I have systemd installed out of the box on my test machine (Debian Jessie), I’ll be using this for my set-up.

Documentation

  1. Well written comparison with Upstart, systemd, Runit and even Supervisor.

Running the likes of the below commands will provide some good details on how these packages interact with each other:

aptitude show sysvinit
aptitude show systemd
# and any others you think of

These system and service managers all run as PID 1 and start the rest of the system. Your Linux system will more than likely be using one of these to start tasks and services during boot, stop them during shutdown and supervise them while the system is running. Ideally you’re going to want to use something higher level to look after your NodeJS app. See the following candidates…

forever

and its web UI. forever can run any kind of script continuously (whether it is written in node.js or not). This wasn’t always the case though; it was originally targeted toward keeping NodeJS applications running.

Requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface area. Unless it’s essential, I’d rather do without NPM on a production server where we’re actively working to reduce the installed package count and disable everything else we can. I could install forever on a development box and then copy to the production server, but it starts to turn the simplicity of a node module into something not as simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

NPM Details

Does it Meet our Goals?

  1. Not without an extra script. Crontab or similar
  2. Application will be re-started if it dies, but if its response times go up, there’s not much forever is going to do about it. It has no way of knowing.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing
    3. Visibility of application statistics could be added later with the likes of Monit or something else, but if you used Monit, then there wouldn’t really be a need for forever, as Monit does the little that forever does and is capable of so much more, yet it is not pushy about what to do and how to do it. All the behaviour is defined with quite a nice syntax in a config file, or as many config files as you like.
  4. I think there is enough documentation to feel comfortable consuming it, as forever doesn’t do a lot, which doesn’t have to be a bad thing.
  5. The code itself is probably production ready, but I’ve heard quite a bit about stability issues. You’re also expected to have NPM installed (more attack surface) when we already have native package managers on the server(s).

Overall Thoughts

For me, I’m looking for a tool set that does a bit more. Forever doesn’t satisfy my requirements. There’s often a balancing act between not doing enough and doing too much.

PM2


Younger than forever, but seems to have quite a few more features and does actually look quite good. I’m not sure about production ready though?

As mentioned on the github page: “PM2 is a production process manager for Node.js applications with a built-in load balancer”. This “sounds”, and at an initial glance looks, shiny. Very quickly you should realise there are a few security issues you need to be aware of though.

The word “production” is used, but it requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface area. Unless it’s essential, and it shouldn’t be, I’d rather do without it on a production system. I could install PM2 on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not as simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

At the time of writing this, it’s less than a year old and in nodejs land, that means it’s very much in the immature realm. Do you really want to use something that young on a production server? I’d personally advise against it.

Yes, it’s very popular currently. That doesn’t tell me it’s ready for production though. It tells me the marketing is working.

Is your production server ready for PM2? That phrase alone tells me the mind-set behind the project. I’d much sooner see it the other way around. Is PM2 ready for my production server? You’re going to need a staging server for this, unless you honestly want development tools installed on your production server (git, build-essential, NVM and an unstable version of node 0.11.14 (at time of writing)) and run test scripts on your production server? Not for me or my clients thanks.

If you’ve considered the above concerns and can justify adding the additional attack surface area, check out the features if you haven’t already.

Features that Stood Out

They’re also listed on the github repository. Just beware of some of the caveats. Like for the load balancing: “we recommend the use of node#0.11.15+ or io.js#1.0.2+. We do not support node#0.10.* cluster module anymore!” 0.11.15 is unstable, but hang on, I thought PM2 was a “production” process manager? OK, so we’re happy to mix unstable in with something we label as production?

On top of NodeJS, PM2 will run the following scripts: bash, python, ruby, coffee, php, perl.

Start-up Script Generation

I’ve heard a few stories that this is fairly unreliable at the time of writing, which doesn’t surprise me, as the project is very young.

Documentation

  1. Advanced Readme

Does it Meet our Goals?

  1. The feature exists, although I’m unsure how reliable it currently is.
  2. Re-starting the application if it dies shouldn’t be a problem. PM2 can also restart your application if it reaches a certain memory threshold. I haven’t seen anything around restarting based on response times or other application health issues.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Clustering and load-balancing is integrated but immature.
    3. PM2 provides a small collection of viewable statistics. Personally I’d want more, but I don’t see any reason why you’d have to swap PM2 because of this.
  4. There is reasonable official documentation for the age of the project. The community supplied documentation will need to catch up a bit, although there is a bit of that too. After working through all of the offerings and edge-cases, I feel as I usually do with NodeJS projects. The documentation doesn’t cover all the edge-cases and the development itself misses edge cases. Hopefully with time it’ll get better though as the project does look promising.
  5. I haven’t seen much that would make me think PM2 is production ready. It’s not yet mature. I don’t agree with its architecture.

Overall Thoughts

For me, the architecture doesn’t seem to be heading in the right direction to be used on a production web server where less is better. I’d like to see this change. If it did, I think it could be a serious contender for this space.

 


The following are better suited to monitoring and managing your applications. Other than Passenger, they should all be in your repository, which means trivial installs and configurations.

Supervisor

Supervisord

Supervisor is a process manager with a lot of features and a higher level of abstraction than the likes of Sysvinit, Upstart, systemd, Runit, etc, so it still needs to be run by an init daemon itself.

From the docs: “It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.” Supervisor monitors the state of processes, whereas a tool like Monit can perform many more types of tests and take whatever actions you define.

It’s in the Debian repositories  (trivial install on Debian and derivatives).

Documentation

  1. Main web site
  2. There’s a good short comparison here.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot: Yip. That’s what Supervisor does well.
  2. Application will be re-started if it dies or becomes unresponsive. It's often difficult to get accurate up/down status on processes on UNIX; pidfiles often lie. Supervisord starts processes as subprocesses, so it always knows the true up/down status of its children. Your application may also become unresponsive, or lose its connection to its database or any other service/resource it needs in order to work as expected. To be able to monitor these events and respond accordingly, your application can expose a health-check interface, like GET /healthcheck. If everything goes well it should return HTTP 200, if not then HTTP 5**. In some cases restarting the process will solve the issue. httpok is a Supervisor event listener which makes GET requests to the configured URL; if the check fails or times out, httpok will restart the process. To enable httpok, a few lines have to be placed in supervisord.conf (see the sketch after this list).
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. This would be completely separate to supervisor.
    3. Visibility of application statistics could be added later with the likes of Monit or something else. For me, Supervisor doesn't do enough here; Monit does. If you need what Monit offers, you end up with three packages to think about, because Supervisor is not an init system, so it sits in the middle of the stack. My way of thinking is: use the init system you already have to do the low-level lifting, then something small to take care of everything else on your server that the init system is not really designed for, and Monit does this job really well. Keep in mind this is not based on any bias; I hadn't used Monit before this exercise.
  4. Supervisor is a mature product. It’s been around since 2004 and is still actively developed. The official and community provided docs are good.
  5. Yes it’s production ready. It’s proven itself.
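For the httpok piece mentioned in point 2, here's a minimal sketch of what the supervisord.conf additions could look like. The program name my-nodejs-app, the port 3000 and the /healthcheck path are placeholders of mine, and httpok itself ships separately in the superlance package, so treat this as an illustration rather than a drop-in config:

[program:my-nodejs-app]
command=/usr/bin/nodejs /srv/my-nodejs-app/index.js
autostart=true
autorestart=true

[eventlistener:httpok]
; restart my-nodejs-app if GET http://localhost:3000/healthcheck fails or times out
command=httpok -p my-nodejs-app http://localhost:3000/healthcheck
events=TICK_60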

 

Overall Thoughts

The documentation is quite good, easy to read and understand, and I felt the config was quite intuitive too. I already had systemd installed out of the box and didn't see much point in installing Supervisor, as systemd appeared to do everything Supervisor could do, plus systemd is an init system (it sits at the bottom of the stack). In most scenarios you're going to have Sysvinit or a replacement of it (running with a PID of 1), so in many cases Supervisor, although it's quite nice, is kind of redundant; and of course Ubuntu has Upstart.

Supervisor is better suited to running multiple scripts with the same runtime, for example a bunch of different client applications running on Node. This can be done with systemd and the others, but Supervisor is a better fit for this sort of thing.

 

Monit

monit

Is a utility for monitoring and managing daemons or similar programs. It’s mature, actively maintained, free, open source and licensed with GNU AGPL.

It's in the Debian repositories (trivial install on Debian and derivatives). The home page told me the binary was just under 500kB. The install, however, reported a different number:

After this operation, 765 kB of additional disk space will be used.

Monit provides an impressive feature set for such a small package.

Monit provides far more visibility into the state of your application, and more control, than any of the offerings mentioned above. It's also generic: it'll manage and/or monitor anything you throw at it, and it has the right level of abstraction. Often when you start working with a product you find its limitations, and they stop you moving forward; you end up settling for imperfection, or swapping the offering for something else, providing you haven't already invested too much effort into it. For me, Monit hit the sweet spot and never seemed to stop me in my tracks. There always seems to be an easy, or at least relatively easy, way to get any monitor-then-take-action sort of task done. What I also really like is that moving away from Monit should be relatively painless too. The time investment is small and some of it will be transferable in many cases; it's just config in the control file.

Features that Stood Out

  • Ability to monitor files, directories, disks, processes, the system and other hosts.
  • Can perform emergency logrotates if a log file suddenly grows too large too fast
  • File checksum testing (see the sketch after this list). This is good so long as the compromised server hasn't also had the tool you're using to perform the verification (md5sum or sha1sum) modified, which would be common. That's why, in cases like this, tools such as Stealth can be a good choice.
  • Testing of other attributes like ownership and access permissions. These are good, but again can easily be modified.
  • Monitoring directories using time-stamps. Good idea, but don't rely solely on this: time-stamps are easily modified with touch -r …, providing you do it between Monit's cycles, and you don't necessarily know when those are unless you have permission to look at Monit's control file.
  • Monitoring space of file-systems
  • Has a built-in lightweight HTTP(S) interface you can use to browse the Monit server and check the status of all monitored services. From the web-interface you can start, stop and restart processes and disable or enable monitoring of services. Monit provides fine grained control over who/what can access the web interface or whether it’s even active or not. Again an excellent feature that you can choose to use or not even have the extra attack surface.
  • There's also an aggregator (M/Monit) that allows sys-admins to monitor and manage many hosts at a time. It also works well on mobile devices and is available for a reasonable one-off cost to monitor all hosts.
  • Once you install Monit, you have to actively enable the HTTP daemon in the monitrc in order to run the Monit CLI and/or access the Monit HTTP web UI. At first I thought “is this broken?” I couldn't even run monit status (a Monit CLI command). ps told me Monit was running. Then I realised… it's secure by default. You have to actually think about it in order to expose anything. It was this that confirmed Monit for me.
  • The Control File
  • Just like SSH, to protect the security of your control file and passwords the control file must have read-write permissions no more than 0700 (u=xrw,g=,o=); Monit will complain and exit otherwise.
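To give a feel for the checksum and file-system tests mentioned above, here's a minimal sketch of what the monitrc entries could look like. The paths and the 80% threshold are placeholders of my own choosing, not recommendations:

# alert if the root file-system fills up
check filesystem rootfs with path /
   if space usage > 80% then alert

# alert if the sshd binary's checksum changes (remember the caveat above about
# trusting the hashing tool on a compromised host)
check file sshd_binary with path /usr/sbin/sshd
   if failed checksum then alert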

Documentation

The following is the documentation I used, in the order I found most helpful:

  1. Main web site
  2. Official Documentation
  3. Source and links to other documentation including a QUICK START guide of about 6 lines.
  4. Adding Monit to systemd
  5. Release notes

Does it Meet our Goals?

  1. Application can start automatically on system boot
  2. Monit has a plethora of different types of tests it can perform and then follow up with actions based on the outcomes. Http is but one of them.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Yes, I don’t see any issues here
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. Monit will still monitor, restart and do what ever else you tell it to do.
    3. Monit provides application statistics to look at if that’s what you want, but it also goes further and provides directives for you to declare behaviour based on conditions that Monit checks for.
  4. Plenty of official and community supplied documentation
  5. Yes, it's production ready. It's proven itself. Some extra education around some of the points I raised above concerning the security features would be good. If you could trust the host's hashing programme (and other commonly trojanised programmes like find, ls, etc) that Monit uses, perhaps because you were monitoring it from a Stealth controller (which had already taken a known good copy and produced its own bench-mark hash) or similar, then yes, you could use that feature of Monit with greater assurance that the results it was producing were in fact accurate. In saying that, you don't have to use the feature, but it's there if you want it, which I see as very positive so long as you understand what could go wrong and where.

 

Overall Thoughts

The accepted answer here is a pretty good mix and approach to using the right tool for each job. Monit has a lot of capabilities, none of which you must use, so it doesn't get in your way the way many opinionated tools do, dictating how you do things and what you must use in order to do them. Monit allows you to leverage whatever you already have in your stack. You don't have to install package managers or increase your attack surface beyond an [apt-get|aptitude] install monit. It's easy to configure and has lots of good documentation.

Passenger

Passenger

I've looked at Passenger before and it looked quite good then. It still does, with one main caveat: it's trying to do too much. One can easily get lost in the official documentation (compare the Monit install, a handful of commands covering all Linux distros on one page, with the Passenger install, roughly 10 pages). “Passenger is a web server and application server, designed to be fast, robust and lightweight. It runs your web apps with the least amount of hassle by taking care of almost all administrative heavy lifting for you.” I'd like to see the actual weight rather than just the relative term “lightweight”; to me it doesn't look lightweight. The feeling I got when evaluating Passenger was similar to the feeling produced by my Ossec evaluation.

The learning curve is quite a bit steeper than all the previous offerings. Passenger has strong opinions that once you buy into could make it hard to use the tools you may want to swap in and out. I’m not seeing the UNIX Philosophy here.

If you look at the Phusion Passenger Philosophy, we see some noteworthy comments. “We believe no good software has bad documentation“. If your software is 100% intuitive, the need for documentation should be minimal. Few software products are 100% intuitive, because we only have so much time to develop them. The comment around “the Unix way” is interesting also. At this stage I'm not sure they've done better. I'd like to spend some time with someone or some team that has Passenger in production in a diverse environment and see how things are working out.

Passenger isn’t in the Debian repositories, so you would need to add the apt repository.

Passenger is six years old at the time of writing this, but the NodeJS support is only just over a year old.

Features that Didn't Really Stand Out

Sadly there weren’t many that stood out for me.

  • “Handle more traffic” looked similar to Monit's resource testing but without the detail. If there's something Monit can't do well, its attitude is effectively “hey, use this other tool and I'll help you configure it to suit the way you want to work; if you don't like it, swap it out for something else”. Passenger, by contrast, seems to integrate into everything rather than providing tools that communicate loosely, essentially locking you into a way of doing something that you hopefully like. It also talks about “Uses all available CPU cores“. If you're using Monit, you can use the NodeJS cluster module to take care of that, again leaving the best tool for the job to do what it does best.
  • Reduce maintenance
    • Keep your app running, even when it crashes: “Phusion Passenger supervises your application processes, restarting them when necessary. That way, your application will keep running, ensuring that your website stays up. Because this is automatic and builtin, you do not have to setup separate supervision systems like Monit, saving you time and effort.” But this is what Monit excels at, and it's a much easier set-up than Passenger. This sort of marketing doesn't sit right with me.
    • Host multiple apps at once: “Host multiple apps on a single server with minimal effort.” If we're talking NodeJS web apps, then they are their own server; they host themselves. In this case it looks like Passenger is trying to solve a problem that doesn't exist.
  • Improve security
    • Privilege separation: “If you host multiple apps on the same system, then you can easily run each app as a different Unix user, thereby separating privileges.“ The Monit documentation says this: “If Monit is run as the super user, you can optionally run the program as a different user and/or group.” and goes on to provide examples of how it's done. So again I don't see anything new here. Other than the “Slow client protections”, which have side effects, that's it for security considerations with Passenger. From what I've seen, Monit has more in the way of security-related features.
  • What I saw happening here was a lot of stuff that I actually didn’t need. Your mileage may vary.

Offerings

Phusion Passenger is a commercial product that has enterprise, custom and open source offerings (the open source version is free and still has loads of features).

Documentation

The following is the documentation I used, in the order I found most helpful:

  1. NodeJS tutorial (This got me started with how it could work with NodeJS)
  2. Main web site
  3. Documentation and support portal
  4. Design and Architecture
  5. User Guide Index
  6. Nginx specific User Guide
  7. Standalone User Guide
  8. Twitter, blog
  9. IRC: #passenger at irc.freenode.net. I was on there for several days. There was very little activity.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot. There is no doubt that Passenger goes way beyond this aim.
  2. Application should be re-started if it dies or becomes un-responsive. There is no doubt that Passenger goes way beyond this aim.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Passenger provides integrations with Nginx, Apache and a standalone mode (provide your own proxy)
    2. Passenger scales up NodeJS processes and automatically load balances between them
    3. Passenger is advertised as offering easily viewable statistics.
  4. There is loads of official documentation. Not as much community contributed though, as it’s still young.
  5. From what I’ve seen so far, I’d say Passenger is production ready. I would like to see more around how security was baked into the architecture though before I committed to using it.

Overall Thoughts

I spent quite a while reading the documentation. I just think it's doing too much. I prefer strong, single-focused tools that do one job, do it well and play nicely with all the other kids in the sand pit. You pick the tool up, it's intuitive to use, and you end up reading the docs only to confirm how you think it should work. For me, this is not how Passenger is.

If you’re looking for something even more comprehensive, check out Zabbix. If you like to pay for your tools, check out Nagios if you haven’t already.


At this point it was fairly clear which components I'd be using and configuring to keep my NodeJS application (along with any other scripts or processes) monitored, alive and healthy: systemd and Monit. If you're on Ubuntu, you'd probably use Upstart instead of systemd, as it should already be your default init system. Going with the default init system should give you a quick start and provide plenty of power; plus it's well supported, reliable, feature-rich, and you can manage anything/everything you want without installing extra packages. For the next level up, I'd choose Monit. I've now used it in production and it's taken care of everything above the init system. I feel it has a good level of abstraction, plenty of features, doesn't get in the way and integrates nicely into your production OS.

Getting Started with Monit

So we’ve installed Monit with an apt-get install monit and we’re ready to start configuring it.

ps aux | grep -i monit

Will reveal that Monit is running:

/usr/bin/monit -c /etc/monit/monitrc

Now if you issue a sudo service monit restart, it won’t work as you can’t access the Monit CLI due to the httpd not running.

The first thing we need to do is make some changes to the control file (/etc/monit/monitrc in Debian). The control file has sensible defaults already. At this stage I don’t need a web UI accessible via localhost or any other hosts, but it still needs to be turned on and accessible by at least localhost. Here’s why:

Note that HTTP support is required for almost all Monit CLI interface operations, as CLI commands (such as “monit status”) are handled by communicating with the Monit background process via the HTTP interface. So basically you should have this enabled, though you can bind the HTTP interface to localhost only so Monit is not accessible from the outside.

In order to turn on the httpd, all you need in your control file for that is:

set httpd port 2812 and
    use address localhost  # only accept connection from localhost
    allow localhost        # allow localhost to connect to the server

If you want to receive alerts via email, then you'll need to configure that too (see the sketch below). Then on reload you should get a start event, and a stop event when you quit.
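A minimal sketch of the email side of the control file; the SMTP host, credentials and address below are placeholders, so adjust to your own mail set-up:

# where Monit should deliver mail
set mailserver smtp.your.domain port 587
    username "monit" password "your-secret"
# who should receive the alerts
set alert your-name@your.domain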

sudo monit reload

Now if you issue a curl localhost:2812 you should get the web UI's response of an HTML page, and you can start to play with the Monit CLI.
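A couple of CLI commands to start with (both are standard Monit commands):

monit status   # detailed status of each monitored service
monit summary  # a one-line summary per service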

Now to stop the Monit background process use:

monit quit

Oh, you can find all the arguments you can throw at Monit here, or just issue a:

monit -h # will list all options.

To check the control file for syntax errors:

sudo monit -t

Also keep an eye on your log file which is specified in the control file: set logfile /var/log/monit.log

Right. So what happens when Monit dies…

Keep Monit Alive

Now you're going to want to make sure that your monitoring tool, which can be configured to take all sorts of actions, never just stops running and leaves you flying blind. No noise from your servers means all good, right? Not necessarily. Your monitoring tool has to keep running. So let's make sure of that now.

When Monit is apt-get install‘ed on Debian, it gets configured to run as a daemon. This is defined in Monit's init script.
Monit's init script is copied to /etc/init.d/ and the run levels are set up for it. This means that whenever a run level is entered, the init script will be run, taking either the single argument stop (example: /etc/rc0.d/K01monit) or start (example: /etc/rc2.d/S17monit). Further details on run levels here.

systemd to the rescue

Monit is pretty stable, but if for some reason it dies, then it won’t be automatically restarted again.
This is where systemd comes in. systemd is installed out of the box on Debian Jessie onwards. Ubuntu uses Upstart, which is similar. systemd can act as a drop-in replacement for SysV init, and the two can work alongside each other, which is the case in Debian Jessie. If you add a unit file which describes the properties of the process that you want to run, then issue some magic commands, the systemd unit file will take precedence over the init script (/etc/init.d/monit).

Before we get started, lets get some terminology established. The two concepts in systemd we need to know about are unit and target.

  1. A unit is a configuration file that describes the properties of the process that you'd like to run. There are many examples of these, and I'll point you in their direction soon. They should have a [Unit] section at a minimum. The syntax of the unit files and the target files was derived from Microsoft Windows .ini files. The idea is that if your unit file has a [Service] section, then you append .service to the end of your unit file name.
  2. A target is a grouping mechanism that allows systemd to start up groups of processes at the same time. This happens at every boot as processes are started at different run levels.

Now in Debian there are two places that systemd looks for unit files… In order from lowest to highest precedence, they are as follows:

  1. /lib/systemd/system/ (/usr/lib/systemd/system/ on Arch Linux): unit files provided by installed packages. Have a look in here for many existing examples of unit files.
  2. /etc/systemd/system/: unit files created by the system administrator

As mentioned above, systemd should be the first process started on your Linux server. systemd reads the different targets and runs the scripts within the specific target's “target.wants” directory (which just contains a collection of symbolic links to the unit files). For example, the target file we'll be working with is the multi-user.target file (actually we don't touch it; systemctl does that for us, as per the magic commands mentioned above). Just as systemd has two locations in which it looks for unit files, I think this is probably the same for the target files, although there weren't any target files in the system-administrator-defined unit location, just some target.wants directories.

systemd Monit Unit file

I found a template that Monit had already provided for a unit file in /usr/share/doc/monit/examples/monit.service. There's also one for Upstart. Copy that to where the system administrator unit files should go (/etc/systemd/system/) and make the change so that systemd restarts Monit if it dies for whatever reason. Check the Restart= options on the systemd.service man page. The following is what my initial unit file looked like:

[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/monit -I -c /etc/monit/monitrc
ExecStop=/usr/bin/monit -c /etc/monit/monitrc quit
ExecReload=/usr/bin/monit -c /etc/monit/monitrc reload
Restart=always

[Install]
WantedBy=multi-user.target

Now, some explanation. Most of this is pretty obvious. The After= directive just tells systemd to make sure the network.target has been acted on first; network.target in turn has After=network-pre.target, which doesn't have a lot in it. I'm not going to go into this now, as I don't really care too much about it. It works: it means the network interfaces have to be up first. If you want to know how and why, check this documentation. Type=simple: again, check the systemd.service man page.
Now, to have systemd control Monit, Monit must not run as a background process (the default). To do this, we can either add the set init statement to Monit's control file or add the -I option when running Monit, which is exactly what we've done in the ExecStart= above. The WantedBy= is the target that this specific unit is part of.

Now we need to tell systemd to create the symlinks in multi-user.target.wants directory and other things. See the man page for more details about what enable actually does if you want them. You’ll also need to start the unit.

Now what I like to do here is:

systemctl status /etc/systemd/system/monit.service

Then compare this output once we enable the service:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; disabled)
   Active: inactive (dead)
sudo systemctl enable /etc/systemd/system/monit.service
# systemd now knows about monit.service
systemctl status /etc/systemd/system/monit.service

Outputs:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; enabled)
   Active: inactive (dead)

Now start the service:

sudo systemctl start monit.service # there's a stop and restart also.

Now you can check the status of your Monit service again. This shows terse runtime information about the units or PID you specify (monit.service in our case).

sudo systemctl status monit.service

By default this command will show you 10 lines of output. The number of lines can be controlled with the --lines= option:

sudo systemctl --lines=20 status monit.service

Now try killing the Monit process. At the same time, you can watch the output of Monit in another terminal. tmux or screen is helpful for this:

sudo tail -f /var/log/monit.log
sudo kill -SIGTERM $(pidof monit)
# SIGTERM is a safe kill and is the default, so you don't actually need to specify it. Be patient, this may take a minute or two for the Monit process to terminate.

Or you can emulate a nastier termination with SIGKILL or even SEGV (which may kill monit faster).

Now when you run another status command you should see the PID has changed. This is because systemd has restarted Monit.

When you need to make modifications to the unit file, you’ll need to run the following command after save:

sudo systemctl daemon-reload

When you need to make modifications to the running service's configuration file, /etc/monit/monitrc for example, you'll need to run the following command after saving:

sudo systemctl reload monit.service
# because systemd is now in control of Monit, rather than the before mentioned: sudo monit reload

 

Keep NodeJS Application Alive

Right, we know systemd is always going to be running, so let's use it to take care of the coarse-grained service control; that is, keeping your NodeJS application service alive.

Using systemd

systemd my-nodejs-app.service Unit file

You’ll need to know where your NodeJS binary is. The following will provide the path:

which nodejs # on Debian the binary is called nodejs; on other installs it may be node

Now create a systemd unit file my-nodejs-app.service

[Unit]
Description=My amazing NodeJS application
After=network.target

[Service]
# systemctl start my-nodejs-app # to start the NodeJS script
ExecStart=[where nodejs binary lives] [where your app.js/index.js lives]
# systemctl stop my-nodejs-app # to stop the NodeJS script
# SIGTERM (15) - Termination signal. This is the default and safest way to kill process.
# SIGKILL (9) - Kill signal. Use SIGKILL as a last resort to kill process. This will not save data or cleaning kill the process.
ExecStop=/bin/kill -SIGTERM $MAINPID
# systemctl reload my-nodejs-app # to perform a zero-downtime restart.
# SIGHUP (1) - Hangup detected on controlling terminal or death of controlling process. Use SIGHUP to reload configuration files and open/close log files.
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=my-nodejs-app
User=my-nodejs-app
# Group= is not really needed unless it's different, as the user's default group is
# used without this option. Self-documenting though, so I like to have it present.
# (systemd only treats # at the start of a line as a comment, so keep comments on their own lines.)
Group=my-nodejs-app
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

Add the system user and group so systemd can actually run your service as the user you’ve specified.

sudo groupadd --system my-nodejs-app # this is not needed if you adduser like below...
getent group # to verify which groups exist.
sudo adduser --system --no-create-home --group my-nodejs-app # This will create a system group with the same name and ID of the user.
groups my-nodejs-app # to verify which groups the new user is in.

Now as we did above, go through the same procedure enable‘ing, start‘ing and verifying your new service.
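In other words, something like the following, assuming you saved the unit file as /etc/systemd/system/my-nodejs-app.service:

sudo systemctl enable /etc/systemd/system/my-nodejs-app.service
sudo systemctl start my-nodejs-app.service
sudo systemctl status my-nodejs-app.service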

Make sure you have your directory permissions set-up correctly and you should have a running NodeJS application that when it dies will be restarted automatically by systemd.

Don’t forget to backup all your new files and changes in case something happens to your server.

We’re done with systemd for now. Following are some useful resources I’ve used:

 

Using Monit

Now just configure your Monit control file. You can spend a lot of time here tweaking a lot more than just your NodeJS application. There are loads of examples around and the control file itself has lots of commented out examples also. You’ll find the following the most helpful:

There are a few things that had me stuck for a bit. By default Monit only sends alerts on change, not on every cycle while the condition stays the same. If you want the latter, then where you set up your alert target:

set alert your-name@your.domain

Append receive all alerts, so that it looks like this:

set alert your-name@your.domain receive all alerts

There are quite a few things you just work out as you go. The main part I used to health-check my NodeJS app was:

check host myhost with address 1.2.3.4
   start program = "/bin/systemctl start my-nodejs-app.service"
   stop program = "/bin/systemctl stop my-nodejs-app.service"
   if failed ping then alert
   if failed
      port 80 and
      protocol http and
      status = 200 # The default without status is failure if status code >= 400
      request /testdir with content = "some text on my web page" and
         then restart
   if 5 restarts within 5 cycles then alert

I carry on and check things like:

  1. cpu and memory usage
  2. load averages
  3. File system space on all the mount points
  4. That SSH hasn't been restarted by anything other than Monit (which could indicate the binary or its config has been swapped). Of course, if an attacker kills Monit, systemd immediately restarts it and we get Monit alert(s). We also get real-time logging, hopefully to an off-site syslog server. Ideally your off-site syslog server also has alerts set up on particular log events. On top of that, you should have inactivity alerts set up, so that if your log files are not generating the events you expect, you also receive alerts. Services like Dead Man's Snitch, or packages like Simple Event Correlator with Cron, are good for this. On top of all that, if you have a file integrity checker that resides on another system (about which your host reveals no details) and you've got it configured to check all the right file check-sums, dates, permissions, etc, you're removing a lot of low-hanging fruit for someone wanting to compromise your system.
  5. Directory permissions, uid, gid and checksums. Of course you’re also going to have to make sure the tools that Monit uses to do these checks haven’t been modified.

 

If you find anything I haven’t explained clearly, or you need a hand with any of this just leave a comment. Cheers.

Evaluation of Host Intrusion Detection Systems (HIDS)

May 30, 2015

Followed up with a test deployment and drive.

The best time to install a HIDS is on a fresh install before you open the host up to the internet or even your LAN if it’s corporate. Of course if you don’t have that luxury, there are a bunch of tools that can help you determine if you’re already owned. Be sure to run one or more over your target system before your HIDS bench-marks it.

The reason I chose Stealth and OSSEC to take further into an evaluation was because they rose to the top of a preliminary evaluation I performed during a recent Debian Web server hardening process where I looked at several other HIDS as well.

OSSEC

ossec-hids

Source

ossec-hids on github

Who’s Behind Ossec

  • Many developers, contributors, managers, reviewers, translators (in fact the OSSEC team looks almost as large as the Stealth user base)

Documentation

Lots of documentation. Not always the easiest to navigate. Lots of buzz on the inter-webs.
Several books.

Community / Communication

IRC channel #ossec at irc.freenode.net, although it's not very active.

Components

  • Manager (sometimes called server): does most of the work monitoring the Agents. It stores the file integrity checking databases, the logs, events and system auditing entries, rules, decoders, major configuration options.
  • Agents: small collections of programs installed on the machines we are interested in monitoring. Agents collect information and forward it to the manager for analysis and correlation.

There are quite a few other ancillary components also.

Architecture

You can also go the agent-less route which may allow the Manager to perform file integrity checks using agent-less scripts. As with Stealth, you’ve still got the issue of needing to be root in order to read some of the files.

Agents can be installed on VMware ESX but from what I’ve read it’s quite a bit of work.

Features (What does it do)

  • File integrity checking
  • Rootkit detection
  • Real-time log file monitoring and analysis (you may already have something else doing this)
  • Intrusion Prevention System (IPS) features as well: blocking attacks in real-time
  • Alerts can go to a database (MySQL or PostgreSQL) or other types of outputs
  • There is a PHP web UI that runs on Apache if you would rather look at pretty outputs vs log files.

What I like

To me, the ability to scan in real-time off-sets the fact that the agents need binaries installed. This hinders the attacker from covering their tracks.

Can be configured to scan systems in realtime based on inotify events.

Backed by a large company Trend Micro.

Options. Install options for starters. You’ve got the options of:

  • Agent-less installation as described above
  • Local installation: Used to secure and protect a single host
  • Agent installation: Used to secure and protect hosts while reporting back to a
    central OSSEC server
  • Server installation: Used to aggregate information

You can install a web UI on the manager, for which you'll need Apache, PHP and MySQL.

What I don’t like

  • Unlike Stealth, the fact that something has to be installed on the agents
  • The packages are not in the standard repositories. The downloads, PGP keys and directions are here.
  • I think OSSEC may be doing too much, and if you don't like the way it does one thing, you may be stuck with it. Personally I really like the idea of a tool doing one thing, doing it well and providing plenty of configuration options to change the way it does its one thing. This provides huge flexibility and minimises your dependency on a suite of tools and/or libraries.
  • Information overload. There seems to be a lot to get your head around to get it set up. There are a lot of install options documented (books, interwebs, official docs) and it takes a bit to work out exactly the best procedure for your environment. Following are some of the resources I used:
    1. Some official documentation
    2. Perving in the repository
    3. User take one install
    4. User take two install
    5. User take three install

Stealth

Stealth file integrity checker

Source

sourceforge

Who’s Behind Stealth

Author: Frank B. Brokken. An admirable job for one person. Frank is not a fly-by-nighter though. Stealth was first presented to Congress in 2003. It’s still actively maintained and used by a few. It’s one of GNU/Linux’s dirty little secrets I think. It’s a great idea, makes a tricky job simple and does it in an elegant way.

Documentation

  • 3.00.00 (2014-08-29)
  • 2.11.03 (2013-06-18)
    • Once you install Stealth, all the documentation can be found with sudo updatedb && locate stealth. I most commonly used the HTML docs (/usr/share/doc/stealth-doc/manual/html/) and the PDF (/usr/share/doc/stealth-doc/manual/pdf/stealth.pdf) for easy searching across the whole manual
    • man page (/usr/share/doc/stealth/stealthman.html)
    • Examples: (/usr/share/doc/stealth/examples/)
  • More covered in my demo set-up below

Binaries

  • 3.00.00 (2014-08-29) is in the repository for Debian Jessie
  • 2.11.03-2 (2013-06-18) This is as recent as you can go for Linux Mint Qiana (17) and Rebecca (17.1) within the repositories, unless you want to go out of band. There are quite a few dependencies ldd will show you:
    libbobcat.so.3 => /usr/lib/libbobcat.so.3 (0xb765d000)
    libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xb7641000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb754e000)
    libm.so.6 => /lib/i386-linux-gnu/i686/cmov/libm.so.6 (0xb7508000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74eb000)
    libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb7340000)
    libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xb71ee000)
    libcrypto.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7022000)
    libreadline.so.6 => /lib/i386-linux-gnu/libreadline.so.6 (0xb6fdc000)
    libmilter.so.1.0.1 => /usr/lib/i386-linux-gnu/libmilter.so.1.0.1 (0xb6fcb000)
    /lib/ld-linux.so.2 (0xb7748000)
    libxcb.so.1 => /usr/lib/i386-linux-gnu/libxcb.so.1 (0xb6fa5000)
    libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xb6fa0000)
    libtinfo.so.5 => /lib/i386-linux-gnu/libtinfo.so.5 (0xb6f7c000)
    libXau.so.6 => /usr/lib/i386-linux-gnu/libXau.so.6 (0xb6f78000)
    libXdmcp.so.6 => /usr/lib/i386-linux-gnu/libXdmcp.so.6 (0xb6f72000)
  • 2.10.01 (2012-10-04) is in the repository for Debian Wheezy

Community / Communication

No community really. I see it as one of those dirty little secrets that I'm surprised many diligent sysadmins haven't jumped on. The author is happy to answer emails, but he doesn't market the tool.

Components

Controller

The computer initiating the scan.

Needs two kinds of outgoing services:

  1. ssh to reach the clients
  2. mail transport agent (MTA)(sendmail, postfix)

Considerations for the controller:

  1. No public access.
  2. All inbound services should be denied.
  3. Access only via its console.
  4. Physically secure location (one would think goes without saying, but you may be surprised).
  5. Sensitive information about the clients is stored on the controller.
  6. Password-less access to the clients for anyone who gains controller root access, unless ssh-cron is used, which appears to have been around since 2014-05.

Client

The computer/s being scanned. I don’t see any reason why a Stealth solution couldn’t be set-up to look after many clients.

Architecture

The controller stores one to many policy files, each of which is specific to a single client and contains use directives and commands. The recommended policy is to take copies of the client programmes, such as the hashing programme sha1sum, find and others that are used extensively during the integrity scans, and copy them to the controller to take bench-mark hashes. Subsequent runs do the same, to compare with the initially stored hashes.

Features (What does it do)

File integrity tests leaving virtually no sediments on the tested client.

Stealth subscribes to the “dark cockpit” approach, i.e. no mail is sent when no changes are detected. If you have an MTA, Stealth can be configured to send emails on changes it finds.

What I like

  • Its simplicity. There is one package to install on the controller and nothing to install on the client machines; a client just needs to have the controller's SSH public key. You will need a Mail Transfer Agent on your controller if you don't already have one. My test machine (Linux Mint) didn't have one.
  • An attacker can't just modify the likes of sha1sum on the clients that Stealth uses to perform its integrity checks; Stealth would somehow have to be fooled into thinking that the changed hash of the sha1sum it's just copied to the controller is the same as the previously recorded hash. If the previously recorded hash is removed or does not match the current hash, then Stealth will fire off an alert.
  • It’s in the Debian repositories. Which is a little surprising considering I couldn’t find any test suite results.
  • The whole idea behind it. Systems being monitored give little appearance that they're being monitored, other than, I think, a single SSH login in the auth.log when Stealth first starts. That could actually have been months ago, as the connection remains active for the life of Stealth, and the login could just as easily be from a user doing anything on the client. It's very discreet.
  • Unpredictability of Stealth runs is offered through Stealth's --random-interval and --repeat options. E.g., --repeat 60 --random-interval 30 results in new Stealth runs on average every 75 seconds.
  • Subscribes to the Unix philosophy: “do one thing and do it well”
  • Stealth's author is very approachable and open. After talking with Frank and suggesting some ideas to promote Stealth and its community, Frank started a discussion list.

What I don’t like

  • Lack of visible code reviews and testing. Yes, it's in Debian, but so were OpenSSL and Bash.
  • One man band. Support is provided by one person alone, via email. Compare with the likes of OSSEC, which has …
  • Lack of use cases. I don't see anyone using/abusing it, although Frank did send me some contacts of other people that are using it, so again, a very helpful author. I can't find much on the interwebs. The documentation has clear signs of being written for, and targeting, people already familiar with the tool. This is understandable, as the author has been working on the project for nine years and could possibly be disconnected from what's involved for someone completely new to the project to dive in and start using it. In saying that, that's what I did and so far it has worked out well.
  • This tells me that either very few are using it, or it has no bugs and the install and configuration are stupidly straightforward, or both.
  • Small user base. This is how many people are happy to reveal they are using Stealth.
  • Reading through the user guide, the following put me off a little: “preferably the public ssh-identity key of the controller should be placed in the client’s root .ssh/authorized_keys file, granting the controller root access to the client. Root access is normally needed to gain access to all directories and files of the client’s file system.” I never allow SSH root access to servers, so I'm not about to start. What's worse is that Stealth, SSHing from controller to client with a key-pair, can only do so automatically if the pass-phrase is blank. If it's not blank then someone has to drive Stealth, and the whole idea of Stealth (as far as I can tell) is to be an automatic file integrity checker.
    There are, however, some work-arounds to this. ssh-cron can run scheduled Stealth jobs, but it needs to acquire the pass-phrase from the user once. It does this via ssh-askpass, which is an X11-based pass-phrase input dialog. If you're running ssh-cron and Stealth from a server (which would be a large number of potential users, I would think) you won't have X11, so in that case ssh-cron is out of the question. At least that's how I understand it from the man page and emails from Frank. Frank mentioned in an email: “Stealth itself could perfectly well be started `by hand’ setting up the connection using, e.g., an existing ssh private-key, which could thereafter completely be removed from the system running Stealth. The running Stealth process would then continue to use the established connection for as long as it’s running. This may be a very long time if the –daemon option is used.” I've used Monit to do any checks on the client that need root access; this way Stealth can run as a low-privileged user.
  • In order to use an SSH key-pair with a passphrase and have your controller resume scans on reboot, you're going to need ssh-cron. Some distros like Mint and Ubuntu only have very old versions of libbobcat (3.19.01, December 2013) in their repositories. You could re-build if you fancy the dependency issues that may bring. Failing that, use Debian, which is way more up to date, or just fire Stealth off manually each time you reboot your controller machine and run it as a daemon with arguments such as --keep-alive (or --daemon if running stealth >= 4.00.00) and --repeat. This will cause Stealth to keep running (sleeping) most of the time, then do its checks at the intervals you define.

Outcomes

In making all of my considerations, I changed my mind quite a few times on which offerings were most suited to which environments. I think this is actually a good thing, as I think it means my evaluations were based on the real merits of each offering rather than any biases.

If you already have real-time logging to an off-site syslog server set up, with alerting, OSSEC would provide redundant features.

If you don’t already have real-time logging to an off-site syslog server, then OSSEC can help here.

The simplicity of Stealth, its flatter learning curve and its overall philosophy are what won me over.

Stealth Up and Running

I tested this out on a Mint installation.

I installed stealth and stealth-doc (2.11.03-2) via the Synaptic package manager, then just did a locate for stealth to find the docs and other example files. The following are the files I used for documentation, how I used them, and the tab order that made sense to me:

  1. The main documentation index: file:///usr/share/doc/stealth-doc/manual/html/stealth.html
  2. Chapter one introduction: file:///usr/share/doc/stealth-doc/manual/html/stealth01.html
  3. Chapter three to help build up a policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth03.html
  4. Chapter five for running Stealth and building up policy file: file:///usr/share/doc/stealth-doc/manual/html/stealth05.html
  5. Chapter six for running Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth06.html
  6. Chapter seven for arguments to pass to Stealth: file:///usr/share/doc/stealth-doc/manual/html/stealth07.html
  7. Chapter eight for error messages: file:///usr/share/doc/stealth-doc/manual/html/stealth08.html
  8. The Man page: file:///usr/share/doc/stealth/stealthman.html
  9. Policy file examples: file:///usr/share/doc/stealth/examples/
  10. Useful scripts to use with Stealth: file:///usr/share/doc/stealth/scripts/usr/bin/
  11. All of the documentation in simple text format (good for searching across chapters for strings): file:///usr/share/doc/stealth-doc/manual/text/stealth.txt

Files I would need to copy and modify were:

  • /usr/share/doc/stealth/scripts/usr/bin/stealthcleanup.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthcron.gz
  • /usr/share/doc/stealth/scripts/usr/bin/stealthmail.gz

Files I used for reference to build up a policy file:

  • /usr/share/doc/stealth/examples/demo.pol.gz
  • /usr/share/doc/stealth/examples/localhost.pol.gz
  • /usr/share/doc/stealth/examples/simple.pol.gz

As mentioned above, providing you have a working MTA, Stealth will just do its thing when you run it. The next step is to schedule its runs, which can also (as mentioned above) be done with a pseudo-random interval.
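A rough sketch of what a scheduled run could look like, reusing the --repeat / --random-interval combination mentioned earlier (myclient.pol is a placeholder for your policy file, and you may want --keep-alive or --daemon as discussed above, depending on your Stealth version):

stealth --repeat 60 --random-interval 30 myclient.pol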

Feel free to leave a comment if you need help setting Stealth up. It did take a bit of fiddling, but it does what it says on the box very well.

Web Server Log Management

April 25, 2015

As part of the ongoing work around preparing a Debian web server to host applications accessible from the WWW, I performed some research and analysis, made decisions along the way and implemented a first-stage logging strategy. I've done similar set-ups many times before, but thought it worth sharing my experience so that everyone can learn something from it and/or provide input, recommendations and corrections to the process, so we all get to improve.

The main system loggers I looked into

  • GNU syslogd, which I don't think is being developed anymore? Correct me if I'm wrong. Most Linux distributions no longer ship with it. It only supports UDP and is a bit lacking in features; from what I gather it's single-threaded. I didn't spend long looking at this as there wasn't much point. The following two offerings are the main players.
  • rsyslog: which ships with Debian and most other Linux distros now I believe. I like to do as little as possible and rsyslog fits this description for me. The documentation seems pretty good. Rainer Gerhards wrote rsyslog and his blog provides some good insights. Supports UDP, TCP. Can send over TLS. There is also the Reliable Event Logging Protocol (RELP) which Rainer created.
    rsyslog is great at gathering, transporting, storing log messages and includes some really neat functionality for dividing the logs. It’s not designed to alert on logs. That’s where the likes of Simple Event Correlator (SEC) comes in. Rainer discusses why TCP isn’t as reliable as many think here.
  • syslog-ng: I didn't spend too long here, as I didn't see any features I needed that were better than the rsyslog default. It can correlate log messages, both real-time and off-line; supports reliable and encrypted transport using TCP and TLS; and offers message filtering, sorting, pre-processing and log normalisation.

There are a few comparisons around. Most of the ones I've seen are a bit biased and often out of date.

Aims

  • Record events and have them securely transferred to another syslog server in real-time, or as close to it as possible, so that potential attackers don’t have time to modify them on the local system before they’re replicated to another location
  • Reliability (resilience / ability to recover connectivity)
  • Extensibility: ability to add more machines and be able to aggregate events from many sources on many machines
  • Receive notifications from the upstream syslog server of specific events. No HIDS is going to remove the need to reinstall your system if you are not notified in time and an attacker plants and activates their root-kit.
  • Receive notifications from the upstream syslog server of lack of events. The network is down for example.

Environmental Considerations

A couple of servers in the mix:

FreeNAS File Server

Recent versions can send their syslog events to a syslog server. With some work, it looks like FreeNAS can be set up to act as a syslog server.

pfSense Router

Can send log events, but only by UDP by the look of it.

Following are the two strategies that emerged. You can see by the detail that I went down the path of the first one initially; it was the path of least resistance / quickest to set up. I'm going to be moving away from Papertrail toward strategy two, mainly because I've had a few issues where messages have been getting lost that have been very hard to track down (I've spent over a week on it). As the sender, you have no insight into what Papertrail is doing. The support team don't provide a lot of insight into their service when you have to trouble-shoot things. They have been as helpful as they can be, but I've expressed concern around them being unable to trouble-shoot their own services.

Outcomes

Strategy One

Rsyslog, TCP, local queuing, TLS, Papertrail for your syslog server (PT doesn't support RELP, but they say that's because their clients haven't seen any issues with reliability in using plain TCP over TLS with local queuing). My guess is they haven't looked hard enough. I must be the first then. Beware!

As I was setting this up and watching both ends, we had an internet outage of just over an hour. At that stage we had very few events being generated, so it was trivial to verify both ends. I noticed that once the ISP's router was back on-line and the events from the queue had moved to Papertrail, there was in fact one missing.

Why did Rainer Gerhards create RELP if TCP with queues was good enough? That was a question that was playing on me for a while. In the end, it was obvious that TCP without RELP isn’t good enough.
At this stage it looks like the queues may lose messages. Rainer says things like “In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that the action is not ready, which means the remote server is off-line. This can be detected with plain TCP syslog and RELP“, but it can be detected without RELP.

You can aggregate log files with rsyslog or by using Papertrail's remote_syslog daemon.

Alerting is available, including for inactivity of events.

Papertrail's documentation is good and support is reasonable, although due to the huge amounts of traffic they have to deal with, they are unable to trouble-shoot any issues you may have. If you still want to go down the Papertrail path, to get started work through this, which sets up your rsyslog to use UDP (specified in /etc/rsyslog.conf by a single @ in front of the target syslog server). I want something more reliable than that, so I use two @ symbols, which specifies TCP.
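For example, the forwarding rule in /etc/rsyslog.conf looks something like this (the host and port below are the ones from my set-up; yours will be whatever Papertrail assigns to your account):

# forward everything over TCP (@@); a single @ would mean UDP
*.* @@logs2.papertrailapp.com:39871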

As we're going to be sending our logs over the internet for now, we need TLS. Check Papertrail's CA server bundle for integrity:

curl https://papertrailapp.com/tools/papertrail-bundle.pem | md5sum

Should be: c75ce425e553e416bde4e412439e3d09

If all is good, throw the contents of that URL into a file called papertrail-bundle.pem, for example:
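curl -o papertrail-bundle.pem https://papertrailapp.com/tools/papertrail-bundle.pem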
Then scp the papertrail-bundle.pem into the web server's /etc dir. The command for that will depend on whether you're already on the web server and want to pull, or whether you're somewhere else and want to push. Then make sure the ownership is correct on the pem file:

chown root:root papertrail-bundle.pem

install rsyslog-gnutls

apt-get install rsyslog-gnutls

Add the TLS config

$DefaultNetstreamDriverCAFile /etc/papertrail-bundle.pem # trust these CAs
$ActionSendStreamDriver gtls # use gtls netstream driver
$ActionSendStreamDriverMode 1 # require TLS
$ActionSendStreamDriverAuthMode x509/name # authenticate by host-name
$ActionSendStreamDriverPermittedPeer *.papertrailapp.com

to your /etc/rsyslog.conf. Create an egress rule on your router to let traffic out to destination port 39871, then restart rsyslog:

sudo service rsyslog restart

To generate a log message that uses your system syslogd config /etc/rsyslog.conf, run:

logger "hi"

This should log “hi” to /var/log/messages and also to Papertrail, but in my case it wasn't arriving at Papertrail.

# Show a live update of the last 10 lines (by default) of /var/log/messages
sudo tail -f [-n <number of lines to tail>] /var/log/messages

OK, so let's run rsyslog in config checking mode:

/usr/sbin/rsyslogd -f /etc/rsyslog.conf -N1

If all is good, the output looks like:

rsyslogd: version <the version number>, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.

Trouble-shooting

  1. https://www.loggly.com/docs/troubleshooting-rsyslog/
  2. http://help.papertrailapp.com/
  3. http://help.papertrailapp.com/kb/configuration/troubleshooting-remote-syslog-reachability/
  4. /usr/sbin/rsyslogd -version will provide the installed version and supported features.

These didn't help a lot, as I don't have telnet installed, I can't ping from the DMZ as ICMP is not allowed out, and I'm not going to install tcpdump or strace on a production server. The more you have running, the more surface area you have, and the greater the opportunities to exploit it.

So how do we tell if rsyslogd is actually running if it doesn’t appear to be doing anything useful?

pidof rsyslogd

or

/etc/init.d/rsyslog status

Showing which files rsyslogd has open can be useful:

lsof -p <rsyslogd pid>

or just combine the results of pidof rsyslogd

sudo lsof -p $(pidof rsyslogd)

To start with I had a line like:

rsyslogd 3426 root 8u IPv4 9636 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (SYN_SENT)

This obviously showed rsyslogd‘s SYN packets were not getting through. I've had some discussion with Troy from PT support around the reliability of plain TCP over TLS without RELP. I think if the server is business critical, then strategy two may be the better option. Troy has assured me that they've never had any issues with logs being lost due to lack of reliability without RELP. Troy also pointed me to their recommended local queue options. After adding the queue tweaks and an rsyslogd restart, it resulted in:

rsyslogd 3615 root 8u IPv4 9766 0t0 TCP <web server IP>:<sending port>->logs2.papertrailapp.com:39871 (ESTABLISHED)

I could now see events in the Papertrail web UI in real-time.
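For reference, the sort of local queueing directives being talked about are rsyslog's standard action-queue settings; a minimal sketch (my own selection and values, not necessarily Papertrail's exact recommendations) looks like this in /etc/rsyslog.conf:

$ActionQueueType LinkedList           # in-memory linked-list queue for the forwarding action
$ActionQueueFileName papertrailqueue  # spill to disk under this name when memory fills or on shutdown
$ActionQueueMaxDiskSpace 1g           # cap the on-disk queue
$ActionQueueSaveOnShutdown on         # persist queued messages across restarts
$ActionResumeRetryCount -1            # retry forever if the remote end is down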

Socket Statistics (ss)(the better netstat) should also show the established connection.

By default Papertrail accepts TCP over TLS (TLS encryption check-box on, plain text check-box off) and UDP. So if your TLS isn't set up properly, your events won't be accepted by Papertrail. I later confirmed this to be true.

Check that our Logs are Commuting over TLS

Now, without installing anything on the web server or router, without physically touching the server sending packets to papertrail or the router, using a switch (ubiquitous) rather than a hub, with no wire tap or multi-network-interfaced computer, and no switch monitoring port available on expensive enterprise grade switches (along with the much needed access), we're basically down to two approaches I can think of, and I really couldn't be bothered getting up out of my chair.

  1. MAC flooding with the help of macof which is a utility from the dsniff suite. This essentially causes your switch to go into a “failopen mode” where it acts like a hub and broadcasts its packets to every port.

    MAC Flooding

    Or…
  2. Man in the Middle (MiTM) with some help from ARP spoofing or poisoning. I decided to choose the second option, as it’s a little more elegant.

    ARP Spoofing

On our MitM box, I set a static IP (address, netmask, gateway) in /etc/network/interfaces and added domain, search and nameserver entries to /etc/resolv.conf.

Follow that up with a service network-manager restart

On the web server run:

ifconfig -a

to get its MAC: <web server MAC>. On the MitM box run the same command to get its MAC: <MitM box MAC>.
On web server run:

ip neighbour

to find the MACs associated with IPs (the local ARP table). The router was: <router MAC>.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> REACHABLE
<router IP> dev eth0 lladdr <router MAC> REACHABLE

Now you need to turn your MitM box into a router temporarily. On the MitM box run

cat /proc/sys/net/ipv4/ip_forward

You’ll see a ‘1’ if forwarding is on. If it’s not, throw a ‘1’ into the file:

echo 1 > /proc/sys/net/ipv4/ip_forward

and check again to make sure. Now on the MitM box run

arpspoof -t <web server IP> <router IP>

This will continue to notify <web server IP> that our (MitM box) MAC address belongs to <router IP>. Essentially… we (MitM box) are <router IP> to the <web server IP> box, but our IP address doesn't change. Now on the web server you can see that its ARP table has been updated, and because arpspoof keeps running, it keeps telling <web server IP> that our MitM box is the router.

myuser@webserver:~$ ip neighbour
<MitM box IP> dev eth0 lladdr <MitM box MAC> STALE
<router IP> dev eth0 lladdr <MitM box MAC> REACHABLE

Now on our MitM box, while arpspoof continues to run, we start Wireshark listening on our eth0 interface (or whatever interface you're using), and you can see that all the packets the web server is sending, we are intercepting and forwarding (routing) on to the gateway.

Now Wireshark clearly showed that the data was encrypted. I commented out the five TLS config lines in the /etc/rsyslog.conf file -> saved -> restarted rsyslog -> turned on “Plain text” in papertrail and could now see the messages in clear text. Now when I turned off “Plain text” papertrail would no longer accept syslog events. Excellent!

One of the nice things about arpspoof is that it re-applies the original ARP entries once it's done.

You can also tell arpspoof to poison the router's ARP table. This way any traffic going to the web server via the router, not originating from the web server, will be routed through our MitM box also.

Don’t forget to revert the change to /proc/sys/net/ipv4/ip_forward.
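
For example, on the MitM box:

# Turn packet forwarding back off once you're finished.
echo 0 > /proc/sys/net/ipv4/ip_forward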

Exporting Wireshark Capture

You can use the File->Save As… option here for a collection of output types, or the way I usually do it is:

  1. First completely expand all the frames you want visible in your capture file
  2. File->Export Packet Dissections->as “Plain Text” file…
  3. Check the “All packets” check-box
  4. Check the “Packet summary line” check-box
  5. Check the “Packet details:” check-box and the “As displayed”
  6. OK

Trouble-shooting messages that papertrail never shows

To run rsyslogd in debug

Check to see which arguments get passed into rsyslogd to run as a daemon in /etc/init.d/rsyslog and /etc/default/rsyslog. You’ll probably see a RSYSLOGD_OPTIONS="". There may be some arguments between the quotes.

sudo service rsyslog stop
sudo /usr/sbin/rsyslogd [your options here] -dn >> ~/rsyslog-debug.log

The debug log can be quite useful for trouble-shooting. Also keep your eye on stderr, as you can see if it's writing anything out (most system start-up scripts throw this away).
Once you've finished collecting the log:
Ctrl+C

sudo service rsyslog start

To see if rsyslog is running

pidof rsyslogd
# or
/etc/init.d/rsyslog status
Turn on the impstats module

The stats it produces show when you run into errors with an output, and also the state of the queues.
You can also run impstats on the receiving machine if it’s in your control. Papertrail obviously is not.
Put the following into your rsyslog.conf file at the top and restart rsyslog:

# Turn on some internal counters to trouble-shoot missing messages
module(load="impstats"
       interval="600"
       severity="7"
       # need to turn log stream logging off
       log.syslog="off"
       log.file="/var/log/rsyslog-stats.log")
# End turn on some internal counters to trouble-shoot missing messages
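
Once rsyslog has been restarted with the module loaded, you can watch the counters accumulate (the path matches the log.file set above):

tail -f /var/log/rsyslog-stats.log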

Now if you get an error like:

rsyslogd-2039: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]

You can just change the /dev/xconsole to /dev/console.
xconsole is still in the config file for legacy reasons; it should have been cleaned up by the package maintainers.

GnuTLS error in rsyslog-debug.log

By running rsyslogd manually in debug mode, I found an error when the message failed to send:

unexpected GnuTLS error -53 in nsd_gtls.c:1571

Standard Error when running rsyslogd manually produces:

GnuTLS error: Error in the push function

With some help from the GnuTLS mailing list:

“That means that send() returned -1 for some reason.” You can enable more output by adding an environment variable GNUTLS_DEBUG_LEVEL=9 prior to running the application, and that should at least provide you with the errno. This didn't actually provide any more detail to stderr. However, thanks to Rainer we now have a debug.gnutls parameter in the rsyslog code; if you specify this global variable in the rsyslog.conf and assign it a value between 0-10, you'll have gnutls debug output going to rsyslog's debug log.

Strategy Two

Rsyslog, TCP, local queuing, TLS, RELP, SEC, syslog server on the local network. Notification for inactivity of events could be performed by cron and SEC?
LogAnalyzer was also created by Rainer Gerhards (the rsyslog author), but it's more work to set up than an on-line service you don't have to set up. In saying that, you would have greater control and security, which for me is the big win here.
Normalisation is another area Rainer has his finger in.

In theory, adding RELP to TCP with local queues is a step up in terms of reliability. Others have said the reliability of TCP over TLS with local queues is excellent anyway; I've yet to confirm its excellence. At the time of writing this post, I'm seriously considering moving toward RELP to help solve my reliability issues.

Additional Resource

gentoo rsyslog wiki

Keeping Your Linux Server/s In Time With Your Router

March 28, 2015

Your NTP Server

With this set-up, we’ve got one-to-many Linux servers in a network that all want to be synced with the same up-stream Network Time Protocol (NTP) server/s that your router (or what ever server you choose to be your NTP authority) uses.

On your router, or whatever your NTP server host is, add the NTP server pools. Now how you do this really depends on what you're using for your NTP server, so I'll leave this part out of scope. There are many NTP pools you can choose from. Pick one or a collection that's as close to your NTP server as possible.

If your NTP daemon is running on your router, you’ll need to decide and select which router interfaces you want the NTP daemon supplying time to. You almost certainly won’t want it on the WAN interface (unless you’re a pool member) if you have one on your router.

Make sure you restart your NTP daemon.

Your Client Machines

If you have ntpdate installed, /etc/default/ntpdate says to look at /etc/ntp.conf which doesn’t exist without ntp being installed. It looks like this:

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.
NTPDATE_USE_NTP_CONF=yes

but you’ll see that it also has a default NTPSERVERS variable set which is overridden if you add your time server to /etc/ntp.conf. If you enter the following and ntpdate is installed:

dpkg-query -W -f='${Status} ${Version}\n' ntpdate

You’ll get output like:

install ok installed 1:4.2.6.p5+dfsg-3ubuntu2

Otherwise install it:

apt-get install ntp

The public NTP server/s can be added straight to the bottom of the /etc/ntp.conf file, but because we want to use our own NTP server, we add the IP address of our server that’s configured with our NTP pools to the bottom of the file.

server <IP address of your local NTP server here>

Now if your NTP daemon is running on your router, hopefully you have everything blocked on its interface/s by default and are using a white-list for egress filtering.

In which case you’ll need to add a firewall rule to each interface of the router that you want NTP served up on.

NTP talks over UDP and listens on port 123 by default.
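
If your router's firewall is iptables based, that rule might look something like the following (the interface name and LAN subnet are placeholders; many routers hide this behind a web form):

# Allow NTP requests from the LAN to the router's NTP daemon (placeholder interface and subnet).
iptables -A INPUT -i <LAN interface> -s <LAN subnet> -p udp --dport 123 -j ACCEPT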

After any configuration changes to your ntpd make sure you restart it. On most routers this is done via the web UI.

On the client (Linux) machines:

sudo service ntp restart

Now issuing the date command on your Linux machine will provide the current time, yes with seconds.

Trouble-shooting

The main two commands I use are:

sudo ntpq -c lpeer

Which should produce output like:

            remote                       refid         st t when poll reach delay offset jitter
===============================================================================================
*<server name>.<domain name> <upstream ntp ip address> 2  u  54   64   77   0.189 16.714 11.589

and the standard NTP query program followed by the as argument:

ntpq

Which will drop you at ntpq’s prompt:

ntpq> as

Which should produce output like:

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 15720  963a   yes   yes  none  sys.peer    sys_peer  3

Now in the first output, the * in front of the remote means the server is getting its time successfully from the upstream NTP server/s, which needs to be the case in our scenario. Often you may also get a refid of .INIT. which is one of the “Kiss-o’-Death Codes”, meaning “The association has not yet synchronized for the first time”. See the NTP parameters. I've found that sometimes you just need to be patient here.

In the second output, if you get a condition of reject, it’s usually because your local ntp can’t access the NTP server you set-up. Check your firewall rules etc.

Now check all the times are in sync with the date command.

GnuPG Key-Pair with Sub-Keys

January 31, 2015

There are quite a few other posts on this topic, but my set-up hasn’t been exactly the same as any I found, so I found myself using quite a few resources to achieve exactly what I wanted.

Synopsis

For my personal work, I mostly use GNU/Linux distributions. All of the following operations have been carried out on such platforms and should work on any Debian derivative.

The initial set-up was performed on a machine other than a laptop. Then I discuss the process I took to get my key pairs into a laptop environment.

All keys are created using the RSA cryptosystem.

I’m going to create a large (4096 bit) RSA key-pair as my master (often called primary) key and then create a smaller (2048 bit) key-pair for signing and then another (2048 bit) key-pair for encrypting/decrypting.

Most of the work is done on the command line.

If you haven’t already got gnupg installed (accessed by the gpg command), run the following command as root. More than likely it’s already installed by default though:

apt-get install gnupg

Run gpg from command line. If it’s the first time it’s been run it’ll produce output like the following. This initialises your .gnupg directory and configuration:

gpg: directory `/home/<you>/.gnupg' created
gpg: new configuration file `/home/<you>/.gnupg/gpg.conf' created
gpg: WARNING: options in `/home/<you>/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/home/<you>/.gnupg/secring.gpg' created
gpg: keyring `/home/<you>/.gnupg/pubring.gpg' created
gpg: Go ahead and type your message ...

Just press Ctrl+d to terminate gpg.

Use the sks key-server pool

This section is optional apart from the first three lines that need to be added to the ~/.gnupg/gpg.conf file. The step of using the pool over TLS can of course be done later.

Rather than rely on a single specific key-server, use the pool, and do so over an encrypted channel by using the hkps protocol. If a single server in the pool is not functioning properly, it's automatically removed from the pool.

In order to use the hkps protocol (hkp over TLS):

sudo apt-get install gnupg-curl

Now that you have a ~/.gnupg/gpg.conf file, you can add the following lines to the end of it (SHA-1, the default, is no longer considered secure):

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
keyid-format 0xlong
with-fingerprint

There may be a keyserver and keyserver-options option in the ~/.gnupg/gpg.conf already. If so, modify it, if not, add it.

keyserver hkps://hkps.pool.sks-keyservers.net
keyserver-options ca-cert-file=/home/kim/.gnupg/sks-keyservers.netCA.pem

This assumes you downloaded the sks-keyservers.net CA certificate and put it in ~/.gnupg/ . You can of course put it anywhere, but the keyserver-options path will need to reflect your placement.

Verify the certificate’s fingerprint. Compare the fingerprint from the previous link with the output from the following command. It should be the same:

openssl x509 -in sks-keyservers.netCA.pem -fingerprint -noout

Anywhere below where the --keyserver option is specified, it can be omitted if you've set up the key-server pool.

Master Key-Pair Generation

This process will create a master key-pair that can be used for signing, and a sub key-pair for encrypting/decrypting. We're actually only going to use the master key-pair created out of this process as a master: creating other key-pairs with it, signing other people's keys, etc. We won't be using it for day to day signing or encrypting/decrypting. We will create two additional sub-keys for that purpose in a bit.

This allows us to remove the master key from our computer and put it in a safe place (disconnected entirely from the network) that can’t be easily accessed. This means that if any of our computers are compromised, the attacker only gets access to our sub-keys which are the keys we use to actually do our day to day work of signing, encrypting outgoing messages and decrypting incoming messages.

On top of this they also need our pass phrase in order to compromise our identity. If in fact an attacker is able to compromise this as well, then we bring our master key out of hiding and can easily revoke the compromised sub key-pair(s), of which the public part is probably on a key-server or your blog or website. This way, whenever anyone gets your public sub-keys from one of the many key-servers or your blog or website, they will see that the public key(s) have been compromised and thus deprecated.

Now run:

gpg --gen-key

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?

I chose 1. That’s (1) RSA and RSA (default)

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)

Now because this is the master and we're not actually going to be using it for signing our own messages and encrypting/decrypting, and in theory we'll probably just keep extending its expiry date indefinitely, we make it 4096 bit. Why? Because hardware is getting faster all the time and at some stage 2048 bit keys will not be large enough for cryptographic security. Why would we keep extending the master key-pair expiry date? Because we've worked hard to acquire other people's trust (signatures of it) and we don't really want to go through all that again. That's why I've decided to not actually use the master for day to day work and do everything in my power to make sure it's never compromised. If somehow the master key-pair was compromised, then I'd still have a revocation certificate that I could use to revoke it. It'd just be a pain though. I go through the creation of the revocation certificate for the master key-pair below.

4096 # Use smaller for sub-keys, as we can replace them easily when it becomes easier to crack them.
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)

I chose 5y

Because I want my master key to expire eventually if it’s compromised along with the pass-phrase and somehow I lost the multiple copies of the master revocation certificate. If it never gets compromised, I’ll just keep extending the expiry date.

Key expires at Fri 06 Dec 2019 23:32:56 NZDT
Is this correct? (y/N)

I chose:

y
You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name:

Enter your real name:

Kim Carter
Email address:

Enter your email address:

First.Last@provider.com
Comment:

Here you can enter something like your website address, your on-line handle, or whatever is useful for providing some more identification.

lethalduck
You selected this USER-ID:
    "Kim Carter (lethalduck) <First.Last@provider.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?

Enter ‘O’ to continue:

O

Now you're asked for a passphrase. Make this long and hard to guess. I don't remember this myself; that's why I use a password vault, so I can have unique credentials for everything.

You need a Passphrase to protect your secret key.

This is not my passphrase, but it's a good example of one. Adding the extra padding characters, even though they're all the same, makes for a much harder to crack passphrase simply because of the added length. Oh, you'll be prompted to enter this twice.

....................MW$]T&LP[=:[f/8=RQQ0M!++kMreX"....................

Now you're asked to generate the entropy. This is done by interacting with the computer: keystrokes, mouse movements and storage media activity all work. I find running my rsync scripts now is quite effective.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 187 more bytes)
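
While you're waiting, you can keep an eye on how much entropy the kernel thinks it has available (just an observation aid, not part of gpg):

# Prints the kernel's current entropy estimate in bits.
cat /proc/sys/kernel/random/entropy_avail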

I added a pass phrase and waited for the entropy to be collected.
Once gpg has enough entropy, your key-pairs (master for signing, sub-key for encrypting/decrypting) will be created.

gpg: /home/kim/.gnupg/trustdb.gpg: trustdb created
gpg: key F90A5A4E marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2019-12-06
pub 4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
Key fingerprint = D6B6 1E46 4DC9 A3E9 F450 F7F8 C9FA 6F23 F90A 5A4E
uid Kim Carter (lethalduck) <First.Last@provider.com>
sub 4096R/65CA12E5 2014-12-07 [expires: 2019-12-06]

Add photo to a uid

Now I wanted to add a photo to my master key-pair.
PGP specifies that the image be no greater than 120×144. GPG recommends it be 240×288. So I chose the smaller size and reduced the quality as much as possible. I could only get it down to 10kB before the image became unrecognisable.

gpg --edit-key F90A5A4E
# or safer...
gpg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <First.Last@provider.com>

gpg>

To add a jpeg:

addphoto

gpg complained that my 10kb image was very large, so I ditched adding the photo.

Add a sub-key for signing

Now before we go any further I just want to make note of the prefixes and suffixes that you’ll often encounter with gpg commands.

Listing your keys with

gpg -K # list secret keys

or

gpg -k # list public keys

will show the following prefixes for your keys.

sec === (sec)ret key
ssb === (s)ecret (s)u(b)-key
pub === (pub)lic key
sub === public (sub)-key

Roles of the key-pair will be represented by the middle character below.

Constant Character Explanation
PUBKEY_USAGE_SIG S Key is good for signing
PUBKEY_USAGE_CERT C Key is good for certifying other signatures
PUBKEY_USAGE_ENC E Key is good for encryption
PUBKEY_USAGE_AUTH A Key is good for authentication

When we add sub-keys, they are bound to the master key. The master key is modified to reference the sub-keys.

What we want to do is add a sub-key for signing so we can move the master key-pair off of the machine and into a safe place.
We also want a shorter expiry date and a smaller size (2048 bits) for the new signing sub-key, and we'll create another 2048 bit sub-key for encryption with the same shorter expiry date.

Create backup of your ~/.gnupg directory:

umask 077; tar -cf $HOME/gnupg-backup.tar -C $HOME .gnupg

To add a signing sub-key:

gpg --edit-key F90A5A4E
# or safer...
gpg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <First.Last@provider.com>

gpg>

Now we add the key

addkey
Key is protected.

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <First.Last@provider.com>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
Your selection?

Now we want (4) RSA (sign only)

4

Output:

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)

Choose 2048 because we can easily regenerate this key-pair or extend the expiry date and at this stage 2048 is secure enough.

2048

Output:

Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)

I set this to 2y

Key expires at Wed 07 Dec 2016 01:21:11 NZDT
Is this correct? (y/N)

y

Really create? (y/N)

y

After this gpg collects more entropy. When it’s done it dumps you back to the gpg prompt

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 186 more bytes)
.......+++++
.+++++

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
[ultimate] (1). Kim Carter (lethalduck) <First.Last@provider.com>

gpg>

Now you can see from the ‘S’ usage flag that we now have a sub-key that's “good for signing”.

Same again but for encrypting

While still at the gpg prompt, run addkey again but choose option 6.

That’s (6) RSA (encrypt only)
Choose 2048 for the keysize.
Choose 2y (two years) for how long the key is valid for.

Eventually you’ll see:

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <First.Last@provider.com>

gpg>

Now you can see from the ‘E’ usage flag that we now have a sub-key that's “good for encryption”.

To save the new keys before finishing with gpg, type save.

Create Revocation Certificate for Master Key

gpg --output F90A5A4E.gpg-revocation-certificate --gen-revoke F90A5A4E

Output:

sec  4096R/F90A5A4E 2014-12-07 Kim Carter (lethalduck) <First.Last@provider.com>

Create a revocation certificate for this key? (y/N)

Type y

Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision?

Type 1

Enter an optional description; end it with an empty line:
>

Enter anything you like here.

This revocation certificate was generated when the key was created.
>
Reason for revocation: Key has been compromised
This revocation certificate was generated when the key was created.
Is this okay? (y/N)

y

Output:

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <First.Last@provider.com>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

ASCII armored output forced.
Revocation certificate created.

Please move it to a medium which you can hide away; if Mallory gets
access to this certificate he can use it to make your key unusable.
It is smart to print this certificate and store it away, just in case
your media become unreadable.  But have some caution:  The print system of
your machine might store the data and make it available to others!

Now store your master key-pair revocation certificate somewhere off of the network. Preferably in more than one place also.

Copy ~/.gnupg to an external device (/media/<your encrypted USB device>/) for safe keeping before we remove the master key-pair from your computer.
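
A minimal sketch, assuming the device is mounted at the same path used elsewhere in this post:

# Copy the whole keyring directory, preserving permissions, to the encrypted USB device.
cp -a ~/.gnupg /media/<your encrypted USB device>/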

Remove master key

Following are the commands to do this.

gpg --export-secret-subkeys F90A5A4E > /media/<your encrypted USB device>/subkeys
gpg --delete-secret-key F90A5A4E

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec  4096R/F90A5A4E 2014-12-07 Kim Carter (lethalduck) <First.Last@provider.com>

Delete this key from the keyring? (y/N)

Type y

This is a secret key! - really delete? (y/N)

Type y

gpg --import /media/<your encrypted USB device>/subkeys

Output:

gpg: key F90A5A4E: secret key imported
gpg: key F90A5A4E: "Kim Carter (lethalduck) <First.Last@provider.com>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1

Now check to make sure that the master key-pair is no longer on your computer but is on your USB device:

gpg -K

Output:

sec#  4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
uid                  Kim Carter (lethalduck) <First.Last@provider.com>
ssb   4096R/65CA12E5 2014-12-07
ssb   2048R/7A3122BD 2014-12-07
ssb   2048R/8FF9669C 2014-12-07

Then check the keyring on the USB device:

gpg --home=/media/<your encrypted USB device>/.gnupg/ -K

Output:

sec   4096R/F90A5A4E 2014-12-07 [expires: 2019-12-06]
uid                  Kim Carter (lethalduck) <First.Last@provider.com>
ssb   4096R/65CA12E5 2014-12-07
ssb   2048R/7A3122BD 2014-12-07
ssb   2048R/8FF9669C 2014-12-07

You can see that the first command shows sec#. This means there is no master key-pair in your ~/.gnupg/ directory.

Upload your Public Keys to KeyServer

Remember, if you used a key-server pool, anywhere the --keyserver option is specified below, it can be omitted.

I’ve chosen https://pgp.mit.edu/
You can choose any public keyserver. They all communicate with each other and sync updates at least daily. You can also send more than one public key by adding additional IDs after the --send-keys option.

gpg --keyserver hkp://pgp.mit.edu/ --send-keys F90A5A4E

Output:

gpg: sending key F90A5A4E to hkp server pgp.mit.edu

Download public keys from KeyServer

gpg --keyserver hkp://pgp.mit.edu/ --recv-keys <key id to receive and merge signatures>

A safer way to do this is to not just trust every key from a key-server, but rather to verify the key belongs to who you think it belongs to before you download and trust it. Try one at a time and use the fingerprint rather than just the short Id.

gpg --keyserver hkp://pgp.mit.edu/ --recv-key '<fingerprint>'

The single quotes are mandatory around the fingerprint. Double quotes will also work.

Refreshing local Keys from Key-Server

gpg --refresh-keys

Set-up the Laptop with your key-pairs

Copy the contents of the desktop's ~/.gnupg/ to the laptop's ~/.gnupg/ . I just used the same USB drive for this, but made sure I didn't mix this .gnupg/ up with the one that had the master key. Then delete the copy without the master key once copied, to save any confusion. Also keep in mind that when you delete files from a flash drive they are not actually deleted. That's why it's important to use an encrypted USB drive. Also keep it in a very safe place, make a copy of it and keep that off site in a very safe place also.

Make sure you check the permissions of the ~/.gnupg files you just copied to the laptop so that they are the same as the files created by the gpg command.
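
If you're unsure what they should be, tightening them to owner-only is a safe bet (gpg will warn you about unsafe permissions on its home directory anyway):

# Only you should be able to read your keyrings and config.
chmod 700 ~/.gnupg
chmod 600 ~/.gnupg/*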

Adding another E-Mail Address

Now it's easier if you do this here in the sequence, but I didn't think about it until after I'd uploaded the public keys. If you do want to add another uid once you've moved the master key and copied your master-key-less sub-keys to your laptop, it just means you've got to operate on the master key that you moved into /media/<your encrypted USB device>/.gnupg/, then copy the contents of /media/<your encrypted USB device>/.gnupg/ back to ~/.gnupg/ on both your desktop and laptop machines (not forgetting to change file permissions again), remove the master key from ~/.gnupg/, and upload the modified public keys again.

This is how you would add the additional uid:

gpg --home=/media/<your encrypted USB device>/.gnupg --edit-key F90A5A4E
# or safer...
gpg --home=/media/<your encrypted USB device>/.gnupg --edit-key '<your fingerprint>'
# Don't know your fingerprint?
gpg --list-keys

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

gpg: DBG: locking for `/media/<your encrypted USB device>/.gnupg/trustdb.gpg.lock' done via O_EXCL
pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1). Kim Carter (lethalduck) <First.Last@provider.com>

gpg>

Add the extra uid now:

adduid

Output:

Real name:

Enter your real name:

Kim Carter

Output:

Email address:

Enter the additional email address you want:

kim.carter@owasp.org

Output:

Comment:

Add the web page that adds some proof of identity:

https://www.owasp.org/index.php/New_Zealand

Output:

You selected this USER-ID:
    "Kim Carter (https://www.owasp.org/index.php/New_Zealand) <kim.carter@owasp.org>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?

Type O

Output:

You need a passphrase to unlock the secret key for
user: "Kim Carter (lethalduck) <First.Last@provider.com>"
4096-bit RSA key, ID F90A5A4E, created 2014-12-07

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <First.Last@provider.com>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <First.Last@owasp.org>

gpg>

Now we want the same trust level applied to the second uid as the existing:

trust

Output:

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <First.Last@provider.com>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <First.Last@owasp.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision?

Type 5

Output:

Do you really want to set this key to ultimate trust? (y/N)

Type y

Output

pub  4096R/F90A5A4E  created: 2014-12-07  expires: 2019-12-06  usage: SC
                     trust: ultimate      validity: ultimate
sub  4096R/65CA12E5  created: 2014-12-07  expires: 2019-12-06  usage: E
sub  2048R/7A3122BD  created: 2014-12-07  expires: 2016-12-06  usage: S
sub  2048R/8FF9669C  created: 2014-12-07  expires: 2016-12-06  usage: E
[ultimate] (1)  Kim Carter (lethalduck) <First.Last@provider.com>
[ unknown] (2). Kim Carter (https://www.owasp.org/index.php/New_Zealand) <First.Last@owasp.org>

gpg>

Don’t worry that it still looks like it’s unknown. Once you save and try to edit again, you’ll see the change has been saved.

If you want to make the uid that you’ve tentatively just added your primary, select it:

uid 2

issue the command:

primary

and finally save:

save

Sign Someone Else’s Public Key

You’re going to have to download, import the persons key into your ~/.gnupg/pubring.gpg

If you’ve got a key-server pool configured, you won’t need the --keyserver option.

gpg --recv-key '<fingerprint of public key you want to import>'
gpg --home=/media/<your encrypted USB device>/.gnupg/ --primary-keyring=~/.gnupg/pubring.gpg --sign-key '<fingerprint of public key you want to sign>'

There will be some other output here. I wasn’t actually asked which trust level I wanted to provide, so I carried out the following edit.

gpg --edit-key '<fingerprint of public key you want to sign>'

Output:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: unknown       validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>

gpg>

Issue the trust command:

trust

Output:

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: unknown       validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision?
3

Output:

pub  4096R/<id of public key you just signed>  created: 2014-05-09  expires: never       usage: SC
                               trust: marginal      validity: unknown
sub  4096R/<a sub-key>  created: 2014-05-09  expires: never       usage: E
sub  4096R/<another sub-key>  created: 2014-05-09  expires: 2019-05-08  usage: S
[ unknown] (1). <key holders name> (4096 bit key generated 9/5/2014) <e-mail1@gmail.com>
[ unknown] (2)  <key holders name> (Their key) <e-mail@somethingelse.com>
[ unknown] (3)  <key holders name> (Their Yahoo) <e-mail@yahoo.com>
[ unknown] (4)  <key holders name> (Their Other Email Account) <e-mail@whatever.org>
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg>

Email the Signed Public-Key

In order to send an email with the freshly signed public-key, attach the file generated with the following command, encrypt and send the email to the owner of the public key specified by their uid. Details on how to encrypt the e-mail are specific to the e-mail client you choose to use.

gpg --armor --output <long id of receivers public key>.signed-by.0xc9fa6f23f90a5a4e --export '<fingerprint of public key you just signed>'

Upload the Signed Public-Key to a Key Server

 

gpg --send-key '<fingerprint of public key you just signed>'

Output:

gpg: sending key <long id of receivers public key> to hkps server hkps.pool.sks-keyservers.net

Verify to make sure your now publicly visible signing is good.

Import Your Public-Key Signed by Someone Else

At some stage you may need to import a copy of your public-key in the form of a file that someone else has added their signature to:

gpg --import ./0xC9FA6F23F90A5A4E.signed-by-<someone else's long id>.asc

Then view your new signatures:

gpg --list-sigs 0xC9FA6F23F90A5A4E

Then upload them again with --send-key
and pull them down to your other machines with --refresh-keys. You’ll also need to --recv-key their keys so that your key recognises who the signatories are. Or… just simply copy over your ~/.gnupg/ directory. Make sure to check your permissions before and after the copy though. We don’t want anyone other than you being able to read these files. Especially the secring.gpg and any pem certs you have.

Browser based E-Mail

Two browser extensions that look OK are:

  1. Mailvelope for Firefox and Chrome (I’m using this)
    Getting set-up details
    Details of how this works here
  2. Mymail-Crypt for Gmail

Desktop based E-Mail

Thunderbird with enigmail

I also found that to send or reply to someone and encrypt, I had to make a change in Thunderbird, as Thunderbird wrongly thinks I'm not trusting identities when I have specifically set trust levels. I've heard comments that if you set the trust level in gpg to “I trust ultimately” then Enigmail is happy to send your mail. I only trust myself ultimately, so I found another way.
If you go into the Edit menu of Thunderbird -> Account Settings, then for each email account in your gpg signature: OpenPGP Security -> Enigmail Preferences… -> change “To send encrypted, accept” from “Only trusted keys” to “All usable keys”. Then when you get the final confirmation of sending encrypted email, you are asked to confirm the 8 digit ID. I just double check with:

gpg --edit-key '<keyID that Thunderbird says it's using>'

Additional Resources I’ve Collected

Posts/articles, Documentation

Podcasts

Installation and Hardening of Debian Web Server

December 27, 2014

These are the steps I took to set-up and harden a Debian web server before being placed into a DMZ and undergoing additional hardening before opening the port from the WWW to it. Most of the steps below are fairly simple to do, and in doing so, remove a good portion of the low hanging fruit for nasty entities wanting to gain a foot-hold on your server->network.

Install and Set-up

Debian wheezy, currently stable (supported by the Debian security team for a year or so).

Creating ESXi 5.1 guest

First thing to do is to set up a virtual switch for the host under the Configuration tab. Now I had several quad port Gbit Ethernet adapters in this server, so I created a virtual switch and assigned a physical adapter to it. Now when you create your VM, you choose the VM Network assigned to the virtual switch you created. Provision your disks. Check “Edit the virtual machine settings before completion” and Continue. You will now be able to modify your settings before you boot the VM. I chose 512MB of RAM at this stage, which is far more than it actually needs. While I'm provisioning and hardening the Debian guest, I have the new virtual switch connected to the client's LAN.

ESX Network Configuration

Once we're done, we can connect the virtual switch up to the new DMZ physical switch or straight into the router. Upload the Debian .iso that you downloaded to the ESXi datastore. Then edit the VM settings and select the CD/DVD drive. Select the “Datastore ISO File” option, browse to the .iso file and select the “Connect at power on” option.

New VM: Select ISO (screenshot)

Kick the VM in the guts and flick to the VM’s Console tab.

OS Installation

Partitioning

Deleted all the current partitions and added the following. / was added to the start and the rest to the end, in the following order.
/, /var, /tmp, /opt, /usr, /home, swap.

Partitioning Disks

Now the sizes should be setup according to your needs. If you have plenty of RAM, make your swap small, if you have minimal RAM (barely (if) sufficient), you could double the RAM size for your swap. It’s usually a good idea to think about what mount options you want to use for your specific directories. This may shape how you setup your partitions. For example, you may want to have options nosuid,noexec on /var but you can’t because there are shell scripts in /var/lib/dpkg/info so you could setup four partitions. /var without nosuid,noexec and /var/tmp, /var/log, /var/account with nosuid,noexec. Look ahead to the Mounting of Partitions section for more info on this.
In saying this, you don't need to partition as finely grained as the set of options you want; you can still mount directories on directories and alter the options at that point. This can be done in the /etc/fstab file and also ad-hoc (using the mount command) if you want to test options out.

You can think about changing /opt (static data) to mount read-only in the future as another security measure.

Continuing with the Install

When you’re asked for a mirror to pull packages from, if you have an apt-cacher[-ng] proxy somewhere on your network, this is the chance to make it work for you thus speeding up your updates and saving internet bandwidth. Enter the IP address and port and leave the rest as default. From the Software selection screen, select “Standard system utilities” and “SSH server”.

Software Selection (screenshot)

When prompted to boot into your new system, we need to remove our installation media from the VM's settings. Under the Device Status settings for your VM (if you're using ESXi), uncheck “Connected” and “Connect at power on”. Make sure no other boot media are connected at power on. Now the first thing we do is SSH into our new VM, because it's a right pain working through the VM host's console. When you first try to SSH to it you'll be shown the ECDSA key fingerprint to confirm that the machine you think you are SSHing to is in fact the machine you want to SSH to. Follow the directions here but change that command line slightly to the following:

ssh-keygen -lf ssh_host_ecdsa_key.pub

This will print the key's fingerprint from the actual machine. Compare that with what you were given from your remote machine. Make sure they match, accept, and you should be in. Now I use terminator so I have a lovely CLI experience. Of course you can take things much further with Screen or Tmux if/when you have the need.

Next I tell apt about the apt-proxy-ng I want it to use to pull its packages from. This will have to be changed once the server is plugged into the DMZ. Create the file /etc/apt/apt.conf if it doesn't already exist and add the following line:

Acquire::http::Proxy "http://[IP address of the machine hosting your apt cache]:[port that the cacher is listening on]";

Replace the apt proxy references in /etc/apt/sources.list with the internet mirror you want to use, so we contain all the proxy related config in one line in one file. This will allow the requests to be proxied and packages cached via the apt cache on your network when requests are made to the mirror of your choosing.

Update the list of packages, then upgrade them with the following command line. If you're using sudo, you'll need to add that to each command:

apt-get update && apt-get upgrade # only run apt-get upgrade if apt-get update is successful (exits with a status of 0)


The steps you take to harden a server that will have many user accounts will be considerably different to this. Many of the steps I’ve gone through here will be insufficient for a server with many users.
The hardening process is not a one time procedure. It ends when you decommission the server. Be prepared to stay on top of your defenses. It’s much harder to defend against attacks than it is to exploit a vulnerability.

Passwords

After a quick look at this, I can in fact verify that we are shadowing our passwords out of the box. It may be worth looking at and modifying /etc/shadow . Consider changing the “maximum password age” and “password warning period”. Consult the man page for shadow for full details. Check that you’re happy with which encryption algorithms are currently being used. The files you’ll need to look at are: /etc/shadow and /etc/pam.d/common-password . The man pages you’ll probably need to read in conjunction with each other are the following:

  • shadow
  • pam.d
  • crypt 3
  • pam_unix

Out of the box, crypt supports MD5, SHA-256 and SHA-512, with a bit more work for Blowfish via bcrypt. The default of SHA-512 enables salted passwords. How can you tell which algorithm you're using, salt size etc.? The crypt 3 man page explains it all.
So by default we’re using SHA-512 which is better than MD5 and the smaller SHA-256.

Now by default I didn't have a “rounds” option in my /etc/pam.d/common-password module-arguments. Having a large iteration count (the number of times the encryption algorithm is run (key stretching)), and an attacker not knowing what that number is, will slow down an attack. I'd suggest adding this and re-creating your passwords. As your normal user run:

passwd

providing your existing password, then your new one twice. You should now be able to see your password hash in the /etc/shadow file with the added rounds parameter:

$6$rounds=[chosen number of rounds specified in /etc/pam.d/common-password]$[8 character salt]$0LxBZfnuDue7.n5<rest of string>
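
For reference, the rounds option goes on the pam_unix line in /etc/pam.d/common-password; something like the following (the control fields may differ on your system, and the number of rounds is just an example):

password  [success=1 default=ignore]  pam_unix.so obscure sha512 rounds=65536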

Check /var/log/auth.log.
Reboot and check you can still log in as your normal user. If all good, do the same with the root account.

Using bcrypt with its deliberately slow Blowfish-based algorithm is even better for password hashing, but more work to set up at this stage.

Some References

Consider setting a password for GRUB, especially if your server is directly on physical hardware. If it's on a hypervisor, an attacker has another layer to go through before they can access the guest's boot screen. If an attacker can access your VM through the hypervisor's management app, you're pretty well screwed anyway.
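
If you do go down this path on GRUB 2, the general shape is something like the following (a sketch; the file name and the “root” superuser name are choices, not requirements):

# Generate a PBKDF2 hash of your chosen GRUB password (it prints a grub.pbkdf2.sha512... string).
grub-mkpasswd-pbkdf2
# Then in /etc/grub.d/40_custom add something like:
#   set superusers="root"
#   password_pbkdf2 root <the grub.pbkdf2.sha512 hash from above>
# and regenerate the GRUB config.
update-grub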

Disable Remote Root Logins

Review /etc/pam.d/login so we’re only permitting local root logins. By default this was setup that way.
Review /etc/security/access.conf . Make sure root logins are limited as much as possible. Un-comment rules that you want. I didn’t need to touch this.
Confirm which virtual consoles and text terminal devices you have by reviewing /etc/inittab, then modify /etc/securetty by commenting out all the consoles you don't need (all of them preferably). Or better, just issue the following command to fill the file with nothing:

cat /dev/null > /etc/securetty

I back up this file before I do this.
Now test that you can't log into any of the text terminals listed in /etc/inittab . Just try logging into the likes of your ESX/i vSphere guest's console as root. You shouldn't be able to now.

Make sure if your server is not physical hardware but a VM, then the hosts password is long and made up of a random mix of upper case, lower case, numbers and special characters.

Additional Resources

http://www.debian.org/doc/manuals/securing-debian-howto/ch4.en.html#s-restrict-console-login

SSH

My feeling after a lot of reading is that currently RSA with large keys (The default RSA size is 2048 bits) is a good option for key pair authentication. Personally I like to go for 4096, but with the current growth of processing power (following Moore’s law), 2048 should be good until about 2030. Update: I’m not so sure about the 2030 date for this now.

Create your key pair if you haven't already and set up key pair authentication. Key-pair auth is more secure and allows you to log in without a password. Your pass-phrase should be stored in your keyring. You'll just need to provide your local password once (each time you log into your local machine) when the keyring prompts for it. Of course your pass-phrase needs to be kept secret. If it's compromised, it won't matter how much you've invested into your hardening effort. To tighten security up considerably, make the necessary changes to your server's /etc/ssh/sshd_config file. Start with the changes I've listed here.
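
Creating the pair and copying the public half across might look something like this (4096 bit as discussed above; the comment, user and host names are placeholders):

# On your workstation: generate a 4096 bit RSA key pair, protected by a pass-phrase.
ssh-keygen -t rsa -b 4096 -C "<your comment / email>"
# Copy the public half into the server's ~/.ssh/authorized_keys.
ssh-copy-id -i ~/.ssh/id_rsa.pub <you>@<your server>
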
When you change things like setting up AllowUsers, or make any other potential changes that could lock you out of the server, it's a good idea to be logged in via one shell while you exit another and test it. This way if you have locked yourself out, you'll still be logged in on one shell to adjust the changes you've made. Unless you have a need for multiple users, lock it down to a single user. You can even lock it down to a single user from a specific host.
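
A few of the sshd_config directives being referred to might look like this (a sketch only; values are placeholders and the linked list of changes is more complete):

# /etc/ssh/sshd_config (excerpt)
Port <your non-default port>
PermitRootLogin no
PasswordAuthentication no
AllowUsers <you>@<your workstation IP>
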
After a set of changes, issue the following restart command as root or sudo:

service ssh restart

You can check the status of the daemon with the following command:

service ssh status

Consider changing the port that SSH listens on; it may slow down an attacker slightly. Consider whether it's worth adding the extra characters to your SSH command. Consider keeping the port that sshd binds to below 1025, where only root can bind a process to a port.

We’ll need to tunnel SSH once the server is placed into the DMZ. I’ve discussed that in this post.

Additional Resources

Check SSH login attempts. As root or via sudo, type the following to see all failed login attempts:

cat /var/log/auth.log | grep 'sshd.*Invalid'

If you want to see successful logins, type the following:

cat /var/log/auth.log | grep 'sshd.*opened'

Consider installing and configuring denyhosts

Disable Boot Options

All the major hypervisors should provide a way to disable all boot options other than the device you will be booting from. VMware allows you to do this in vSphere Client.

Set BIOS passwords.

Lock Down the Mounting of Partitions

Getting started with your fstab.

Make a backup of your /etc/fstab before you make changes. I ended up needing this later. Read the man page for fstab and also the options section in the mount man page. The Linux File System Hierarchy (FSH) documentation is worth consulting also for directory usages.
Add the noexec mount option to /tmp but not /var because executable shell scripts such as pre, post and removal reside within /var/lib/dpkg/info .
You can also add the nodev nosuid options.
You can add the nodev option to /var, /usr, /opt, /home also.
You can also add the nosuid option to /home .
You can add ro to /usr
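
Pulled together, the relevant /etc/fstab entries might end up looking something like this (UUIDs are placeholders and the option mix is whatever you decided on above):

UUID=<your /usr partition>   /usr   ext4  defaults,ro,nodev             0  2
UUID=<your /opt partition>   /opt   ext4  defaults,nodev                0  2
UUID=<your /home partition>  /home  ext4  defaults,nodev,nosuid         0  2
UUID=<your /tmp partition>   /tmp   ext4  defaults,nodev,nosuid,noexec  0  2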

To add mount options nosuid,noexec to /var/tmp, /var/log, /var/account, we need to bind the target mount onto an existing directory. The following procedure details how to do this for /var/tmp. As usual, you can do all of this without a reboot. This way you can modify to your heart's content, then be confident that a reboot will not destroy anything or lock you out of your system.
The not-yet-mounted entries in your /etc/fstab can be tested like this:

sudo mount -a

Then check the difference with

mount

mount options can be set up on a directory by directory basis for finer grained control. For example my /var mount in my /etc/fstab may look like this:

UUID=<block device ID goes here> /var ext4 defaults,nodev 0 2

Then add another line below that in your /etc/fstab that looks like this:

/var /var/tmp none nosuid,noexec,bind 0 2

The file system type above should be specified as none (as stated in the “The bind mounts” section of the mount man page http://man.he.net/man8/mount). The bind option binds the mount. There was a bug with the suidperl package in debian where setting nosuid created an insecurity. suidperl is no longer available in debian.

If you want this to take effect before a reboot, execute the following command:

sudo mount --bind /var/tmp /var/tmp

Then to pickup the new options from /etc/fstab:

sudo mount -o remount /var/tmp

For further details consult the remount option of the mount man page.

At any point you can check the options that you have your directories mounted as, by issuing the following command:

mount

You can test this by putting a script in /var and copying it to /var/tmp. Then try running each of them. Of course the executable bits should be on. You should only be able to run the one that is in the directory mounted without the noexec option. My file “kimsTest” looks like this:

#!/bin/sh
echo "Testing testing testing kim"

Then I…

myuser@myserver:/var$ ./kimsTest
Testing testing testing kim
myuser@myserver:/var$ ./tmp/kimsTest
-bash: ./tmp/kimsTest: Permission denied

You can set the same options on the other /var sub-directories (not /var/lib/dpkg/info).

Enable read-only / mount

There are some contradictions on /run/shm size allocation. Increase the size vs Don’t increase the size

Additional Resources

Work Around for Apt Executing Packages from /tmp

Disable Services we Don’t Need

RPC portmapper

dpkg-query -l '*portmap*'

portmap is not installed by default, so we don’t need to remove it.

Exim

dpkg-query -l '*exim*'

Exim4 is installed.
You can see from the netstat output below (in the “Remove Services” area) that exim4 is listening on localhost only, so it's not publicly accessible. Nmap confirms this, but we don't need it, so let's disable it. We should probably be using ss too.
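
For the record, the rough ss equivalent of the netstat invocations used in this post is:

# Listening TCP sockets, numeric ports, with the owning process.
sudo ss -tlpn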

When a run level is entered, init executes the target files that start with ‘K’ with a single argument of stop, followed by the files that start with ‘S’ with a single argument of start. So by renaming /etc/rc2.d/S15exim4 to /etc/rc2.d/K15exim4 you're causing init to run the service with the stop argument when it moves to run level 2. Just out of interest's sake, the scripts at the end of the links with the lower two digit numbers are executed before the scripts at the end of links with the higher two digit numbers. Now go ahead and check the directories for run levels 3-5 as well and do the same (as sketched below). You'll notice that all the links in /etc/rc0.d (which are the links executed on system halt) start with ‘K’. Making sense?
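
The renames for run levels 2 through 5 could be scripted like this (confirm the exact link names on your system first, as the sequence number may differ):

# Rename the start links to kill links for run levels 2-5 (link names assumed from above).
for rl in 2 3 4 5; do
  sudo mv /etc/rc${rl}.d/S15exim4 /etc/rc${rl}.d/K15exim4
done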

Follow up with

sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0: 0.0.0.0:* LISTEN 1910/sshd
tcp6 0 0 ::: :::* LISTEN 1910/sshd

And that’s all we should see.

Additional resources for the above

Disable Network Information Service (NIS). NIS lets several machines in a network share the same account information, such as the password file (which allows password sharing between machines). It was originally known as Yellow Pages (YP). If you need centralised authentication for multiple machines, you could set up an LDAP server and configure PAM on your machines to contact the LDAP server for user authentication. We have no need for distributed authentication on our web server at this stage.

dpkg-query -l '*nis*'

NIS is not installed by default, so we don’t need to remove it.

Additional resources for the above

Remove Services

First thing I did here was run nmap from my laptop

nmap -p 0-65535 <serverImConfiguring>
PORT STATE SERVICE
23/tcp filtered telnet
111/tcp open rpcbind
/tcp open

Now because I’m using a non-default port for SSH, nmap thinks some other service is listening. Although I’m sure if I were a bad guy and really wanted to find out what was listening on that port, it would be fairly straightforward.

To obtain a list of currently running servers (determined by LISTEN) on our web server, use one of the following. And don’t forget that man is your friend.

sudo netstat -tap | grep LISTEN

or

sudo netstat -tlp

I also like to add the ‘n’ option to see the port numbers rather than the service names. This output was created before I had disabled exim4 as detailed above.

tcp 0 0 *:sunrpc *:* LISTEN 1498/rpcbind
tcp 0 0 localhost:smtp *:* LISTEN 2311/exim4
tcp 0 0 *:57243 *:* LISTEN 1529/rpc.statd
tcp 0 0 *: *:* LISTEN 2247/sshd
tcp6 0 0 [::]:sunrpc [::]:* LISTEN 1498/rpcbind
tcp6 0 0 localhost:smtp [::]:* LISTEN 2311/exim4
tcp6 0 0 [::]:53309 [::]:* LISTEN 1529/rpc.statd
tcp6 0 0 [::]: [::]:* LISTEN 2247/sshd

Rpcbind

Here we see that sunrpc is listening on a port and was started by rpcbind with the PID of 1498.
Sun Remote Procedure Call is running on port 111 (also the portmapper port); netstat can tell you the port, and it’s confirmed by the nmap scan above. This is used by NFS, and as we don’t need NFS because our server isn’t a file server, we can get rid of the rpcbind package.

dpkg-query -l '*rpc*'

Shows us that rpcbind is installed and gives us other details. Now if you’ve been following along with me and have made the /usr mount read only, some stuff will be left behind when we try to purge:

sudo apt-get purge rpcbind

Following are the outputs of interest:

The following packages will be REMOVED:
nfs-common* rpcbind*
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
Do you want to continue [Y/n]? y
Removing nfs-common ...
[ ok ] Stopping NFS common utilities: idmapd statd.
dpkg: error processing nfs-common (--purge):
cannot remove `/usr/share/man/man8/rpc.idmapd.8.gz': Read-only file system
Removing rpcbind ...
[ ok ] Stopping rpcbind daemon....
dpkg: error processing rpcbind (--purge):
cannot remove `/usr/share/doc/rpcbind/changelog.gz': Read-only file system
Errors were encountered while processing:
nfs-common
rpcbind
E: Sub-process /usr/bin/dpkg returned an error code (1)

Running another

dpkg-query -l '*rpc*'

will result in pH. That’s a desired action of (p)urge and a package status of (H)alf-installed.
Now the easiest thing to do here is to rename your /etc/fstab to something else, and rename the /etc/fstab you backed up (before making changes) back to /etc/fstab. Then, because you know that fstab is good,

reboot
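
For clarity, the rename step before that reboot might look something like this (assuming your backup was saved as /etc/fstab.backup; use whatever name you actually gave it):

sudo mv /etc/fstab /etc/fstab.hardened
sudo mv /etc/fstab.backup /etc/fstab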

Then try the purge, dpkg-query and netstat commands again to make sure rpcbind is gone and of course no longer listening. I actually had to do the purge twice here, as config files were left behind from the first purge.

Also you can remove unused dependencies now after you get the following message:

The following packages were automatically installed and are no longer required:
libevent-2.0-5 libgssglue1 libnfsidmap2 libtirpc1
Use 'apt-get autoremove' to remove them.
The following packages will be REMOVED:
rpcbind*

sudo apt-get -s autoremove

Because I want to simulate what’s going to be removed. I’m paranoid: I made a stupid mistake with autoremove years ago and that pain has stuck with me. I autoremoved a meta-package which depended on many other packages. A subsequent autoremove for packages that had a sole dependency on the meta-package meant they would be removed too. Yes, it was a painful experience. /var/log/apt/history.log has your recent apt history; I used this to piece my system back together.

Then follow up with the real thing… just remove the -s and run it again. Just remember, the fewer packages your system has, the less code there is for an attacker to exploit.

Telnet

telnet installed:

dpkg-query -l '*telnet*'
sudo apt-get remove telnet

telnet gone:

dpkg-query -l '*telnet*'

Ftp

We’ve got scp, why would we want ftp?
ftp installed:

dpkg-query -l '*ftp*'
sudo apt-get remove ftp

ftp gone:

dpkg-query -l '*ftp*'

Don’t forget to swap your new fstab back and test that the mounts are mounted as you expect.

Secure Services

The following provide good guidance on securing whatever is left.

Scheduled Backups

Make sure all data and VM images are backed up routinely. Make sure you test that restoring your backups works. Back up system files and whatever else is important to you. There is a good selection of tools here to help. Also make sure you are backing up the entire VM if your machine is a virtual guest, by exporting/importing OVF files. I also like to back up all the VM files. Disk space is cheap. Is there such a thing as being too prepared for disaster? It’s just a matter of time before you’ll be calling on your backups.

Keep up to date

Consider whether it would make sense for you or your admin/s to set up automatic updates and possibly upgrades. Start out the way you intend to go. Work out your strategy for keeping your system up to date and patched. There are many options here.

Logging, Alerting and Monitoring

From here on, I’ve made it less detailed and more about just getting you to think about things and ways in which you can improve your stance on security. If any of the offerings cost money, I make note of it, because that’s the exception to my rule: I prefer free software, especially when it’s Free and Open Source Software (FOSS).

Some of the following cross the “logging” boundaries, so in many cases it’s difficult to put them into categorical boxes.

Attackers like to try and cover their tracks by modifying information that’s distributed to the various log files. Make sure you know who has write access to these files and keep the list small. As a Sysadmin you need to read your log files often and familiarise yourself with them so you get used to what they should look like.

SWatch

Monitors a single log file for each instance you run (or schedule), matches your defined patterns and acts on them. You can define different message types with different font styles. If you want to monitor a lot of log files, it’s going to get a bit messy.

Logcheck

Monitors system log files and emails anomalies to an administrator. Once installed it needs to be set up to run periodically with cron. Not a bad wee run-down here. How to use and customise it. Man page and more docs here.

NewRelic

Is more of a performance monitoring tool than a security tool. It has free plans which are OK; it comes into its own in larger deployments. I’ve used this and it’s been useful for working out what was causing performance issues on servers.

Advanced Web Statistics (AWStats)

Unlike NewRelic which is a Software as a Service (SaaS), AWStats is FOSS. It kind of fits a similar market space as NewRelic though, but also has Host Intrusion Prevention System (HIPS) features. Docs here.

Pingdom

Similar to NewRelic but not as feature rich. Update: I recently stumbled onto Monit, which is a better alternative: free and open source. I’ve been writing about it here.

Multitail

Does what its name sounds like: tails multiple log files at once, providing realtime multi-log-file monitoring. Example here. Great for seeing strange happenings before an intruder has time to modify logs, if you’re watching them that is. Good for a single system if you’ve got a spare screen to throw on the wall.
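
For example:

# Tail two log files side by side in one terminal.
multitail /var/log/syslog /var/log/auth.log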

PaperTrail

Targets a similar problem to MultiTail, except that it collects logs from as many servers as you want, copies them off-site to Papertrail’s service and aggregates them into a single, easily searchable web interface. Allows you to set up alerts on anything. Has a free plan, but you only get 100MB per month. The plans are reasonably cheap for the features it provides and can scale as you grow. I’ve used this and have found it to be excellent.

Logwatch

Monitors system logs, but not continuously, so they could be open to modification without you knowing, just like SWatch and Logcheck above. You can configure it to reduce the number of services whose logs it analyses. It creates a report of what it finds based on your level of paranoia. It’s easy to set up and get started with though. Source and docs here.

Logrotate

Use logrotate to make sure your logs will be around long enough to examine them. Some usage examples here. It ships with Debian; it’s just a matter of applying any extra config.
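
A minimal sketch of that extra config (the log file name is a placeholder for your own application’s log), dropped into a file under /etc/logrotate.d/:

/var/log/yourapp.log {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
}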

Logstash

Targets a related problem to logrotate, but goes a lot further in that it collects and routes logs and has the ability to translate between protocols. Requires Java to be installed.

Fail2ban

Bans hosts that cause multiple authentication errors, or just emails events. Of course you need to think about false positives here too: an attacker can spoof many IP addresses, potentially causing them all to be banned and thus creating a DoS.
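
A minimal sketch of an SSH jail, typically dropped into /etc/fail2ban/jail.local (section and option names vary a little between fail2ban versions, so treat this as a starting point rather than a drop-in config; the port is a placeholder):

[ssh]
enabled  = true
port     = <your SSH port>
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 3600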

Rsyslog

Configure syslog to send a copy of the most important data to a secure system, as mitigation against an attacker modifying the logs. See the @ option in the syslog.conf man page. Check the /etc/(r)syslog.conf file to determine where syslogd is logging various messages. There are some important notes around syslog here, like locking down the users that can read and write to /var/log.
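
A minimal sketch of shipping a copy off-box (the collector address is a placeholder), added to /etc/rsyslog.conf or a file under /etc/rsyslog.d/ on the web server:

# A single @ forwards over UDP, @@ forwards over TCP.
*.* @@<IP of your secure log host>:514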

syslog-ng

Provides a lot more flexibility than just syslogd. Checkout the comprehensive feature-set.

Some Useful Commands

  • Checking who is currently logged in to your server and what they are doing with the who and w commands
  • Checking who has recently logged into your server with the last command
  • Checking which user has failed login attempts with the faillog command
  • Checking the most recent login of all users, or of a given user, with the lastlog command. lastlog reads from the binary file /var/log/lastlog (see the examples below).
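
For example (the username is a placeholder):

# Who is currently logged in and what they are doing.
who -a
w
# Recent logins, newest first.
last | head
# Failed login attempts, all users.
sudo faillog -a
# Most recent login for a particular user.
lastlog -u <username>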

This is a list of log files, their names/locations, and their purpose in life.

Host-based Intrusion Detection System (HIDS)

Tripwire

Is a HIDS that stores a known-good state of vital system files of your choosing, and can be set up to notify an administrator upon change in those files. Tripwire stores cryptographic hashes in a database and compares them with the files it’s been configured to monitor changes on. Not a bad tutorial here. Most of what you’ll find with Tripwire now are the commercial offerings.
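
A rough sketch of the open source Tripwire workflow (paths are the Debian package defaults; yours may differ, and the report file name is a placeholder):

# Initialise the baseline database from the current policy.
sudo tripwire --init
# Run an integrity check and print a report.
sudo tripwire --check
# After a legitimate change, update the database against the latest report.
sudo tripwire --update --twrfile /var/lib/tripwire/report/<latest report>.twr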

RkHunter

A similar offering to Tripwire. It scans for rootkits, backdoors, checks on the network interfaces and local exploits by running tests such as:

  • MD5 hash changes
  • Files commonly created by root-kits
  • Wrong file permissions for binaries
  • Suspicious strings in kernel modules
  • Hidden files in system directories
  • Optionally scan within plain-text and binary files

Version 1.4.2 (24/02/2014) now checks ssh, sshd and telnet (although you shouldn’t have telnet installed). This could be useful for mitigating non-root users running a modified sshd on a port in the 1025-65535 range. You can run ad-hoc scans, then set them up to be run with cron. Debian Jessie has this release in its repository. Any Debian distro before Jessie is on 1.4.0-1 or earlier.
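
For example, an ad-hoc scan and a cron-friendly variant might look like this (flags as per the rkhunter man page):

# Update the data files rkhunter tests against, then scan interactively.
sudo rkhunter --update
sudo rkhunter --check
# Non-interactive form suitable for a cron job; only warnings are reported.
sudo rkhunter --check --sk --rwo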

The latest version you can install for Linux Mint Qiana (17) and Rebecca (17.1) from the repositories is 1.4.0-3 (01/05/2012).

Change-log here.

Chkrootkit

It’s a good idea to run a couple of these types of scanners; hopefully what one misses the other will catch. Chkrootkit scans many system programs, some of which are cron, crontab, date, echo, find, grep, su, ifconfig, init, login, ls, netstat, sshd, top and many more: all the usual targets for attackers to modify. You can specify if you don’t want them all scanned. It runs tests such as the following (a usage sketch follows the list):

  • System binaries for rootkit modification
  • If the network interface is in promiscuous mode
  • lastlog deletions
  • wtmp and utmp deletions (logins, logouts)
  • Signs of LKM trojans
  • Quick and dirty strings replacement
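
A minimal usage sketch (the -q and -x switches are from the chkrootkit man page):

# Run all tests.
sudo chkrootkit
# Quiet mode: only print output when something looks suspicious.
sudo chkrootkit -q
# Expert mode: dump the strings it finds so you can eyeball them yourself.
sudo chkrootkit -x | less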

Stealth

The idea of Stealth is to do a similar job to the above file integrity scanners, but to leave almost no sediments on the tested computer (called the client). A potential attacker therefore has no clue that Stealth is in fact scanning the integrity of its client’s files. Stealth is installed on a different machine (called the controller) and scans over SSH.

Ossec

Is a HIDS that also has some preventative features. This is a pretty comprehensive offering with a lot of great features.

Unhide

While not strictly a HIDS, this is quite a useful forensics tool for working with your system if you suspect it may have been compromised.

Unhide is a forensic tool to find processes and TCP/UDP ports hidden by rootkits/LKMs or by other hiding techniques. Unhide runs on Unix/Linux and Windows systems. It implements six main techniques.

  1. Compare /proc vs /bin/ps output
  2. Compare info gathered from /bin/ps with info gathered by walking through the procfs. ONLY for the unhide-linux version.
  3. Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning).
  4. Full PID space occupation (PID brute-forcing). ONLY for the unhide-linux version.
  5. Compare /bin/ps output vs /proc, procfs walking and syscalls. ONLY for the unhide-linux version. Reverse search: verify that all threads seen by ps are also seen by the kernel.
  6. Quick compare of /proc, procfs walking and syscalls vs /bin/ps output. ONLY for the unhide-linux version. It’s about 20 times faster than tests 1+2+3 but may give more false positives.

It includes two utilities: unhide and unhide-tcp.

unhide-tcp identifies TCP/UDP ports that are listening but are not listed by /bin/netstat, by brute forcing all available TCP/UDP ports.

It can also be used by rkhunter in its daily scans. Unhide was number one in the toolswatch.org top 10 security tools poll.
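
A minimal usage sketch (test names as per the unhide man page; the brute test can take a while):

# Look for hidden processes using the proc, sys and brute techniques.
sudo unhide proc sys brute
# Look for listening TCP/UDP ports hidden from netstat.
sudo unhide-tcp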

Web Application Firewalls (WAFs)

WAFs are just another part of the defense in depth model for web applications; they get more specific in what they are trying to protect. They operate at the application layer, so they don’t have to deal with all the network traffic. They apply a set of rules to HTTP conversations. They can be either network or host based, and are able to block attacks such as Cross Site Scripting (XSS) and SQL injection.

ModSecurity

Is a mature and full-featured WAF designed to work with web servers such as IIS, Apache2 and NGINX. Loads of documentation. They also look to be open to committers and challengers alike. You can find the OWASP Core Rule Set (CRS) here to get you started, which gives you the following (a rough sketch of wiring it into Apache follows the two lists below):

  • HTTP Protocol Protection
  • Real-time Blacklist Lookups
  • HTTP Denial of Service Protections
  • Generic Web Attack Protection
  • Error Detection and Hiding

Or for about $500US a year you get the following rules:

  • Virtual Patching
  • IP Reputation
  • Web-based Malware Detection
  • Webshell/Backdoor Detection
  • Botnet Attack Detection
  • HTTP Denial of Service (DoS) Attack Detection
  • Anti-Virus Scanning of File Attachments
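
As a rough sketch of wiring the CRS into Apache2 on Debian (package names and rule paths vary between releases, so check your own layout; this assumes the libapache2-mod-security2 and modsecurity-crs packages are installed):

sudo a2enmod security2
# In /etc/modsecurity/modsecurity.conf, switch from detection-only to blocking:
#   SecRuleEngine On
# Then make sure the CRS rules are included from wherever your distro installs them, e.g.:
#   Include /usr/share/modsecurity-crs/*.conf
sudo service apache2 restart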

Fusker

for Node.js, although it doesn’t look like a lot is happening with this project currently. You could always fork it if you wanted to extend it.

The state of the Node.js ecosystem in terms of security is pretty poor, which is something I’d like to invest time into.

Fire-walling

This is one of the last things you should look at when hardening an internet facing or perimeterless system. Why? Because each machine should be hard enough that it doesn’t need a firewall to cover it like a blanket with soft, vulnerable services underneath. Rather, all the services should be either un-exposed or patched and securely configured.

Most of the servers and workstations I’ve been responsible for over the last few years I’ve administered as though there was no firewall and they were open to the internet. Most networks are reasonably easy to penetrate, so we really need to think of the machines behind them as being open to the internet. This is what de-perimeterisation (the concept initiated by the Jericho Forum) is all about.

Some thoughts on firewall logging.

Keep your eye on nftables too, it’s looking good!

Additional Resources

Just keep in mind that the above links are quite old. A lot of it is still relevant though.

Machine Now Ready for DMZ

Confirm DMZ has

  • Network Intrusion Detection System (NIDS), Network Intrusion Prevention System (NIPS) installed and configured. Snort is a pretty good option for the IDS part, although with some work Snort can help with the Prevention also.
  • incoming access from your LAN or where ever you plan on administering it from
  • rules for outgoing and incoming access to/from LAN, WAN tightly filtered.

Additional Web Server Preparation

  • setup and configure soft web server
  • setup and configure caching proxy. Ex:
    • node-http-proxy
    • TinyProxy
    • Varnish
    • nginx
  • deploy application files
  • Hopefully you’ve been baking security into your web app right from the start. This is an essential part of defense in depth. Rather than having your application completely rely on other entities to protect it, it should also be standing up for itself and understanding when it’s under attack and actually fighting back.
  • set static IP address
  • double check that the only open ports on the web server are 80 and what ever you’ve chosen for SSH.
  • setup SSH tunnel
  • decide on and document VM backup strategy and set it up.

Machine Now In DMZ

Setup your CNAME or what ever type of DNS record you’re using.

Now remember, keeping any machine on a network (not just the internet, but any network) requires constant consideration and effort to keep the system as secure as possible.

Work through using the likes of harden and Lynis for your server and harden-surveillance for monitoring your network.

Consider combining “Port Scan Attack Detector” (psad) with fwsnort and Snort.

Hack your own server and find the holes before someone else does. If you’re not already familiar with the tricks of how systems on the internet get attacked, read up on “Attacks and Threats”, run OpenVAS, and run web vulnerability scanners.

From here on is in scope for other blog posts.

Up and Running with Kali Linux and Friends

March 29, 2014

When it comes to measuring the security posture of an application or network, the best defence against an attacker is offence. What does that mean? It means your best defence is to have someone with your best interests at heart (generally employed by you, if we’re talking about your asset) assess the vulnerabilities of your asset and attempt to exploit them.

In the words of Offensive Security (Creators of Kali Linux), Kali Linux is an advanced Penetration Testing and Security Auditing Linux distribution. For those that are familiar with BackTrack, basically Kali is a new creation based on Debian rather than Ubuntu, with significant improvements over BackTrack.

When it comes to actually getting Kali on some hardware, there is a multitude of options available.

All externally listening services by default are disabled, but very easy to turn on if/when required. The idea being to reduce chances of detecting the presence of Kali.

I’ve found the Kali Linux documentation to be of a high standard and plentiful.

In this article I’ll go over getting Kali Linux installed and set up. I’ll go over a few of the packages that come out of the box in a low level of detail (due to the sheer number of them). On top of that I’ll also go over a few programmes I like to install separately. In a subsequent article I’d like to continue with additional programmes that come with Kali Linux, as there are just too many to cover in one go.

System Requirements

  1. Minimum of 8 GB disk space is required for the Kali install
  2. Minimum RAM 512 MB
  3. CD/DVD Drive or USB boot support

Supported Hardware

Officially supported architectures

i386, amd64, ARM (armel and armhf)

Unofficial (but maintained) images

You can download Kali Linux images for the following; these are maintained on a best-effort basis by Offensive Security.

  • VMware (pre-made vm with VMware tools installed)

ARM images

  • rk3306 mk/ss808: CPU: dual-core 1.6 GHz A9, RAM: 1 GB
  • Raspberry Pi
  • ODROID U2: CPU: quad-core 1.7 GHz, RAM: 2 GB, Ethernet: 10/100 Mbps
  • ODROID X2: CPU: quad-core Cortex-A9 MPCore, RAM: 2 GB, USB 2: 6 ports, Ethernet: 10/100 Mbps
  • MK802/MK802 II
  • Samsung Chromebook
  • Galaxy Note 10.1
  • CuBox
  • Efika MX
  • BeagleBone Black

Create a Customised Kali Image

Kali also provides a simple way to create your own ISO image from the latest source. You can include the packages you want and exclude the ones you don’t. You can customise the kernel. The options are virtually limitless.

The default desktop environment is Gnome, but Kali also provides an easy way to configure which desktop environment you use before building your custom ISO image.

The alternative options provided are: KDE, LXDE, XFCE, I3WM and MATE.

Kali has really embraced the Debian ethos of being able to be run on pretty well any hardware with extreme flexibility. This is great to see.

Installation

You should find most if not all of what you need here. Just follow the links specific to your requirements.

As with BackTrack, the default user is “root” without the quotes. If you’re installing, make sure you use a decent password: not a dictionary word or similar. It’s generally a good idea to use a mix of upper case and lower case characters, numbers and special characters, and of a decent length.

I’m not going to repeat what’s already documented on the Kali site, as I think they’ve done a pretty good job of it already, but I will go over some things that I think may not be 100% clear at first attempt. Also just to be clear, I’ve done this on a Linux box.

Now once you have downloaded the image that suits your target platform, you’re going to want to check its validity by verifying the SHA1 checksums. This is where the instructions can be a little confusing. You’ll need to make sure that the SHA1SUMS file that contains the specific checksum you’re going to use to verify your downloaded image is in fact the authentic SHA1SUMS file. The instructions say “When you download an image, be sure to download the SHA1SUMS and SHA1SUMS.gpg files that are next to the downloaded image (i.e. in the same directory on the server).”. You’ve got to read between the lines a bit here. A little further down, the page has the key to where these files are: it’s buried in a wget command, plus you have to add another directory to find them. The location was here. Now that you’ve got these two files downloaded into the same directory, verify the SHA1SUMS.gpg signature as follows:

$ gpg --verify SHA1SUMS.gpg SHA1SUMS
gpg: Signature made Thu 25 Jul 2013 08:05:16 NZST using RSA key ID 7D8D0BF6
gpg: Good signature from "Kali Linux Repository <devel@kali.org>"

You’ll also get a warning about the key not being certified with a trusted signature.

Now verify the checksum of the image you downloaded against the checksum within the (authentic) SHA1SUMS file.

Compare the output of the following two commands. They should be the same.

# Calculate the checksum of your downloaded image file.
$ sha1sum [name of your downloaded image file]
# Print the checksum from the SHA1SUMS file for your specific downloaded image file name.
$ grep [name of your downloaded image file] SHA1SUMS

Kali also has a live USB Install including persistence to your USB drive.

Community

IRC: #kali-linux on FreeNode. Stick to the rules.

What’s Included

> 300 security programmes packaged with the operating system:

Before installation you can view the tools included in the Kali repository.

Or once installed by issuing the following command:

# prints complete list of installed packages.
dpkg --get-selections | less

To find out a little more about the application:

dpkg-query -l '*[some text you think may exist in the package name]*'

Or if you know the package name you’re after:

dpkg -l [package name]

Want more info still?

man [package name]

Some of the notable applications installed by default

Metasploit

Framework that provides the infrastructure to create, re-use and automate a wide variety of exploitation tasks.

If you require database support for Metasploit, start the postgresql service.

# I like to see the ports that get opened, so I run ss -ant before and after starting the services.
ss -ant
service postgresql start
ss -ant

ss, or “socket statistics”, is a newer replacement for the old netstat command. ss gets its information from kernel space via Netlink.

Start the Metasploit service:

ss -ant
service metasploit start
ss -ant

When you start the metasploit service, it will create a database and user, both with the names msf3, providing you have your database service started. Now you can run msfconsole.

Start msfconsole:

msfconsole

The following is an image of terminator where I use the top pane for stopping/starting services, middle pane for checking which ports are opened/closed, bottom pane for running msfconsole. terminator is not installed by default. It’s as simple as apt-get install terminator

metasploit

You can find full details of setting up Metasploit’s database and starting/stopping the services here.

You can also find the Metasploit frameworks database commands simply by typing help database at the msf prompt.

# Print the switches that you can run msfconsole with.
msfconsole -h

Once you’re in msf, type help at the prompt to get yourself started.

There is also a really easy to navigate all encompassing set of documentation provided for msfconsole here.

You can also set-up PostgreSQL and Metasploit to launch on start-up like this:

update-rc.d postgresql enable
update-rc.d metasploit enable

Offensive Security also has a Metasploit online course here.

Armitage

Just as it was included in BackTrack (which Armitage no longer supports), you’ll find Armitage comes installed out of the box in version 1.0.4 of Kali Linux. Armitage is a GUI to assist with Metasploit visualisation. You can find the official documentation here. Offensive Security has also done a good job of providing their own documentation for Armitage over here. To get started with Armitage, just make sure you’ve got the postgresql service running; Armitage will start the metasploit service for you if it’s not already running. Armitage allows your red team to collaborate by using a single instance of Metasploit. There is also a commercial offering called Cobalt Strike, developed by Raphael Mudge’s company “Strategic Cyber LLC”, which also created Armitage. Cobalt Strike currently costs $2500 per user per year; there is a 21 day trial though. Cobalt Strike offers a bunch of great features. Check them out here. Armitage can also connect to an existing instance of Metasploit on another host.

NMap

Target use is network discovery and auditing. Provides host information for anything it can access from a network. Also now has a scripting engine that can execute arbitrary custom tasks.

I’m guessing we’ve probably all used NMap? ZenMap, which Kali Linux also provides out of the box, is a GUI for NMap. This was also included in BackTrack.

Intercepting Web Proxies

Burp Suite

I use burp quite regularly and have a few blog posts where I’ve detailed some of its use. In fact I’ve used it to reverse engineer the comms between VMware vSphere and ESXi to create a UPS solution that deals with not only the virtual hosts but also the clients.

WebScarab

I haven’t really found out what webscarab’s sweet spot is, if it has one. I’d love to know what it does better than burp, zap and w3af combined. There is also a next generation version which, according to the google code repository, hasn’t had any work done on it since March 2011, whereas the classic version is still receiving fixes. The documentation has always seemed fairly minimalistic also.

In terms of web proxy/interceptors I’ve also used fiddler, which relies on the .NET framework, and as mono is not installed out of the box on Kali, neither is fiddler.

OWASP Zed Attack Proxy (ZAP)

Which is an OWASP flagship project, so it’s free and open source. Cross platform. It was forked from the Paros Proxy project, which is no longer supported. Includes automated, passive, brute force and port scanners. Traditional and AJAX spiders. Can even find unlinked files. Provides fuzzing and port scanning. Can be run without the UI in headless mode and can be accessed via a REST API. Supports anti-CSRF tokens. The Script Console, which is one of the add-ons, supports any language that JSR (Java Specification Requests) 223 supports: that’s languages such as JavaScript, Groovy, Python, Ruby and many more. There is plenty of info on the add-ons here. OWASP also provide directions on how to write your own extensions and they provide some sample templates. Following is the list of current extensions, which can also be managed from within Zap: “Manage Add-ons” menu → Marketplace tab. Select and click “Install Selected”.

OWASP Zap

The idea is to first set Zap up as a proxy for your browser and fetch some web pages to build history. Zap will create a history of URLs. You then right click the item of interest, click Attack → [one of the spider options], then click the play button and watch the progress bar, which will crawl all the pages you have access to according to your permissions. Then under the Analyse menu → Scan Policy… set up your scan policy so you’re only scanning what you want to scan. Then hit Scan to assess your target application. Out of the box, you’ve got many scan options. Zap does a lot for you. I’m really loving this tool OWASP!

As usual with OWASP, zap has a wealth of documentation. If zap doesn’t provide enough out of the box, extend it. OWASP also provide an API for zap.

You can find the user group here (also accessible from the ZAP ‘Online’ menu), which is good for getting help if the help file (which can also be found via ZAP itself) fails to yield. There is also a getting started guide, which is a work in progress, and the ZAP Blog.

FoxyProxy

Although it’s nothing to do with Kali Linux and could possibly be in the IceWeasel add-ons section below, I’ve added it here instead as it really reduces friction with web proxy interception. FoxyProxy is a very handy add-on for both Firefox and Chromium, although it seems to have more options for Firefox, or at least they are more easily accessible. It allows you to set up a list of proxies and then switch between them as you need. When I run Chromium as a non-root user I can’t change the proxy settings once the browser is running. I have to run the following command in order to set the proxy to my intermediary before run time, like this:

chromium-browser --temp-profile --proxy-server=localhost:3001

Firefox is a little easier, but neither browser allows you to build up lists of proxies and then switch them in mid flight. FoxyProxy provides a menu button, so with two clicks you can disable the add-on completely to revert to your previous settings, or select any of your predefined proxies. This is a real time saver.

Vulnerability Scanners

Open Vulnerability Assessment System (OpenVAS)

Forked from the last free version (closed in 2005) of Nessus. OpenVAS plugins are written in the same language that Nessus uses. OpenVAS looks for known misconfigurations and vulnerabilities common in out of date software. In fact it covers the following OWASP Top 10 items:

  • No.5 Security Misconfiguration
  • No.7 Missing Function Level Access Control (formerly known as “failure to restrict URL access”)
  • No.9 Using Components with Known Vulnerabilities.

OpenVAS also has some SQLi and other probes to test application input, but its primary purpose is to scan networks of machines with out of date software and bad configurations.

Tests continue to be added. It’s currently at 32413 Network Vulnerability Tests (NVTs); details here.

OpenVAS

The Greenbone Security Desktop (gsd) package is a GUI that uses the Greenbone Security Manager, OpenVAS Manager or any other service that offers the OpenVAS Management Protocol (OMP). It’s currently at version 1.2.2 and licensed under the GPLv2. The Greenbone Security Assistant (gsad) is currently at version 4.0.0. The German government also sponsors OpenVAS.

From the menu: Kali Linux → Vulnerability Analysis → OpenVAS, we have a couple of short-cuts visible. openvas-gsd is actually just the gsd package, and openvas-setup is the set-up script.

Before you run openvas-gsd, you can either:

  1. Run openvas-setup, which will do all the set-up (I think this is already done on Kali). At the end of this, you will be prompted to add a password for a user in the Admin role. The password you add here is for a new user called “admin” (of course it doesn’t say that, so it can be a little confusing as to what the password is for).
  2. Or you can just run the following command, which is much quicker because you don’t run the set-up procedure:
openvasad -c 'add_user' -n [a new administrative username of your choosing] -r Admin

You’ll be prompted to add a new password. Make sure you remember it.

Check out the man page for further options. For example the -c switch is a shortened --command, and it lists a selection of commands you can use.

I think -n is for --name, although it’s not listed in the man page. The -r switch is --role: either User or Admin.

The user you’ve just added is used to connect the gsd to the:

  1. openvasmd (OpenVAS Manager daemon) which listens on port 9390
  2. openvassd (OpenVAS Scanner daemon) which listens on port 9391
  3. gsad (Greenbone Security Assistant daemon) which listens on port 9392. This is a web app, which also listens on port 443
  4. openvasad (OpenVAS Administrator daemon) which listens on 9393

The core functionality is provided by the scanner and the manager. The manager handles and organises scan results. The gsad or assistant connects to the manager and administrator to provide a fully featured user interface. There is also a CLI (omp) but I haven’t been able to get this going on Kali Linux yet. You’ll also find that the previous link has links to all the man pages for OpenVAS. You can read more about the architecture and how the different components fit together.

I’ve also found that sometimes the daemons don’t automatically start when gsd starts. So you have to start them manually.

openvasmd && openvassd && gsad && openvasad

You can also use the web app https://127.0.0.1/omp

Then try logging in to the openvasmd. When you’re finished with gsd you can kill the running daemons if you like. I like to keep an eye on the listening ports when I’m done, to keep things as quiet as possible.

Check the ports.

ss -anp

Optional to see the processes running, but not necessary.

ps -e
kill -9 <PID of openvasad> <PID of gsad> <PID of openvassd> <PID of openvasmd>

There are also plenty of options when it comes to the report. It can be output in HTML, PDF, XML, emailed and quite a few other formats. The reports are colour coded and you can choose what gets put in them. The vulnerabilities are classified by risk: High, Medium, Low. OpenVAS can take quite a while to scan as it runs so many tests.

This is how to get started with gsd.

Web Vulnerability Scanners

This is the generally accepted criteria of a tool to be considered a Web Application Security Scanner.

SkipFish

A high performance active reconnaissance tool written in C. From the documentation: “Multiplexing single-thread, fully asynchronous network I/O and data processing model that eliminates memory management, scheduling, and IPC inefficiencies present in some multi-threaded clients.”. OK, so it’s fast.

It prepares an interactive sitemap by carrying out a recursive crawl and probes based on existing dictionaries or ones you build up yourself. Further details in the documentation linked below.

Doesn’t conform to most of the criteria outlined in the above Web Application Security Scanner criteria.

SkipFish v2.05 is the current version packaged with Kali Linux.

SkipFish v2.10b (released Dec 2012)

Free and you can view the source code. Apache license 2.0

Performs a similar role to w3af.

Project details can be found here.

You can find the tests here.

How do you use it though? This is a good place to start. Instead of reading through the non-existent doc/dictionaries.txt, I think you can do as well by reading through /usr/share/skipfish/dictionaries/README-FIRST.

The other two documentation sources are the man page and running skipfish with the -h option.
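
As a minimal usage sketch (the -o and -W switches are from the skipfish man page; the dictionary path matches the Kali layout mentioned above, and the target URL is a placeholder):

# Copy a dictionary somewhere writable, since skipfish extends it as it learns.
cp /usr/share/skipfish/dictionaries/minimal.wl ~/minimal.wl
# -o: directory for the report, -W: wordlist to use.
skipfish -o ~/skipfish-out -W ~/minimal.wl http://<target>/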

Web Application Attack and Audit Framework (w3af)

Andres Riancho has created a masterpiece. The main behaviour of this application is to assess and identify vulnerabilities in a web application by sending customised HTTP requests. Results can be output in quite a few formats, including email. It can also proxy, but burp suite is more focused on this role and does it well.

Can be run with a GUI: w3af_gui, or from the terminal: w3af_console. Written in Python and runs on Linux, BSD or Mac. Older versions used to work on Windows, but it’s not currently being tested on Windows. Open source on GitHub and released under the GPLv2 license.

You can write your own plug-ins, but check first to make sure one doesn’t already exist. The plugins are listed within the application and on the w3af.org web site, along with links to their source code, unit tests and descriptions. If it doesn’t appear that the plug-in you want exists, contact Andres Riancho to make sure, write it and submit a pull request. It also looks like Andres Riancho is driving the development TDD style, which means he’s obviously serious about creating quality software. Well done Andres!

w3af provides the ability to inject your payloads into almost every part of the HTTP request by way of its fuzzing engine, including: query string, POST data, headers, cookie values, content of form files, URL file-names and paths.

There’s a good set of documentation found here and you can watch the training videos. I’m really looking forward to using this in anger.
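
As a rough sketch of driving a scan from the console (the commands follow the w3af console syntax; the target URL is a placeholder and the chosen audit plugins are just an example):

w3af_console
w3af>>> plugins
w3af/plugins>>> audit sqli, xss
w3af/plugins>>> back
w3af>>> target
w3af/config:target>>> set target http://<target>/
w3af/config:target>>> back
w3af>>> start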

w3af

Nikto

Is a web server scanner that’s not overly stealthy. It’s built on Rain Forest Puppy’s LibWhisker2, which has a BSD license.

Nikto is free and open source with a GPLv3 license. It can be run on any platform that runs a perl interpreter. Its source can be found here. The first release of Nikto was in December of 2001 and it’s still under active development. Pull requests encouraged.

Supports SSL. Supports HTTP proxies, so you can see what Nikto is actually sending. Host authentication. Attack encoding. Updates local databases and plugins via the -update argument. Checks for server configuration items like multiple index files and HTTP server options. Attempts to identify installed web servers and software.

Looks like the LibWhisker web site no longer exists. Last release of LibWhisker was at the beginning of 2010.

Nikto v2.1.4 (released Feb 20 2011) is the current version packaged with Kali Linux. It tests for multiple items, including > 6400 potentially dangerous files/CGIs, outdated versions of > 1200 servers, and insecurities of specific versions of > 270 servers.

Nikto v2.1.5 (released Sep 16 2012) is the latest version. It tests for multiple items, including > 6500 potentially dangerous files/CGIs, outdated versions of > 1250 servers, and insecurities of specific versions of > 270 servers.

I just spoke with the Kali developers about the old version. They are now building a package of 2.1.5 as I write this, so it should be an apt-get update && apt-get upgrade away by the time you read this, all going well. Actually, I can see it in the repo now. Man, those guys are responsive!

Most of the info you will need can be found here.
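
A minimal usage sketch (switches as per the Nikto documentation; the host, port and proxy values are placeholders):

# Scan a host on port 80, optionally through a local intercepting proxy, and write an HTML report.
nikto -h <target host> -p 80 -useproxy http://localhost:8080/ -output nikto-report.html -Format htm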

SQLNinja

sqlninja: targets Microsoft SQL Server. It uses SQL injection vulnerabilities in a web app, and focuses on popping remote shells on the target database server and using them to gain a foothold over the target network. You can set up graphical access via a VNC server injection. It can upload executables by using HTTP requests via vbscript or debug.exe. Supports direct and reverse bindshell, plus quite a few other methods of obtaining access. Documentation here.

Text Editors

  1. Vim. Shouldn’t need much explanation.
  2. Leafpad. This is a very basic graphical text editor. A bit like Windows Notepad.
  3. Gvim. This is the graphical version of Vim. I’ve mostly used Sublime Text 2 & 3 and gedit on Linux, but Gvim is really quite powerful too.

Note Keeping

  1. KeepNote. Supported on Linux, Windows and MacOS X. Easy to transport notes by zipping or copying a folder. Notes stored in HTML and XML.
  2. Zim Desktop Wiki.

Other Notable Features

  • Offensive Security’s Kali Linux is free and always will be. It’s also completely open (as it’s based on Debian) to modification of its OS or programmes.
  • FHS compliant. That means the file system complies with the Linux Filesystem Hierarchy Standard.
  • Wireless device support is vast. Including USB devices.
  • Forensics Mode. As with BackTrack 5, the Kali ISO also has an option to boot into the forensic mode. No drives are written to (including swap). No drives will be auto mounted upon insertion.

Customising installed Kali

Wireless Card

I had a little trouble with my laptop’s wireless card not being activated. It turned out to just be me not realising that an external wi-fi switch had to be turned on. I had wireless enabled in the BIOS. The following were the steps I took to resolve it:

I read the Kali Linux documentation on Troubleshooting Wireless Drivers and found the card listed with lspci. I opened /var/log/dmesg with vi and searched for the name of the card:

#From command mode to make search case insensitive:
:set ic
#From command mode to search
/[name of my wireless card]

There were no errors, so I ran iwconfig (similar to ifconfig, but dedicated to wireless interfaces). I noticed that the card was definitely present and the Tx-Power was off. I then thought I’d give rfkill a spin, and its output made me realise I must have missed a hardware switch somewhere.

rfkill

Found the hard switch and turned it on and we now have wireless.

Adding Shortcuts to your Panel

[Alt]+[right click]->[Add to Panel…]

Or if your Kali install is on VirtualBox:

[Windows]+[Alt]+[right click]->[Add to Panel…]

Caching Debian Packages

If you want to:

  1. save on bandwidth
  2. have a large number of your packages delivered at your network speed rather than your internet speed
  3. have several debian based machines on your network

I’d recommend using apt-cacher-ng. If not already done, you’ll have to set this up on a server and add the following file to each of your Debian based machines:

/etc/apt/apt.conf with the following contents, and set its permissions to be the same as your sources.list:

Acquire::http::Proxy "http://[ip address of your apt-cacher server]:3142";

IceWeasel add-ons

  • Firebug
  • NoScript
  • Web Developer
  • FoxyProxy (more details mentioned above)
  • HackBar. Somewhat useful for (en/de)coding (Base64, Hex, MD5, SHA-(1/256), etc), manipulating and splitting URLs

SQL Inject Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. SQL Inject Me is a component of the Exploit-Me suite. It allows you to test all (or any number of) input fields on all (or any) of a page’s forms. You just fill in the fields with valid data, then test with all of the tool’s attacks, or just with the top attacks you’ve defined in the options menu. It then looks for database errors which are rendered into the returned HTML as a result of sending escape strings, so it doesn’t cater for blind injection. You can also add or remove the escape strings, and the resulting error strings that SQL Inject Me should look for in the response. The order in which each escape string is tried can also be changed. All you need to know can be found here.

XSS Me

Nothing to do with Kali Linux, but still a good place to start for running a quick vulnerability assessment. Open source software (GPLv3) from Security Compass Labs. XSS Me is also a component of the Exploit-Me suite. This tool’s behaviour is very similar to SQL Inject Me (it follows the POLA), which makes using the tools very easy. Both these add-ons have next to no learning curve. The level of entry is very low, and I think they are exactly what web developers who make excuses for not testing their own security need. The other thing is that it helps developers understand how these attacks can be carried out. XSS Me currently only tests for reflected XSS; it doesn’t attempt to compromise the security of the target system. Both XSS Me and SQL Inject Me are reconnaissance tools, where the information is the vulnerabilities found. XSS Me doesn’t support stored XSS or user supplied data from sources such as cookies, links, or HTTP headers. How effective XSS Me is in finding vulnerabilities is also determined by the list of attack strings the tool has available. Out of the box, the list of XSS attack strings is derived from RSnake’s collection, which was donated to OWASP, who now maintain it as one of their cheatsheets. Multiple encodings are not yet supported, but are planned for the future. You can help to keep the collection up to date by submitting new attack strings.

Chromium

Because it’s got great developer tools that I’m used to using. In order to run this under the root account, you’ll need to add the following parameter to /etc/chromium/default between the quotes for CHROMIUM_FLAGS=””

--user-data-dir

I like to install the following extensions: Cookies, ScriptSafe

Terminator

Because I like a more powerful console than the default. Terminator adds split screen on top of multi tabs. If you live at the command line, you owe it to yourself to get the best console you can find. So far terminator still fits this bill for me.

KeePass

The password database app. Because I like passwords to be long, complex, unique for everything and as secure as possible.

Exploits

I was going to go over a few exploits we could carry out with the Kali Linux set-up, but I ran out of time and page space. In fact there are still many tools I wanted to review, but there just isn’t enough time or room in this article. Feel free to subscribe to my blog and you’ll get an update when I make posts. I’d like to extend on this by reviewing more of the tools offered in Kali Linux.

Input Sanitisation

This has been one of my pet topics for a while. Why? Because the lack of it is so often abused. In fact this is one of the primary techniques behind No.1 (Injection) and No.3 (XSS) of this year’s OWASP Top 10 List (unchanged from 2010). I’d encourage any serious web developers to look at my “Sanitising User Input From Browser” Part 1 and Part 2.

Part 1 deals with the client side (untrusted) code.

Part 2 deals with the server side (trusted) code.

I provide source code, sources and discuss the following topics:

  1. Minimising the attack surface
  2. Defining maximum field lengths (validation)
  3. Determining a white list of allowable characters (validation)
  4. Escaping untrusted data
  5. External libraries, cheat sheets, useful code and sites I used. I also discuss the less useful resources and why.
  6. The point of validating client side when the server side is going to do it again anyway
  7. Full set of server side tests to test the sanitisation is doing what is expected