Posts Tagged ‘VPS’

Keeping Your NodeJS Web App Running on Production Linux

June 27, 2015

All the following offerings that I’ve evaluated target different scenarios. I’ve listed the pros and cons for each of them and where I think they fit into a potential solution to monitor your web applications (I’m leaning toward NodeJS) and make sure they keep running. I’ve listed the goals I was looking to satisfy.

For me, I have to have a good knowledge of the landscape before I commit to a decision and stand behind it. I like to know I’ve made the best decision based on all the facts that are publicly available. Therefore, as always, it’s my responsibility to make sure I’ve done my research in order to make an informed and, ideally, the best decision possible. I’m pretty sure my evaluation was unbiased, as I hadn’t used any of the offerings other than forever before.

I looked at quite a few more than what I’ve detailed below, but the following candidates I felt were worth spending some time on.

Keep in mind that everyone’s requirements will be different, so rather than tell you which to use (I don’t know your situation), I’ve listed the attributes (positive, negative and neutral) that I think are worth considering when making this choice. After my evaluation I make some decisions and start the configuration.

Evaluation criteria

  1. Who is the creator? I favour teams rather than individuals, as individuals move on, and then where does that leave the product?
  2. Does it do what we need it to do? Goals address this.
  3. Do I foresee any integration problems with other required components?
  4. Cost in money. Is it free? I usually gravitate toward free software. It’s usually an easier sell to clients and management. Are there catches once you get further down the road? Usually open source projects are offered as-is.
  5. Cost in time. Is the set-up painful?
  6. How well does it appear to be supported? What do the users say?
  7. Documentation. Is there any / much? What is its quality?
  8. Community. Does it have an active one? Are the users getting their questions answered satisfactorily? Why are the unhappy users unhappy (do they have a valid reason)?
  9. Release schedule. How often are releases being made? When was the last release?
  10. Gut feeling / intuition. How does it feel? If you have experience in making these sorts of choices, lean on it. Believe it or not, this should probably be No. 1.

The following tools have been my choice based on the above criteria.

Goals

  1. Application should start automatically on system boot
  2. Application should be re-started if it dies or becomes un-responsive
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy (Nginx, node-http-proxy, Tinyproxy, Squid, Varnish, etc)
    2. Clustering and providing load balancing for your single threaded application
    3. Visibility of application statistics.
  4. Enough documentation to feel comfortable consuming the offering
  5. The offering should be production ready. This means: mature with a security conscious architecture.

Sysvinit, Upstart, systemd & Runit

You’ll have one of these running on your Linux box.

These are system and service managers for Linux. Upstart and the later systemd were developed as replacements for the traditional init daemon (Sysvinit). In Debian, init is an essential package that simply pulls in the default init system. Starting with Jessie, systemd is your default system and service manager.

There’s some quite helpful info on the differences between Sysvinit and systemd here.

systemd

As I have systemd installed out of the box on my test machine (Debian Jessie), I’ll be using this for my set-up.

Documentation

  1. Well written comparison with Upstart, systemd, Runit and even Supervisor.

Running the likes of the below commands will provide some good details on how these packages interact with each other:

aptitude show sysvinit
aptitude show systemd
# and any others you think of

These system and service managers all run as PID 1 and start the rest of the system. Your Linux system will more than likely be using one of these to start tasks and services during boot, stop them during shutdown and supervise them while the system is running. Ideally you’re going to want to use something higher level to look after your NodeJS app. See the following candidates…

forever

and its web UI. forever can run any kind of script continuously (whether it is written in node.js or not). This wasn’t always the case though; it was originally targeted toward keeping NodeJS applications running.

Requires NPM to install it globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface. Unless it’s essential, I’d rather do without NPM on a production server where we’re actively working to reduce the installed package count and disable everything else we can. I could install forever on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not so simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.
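If you do decide to go down that path, the day-to-day usage is only a handful of commands. A minimal sketch, where app.js is a placeholder for your application’s entry point:

sudo npm install -g forever  # the global NPM install discussed above
forever start app.js         # start app.js and restart it whenever it exits
forever list                 # show everything forever is currently watching
forever stop app.js          # stop watching and terminate the process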

NPM Details

Does it Meet our Goals?

  1. Not without an extra script (crontab or similar).
  2. Application will be re-started if it dies, but if its response times go up, there’s not much forever is going to do about it. It has no way of knowing.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing
    3. Visibility of application statistics could be added later with the likes of Monit or something else, but if you used Monit, there wouldn’t really be a need for forever, as Monit does the little that forever does and is capable of so much more, yet isn’t pushy about what to do and how to do it. All the behaviour is defined with quite a nice syntax in a config file, or as many config files as you like.
  4. I think there is enough documentation to feel comfortable consuming it, as forever doesn’t do a lot, which doesn’t have to be a bad thing.
  5. The code itself is probably production ready, but I’ve heard quite a bit about stability issues. You’re also expected to have NPM installed (more attack surface) when we already have native package managers on the server(s).

Overall Thoughts

For me, I’m looking for a tool set that does a bit more. Forever doesn’t satisfy my requirements. There’s often a balancing act between not doing enough and doing too much.

PM2


Younger than forever, but it seems to have quite a few more features and does actually look quite good. I’m not sure it’s production ready though.

As mentioned on the github page: “PM2 is a production process manager for Node.js applications with a built-in load balancer”. This sounds, and at initial glance looks, shiny. Very quickly you should realise there are a few security issues you need to be aware of though.

The word “production” is used, but it requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Installing NPM just adds more attack surface. Unless it’s essential (and it shouldn’t be), I’d rather do without it on a production system. I could install PM2 on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not so simple, which then makes offerings like Supervisor, Monit and Passenger look even more attractive.

At the time of writing this, it’s less than a year old and in nodejs land, that means it’s very much in the immature realm. Do you really want to use something that young on a production server? I’d personally advise against it.

Yes, it’s very popular currently. That doesn’t tell me it’s ready for production though. It tells me the marketing is working.

“Is your production server ready for PM2?” That phrase alone tells me the mind-set behind the project. I’d much sooner see it the other way around: is PM2 ready for my production server? You’re going to need a staging server for this, unless you honestly want development tools installed on your production server (git, build-essential, NVM and an unstable version of node, 0.11.14 at the time of writing) and to run test scripts on your production server? Not for me or my clients thanks.

If you’ve considered the above concerns and can justify adding the additional attack surface area, check out the features if you haven’t already.

Features that Stood Out

They’re also listed on the github repository. Just beware of some of the caveats. Like for the load balancing: “we recommend the use of node#0.11.15+ or io.js#1.0.2+. We do not support node#0.10.* cluster module anymore!” 0.11.15 is unstable, but hang on, I thought PM2 was a “production” process manager? OK, so we’re happy to mix unstable in with something we label as production?

On top of NodeJS, PM2 will run the following scripts: bash, python, ruby, coffee, php, perl.

Start-up Script Generation

I’ve heard a few stories that this is fairly unreliable at the time of writing, which doesn’t surprise me, as the project is very young.
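For completeness, the usual workflow looks something like the following. This is a sketch only; app.js and my-app are placeholders, and the exact flags and supported start-up platforms vary between PM2 versions:

sudo npm install -g pm2                # again, NPM on the box
pm2 start app.js --name my-app -i 0    # -i 0 spawns one worker per CPU core (cluster mode)
pm2 startup                            # generate a start-up script (takes a platform argument, e.g. systemd or ubuntu)
pm2 save                               # persist the current process list so it can be resurrected on boot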

Documentation

  1. Advanced Readme

Does it Meet our Goals?

  1. The feature exists; I’m just unsure how reliable it currently is.
  2. Restarting the application if it dies shouldn’t be a problem. PM2 can also restart your application if it reaches a certain memory threshold. I haven’t seen anything around restarting based on response times or other application health issues.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Clustering and load-balancing is integrated but immature.
    3. PM2 provides a small collection of viewable statistics. Personally I’d want more, but I don’t see any reason why you’d have to swap PM2 because of this.
  4. There is reasonable official documentation for the age of the project. The community supplied documentation will need to catch up a bit, although there is a bit of that too. After working through all of the offerings and edge-cases, I feel as I usually do with NodeJS projects. The documentation doesn’t cover all the edge-cases and the development itself misses edge cases. Hopefully with time it’ll get better though as the project does look promising.
  5. I haven’t seen much that would make me think PM2 is production ready. It’s not yet mature. I don’t agree with its architecture.

Overall Thoughts

For me, the architecture doesn’t seem to be heading in the right direction to be used on a production web server where less is better. I’d like to see this change. If it did, I think it could be a serious contender for this space.

 


The following are better suited to monitoring and managing your applications. Other than Passenger, they should all be in your repository, which means trivial installs and configurations.

Supervisor


Supervisor is a process manager with a lot of features and a higher level of abstraction than the likes of the above Sysvinit, Upstart, systemd, Runit, etc, so it still needs to be run by an init daemon itself.

From the docs: “It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.” Supervisor monitors the state of processes, whereas a tool like Monit can perform many more types of tests and take whatever actions you define.

It’s in the Debian repositories  (trivial install on Debian and derivatives).

Documentation

  1. Main web site
  2. There’s a good short comparison here.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot: Yip. That’s what Supervisor does well.
  2. Application will be re-started if it dies, or becomes un-responsive. It’s often difficult to get accurate up/down status on processes on UNIX. Pidfiles often lie. Supervisord starts processes as subprocesses, so it always knows the true up/down status of its children. Your application may also become unresponsive, or be unable to connect to its database or any other service/resource it needs to work as expected. To be able to monitor these events and respond accordingly, your application can expose a health-check interface, like GET /healthcheck. If everything goes well it should return HTTP 200, if not then HTTP 5**. In some cases restarting the process will solve the issue. httpok is a Supervisor event listener which makes GET requests to the configured URL. If the check fails or times out, httpok will restart the process. To enable httpok, a few lines have to be placed in supervisord.conf (see the sketch just after this list).
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: I don’t see a problem
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. This would be completely separate to supervisor.
    3. Visibility of application statistics could be added later with the likes of Monit or something else. For me, Supervisor doesn’t do enough; Monit does. Plus, if you need what Monit offers, you end up with three packages to think about, because Supervisor is not an init system, so it kind of sits in the middle of the ultimate stack. My way of thinking is: use the init system you already have to do the low level lifting, then something small to take care of everything else on your server that the init system is not really designed for, and Monit does that job really well. Just keep in mind, this is not based on any bias; I hadn’t used Monit before this exercise.
  4. Supervisor is a mature product. It’s been around since 2004 and is still actively developed. The official and community provided docs are good.
  5. Yes it’s production ready. It’s proven itself.
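As a sketch of what goals 1 and 2 could look like in supervisord.conf: the program name, paths and health-check URL below are placeholders, and httpok comes from the separate superlance package (a pip install), so that is yet another thing on the box:

[program:my-nodejs-app]
command=/usr/bin/nodejs /srv/my-nodejs-app/app.js
directory=/srv/my-nodejs-app
user=my-nodejs-app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/my-nodejs-app.log

[eventlistener:httpok]
command=httpok -p my-nodejs-app http://localhost:3000/healthcheck
events=TICK_60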

 

Overall Thoughts

The documentation is quite good, easy to read and understand. I felt that the config was quite intuitive also. I already had systemd installed out of the box and didn’t see much point in installing Supervisor, as systemd appeared to do everything Supervisor could do, plus systemd is an init system (it sits at the bottom of the stack). In most scenarios you’re going to have Sysvinit or a replacement of it (running with a PID of 1), so in many cases Supervisor, although quite nice, is kind of redundant; and of course Ubuntu has Upstart.

Supervisor is better suited to running multiple scripts with the same runtime, for example a bunch of different client applications running on Node. This can be done with systemd and the others, but Supervisor is a better fit for this sort of thing.

 

Monit


Is a utility for monitoring and managing daemons or similar programs. It’s mature, actively maintained, free, open source and licensed with GNU AGPL.

It’s in the Debian repositories (trivial install on Debian and derivatives). The home page told me the binary was just under 500kB. The install, however, produced a different number:

After this operation, 765 kB of additional disk space will be used.

Monit provides an impressive feature set for such a small package.

Monit provides far more visibility into the state of your application, and more control, than any of the offerings mentioned above. It’s also generic: it’ll manage and/or monitor anything you throw at it. It has the right level of abstraction. Often when you start working with a product you find its limitations; they stop you moving forward and you end up settling for imperfection, or you swap the offering for something else, providing you haven’t already invested too much effort into it. For me Monit hit the sweet spot and never seems to stop you in your tracks. There always seems to be an easy, or relatively easy, way to get any monitor-then-take-action sort of task done. What I also really like is that moving away from Monit should be relatively painless too. The time investment is small and some of it will be transferable in many cases. It’s just config in the control file.

Features that Stood Out

  • Ability to monitor files, directories, disks, processes, the system and other hosts.
  • Can perform emergency logrotates if a log file suddenly grows too large too fast
  • File checksum testing. This is good so long as the compromised server hasn’t also had the tool you’re using to perform your verification (md5sum or sha1sum) modified, which would be common. That’s why in cases like this, tools such as stealth can be a good choice.
  • Testing of other attributes like ownership and access permissions. These are good, but again can easily be modified.
  • Monitoring directories using time-stamps. Good idea, but don’t rely solely on this; time-stamps are easily modified with touch -r …, providing it’s done between Monit’s cycles, and an attacker doesn’t necessarily know when those are unless they have permission to look at Monit’s control file.
  • Monitoring space of file-systems
  • Has a built-in lightweight HTTP(S) interface you can use to browse the Monit server and check the status of all monitored services. From the web-interface you can start, stop and restart processes and disable or enable monitoring of services. Monit provides fine grained control over who/what can access the web interface or whether it’s even active or not. Again an excellent feature that you can choose to use or not even have the extra attack surface.
  • There’s also an aggregator (m/monit) that allows sys-admins to monitor and manage many hosts at a time. It also works well on mobile devices and is available at a one-off cost (reasonable price) to monitor all hosts.
  • Once you install Monit you have to actively enable the http daemon in the monitrc in order to run the Monit cli and/or access the Monit http web UI. At first I thought “is this broken?” I couldn’t even run monit status (it’s a Monit command). ps told me Monit was running. Then I realised… it’s secure by default. You have to actually think about it in order to expose anything. It was this that confirmed Monit for me.
  • The Control File
  • Just like SSH, to protect the security of your control file and passwords the control file must have read-write permissions no more than 0700 (u=xrw,g=,o=); Monit will complain and exit otherwise.
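On Debian the packaged control file should already ship locked down, but if you’ve copied it around, something like the following (using the Debian path from later in this post) puts it right:

sudo chown root:root /etc/monit/monitrc
sudo chmod 600 /etc/monit/monitrc  # anything more permissive than 0700 and Monit will refuse to run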

Documentation

The following is the documentation I used, in the order I found most helpful.

  1. Main web site
  2. Official Documentation
  3. Source and links to other documentation including a QUICK START guide of about 6 lines.
  4. Adding Monit to systemd
  5. Release notes

Does it Meet our Goals?

  1. Application can start automatically on system boot
  2. Monit has a plethora of different types of tests it can perform and then follow up with actions based on the outcomes. Http is but one of them.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Yes, I don’t see any issues here
    2. Integrate NodeJS’s core module cluster into your NodeJS application for load balancing. Monit will still monitor, restart and do what ever else you tell it to do.
    3. Monit provides application statistics to look at if that’s what you want, but it also goes further and provides directives for you to declare behaviour based on conditions that Monit checks for.
  4. Plenty of official and community supplied documentation
  5. Yes it’s production ready. It’s proven itself. Some extra education around some of the points I raised above with the security features would be good. If you could trust the host’s hashing program (and other commonly trojanised programs like find, ls, etc) that Monit uses, perhaps because you were monitoring it from a stealth controller (which had already taken a known good copy and produced its own bench-mark hash) or similar, then yes, you could use that feature of Monit with greater assurance that the results it was producing were in fact accurate. In saying that, you don’t have to use the feature, but it’s there if you want it, which I see as very positive so long as you understand what could go wrong and where.

 

Overall Thoughts

The accepted answer here is a pretty good mix and approach to using the right tools for each job. Monit has a lot of capabilities, none of which you must use, so it doesn’t get in your way, as many opinionated tools do when they dictate how you do things and what you must use in order to do them. Monit allows you to leverage whatever you already have in your stack. You don’t have to install package managers or increase your attack surface beyond [apt-get|aptitude] install monit. It’s easy to configure and has lots of good documentation.

Passenger


I’ve looked at Passenger before and it looked quite good then. It still does, with one main caveat: it’s trying to do too much. One can easily get lost in the official documentation (compare the Monit install, a handful of commands covering all Linux distros on one page, with the Passenger install at approximately 10 pages). “Passenger is a web server and application server, designed to be fast, robust and lightweight. It runs your web apps with the least amount of hassle by taking care of almost all administrative heavy lifting for you.” I’d like to see the actual weight rather than just the relative term “lightweight”; to me it doesn’t look lightweight. The feeling I got when evaluating Passenger was similar to the feeling produced by my Ossec evaluation.

The learning curve is quite a bit steeper than all the previous offerings. Passenger has strong opinions that once you buy into could make it hard to use the tools you may want to swap in and out. I’m not seeing the UNIX Philosophy here.

If you look at the Phusion Passenger Philosophy we see some note-worthy comments. “We believe no good software has bad documentation”. If your software is 100% intuitive, the need for documentation should be minimal, and few software products are 100% intuitive, because we only have so much time to develop them. The comment around “the Unix way” is interesting also. At this stage I’m not sure they’ve done better. I’d like to spend some time with someone or some team that has Passenger in production in a diverse environment and see how things are working out.

Passenger isn’t in the Debian repositories, so you would need to add the apt repository.
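For reference, adding the repository looks roughly like this; treat the signing key ID and distribution as placeholders and check them against the official install guide:

sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <phusion-signing-key-id>
echo 'deb https://oss-binaries.phusionpassenger.com/apt/passenger jessie main' | sudo tee /etc/apt/sources.list.d/passenger.list
sudo apt-get update
sudo apt-get install nginx-extras passenger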

Passenger is six years old at the time of writing this, but the NodeJS support is only just over a year old.

Features that Didn’t Really Stand Out

Sadly there weren’t many that stood out for me.

  • Handle more traffic looked similar to Monit resource testing but without the detail. If there’s something Monit can’t do well, it’ll say “Hey, use this other tool and I’ll help you configure it to suit the way you want to work. If you don’t like it, swap it out for something else.” With Passenger it seems to integrate into everything rather than providing tools to communicate loosely, essentially locking you into a way of doing something that hopefully you like. It also talks about “Uses all available CPU cores”. If you’re using Monit you can use the NodeJS cluster module to take care of that, again leaving the best tool for the job to do what it does best.
  • Reduce maintenance
    • Keep your app running, even when it crashes. “Phusion Passenger supervises your application processes, restarting them when necessary. That way, your application will keep running, ensuring that your website stays up. Because this is automatic and builtin, you do not have to setup separate supervision systems like Monit, saving you time and effort.” But this is what Monit excels at, and it’s a much easier set-up than Passenger. This sort of marketing doesn’t sit right with me.
    • Host multiple apps at once. “Host multiple apps on a single server with minimal effort.” If we’re talking NodeJS web apps, then they are their own server; they host themselves. In this case it looks like Passenger is trying to solve a problem that doesn’t exist.
  • Improve security
    • Privilege separation. “If you host multiple apps on the same system, then you can easily run each app as a different Unix user, thereby separating privileges.” The Monit documentation says this: “If Monit is run as the super user, you can optionally run the program as a different user and/or group.” and goes on to provide examples of how it’s done. So again, I don’t see anything new here. Other than the “Slow client protections”, which has side effects, that’s it for security considerations with Passenger. From what I’ve seen, Monit has more in the way of security related features.
  • What I saw happening here was a lot of stuff that I actually didn’t need. Your mileage may vary.

Offerings

Phusion Passenger is a commercial product that has enterprise, custom and open source (which is free and still has loads of features) offerings.

Documentation

The following is the documentation I used, in the order I found most helpful.

  1. NodeJS tutorial (This got me started with how it could work with NodeJS)
  2. Main web site
  3. Documentation and support portal
  4. Design and Architecture
  5. User Guide Index
  6. Nginx specific User Guide
  7. Standalone User Guide
  8. Twitter, blog
  9. IRC: #passenger at irc.freenode.net. I was on there for several days. There was very little activity.

Source

Does it Meet our Goals?

  1. Application should start automatically on system boot. There is no doubt that Passenger goes way beyond this aim.
  2. Application should be re-started if it dies or becomes un-responsive. There is no doubt that Passenger goes way beyond this aim.
  3. Ability to add the following later without having to swap the chosen offering:
    1. Reverse proxy: Passenger provides Integrations into Nginx, Apache and stand-alone (provide your own proxy)
    2. Passenger scales up NodeJS processes and automatically load balances between them
    3. Passenger is advertised as offering easily viewable statistics.
  4. There is loads of official documentation. Not as much community contributed though, as it’s still young.
  5. From what I’ve seen so far, I’d say Passenger is production ready. I would like to see more around how security was baked into the architecture though before I committed to using it.

Overall Thoughts

I spent quite a while reading the documentation. I just think it’s doing too much. I prefer stronger, single-focused tools that do one job, do it well and play nicely with all the other kids in the sand pit. You pick the tool up and it’s just intuitive how to use it; you end up reading the docs only to confirm how you think it should work. For me, this is not how Passenger is.

If you’re looking for something even more comprehensive, check out Zabbix. If you like to pay for your tools, check out Nagios if you haven’t already.


At this point it was fairly clear as to which components I’d be using and configuring to keep my NodeJS application monitored, alive and healthy along with any other scripts or processes. systemd and Monit. If you’re on Ubuntu, you’d probably use Upstart instead of systemd as it should already be your default init system. So going with the default for the init system should give you a quick start and provide plenty of power. Plus it’s well supported, reliable, feature rich and you can manage anything/everything you want without installing extra packages. For the next level up, I’d choose Monit. I’ve now used it in production and it’s taken care of everything above the init system. I feel it has a good level of abstraction, plenty of features, doesn’t get in the way and integrates nicely into your production OS.

Getting Started with Monit

So we’ve installed Monit with an apt-get install monit and we’re ready to start configuring it.

ps aux | grep -i monit

Will reveal that Monit is running:

/usr/bin/monit -c /etc/monit/monitrc

Now if you issue a sudo service monit restart, it won’t work as you can’t access the Monit CLI due to the httpd not running.

The first thing we need to do is make some changes to the control file (/etc/monit/monitrc in Debian). The control file has sensible defaults already. At this stage I don’t need a web UI accessible via localhost or any other hosts, but it still needs to be turned on and accessible by at least localhost. Here’s why:

Note that HTTP support is required for almost all Monit CLI interface operation, as CLI commands (such as “monit status”) are handled by communicating with the Monit background process via the HTTP interface. So basically you should have this enabled, though you can bind the HTTP interface to localhost only so Monit is not accessible from the outside.

In order to turn on the httpd, all you need in your control file for that is:

set httpd port 2812 and use address localhost # only accept connection from localhost
allow localhost # allow localhost to connect to the server

If you want to receive alerts via email, then you’ll need to configure that. Then on reload you should get start and stop events (when you quit).
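A minimal sketch of the mail set-up in the control file might look something like this (the relay hostname and addresses are placeholders):

set mailserver smtp.your.domain  # your outbound mail relay
set alert your-name@your.domain  # who receives the alerts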

sudo monit reload

Now if you issue a curl localhost:2812 you should get the web UI’s response of an HTML page. Now you can start to play with the Monit CLI.

Now to stop the Monit background process use:

monit quit

Oh, you can find all the arguments you can throw at Monit here, or just issue a:

monit -h # will list all options.

To check the control file for syntax errors:

sudo monit -t

Also keep an eye on your log file which is specified in the control file: set logfile /var/log/monit.log

Right. So what happens when Monit dies…

Keep Monit Alive

Now you’re going to want to make sure your monitoring tool, which can be configured to take all sorts of actions, never just stops running, leaving you flying blind. No noise from your servers means all good, right? Not necessarily. Your monitoring tool just has to keep running. So let’s make sure of that now.

When Monit is apt-get install‘ed on Debian it gets installed and configured to run as a daemon. This is defined in Monit’s init script.
Monit’s init script is copied to /etc/init.d/ and the run levels set-up for it. This means whenever a run level is entered the init script will be run, taking either the single argument of stop (example: /etc/rc0.d/K01monit), or start (example: /etc/rc2.d/S17monit). Further details on run levels here.
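You can see those run-level symlinks for yourself; the exact sequence numbers may differ on your machine:

ls -l /etc/rc?.d/*monit*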

systemd to the rescue

Monit is pretty stable, but if for some reason it dies, then it won’t be automatically restarted again.
This is where systemd comes in. systemd is installed out of the box on Debian Jessie onwards. Ubuntu uses Upstart, which is similar. Both SysV init and systemd can act as drop-in replacements for each other, or even work alongside each other, which is the case in Debian Jessie. If you add a unit file which describes the properties of the process that you want to run, then issue some magic commands, the systemd unit file will take precedence over the init script (/etc/init.d/monit).

Before we get started, let’s get some terminology established. The two concepts in systemd we need to know about are unit and target.

  1. A unit is a configuration file that describes the properties of the process that you’d like to run. There are many examples of these and I’ll point you in their direction soon. They should have a [Unit] section at a minimum. The syntax of the unit files and the target files was derived from Microsoft Windows .ini files. Now I think the idea is that if you want to have a [Service] section within your unit file, then you append .service to the end of your unit file name.
  2. A target is a grouping mechanism that allows systemd to start up groups of processes at the same time. This happens at every boot as processes are started at different run levels.

Now in Debian there are two places that systemd looks for unit files… In order from lowest to highest precedence, they are as follows:

  1. /lib/systemd/system/ (prefix with /usr dir for archlinux) unit files provided by installed packages. Have a look in here for many existing examples of unit files.
  2. /etc/systemd/system/ unit files created by the system administrator

As mentioned above, systemd should be the first process started on your Linux server. systemd reads the different targets and runs the scripts within the specific target’s “target.wants” directory (which just contains a collection of symbolic links to the unit files). For example, the target we’ll be working with is multi-user.target (actually we don’t touch it; systemctl does that for us, as per the magic commands mentioned above). Just as systemd has two locations in which it looks for unit files, I think this is probably the same for the target files, although there weren’t any target files in the system administrator defined unit location, but there were some target.wants directories there.
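A quick look around makes this concrete:

ls /lib/systemd/system/                           # unit files provided by installed packages
ls /etc/systemd/system/                           # unit files created by the system administrator
ls /etc/systemd/system/multi-user.target.wants/   # symlinks that systemctl enable creates for us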

systemd Monit Unit file

I found a template that Monit had already provided for a unit file in /usr/share/doc/monit/examples/monit.service. There’s also one for Upstart. Copy that to where the system administrator unit files should go and make the change so that systemd restarts Monit if it dies for whatever reason. Check the Restart= options on the systemd.service man page. The following is what my initial unit file looked like:

[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/monit -I -c /etc/monit/monitrc
ExecStop=/usr/bin/monit -c /etc/monit/monitrc quit
ExecReload=/usr/bin/monit -c /etc/monit/monitrc reload
Restart=always

[Install]
WantedBy=multi-user.target

Now, some explanation. Most of this is pretty obvious. The After= directive just tells systemd to make sure the network.target unit has been acted on first, and of course network.target has After=network-pre.target, which doesn’t have a lot in it. I’m not going to go into this now, as I don’t really care too much about it. It works; it means the network interfaces have to be up first. If you want to know how and why, check this documentation. Type=simple: again, check the systemd.service man page.
Now, to have systemd control Monit, Monit must not run as a background process (the default). To do this, we can either add the set init statement to Monit’s control file or add the -I option when running Monit, which is exactly what we’ve done above. The WantedBy= is the target that this specific unit is part of.

Now we need to tell systemd to create the symlinks in multi-user.target.wants directory and other things. See the man page for more details about what enable actually does if you want them. You’ll also need to start the unit.

Now what I like to do here is:

systemctl status /etc/systemd/system/monit.service

Then compare this output once we enable the service:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; disabled)
   Active: inactive (dead)

Now enable the service:

sudo systemctl enable /etc/systemd/system/monit.service
# systemd now knows about monit.service
systemctl status /etc/systemd/system/monit.service

Outputs:

● monit.service - Pro-active monitoring utility for unix systems
   Loaded: loaded (/etc/systemd/system/monit.service; enabled)
   Active: inactive (dead)

Now start the service:

sudo systemctl start monit.service # there's a stop and restart also.

Now you can check the status of your Monit service again. This shows terse runtime information about the units or PID you specify (monit.service in our case).

sudo systemctl status monit.service

By default this function will show you 10 lines of output. The number of lines can be controlled with the --lines= option

sudo systemctl --lines=20 status monit.service

Now try killing the Monit process. At the same time, you can watch the output of Monit in another terminal. tmux or screen is helpful for this:

sudo tail -f /var/log/monit.log
sudo kill -SIGTERM $(pidof monit)
# SIGTERM is a safe kill and is the default, so you don't actually need to specify it. Be patient, this may take a minute or two for the Monit process to terminate.

Or you can emulate a nastier termination with SIGKILL or even SEGV (which may kill monit faster).

Now when you run another status command you should see the PID has changed. This is because systemd has restarted Monit.

When you need to make modifications to the unit file, you’ll need to run the following command after save:

sudo systemctl daemon-reload

When you need to make modifications to the running service’s configuration file,
/etc/monit/monitrc for example, you’ll need to run the following command after saving:

sudo systemctl reload monit.service
# because systemd is now in control of Monit, rather than the before mentioned: sudo monit reload

 

Keep NodeJS Application Alive

Right, we know systemd is always going to be running. So let’s use it to take care of the coarse grained service control. That is, keeping your NodeJS application service alive.

Using systemd

systemd my-nodejs-app.service Unit file

You’ll need to know where your NodeJS binary is. The following will provide the path:

which nodejs # on Debian the package provides the nodejs binary; use which node if you installed Node yourself (from source, NVM, etc)

Now create a systemd unit file my-nodejs-app.service

[Unit]
Description=My amazing NodeJS application
After=network.target

[Service]
# systemctl start my-nodejs-app # to start the NodeJS script
ExecStart=[where nodejs binary lives] [where your app.js/index.js lives]
# systemctl stop my-nodejs-app # to stop the NodeJS script
# SIGTERM (15) - Termination signal. This is the default and safest way to kill process.
# SIGKILL (9) - Kill signal. Use SIGKILL as a last resort to kill a process. This will not save data or cleanly kill the process.
ExecStop=/bin/kill -SIGTERM $MAINPID
# systemctl reload my-nodejs-app # to perform a zero-downtime restart.
# SIGHUP (1) - Hangup detected on controlling terminal or death of controlling process. Use SIGHUP to reload configuration files and open/close log files.
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=my-nodejs-app
User=my-nodejs-app
# Group= is not really needed unless it's different from the user, as the default group of the user is chosen without this option. Self documenting though, so I like to have it present.
Group=my-nodejs-app
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

Add the system user and group so systemd can actually run your service as the user you’ve specified.

sudo groupadd --system my-nodejs-app # this is not needed if you adduser like below...
getent group # to verify which groups exist.
sudo adduser --system --no-create-home --group my-nodejs-app # This will create a system group with the same name and ID of the user.
groups my-nodejs-app # to verify which groups the new user is in.

Now, as we did above, go through the same procedure of enable‘ing, start‘ing and verifying your new service:
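That is, the same commands we ran for monit.service, just pointed at the new unit:

sudo systemctl enable /etc/systemd/system/my-nodejs-app.service
sudo systemctl start my-nodejs-app.service
sudo systemctl status my-nodejs-app.service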

Make sure you have your directory permissions set-up correctly and you should have a running NodeJS application that when it dies will be restarted automatically by systemd.

Don’t forget to backup all your new files and changes in case something happens to your server.

We’re done with systemd for now. Following are some useful resources I’ve used:

 

Using Monit

Now just configure your Monit control file. You can spend a lot of time here tweaking a lot more than just your NodeJS application. There are loads of examples around and the control file itself has lots of commented out examples also. You’ll find the following the most helpful:

There are a few things that had me stuck for a bit. By default Monit only sends alerts on change, not on every cycle if the condition stays the same. If you want an alert on every cycle, take your set alert line:

set alert your-name@your.domain

Append receive all alerts, so that it looks like this:

set alert your-name@your.domain receive all alerts

There’s quite a few things you just work out as you go. The main part I used to health-check my NodeJS app was:

check host myhost with address 1.2.3.4
   start program = "/bin/systemctl start my-nodejs-app.service"
   stop program = "/bin/systemctl stop my-nodejs-app.service"
   if failed ping then alert
   if failed
      port 80 and
      protocol http and
      status = 200 # The default without status is failure if status code >= 400
      request /testdir with content = "some text on my web page" and
         then restart
   if 5 restarts within 5 cycles then alert

I carry on and check things like:

  1. cpu and memory usage
  2. load averages
  3. File system space on all the mount points (see the sketch just after this list)
  4. Check SSH, that it hasn’t been restarted by anything other than Monit (potentially swapping the binary or its config). Of course if an attacker kills Monit, systemd immediately restarts it and we get Monit alert(s). We also get real-time logging, hopefully to an off-site syslog server. Ideally your off-site syslog server also has alerts set up on particular log events. On top of that you should also have inactivity alerts set up, so that if your log files are not generating the events you expect, you also receive alerts. Services like Dead Man’s Snitch or packages like Simple Event Correlator with Cron are good for this. On top of all that, if you have a file integrity checker that resides on another system that your host reveals no details of, and you’ve got it configured to check all the right file check-sums, dates, permissions, etc, you’re removing a lot of low hanging fruit for someone wanting to compromise your system.
  5. Directory permissions, uid, gid and checksums. Of course you’re also going to have to make sure the tools that Monit uses to do these checks haven’t been modified.
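A sketch of what checks like 1 to 3 can look like in the control file; the host name, mount point and thresholds are placeholders to tune for your own boxes:

check system my-server.example.com
   if loadavg (5min) > 2 then alert
   if memory usage > 80% then alert
   if cpu usage (user) > 70% for 8 cycles then alert

check filesystem rootfs with path /
   if space usage > 80% then alert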

 

If you find anything I haven’t explained clearly, or you need a hand with any of this just leave a comment. Cheers.

Keeping Your Linux Server/s In Time With Your Router

March 28, 2015

Your NTP Server

With this set-up, we’ve got one-to-many Linux servers in a network that all want to be synced with the same up-stream Network Time Protocol (NTP) server/s that your router (or whatever server you choose to be your NTP authority) uses.

On your router, or whatever your NTP server host is, add the NTP server pools. Now, how you do this really depends on what you’re using for your NTP server, so I’ll leave this part out of scope. There are many NTP pools you can choose from. Pick one, or a collection, that’s as close to your NTP server as possible.

If your NTP daemon is running on your router, you’ll need to decide and select which router interfaces you want the NTP daemon supplying time to. You almost certainly won’t want it on the WAN interface (unless you’re a pool member) if you have one on your router.

Make sure you restart your NTP daemon.

Your Client Machines

If you have ntpdate installed, /etc/default/ntpdate says to look at /etc/ntp.conf which doesn’t exist without ntp being installed. It looks like this:

# Set to "yes" to take the server list from /etc/ntp.conf, from package ntp,
# so you only have to keep it in one place.
NTPDATE_USE_NTP_CONF=yes

but you’ll see that it also has a default NTPSERVERS variable set which is overridden if you add your time server to /etc/ntp.conf. If you enter the following and ntpdate is installed:

dpkg-query -W -f='${Status} ${Version}\n' ntpdate

You’ll get output like:

install ok installed 1:4.2.6.p5+dfsg-3ubuntu2

Otherwise install it:

apt-get install ntp

The public NTP server/s can be added straight to the bottom of the /etc/ntp.conf file, but because we want to use our own NTP server, we add the IP address of our server that’s configured with our NTP pools to the bottom of the file.

server <IP address of your local NTP server here>

Now if your NTP daemon is running on your router, hopefully you have everything blocked on its interface/s by default and are using a white-list for egress filtering.

In which case you’ll need to add a firewall rule to each interface of the router that you want NTP served up on.

NTP talks over UDP and listens on port 123 by default.
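For example, on an iptables-based router a rule along these lines does the job; the interface name and subnet are placeholders:

iptables -A INPUT -i br-lan -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT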

After any configuration changes to your ntpd make sure you restart it. On most routers this is done via the web UI.

On the client (Linux) machines:

sudo service ntp restart

Now issuing the date command on your Linux machine will provide the current time, yes with seconds.

Trouble-shooting

The main two commands I use are:

sudo ntpq -c lpeer

Which should produce output like:

            remote                       refid         st t when poll reach delay offset jitter
===============================================================================================
*<server name>.<domain name> <upstream ntp ip address> 2  u  54   64   77   0.189 16.714 11.589

and the standard NTP query program:

ntpq

Which will drop you at ntpq’s prompt, where you issue the as command:

ntpq> as

Which should produce output like:

ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 15720  963a   yes   yes  none  sys.peer    sys_peer  3

Now, in the first output, the * in front of the remote means the server is getting its time successfully from the upstream NTP server/s, which needs to be the case in our scenario. Often you may also get a refid of .INIT., which is one of the “Kiss-o’-Death Codes”, meaning “The association has not yet synchronized for the first time”. See the NTP parameters. I’ve found that sometimes you just need to be patient here.

In the second output, if you get a condition of reject, it’s usually because your local ntp can’t access the NTP server you set-up. Check your firewall rules etc.

Now check all the times are in sync with the date command.

Journey To Self Hosting

November 29, 2014

I was recently tasked with working out the best options for hosting web applications and their data for a client. This was their foray into deciding whether to throw all their stuff into the cloud or to build their own infrastructure to host everything on.

Hosting Options

There are a lot of options available now, most of which are derivatives of either external cloud or internal (possibly cloud) hosting. All of them come with features and some price tags that need to be weighed up. I’ve been collecting resources on providers and their offerings (both cloud and in-house) for quite a while, so I didn’t have to go far to pull them together for comparison.

All sites and apps require a different amount of each resource type to be allocated to them. For example, many web sites are still predominantly static, which requires more network bandwidth than any other resource, some memory, a little processing power and, provided they’re being cached on the server, not a lot else. These resources are very cheap.

If you’re running an e-commerce site, then you can potentially add more disk I/O (which is usually the first bottleneck), processing power and space for your data store. Add in redundancy, backups and the administration of them.
Fast disks (or let’s just call it storage) are cheap. In fact most hardware is cheap.

Administration of redundancy, backups and staying on top of security starts to cost more. Although the “staying on top of security” will need to be done whether you’re on someone else’s hardware or on your own. It’s just that it’s a lot easier on your own because you’re in control and dictate the amount of visibility you have.

The Cloud


Pros

It’s out of your hands.
Indeed it is, in more ways than one. Your trust is going to have to be honoured here (or not). Yes, you have SLAs, but what guarantee do the SLAs give you that the people working on your system and data are not having a bad day? Maybe they’ve broken up with their girlfriend, or whatever. It takes very little to miss something that could drastically compromise your system and/or data.

VPSs can be spun up quickly, but remember, good things take time. Everything has a cost. Things are quick and easy for a reason. There is a cost to this; think about what those (often hidden) costs are.

In some cases it can be cheaper, but you get what you pay for.

Cons

You are trusting others with your data, even others that you are not aware of. In many cases, hosting providers can be (and in many cases are) forced by governments and other agencies to give up your secrets. This is very commonplace now and you may not even know it’s happened.

Your provider may go out of business.

There is an inherent lack of security in all the cloud providers I’ve looked at and worked with. They will tell you they take security seriously, but when someone that understands security inspects how they do things, the situation often looks like Swiss cheese.

In-House Cloud


Pros

You are in control of your data and your application, providing you or “your” staff:

  • and/or external consultants are competent and haven’t made mistakes in setting up your infrastructure
  • Are patching all software/firmware involved
  • Are fastidiously hardening your server/s (this is continuous; it doesn’t stop at the initial set-up)
  • Have set-up the routes and firewall rules correctly
  • Have the correct alerts set-up
  • Have implemented Intrusion Detection and Prevention Systems (IDS’s/IPS’s)
  • Have penetration tested the set-up and not just from a technical perspective. It’s often best to get pairs to do the reviews.

The list goes on. If you are at all in doubt, that’s where you consider the alternatives. In saying that, most hosting and cloud providers perform abysmally, despite their claims that your applications and data are safe with them.

It “can” cost less than entrusting your system and data to someone (or many someones) on the other side of the planet. Weigh up the costs; they will not always be what they appear at face value.

Hardware is very cheap.

Cons

Potential lack of in-house skills.

People with the right skills and attitudes are not cheap.

It may not be core business. You may not have the necessary capability in-house to scope, architect, cost, set up and administer it all. Potentially you could hire someone to do the initial work and the ongoing administration. The amount of ongoing administration will be partly determined by what you’re hosting. Generally speaking, hosting company web sites, blogs etc. will require less work than systems with distributed components and redundancy.

Spinning up an instance to develop or prototype on doesn’t have to be hard. In fact, if you have some hardware, provisioning of VM images is usually quick and easy. There is actually a pro in this too… you decide how much security you want baked into these images and the processes used to configure them.

Consider download latencies from people you want to reach possibly in other countries.

In some cases it can be more expensive, but you get what you pay for.

Outcome

The decision for this client was made to self host. There will be a follow up post detailing some of the hardening process I took for one of their Debian web servers.