Archive for the ‘Uncategorized’ Category

Setup of Chromium, Burp Suite, Node.js to view HTTP on the wire

March 30, 2013

As part of my Node.js development I really wanted to see what was going over the wire from chromium-browser to my Node.js web apps.

I have Node.js installed globally, Express installed locally, and a very simple Express server listening on port 3000:

var express = require('express');
var app = express();

app.get('/', function (request, response) {
   response.send('Welcome to Express!');
});

app.listen(3000);

Burp Suite is set up in my main menu. I added the command via System menu -> Preferences -> Main Menu.

Burp Suite Command

The command string looks like the following:

java -jar -Xmx1024m /WhereTheBurpSuiteLives/burpsuite_free_v1.5.jar

Details on setting up Burp Suite’s configuration are found here. I’ve used Burp Suite several times before, most notably to create my PowerOffUPSGuests library, which I discuss here. In that usage I reverse engineered how the VMware vSphere client shuts down its guests and replicated the traffic in my library code. For a simple setup, it’s very easy to use. You can spend hours exploring Burp’s options and all the devious things you can use it for, but getting started is simple. Set it up to listen on localhost and port 3001 for this example.

Burp Suite Proxy Listeners

Run the web app

To start our Express app from the directory where the above server is located, run from a console:

node index.js

Where index.js is the name of the file that contains our JavaScript.

To test that our Express server is active, we can browse to http://localhost:3000/ or we can curl it:

curl -i  http://localhost:3000/

This should give us something in return like:


HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 19
Date: Sun, 24 Mar 2013 07:53:38 GMT
Connection: keep-alive

Welcome to Express!

Now for the Proxy interception (Burp Suite)

Now that we’ve got end-to-end comms, let’s test the interceptor.

Run Burp Suite with the command I showed you above.

Fire the HTTP request at your web app via the proxy:

curl -i --proxy http://localhost:3001 http://localhost:3000/

Now you should see Burp’s interceptor catch the request. On the Intercept tab, press the Forward button and curl should show a response similar to the one above.
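As a side note, curl isn’t required; here’s a rough sketch (my addition, not from the original post) of firing the same request from Node through the proxy. A plain HTTP proxy expects the full target URL in the request line, which is why it goes in path:

var http = require('http');

// point the request at Burp (localhost:3001) and put the full
// target URL in the path, as HTTP proxies expect
var options = {
   host: 'localhost',
   port: 3001,
   path: 'http://localhost:3000/'
};

http.get(options, function (response) {
   var body = '';
   response.on('data', function (chunk) { body += chunk; });
   response.on('end', function () {
      console.log(response.statusCode); // 200, once you press Forward in Burp
      console.log(body);                // Welcome to Express!
   });
});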

Burp Suite Proxy Intercept

If you look at the History tab, you can select the message curl sent and also see the same Response that curl received.

Burp Suite Proxy History

Now you can also set Burp to intercept the server responses too. In fact Burp is extremely configurable: you can pass the messages to different components of Burp to process however you see fit, as you can see from all the tabs in the above image, each of which represents a Burp tool. These can be very useful for penetration testing your app as you develop it.

I wanted to be able to use Chromium normally and also be able to open another window for browsing my Express apps and viewing the HTTP via Burp Suite. This is actually quite simple. Again, with your app running locally on port 3000 and Burp listening on port 3001, run:

chromium-browser --temp-profile --proxy-server=localhost:3001

For more chromium options:

chromium-browser --help

Now you can just browse to your web app and have Burp intercept your requests.

chromium proxied via burp

You may also want to ignore requests to your search provider, because as you’re typing in URLs, Chromium will send searches when you pause. Under the Proxy -> Options tab you can do something like this:

Ignore Client Requests


How to Increase Software Developer Productivity

March 2, 2013

Is your organisation:

  • Wanting to get more out of your Software Developers?
  • Wanting to increase RoI?
  • Spending too much money fixing bugs?
  • Development team not releasing business value fast enough?
  • Maybe you’re a software developer and you want to lift your game to the next level?

If any of these points are of concern to you… read on.

There are many things we can do to lift a software developer’s productivity and thus the total output of The Development Team. I’m going to address some quick and cheap wins, followed by items that may take a little longer to implement but nonetheless will in many cases provide even greater results.

Whatever it takes to remove friction and empower your software developers to work with the least amount of interruptions, do it.
Allow them to create a space that they love working in. I know when I work from home my days are far more productive than when working for a company that insists on cramming as many workers as possible into a small space around you. Chitter-chatter from behind, both sides and in front of you will not help one get their mind into a state of deep thought easily.

I have included thoughts from Nicholas C. Zakas’ post to reiterate the common fallacies uttered by non-engineers.

  • I don’t understand why this is such a big deal. Isn’t it just a few lines of code? (Technically, everything is a few lines of code. That doesn’t make it easy or simple.)
  • {insert name here} says it can be done in a couple of days. (That’s because {insert name here} already has perfect knowledge of the solution. I don’t, I need to learn it first.)
  • What can we do to make this go faster? Do you need more engineers? (Throwing more engineers at a problem frequently makes it worse. The only way to get something built faster is to build a smaller thing.)

Screen real estate

When writing code, a software developer’s work requires a lot of time spent deep in thought, holding multiple layers of complexity within immediately accessible memory.
One of the big wins I’ve found that helps with continuity is maximising your screen real estate.
I’ve now moved up to 3 x 27″ 2560×1440 IPS flat panels. These are absolutely gorgeous to look at and work with.
Software development generally requires a large number of applications to be running at any one time.
For example, in any average session for me, I generally have somewhere around 30 windows open.
The more screen real estate a developer has, the less he/she has to fossick around for what he/she needs and switch between windows.
Also, the fewer brain cycles he/she has to spend locating that next running application, the more cycles are left for real work.
So, the smaller the gap switching between, say, one code editor and another, the easier it is for a developer to keep the big picture in memory.
We’re looking at:

  1. physical screen size
  2. total pixel count

The greater real estate available (physical screen size and pixel count) the more information you can have instant access to, which means:

  • less waiting
  • less memory loss
  • less time spent rebuilding structures in your head
  • greater continuity

Which then gives your organisation and developers:

  • greater productivity
  • greater RoI

These screens are cheaper than many realise. I set these up 4 months ago. They continue to drop in price.

  1. FSM-270YG 27″ PC Monitor LED S-IPS WIDE 2560×1440 16:9 WQHD DVI-D $470.98 NZD
  2. [QH270-IPSMS] Achieva ShiMian HDMI DVI D-Sub 27″ LG LED 2560×1440 $565.05 NZD
  3. [QH270-IPSMS] Achieva ShiMian HDMI DVI D-Sub 27″ LG LED 2560×1440 $565.05 NZD

It’s simply not worth not upgrading to these types of panels.

korean monitors

In this setup, I’m running Linux Mint Maya. Besides the IPS panels, I’m using the following hardware.

  • Video card: 1 x Gigabyte GV-N650OC-2GI GTX 650 PCIE
  • PSU: 1200w Corsair AX1200 (Corsair AX means no more PSU troubles (7 yr warranty))
  • CPU: Intel Core i7 3820 3.60GHz (2011)
  • Mobo: Asus P9X79
  • HDD: 1TB Western Digital WD10EZEX Caviar Blue
  • RAM: Corsair 16GB (2x8GB) Vengeance Performance Memory Module DDR3 1600MHz

One of the ShiMian panels is using the VGA port on the video card as the FSM-270YG only supports DVI.
The other ShiMian and the FSM-270YG are hooked up to the 2 DVI-D (dual link) ports on the video card. The two panels feeding on the dual link are obviously a lot clearer than the panel feeding on the VGA. Also I can reduce the size of the text considerably giving me greater clarity while reading, while enabling me to fit a lot more information on the screens.

With this development box, I’m never left waiting for the machine to catch up with my thought process.
So don’t skimp on hardware. It just doesn’t make sense any way you look at it.

Machine Speed

The same goes for your machine speed. If you have to wait for your machine to do what you’ve commanded it to do, and at the same time try to keep a complex application structure in your head, the likelihood of losing part of that picture increases. Plus your brain has to work harder to hold the image in memory while you’re trying to maintain continuity of thought, again spending precious cycles on something that shouldn’t be required rather than on the essential work. When a developer loses part of this picture, they have to rebuild it again when the machine finishes executing the last command given. This is re-work that should not be necessary.

An interesting observation from Joel Spolsky:

“The longer it takes to task switch, the bigger the penalty you pay for multitasking.
OK, back to the more interesting topic of managing humans, not CPUs. The trick here is that when you manage programmers, specifically, task switches take a really, really, really long time. That’s because programming is the kind of task where you have to keep a lot of things in your head at once. The more things you remember at once, the more productive you are at programming. A programmer coding at full throttle is keeping zillions of things in their head at once: everything from names of variables, data structures, important APIs, the names of utility functions that they wrote and call a lot, even the name of the subdirectory where they store their source code. If you send that programmer to Crete for a three week vacation, they will forget it all. The human brain seems to move it out of short-term RAM and swaps it out onto a backup tape where it takes forever to retrieve.”

Many of my posts so far have been focused on productivity enhancements. Essentially increasing RoI. This list will continue to grow.

Coding Standards and Guidelines

Agreeing on a set of Coding Standards and Guidelines and policing them (generally by way of code reviews and check-in commit scripts) means software developers get to spend less time thinking about things that they don’t need to and get to throw more time at the real problems.

For example:
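Here’s a toy sketch (my own addition, not from the original post) of the check-in-script idea: a Node pre-commit hook that rejects staged files violating a hypothetical “no tab indentation” rule. The rule itself is an assumption; substitute whatever your team agrees on.

#!/usr/bin/env node
// .git/hooks/pre-commit — a toy standards check: reject staged .js files
// that use tab indentation (a stand-in for whatever your team agrees on)
var execSync = require('child_process').execSync;
var fs = require('fs');

var staged = execSync('git diff --cached --name-only --diff-filter=ACM')
   .toString().trim().split('\n').filter(Boolean);

var offenders = staged.filter(function (file) {
   return /\.js$/.test(file) && /\t/.test(fs.readFileSync(file, 'utf8'));
});

if (offenders.length > 0) {
   console.error('Commit rejected, tabs found in: ' + offenders.join(', '));
   process.exit(1); // a non-zero exit aborts the commit
}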

Better Tooling

Improving tool sets has huge gains in productivity. In most cases many of the best tools are free. For example, moving from the likes of non-distributed source control systems to best-of-breed distributed ones.

There are many more that should be considered.

Wiki

Implement an excellent wiki that is easy to use. I’ve put a few wikis in place now and have used even more. My current pick of the bunch would have to be Atlassian’s Confluence. I’ve installed this on a local server and also migrated the instance to their cloud. There are varying plans, all very reasonably priced, with excellent support. If the wiki you’re planning on using is not as intuitive as it could be, developers just won’t use it. So don’t settle for anything less.

Improving Processes

Code Reviews

Also a very important step in all successful development teams, and often a discipline that must be satisfied as part of Scrum’s Definition of Done (DoD). What this gives us is high quality designs and code, conforming to the coding standards. This reduces defects and duplicate code (DRY) and enforces easily readable code, as the reviewer has to understand it. It saves a lot of money in re-work.

Cost of Change

Scott Ambler’s Cost of Change curve

Definition of Done (DoD)

Get The Team together and decide on what it means to have each Product Backlog Item that’s pulled into the Sprint Done.
Here’s an example of a DoD that one of my previous Development Teams compiled:

Definition of Done

What does Done actually mean?

Come Sprint Review on the last day of the Sprint, everyone knows what it means to be done. There is no “well I thought it was Done because I’ve written the code for it, but it’s not tested yet”.

Continuous Integration (CI)

There are many tools and ways to implement CI. What does CI give you? Visibility of code quality, adherence to standards, reports on cyclomatic complexity, predictability and quite a number of other positive side effects. You’ll know as soon as the code fails to build and/or your fast running tests (unit tests) fail. This means The Development Team don’t keep writing code on top of faulty code, thus reducing technical debt by not having to undo changes on changes later down the track.
I’ve used a number of these tools and have carried out extensive research and evaluation spikes on a number of the most popular offerings. In order of preference, the following are my candidates.

  1. Jenkins (free and open source, with a great community)
  2. TeamCity
  3. Atlassian Bamboo
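Whichever server you pick, the “fast running tests” it executes on every commit can be as cheap as plain Node assertions. A minimal sketch (mine, not from the original post; the function under test is hypothetical):

// fast-tests.js — the kind of cheap, fast check a CI server can run on
// every commit (plain Node assert, no framework assumed)
var assert = require('assert');

var remainingMinutes = function (n) {
   return typeof n === 'number' && isFinite(n) ? n : 10;
};

assert.strictEqual(remainingMinutes(3), 3);
assert.strictEqual(remainingMinutes(undefined), 10);
console.log('all fast tests passed');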

Release Plans

Make sure you have these. This will reduce confusion and provide a clear definition of the steps involved to get your software out the door. This will reduce the likelihood of screwing up a release and re-work being required. You’ll definitely need one of these for the next item.

Here’s an example of a release notes guideline I wrote for one of the previous companies I worked for.

release notes

Continuous Deployment

If using Scrum, The Scrum Team will be forecasting a potentially releasable Increment (the sum of all the Product Backlog items completed during a Sprint and all previous Sprints).
You may decide to actually release this. When you do, you can look at the possibility of automating this deployment, thus reducing the workload of the release manager or whoever usually deploys (often The Development Team in a Scrum environment). This has the added benefit of consistency, predictability, reliability and of course happy customers. I’ve also been through this process of research and evaluation on the tools available and the techniques to implement them.

Here’s a good podcast that got me started. I’ve got a collection of other resources if you need them and can offer you my experience in this process. Just leave a comment.

Implement Scrum (and not the Flaccid flavour)

I hope this goes without saying?
Implementing Scrum to provide ultimate visibility

Get maximum quality out of the least money spent

How to get the most out of your limited QA budget

Driving your designs with tests, thus creating maintainable code, thus reducing technical debt.

Hold Retrospectives

Scrum is big on continual inspection and adaptation, self-organisation and fostering innovation. The military have another term for inspection and adaptation: the OODA Loop.
The Retrospective is just one of the Scrum Events that enable The Scrum Team to continually inspect the way they are doing things and improve the way they develop and deliver business value.

Invest a little into your servant leaders

Empowering the servant leaders.

Context Switching

Don’t do it. This is a real killer.
This is hard. What you need to do is be aware of how much productivity is killed with each switch, then do everything in your power to make sure your Development Team is sheltered from as many switches as possible. There are many ways to do this. For starters, you’re going to need as much visibility as possible into how much this is currently happening. Track ad-hoc requests and any other types of interruptions that steal the developers’ concentration. In the last Scrum Team that I was Scrum Master of, The Development Team decided to include another metric on the burn down chart that was in the middle of the wall, clearly visible to all. Every time one of the developers was interrupted during a Sprint, they would record the time, the reason and who interrupted them on the burn down chart. The Scrum Team would then address this during the Retrospective, empirically examine why it happened, and work out how to stop it happening every Sprint. Jeff Atwood has an informative post on why and how context-switching/multitasking kills productivity. Be sure to check it out.

As always, if anything I’ve mentioned isn’t completely clear, or you have any questions, please leave a comment 🙂

A Decent Console for Windows

January 19, 2013

On *nix we’re kind of spoilt when it comes to the CLI experience.
The console I use most in a GUI environment is the great terminator.

terminator

No, not that one.
This one

terminator

Multi-tab, split screen, transparency, the works.
Then we’ve also got tmux (and a comparison between terminator and tmux).
Taking things further, we’ve got awesome.

Well I’ve been looking for something similar for Windows for a while.
I’ve tried terminator on Cygwin, but it’s just not the same; plus it only supports a single shell.

Meet Console2

Console2 PS

With PowerShell as the currently active tab.
It’s a stand-alone executable and, crucially, it’s free.
Console2 is just that, a console or terminal that seems to be able to host any shell that’s thrown at it.
As you can see with the image above, I’ve setup Console2 to host the following shells:

  1. Windows Command shell
  2. The Visual Studio Command Prompt (which is just the Windows Command shell (with some paths and variables added?))
  3. PowerShell
  4. The node.js Read-Eval-Print-Loop (REPL)
  5. VMware’s vSphere PowerCLI
  6. And of course the bash shell we all know and love.
    Cygwin required

Although project activity currently looks minimal to non-existent.

How I setup Console2

Once running, right click the console -> Edit -> Settings…

  • Setup the hot keys under the Hotkeys node to behave like the terminal I use on Linux (currently terminator).
    • Select the specific command, put your cursor in the Hotkey text box, press your preferred key combination, press the Assign key.
    • For opening new tabs I use Ctrl+Shift+T
    • Change the Copy selection node to what it should be: Ctrl+C
    • Change the Paste to what it should be: Ctrl+V
  • Under the Console node, enter the directory to have each shell start in
  • Under the Appearance/More… node, I deselect the Show menu, Show toolbar and Show status bar
    • I make sure the Window transparency is set to None, as being able to see stuff behind the surface I’m concentrating on just distracts me.
      It looks cool to turn it on, but I personally find it harder to read the text when you’ve got lots of text overlapping
    • Under the Behavior node, I turn on Copy on select, as this is on in Linux by default
  • Now under the Tabs node is where we set up all of our shells.
    • Click the Add button
    • Change the name you want the shell tab to appear as in the Main tab under the Title text box
    • Now for the icons, I just got the images I wanted, opened them in GIMP, changed the size to 32×32 pixels and saved them as .ico files to the same directory that Console.exe runs from
    • I Then select them here
    • Under the Shell section I just copy the shortcuts from the likes of the start menu and paste them in there
    • You can then override the default startup dir by specifying your path in the Startup dir text box
    • You can also specify if you want the shell to run as a specific user. Administrator for example.
      When you run this shell, you’ll be prompted for the users credentials if it’s not you.

As I was working through the Console2 set up, I ran into another offering…

Meet ConEmu

The actively maintained ConEmu lives here.
I had a quick play with this and a flick through the documentation.
The simple task of setting up different shells as pre-sets seemed to evade me.
There seems to be a lot more configuration options too.
As I’d just set up the Console2 and it seemed to be doing everything I needed for now, I decided to call it quits with ConEmu.
I think it’s worth checking out though if you need more power than Console2.
Scott Hanselman’s post on ConEmu.

Extending, Currying and Monkey Patching, part 1

April 29, 2012

Extending

The JavaScript Function.prototype.call and Function.prototype.apply methods allow us to extend an object with additional functionality.
The first argument to Function.prototype.call and Function.prototype.apply is the object on which the function is to be temporarily added, invoked, and then removed again.
That first argument is bound to this for the current scope (within the current function body).
Be wary of what this is bound to. I talk a little more about this below.
If you have no arguments to pass to the applied method, call and apply will achieve the same result.
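A minimal sketch of that equivalence (my example, not from the original post):

var greeter = {
   greeting: 'hello',
   greet: function () { return this.greeting; }
};

var barista = { greeting: 'your coffee is ready' };

// with no extra arguments, call and apply are interchangeable:
// both invoke greet with "this" bound to barista
alert(greeter.greet.call(barista));  // 'your coffee is ready'
alert(greeter.greet.apply(barista)); // 'your coffee is ready'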

The way I remember how the call and apply functions work is like the following:

BINARYMIST.extensions.minutes.remaining applied to (BINARYMIST.coffee)
Method on the left is applied to the object on the right.

First we create our single global object that will be used as a namespace.
Then we declare the extensions object and augment it onto our single BINARYMIST object.
The remaining method shows how we create extension methods in JavaScript.
If we replaced the apply method at the bottom of the listing with call, the outcome would be exactly the same.
You’ll also notice that the remaining method of BINARYMIST.extensions.minutes returns a minutes property.
Once BINARYMIST.extensions.minutes.remaining is applied to BINARYMIST.coffee, return this.minutes returns a function that returns privateMinutes.

javascript binding callback

You can copy and paste the code below:

var BINARYMIST = (function (binMist) {
   return binMist;
} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.extensions = (function () {
   var localExtensions = {}; // private
   localExtensions.minutes = {
      minutes: 0,
      remaining: function () {
         // this is bound to the object that "remaining" is a member of at runtime, because "remaining" is a property of the object "localExtensions.minutes".
         // If "remaining" was not the property of an object, it would be invoked as a function rather than a method. "this" would be bound to the global object.
         return this.minutes;
      }
   };

   return localExtensions;
}());

var BINARYMIST = (function (binMist) {
   var privateMinutes = 5;
   binMist.coffee = {
      type: "ShortBlack",
      minutes: function () {
         return privateMinutes;
      }
   };
   return binMist;
}(BINARYMIST || {}));
// Function.call and Function.apply are interchangeable if you're only using a single parameter.
// The first argument of call or apply can also be a primitive value, null or undefined.
// Either way it will be bound to the this object.
// A word of warning here:
// When a function is not the property of an object, then it is invoked as a function.
// The this object is then bound to the global object (One of the poor design decisions in JavaScript).
var deliveryTime = BINARYMIST.extensions.minutes.remaining.apply(BINARYMIST.coffee);
alert("Your coffee will be ready in " + deliveryTime() + " minutes");

Continuing with the code comment immediately above:
if the first parameter to apply or call is null, then this is bound to the global object upon invocation.
When you invoke a function that is not a method, this is exactly what happens.
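A quick sketch of both cases (again my example, non-strict mode assumed):

var minutes = 'the global minutes'; // a top-level var becomes a property of the global object

var whichMinutes = function () {
   return this.minutes;
};

// invoked as a plain function (not a method): this is the global object
alert(whichMinutes()); // 'the global minutes'

// passing null as the first argument to apply gives the same binding
alert(whichMinutes.apply(null)); // 'the global minutes'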

You may have noticed I’m using the module pattern in the above example.
This pattern is effective for getting our objects nicely nested into a single namespace (or more correctly, an object) that sits in the global object.
This stops us from littering the global space with every object we create.
You can also see from the following image what’s private and what’s public,
so we can easily define accessibility.

JavaScript global scope abatement

What are the differences between apply and call then?

call

You can think of Function.prototype.call as syntax sugar on top of Function.prototype.apply.
If you only have a single parameter after the first, you’re better off using Function.prototype.call, as it’s more efficient than creating an array for a single value.
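In other words (my sketch):

var multiply = function (factor) {
   return this.num * factor;
};

var product = { num: 6 };

// equivalent invocations: call takes the argument directly,
// apply needs it wrapped in an array (an extra allocation)
alert(multiply.call(product, 7));    // 42
alert(multiply.apply(product, [7])); // 42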

The difference is in the parameters after the first invocation-context argument.
With call, an arbitrary number of arguments can be passed.
Following the first argument, each argument will be passed to the extension method, in our case remaining.
So we could make the following changes to the above code.
You’ll notice below that remaining, once applied to BINARYMIST.coffee, no longer returns a function, because we explicitly set this.minutes to a number: 3.
Now in the below example, if we fail to pass an argument to BINARYMIST.extensions.minutes.remaining, by default our coffee will be ready in 10 minutes.

BINARYMIST.extensions = (function () {
   var localExtensions = {}; //private
   localExtensions.minutes = {
      minutes: 0,
      remaining: function (numberOfMinutes) {

         var defaultMinutes = 10;
         this.minutes = numberOfMinutes || defaultMinutes;

         return this.minutes;
      }
   };
   return localExtensions;
}());

var deliveryTime = BINARYMIST.extensions.minutes.remaining.call(BINARYMIST.coffee, 3);
alert("Your coffee will be ready in " + deliveryTime + " minutes");

Better still, let’s pass a couple of arguments.
While we’re at it, let’s tidy things up a bit.
Let’s put the code we use to test the extensions on our coffee into an immediately invoked function expression,
also known as an anonymous closure.
You can see this at the bottom of the listing below.

Inside remaining we assign localSpent the second argument we passed in (minutesSpent),
but only if we’re sure it’s a number;
otherwise we assign the undefined value.

var BINARYMIST = (function (binMist) {
   return binMist;
} (BINARYMIST || {/*if BINARYMIST is falsy, create a new object and pass it*/}));

BINARYMIST.extensions = (function () {
   var localExtensions = {}; // private
   localExtensions.minutes = {
      minutes: 0,
      remaining: function (numberOfMinutes) {
         var defaultMinutes = 10;

         // check our numbers are actually numbers.
         var isNumber = function isNumber(num) { return typeof num === 'number' && isFinite(num); };
         var localRemaining = isNumber(numberOfMinutes) ? numberOfMinutes : defaultMinutes;
         var localSpent = isNumber(arguments[1]) ? arguments[1] : undefined;

         this.minutes = {remaining: localRemaining, spent: localSpent};
         return this.minutes;
      }
   };
   return localExtensions;
}());

// lets add some coffee
var BINARYMIST = (function (binMist) {
   var privateMinutes = 5;
   binMist.coffee = {
      type: "ShortBlack",
      minutes: function () {
         return privateMinutes;
      }
   };
   return binMist;
}(BINARYMIST || {}));

(function () {
   var minutesRemaining = 3;
   var minutesSpent = 7;
   var deliveryTime = BINARYMIST.extensions.minutes.remaining.call(BINARYMIST.coffee, minutesRemaining, minutesSpent);
   var minutesSinceOrder = deliveryTime.spent || 'a number of';
   alert("You placed your order " + minutesSinceOrder + " minutes ago. Your coffee will be ready in " + deliveryTime.remaining + " minutes.");
}());

In the anonymous closure we pass in a couple of arguments.
Now to show another cool feature of JavaScript,
because I hate to miss a good opportunity.
All functions come with an arguments object (array-like rather than a true array).
This is populated with all the arguments that were supplied to the function on invocation,
including any arguments that were not assigned to parameters.
As you can see below, arguments holds both values we passed in.

JavaScript argument's array

Now if we only pass in the first argument to the specified parameter, when the alert executes, we get…
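In case the screenshots don’t come through, here’s a standalone sketch (mine) of the arguments object at work:

var remaining = function (numberOfMinutes) {
   // arguments holds every value supplied at invocation,
   // including ones with no named parameter
   console.log(arguments.length); // 2
   console.log(arguments[0]);     // 3 (also bound to numberOfMinutes)
   console.log(arguments[1]);     // 7 (no named parameter to catch it)
};

remaining(3, 7);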

apply

The difference with Function.prototype.apply is that it takes only two arguments,
the second being an array of arbitrary length.
Function.prototype.apply works with array-like objects as well as true arrays.
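For instance (my sketch), the array-like arguments object can be handed straight to apply; a common trick for spreading values over a function that expects separate arguments:

// Math.max takes separate numbers; apply lets us feed it the
// array-like arguments object directly
var maxOfAll = function () {
   return Math.max.apply(null, arguments);
};

alert(maxOfAll(3, 7, 5)); // 7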


// invoke the two-parameter BINARYMIST.extensions.minutes.remaining with an array, via apply

var minutesRemaining = 3;
var minutesSpent = 7;
var minutes = [minutesRemaining, minutesSpent];
var deliveryTime = BINARYMIST.extensions.minutes.remaining.apply(BINARYMIST.coffee, minutes);

Bind

In response to Mike Wilcox’s comments on the JavaScript LinkedIn group,
I’ve added some info on Function.prototype.bind, introduced in ECMAScript 5 (the 5th edition of ECMA-262).

This function is very similar to Function.prototype.call.
In fact the only difference I can see is that bind returns a function that, when invoked, applies the original function to the target object.
Essentially this allows us to re-use an applied function, rather than create one each time we want to execute it.
bind has the same function signature (the same parameters) as Function.prototype.call.
The following example was taken from Mike West’s post here, with some small changes and comments added.

This is tested and works as you would expect.

var first_multiply;
var second_multiply;

var first_object = {
   num: 42
};

var second_object = {
   num: 24
};

function multiply(mult) {
   return this.num * mult;
}

Function.prototype.bind = function (obj) {
   // When a function is not the property of an object, it is invoked as a function.
   // Because bind is a property of Function.prototype,
   // and multiply inherits the bind property (which is of course a function) through Function's prototype,
   // this is bound to the function that bind was invoked on... which in this case is multiply.
   var method = this;
   var temp = function () {
      return method.apply(obj, arguments);
   };
   // temp holds a reference to our bind functionality
   return temp;
};

first_multiply = multiply.bind(first_object);
document.writeln(first_multiply(5) + ''); // returns 42 * 5

second_multiply = multiply.bind(second_object);
document.writeln(second_multiply(5) + ''); // returns 24 * 5
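Worth noting (my addition): this simple re-implementation ignores any arguments passed to bind after the first, whereas the native ES5 bind supports partial application. Continuing the example above, but assuming the native bind (i.e. without the override):

// native Function.prototype.bind pre-pends bind-time arguments
// to any supplied at call time (partial application)
var first_multiply_by_two = multiply.bind(first_object, 2); // this -> first_object, mult pre-filled

document.writeln(first_multiply_by_two() + ''); // 42 * 2 = 84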

Supporting multiple sites with a single SSL Certificate

April 9, 2012

There are a couple of ways I’m aware of to support multiple websites with a single SSL certificate using the same port.

  1. Wild card certificate
    Useful for when your collection of sites is on the same domain.
    For example:
    mysane.site.com, myinsane.site.com, mycrazy.site.com
  2. Unified Communications Certificate (UCC) / Subject Alternative Name (SAN) / Multi-Domain
    Useful for when your collection of sites is on different domains.
    For example:
    mysanesite.com, myinsanesite.com, mycrazysite.com

You can choose to purchase an SSL cert,
you can use Convergence (check out Moxie Marlinspike’s talk on the subject),
or you can create a self-signed one.

If you choose to create a self-signed certificate

IIS 7.x

Click on the root machine node in the left tree view of IIS (7.x) manager.
Then double click the “Server Certificates” icon in the Features View.

Server Certificates

This will show you all the certificates currently registered on the server.
You will be able to see in the Actions pane,
that you can Import or create your own certificate.
To create the self-signed wild card certificate,
choose “Create Self-Signed Certificate…”.
Give it the friendly name *.site.com.
Ok.
The certificate will be registered on your machine.

Server Certificates

Now for each site you want to use the certificate for,
right click -> Edit Bindings… -> Add.
Select the Type to be https,
and select the certificate you just created from the SSL certificate drop down menu.
Ok -> Close.
Repeat these steps for the rest of the sites with which you want to share the certificate.

Using the appcmd utility

We now add the https binding and host information to our sites that need to share the wild card certificate.

Run a command prompt as administrator and

cd to %WINDIR%\system32\inetsrv

The format of the command looks like the following:

appcmd set site /site.name:"<your website name>" /+bindings.[protocol='https',bindingInformation='*:443:<your ssl domain>']

Our three sites above that share the same certificate,
mysane.site.com, myinsane.site.com and mycrazy.site.com,
may be named respectively:
mysane, myinsane, mycrazy
So for example,
we’d run the following commands:

appcmd set site /site.name:"mysane" /+bindings.[protocol='https',bindingInformation='*:443:mysane.site.com']

If all goes well, you should get feedback similar to the following:

SITE object "mysane.site.com" changed

appcmd set site /site.name:"myinsane" /+bindings.[protocol='https',bindingInformation='*:443:myinsane.site.com']

Again, if all goes well:

SITE object "myinsane.site.com" changed

appcmd set site /site.name:"mycrazy" /+bindings.[protocol='https',bindingInformation='*:443:mycrazy.site.com']

And once more:

SITE object "mycrazy.site.com" changed

That said, I normally keep it simple and name my sites the same as the URL (the SSL domain) I want to use.

IIS 6

Now this is a bit more work than with IIS 7.

If it’s not already installed, you’ll need the SelfSSL tool.
You can get this from the SSL Diagnostics Kit or the IIS 6.0 Resource Kit which contains lots of other stuff.
Once installed, run IIS.

Create the self signed wildcard certificate

You’ll need to generate the certificate for one existing IIS site.
For the first site, take note of the site identifier.
You can see this in the right pane when you select Web Sites from the server node in the IIS manager.
Open a command prompt; you’ll need to run the SelfSSL app.
Actually, I think the easiest way to run this is Start menu -> All Programs -> IIS Resources -> SelfSSL -> SelfSSL.
The command string looks like this:

selfssl /n:cn=<your wild card domain> /s:<first website identifier> /P:<port you want to use> /v:<number of days to expiration>

So for example, we’d run the following command:

selfssl /n:cn=*.site.com /s:1 /P:443 /v:365

Options for SelfSSL

selfssl /?

Some of them are:

/N: – This specifies the common name of the certificate. The computer name is used if there is no common name specified.
/K: – This specifies the key length of the certificate. The default is length 1024.
/V: – This specifies the amount of time the certificate will be valid for, calculated in days. The default setting is seven days.
/S: – This specifies the Identifier of the site, which we obtained earlier. The default will always be 1, which is the Default Web Site in IIS.

Assign the certificate to the sites that need it

Have a look at the site properties in IIS Manager -> Directory Security tab -> Server Certificate button.
This will start the IIS wizard.
Click Next -> Assign an existing certificate -> Next.
You should see the wild card certificate you created.
Select it, click next, and make sure you assign it the same port that was assigned to the first site.

Configure the SecureBindings

In order for IIS to use the host headers with SSL and secure the bindings as we did with appcmd,
you’ll need to run the following command for each of the sites that require it.
My adsutil is found in C:\Inetpub\AdminScripts\.
It’s probably not in your path, so you’ll have to run it from its location.
The format of the command looks like the following:

cscript adsutil.vbs set /w3svc/<website identifier>/SecureBindings ":443:<your ssl domain>"

So for example, we’d run the following command:

cscript adsutil.vbs set /w3svc/1/SecureBindings ":443:mysane.site.com"
That should be it.

Now if you need to remove a certificate from your store

Run mmc.exe
File menu -> Add/Remove Snap-in… -> Add… -> select Certificates -> Add -> select Computer account -> Next -> select Local computer -> Close -> Ok.
Select the Certificates node, expand Personal, Certificates.
Now in the right window pane, you can manage the certificates.
Delete, Renew etc.

copying with scp

March 25, 2012

I was having some trouble today copying a file (1.5GB .iso) from a notebook to a file server.
The notebook I was using was running Linux Ubuntu.
The server FreeBSD.
I was trying to copy this file using SMB/CIFS via Nautilus.
I tried several times, it failed each time.
Then I thought, what are you doing… drop to the command line.

scp to the rescue

The command I used:

From the directory on my local machine I was copying the file from

scp -P <MyPortNumberHere> MyFile.iso <MyUserName>@<MyServer>:/Path/To/Where/I/Want/MyFile/ToGo/MyFile.iso

This also took about half the time that the SMB attempt took, and SMB didn’t even complete. Not to mention the transfer is secure (SSH).

Some additional resources

http://www.linuxtutorialblog.com/post/ssh-and-scp-howto-tips-tricks

http://amath.colorado.edu/computing/software/man/scp.html

Also don’t forget to check the man page out 😉

man scp

JavaScript Reserved Words

December 19, 2011

Funnily enough, most of these are not used in the language.
They cannot be used to name variables or parameters.
In saying that,
I did some testing below and that statement’s not entirely accurate.

Usage of the reserved keywords below should be avoided.

Reserved word   Keyword?   Comments
abstract        no
boolean         no
break           yes
byte            no         No byte type in JavaScript
case            yes
catch           yes
char            no         JavaScript doesn't have char. Use string instead
class           no         Technically JavaScript doesn't have class
const           no         No const, but read-only can be implemented
continue        yes
debugger        yes
default         yes
delete          yes
do              yes
double          no         JavaScript only has number (64-bit floating point)
else            yes
enum            no
export          no
extends         no
false           yes
final           no
finally         yes
float           no         JavaScript only has number (64-bit floating point)
for             yes
function        yes
goto            no
if              yes
implements      no         JavaScript uses prototypal inheritance. Reserved in strict mode
import          no
in              yes
instanceof      yes
int             no         JavaScript only has number (64-bit floating point)
interface       no         Technically no interfaces, but they can be implemented. Reserved in strict mode
let             no         Reserved in strict mode
long            no         JavaScript only has number (64-bit floating point)
native          no
new             yes        Use in moderation. See comments in Responses below
null            yes
package         no         Reserved in strict mode
private         no         Access is inferred. Reserved in strict mode
protected       no         JavaScript has privileged, but it's inferred. Reserved in strict mode
public          no         Access is inferred. Reserved in strict mode
return          yes
short           no         JavaScript only has number (64-bit floating point)
static          no         Reserved in strict mode
super           no
switch          yes
synchronized    no
this            yes
throw           yes
throws          no
transient       no
true            yes
try             yes
typeof          yes
var             yes
volatile        no
void            yes
while           yes
with            yes
yield           no         Reserved in strict mode

When reserved words are used as keys in object literals,
they must be quoted.
They cannot be used with the dot notation,
so it is sometimes necessary to use the bracket notation instead.
Or better, just don’t use them for your names.

var method;                  // ok
var class;                   // illegal
object = {box: value};       // ok
object = {case: value};      // illegal
object = {'case': value};    // ok
object.box = value;          // ok
object.case = value;         // illegal
object['case'] = value;      // ok

I noticed in Doug Crockford’s JavaScript: The Good Parts,
in Chapter 2 under Names, where he talks about reserved words,
it says:
“It is not permitted to name a variable or parameter with a reserved
word.
Worse, it is not permitted to use a reserved word as the name of an object
property in an object literal or following a dot in a refinement.”

I tested this in Chrome and Firefox, with the following results.

var private = 'Oh yuk'; // if strict mode is on: Uncaught SyntaxError: Unexpected strict mode reserved word
var break = 'break me'; // Uncaught SyntaxError: Unexpected token break


var myFunc = function (private, break) {
   // with strict mode on or off: Uncaught SyntaxError: Unexpected token break
   // strangely enough, private is always fine as a parameter.
};


var myObj = {
   private: 'dung', // no problem
   break: 'beetle' // no problem
}
console.log('myObj.private: ' + myObj.private) // myObj.private: dung
console.log(' myObj.break: ' + myObj.break); // myObj.break: beetle


JavaScript also predefines a number of global variables and functions,
whose names you should avoid using for your own variables and functions.
Here’s a list:

  • arguments
  • Array
  • Boolean
  • Date
  • decodeURI
  • decodeURIComponent
  • encodeURI
  • encodeURIComponent
  • Error
  • eval
  • EvalError
  • Function
  • Infinity
  • isFinite
  • isNaN
  • JSON
  • Math
  • NaN
  • Number
  • Object
  • parseFloat
  • parseInt
  • RangeError
  • ReferenceError
  • RegExp
  • String
  • SyntaxError
  • TypeError
  • undefined
  • URIError
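As a quick illustration of why (my example, not part of the original list), shadowing one of these built-ins silently replaces it for everything that runs afterwards in that scope:

// shadowing the built-in parseInt: legal, but every later caller
// in this scope now gets the impostor
var parseInt = function () {
   return 42;
};

console.log(parseInt('7', 10)); // 42, not 7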