DevelopMentor is Now Part of Global Knowledge

To all our customers,

Today, we’re pleased to announce that we are joining the Global Knowledge team. For 22 years, DevelopMentor has delivered the industry’s most in-depth hands-on developer training. We will continue that tradition on a larger scale as part of Global Knowledge through expanded offerings and services.

Global Knowledge is the world’s leading learning services provider and an existing training partner for many of you. Global Knowledge is committed to the application development space, and together we will provide DevelopMentor customers with expanded curriculum in both the classroom and online.

DevelopMentor has also been an innovator in online learning with our instructor-supported LearningLine platform. As part of Global Knowledge, we will be adding to our online curriculum and will continue to enhance your web learning experience. Traditional classroom students will benefit from an expanded schedule, with courses available in more locations and improved services for virtual participants.

Global Knowledge and the DevelopMentor technical team will build the world’s best learning services for developers. The curriculum will provide skills development in emerging web, mobile and cloud application development practices and techniques. The application development portfolio will include expanded support for developer teams – beyond the classroom – ranging from troubleshooting and guidance on workday issues, to enhanced project-specific capability support, to comprehensive career-shaping skills.

As part of Global Knowledge, we will be better equipped to support our customers’ learning requirements. We will deliver world-class training options so you can choose the format that suits you best: classroom, virtual, asynchronous and video – all with our shared values of technical depth, hands-on engagement and instructor quality.

We are looking forward to new opportunities together and to being of greater value to all of you.


Mike Abercrombie


Swift Webcasts

New Swift Webcast Scheduled

Swift vs. Modern C++: June 8, 2015

Most modern C-based languages offer much the same feature set, and Swift is no different. It is interesting, though, to see how each language goes about providing those features. This webinar will look at how features such as variable declaration, control constructs, classes, functions, and more differ between Swift and modern C++.
Register Now

New Webcast on Demand:
Introduction to Swift

For professional developers who want to get a jump start on understanding the Swift language from Apple. In this webcast we see how to define variables, use flow control constructs, and work with loops, including range-based loop constructs. Also covered are switch statements, optionals, arrays, and functions.


Upcoming Live Webcasts:

Intermediate Swift 5/27/15 @ 11am PDT
Exploring the Decorator Pattern Using C# 6/3/15 @ 11am PDT
Swift vs. Modern C++ 6/8/15 @ 11am PDT
Test-Driven Development with Visual Studio and MsTest 6/29/15 @ 11am PDT

3 Exciting FREE Webcasts Just Scheduled

Introduction to Swift: May 6, 2015

For professional developers who want to get a jump start on understanding the Swift language from Apple. In this webcast we will see how to define variables, use flow control constructs, and work with loops, including range-based loop constructs. Also covered will be switch statements, optionals, arrays, and functions.
Register Now!

Intermediate Swift: May 27, 2015

For professional developers who already have a basic understanding of the Swift language syntax. In this webcast we will cover structs, classes, protocols, fields, methods, and initialization. Also covered will be inheritance in Swift.
Register Now!

Exploring the Decorator Pattern Using C#: June 3, 2015

As experienced developers, we know we don’t want to keep modifying existing code, as doing so increases the risk of bugs. Inheritance is often seen as the vehicle for extension.
Register Now!

New Webcast on Demand: Lambda Expressions in C++

C++ developers traditionally use function objects (or functors) to encapsulate behavior in a general way that makes it easy to pass as a parameter or store for later invocation. The syntax to do this is verbose, since it requires a class/struct that overloads the member function ‘operator()’. C++11 added lambda expressions to create this type of construct directly with far less code. Lambda expressions are defined inside a function, which means they have access to the local variables and parameters in that context, something that is hard to achieve using functors. In this webcast, Bradley Needham introduces lambda expressions. He will cover their syntax, local-variable capture modes, closures, and the changes from C++11 to C++14.



Create and deploy a Node.js application to Azure

Written by Jason Diamond.

Node.js is an exciting new platform for building networked applications. Those applications don’t have to be Web applications, but they usually are.

This article will show how simple it is to get started using Node.js. All you need is a computer running Windows, OS X, or Linux. We’ll start out with a really simple “Hello, World” server, then switch to using Express (the most popular Web application framework for Node.js) to create a simple application that renders HTML and responds to Ajax requests.

Hello, World!

Download and run the appropriate installer from the Node.js website and ensure it works by executing node -v at a command prompt. The installer should add the correct folders to your PATH environment variable so that this “just works”, but I’ve seen this require a restart on some machines.

Make a new directory and create a file in it called app.js. Browse to the Node.js home page in your favorite Web browser and copy the sample code from it into app.js. For your convenience, that code is duplicated here:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '');
console.log('Server running at');

Here’s a quick breakdown of what that code does:

Line 1 loads the built-in http module.

Line 2 creates an HTTP server.

Line 5 ends the statement that started on line 2 by invoking the listen method, binding the server to port 1337.

The function that starts on line 2 and ends on line 5 handles all requests sent to this server. This function accepts req and res arguments, representing the HTTP request sent to the server and the HTTP response that will be sent back to the client.

As you can probably tell, this example is just outputting “Hello, World” regardless of the path or any other properties on the request object.
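If you wanted the response to depend on the request, you could branch on req.url inside that same callback. Here is a hypothetical sketch (not part of the original sample) that factors the handler out into a named function; because it has the same (req, res) shape as the callback passed to http.createServer, it can even be exercised without starting a server:

```javascript
// Hypothetical request handler that varies the response by path.
// It has the same (req, res) shape as the callback passed to
// http.createServer, so it could be plugged in there directly.
function handler(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    if (req.url === '/hello') {
        res.end('Hello World\n');
    } else {
        res.end('Unknown path: ' + req.url + '\n');
    }
}

// Minimal fake req/res objects to exercise the handler directly.
var written = '';
var fakeRes = {
    writeHead: function() {},
    end: function(text) { written = text; }
};
handler({ url: '/hello' }, fakeRes);
console.log(written); // prints "Hello World" (with the trailing newline from res.end)
```

Testing handlers as plain functions like this is a common trick for keeping Node.js code testable without binding to a port.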

Go ahead and test this out by typing node app on the command line from the folder containing app.js. Visit http://localhost:1337/ with your browser and you should see “Hello, World”.

Node.js applications aren’t hosted “inside” existing Web servers like Apache or IIS. Instead, applications host their own Web servers (via the http module) on whatever ports they want to listen on. Normally, that would be port 80 or 443, but most Node.js applications are deployed “behind” reverse proxies. Those proxies accept connections on port 80 or 443 and forward the requests to the Node.js application running in a separate process on the same or different machines. Nginx, Apache, and even IIS can act as reverse proxies, and all three are used in “front” of Node.js applications.

Hello, Express!

Node.js comes with a package manager (similar to NuGet, RubyGems, etc.) called npm. There are tens of thousands of packages available to install via npm. For this demo, you’ll fetch Express, a Web application framework, which adds support for routing requests to different functions and rendering dynamic HTML.

Type the following at a command prompt in the folder containing app.js:

npm install express body-parser ejs

That npm install command installs three packages: Express, the body parser middleware for Express for parsing JSON, and EJS (Embedded JavaScript) for rendering HTML. You should see a new node_modules folder containing folders corresponding to those packages.

Replace the code in app.js with the following:

// Load core modules.
var path = require('path');

// Load third-party modules.
var express = require('express');
var bodyParser = require('body-parser');

// Create the Express application object.
var app = express();

// Configure Express to use the EJS view engine.
app.set('view engine', 'ejs');

// Tell Express where to find views.
app.set('views', path.join(__dirname, 'views'));

// Render HTML when the root path is requested.
app.get('/', function(req, res) {

// Do some addition when JSON is posted to "/add".'/add', bodyParser.json(), function(req, res) {
    res.json(req.body.a + req.body.b);

// Serve static files from the "public" folder.
app.use(express.static(path.join(__dirname, 'public')));

// Start the server on a custom port.
app.listen(process.env.PORT || 1337);

The code contains comments to help explain what each bit is doing. It’s roughly similar to the previous example, but instead of specifying a single callback function for all requests, Express allows specifying separate callback functions for the various endpoints exposed by your application.

The two endpoints in this example are “/” and “/add”. Requests to “/” must be GET requests. Requests to “/add” must be POST requests. The JSON-parsing middleware is used with the “/add” route to parse the request body before the callback function receives the req and res arguments. This middleware could have also been registered “globally” so that all routes automatically parse incoming JSON, but that wasn’t necessary for this simple case.
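To make the “/add” round trip concrete, here is a plain-JavaScript sketch of what the pieces do, with no server involved (the variable names are purely illustrative): the body-parser middleware JSON-parses the raw request body, the route handler adds the two fields, and res.json() serializes the result back into a JSON string.

```javascript
// Sketch of the /add round trip, step by step:
var rawBody = '{"a":2,"b":3}';              // what the client POSTs
var body = JSON.parse(rawBody);             // what bodyParser.json() produces as req.body
var result = body.a + body.b;               // what the route handler computes
var responseText = JSON.stringify(result);  // what res.json() writes to the response
console.log(responseText); // prints 5
```

This also shows why the Content-Type matters: if the body weren’t parsed as JSON, req.body.a and req.body.b would be strings (or undefined) and the addition wouldn’t behave as intended.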

After those two routes are defined, Express is configured to respond to GET requests with any file it can find in the “public” folder. If the file can be found, it’s served directly to the client. Requests to any other paths (or requests using the wrong HTTP method for a correct path) will result in Express responding with a 404 status code.

GET requests to “/” respond with HTML rendered by the EJS view engine. We had to tell Express what view engine to use by setting the view engine property. The default location for views is the “views” folder, but that could be changed.

Place the following in “views/index.ejs”:

<!DOCTYPE html>
    <title>My Application</title>
    <p>This page was rendered at <%= new Date() %>.</p>

    <input id="a" type="number"> + <input id="b" type="number">

    <input id="add" type="submit" value="Add">

    <span id="result"></span>

    <script src="script.js"></script>

The dynamic part of this view is pretty small. The first paragraph inside the body contains a JavaScript expression delimited with <%= and %>. This will probably look familiar, as many template languages use that syntax. The rest of the file is “static” content that will be served to clients as is.

The <script> include near the end of the file will cause the browser to make a second GET request, for script.js. Thinking back to app.js, the call to app.use() near the end of that file told Express to serve static files out of the “public” folder, but that folder name doesn’t appear here. As far as the client is concerned, the “public” folder doesn’t exist. It’s a really good practice to have a distinct folder for publicly accessible resources on the server, though, so that you can control which files clients are able to download.

Place the following in “public/script.js”:

document.getElementById('add').onclick = function() {
    var a = document.getElementById('a').valueAsNumber;
    var b = document.getElementById('b').valueAsNumber;

    var xhr = new XMLHttpRequest();'POST', '/add');

    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.responseType = 'json';

    xhr.onload = function() {
        document.getElementById('result').innerHTML = xhr.response;

    xhr.send(JSON.stringify({ a: a, b: b }));

This JavaScript relies on some new “HTML5” features to keep the code small, but should run in most modern browsers.

You should be able to test everything out now. If the previous version of app.js is still running, kill it by hitting Ctrl+C and then immediately re-start it. Refresh or re-visit http://localhost:1337/ and try adding some numbers together.

Hopefully, everything works as expected on your local machine. To share the code with other developers on your team, you can just check all the files into your favorite version control system. But what about that node_modules folder? Do you want to check all of that in? Some do, some don’t.

Let’s pretend you don’t want it checked in. You don’t want the other developers on your team to have to know what modules to install with npm. To describe your application’s dependencies, you need a package.json file. This is a standard file that all Node.js applications use. The easiest way to create one is to let npm do it for you.

Type npm init in your folder and accept all the defaults by hitting ENTER until you’re back at the prompt. If you look at package.json, you should see “express”, “body-parser”, and “ejs” as dependencies. npm put those there because it saw them in your node_modules folder.
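The generated file will look roughly like this. The name and version numbers depend on what you accepted at the prompts and on which package versions npm installed, so treat every value below as illustrative:

```json
{
  "name": "my-app",
  "version": "0.0.0",
  "main": "app.js",
  "dependencies": {
    "express": "^4.0.0",
    "body-parser": "^1.0.0",
    "ejs": "^2.0.0"
  }
}
```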

Now you can check in package.json and ignore node_modules. Your fellow teammates will just have to run npm install prior to trying to start the application. Test this out by deleting your node_modules folder and then entering npm install from the same folder that contains package.json. The entire node_modules folder should come back.

Hello, Azure!

At this point, you have a trivial but working application. It’s time to deploy. We’ll use Windows Azure, since it supports Node.js (in fact, the management portal for Windows Azure is said to be written using Node.js). Free credits come with MSDN subscriptions, but you can also sign up for a free trial. Even after the trial expires, you can still create small sites like the one we’re about to build without paying anything.

There are multiple ways to deploy your code to Windows Azure, but we’ll use Git, since that’s a popular choice among many PaaS providers like Azure and Heroku.

You can use your favorite Git GUI, but the following commands assume you have the Git executable in your path:

git init
git add app.js views public package.json
git commit -m "Initial commit."

That creates a local Git repository and checks in the necessary files (ignoring the node_modules folder).

Log in to the Windows Azure Management portal and create your site by clicking on the “New” button at the bottom. Select “Compute”, “Web Site”, and “Quick Create”. Give your new site a name and click “Create Web Site”. Wait for the site to change its status to “Running”. Click it and then click “Set up deployment from source control” and select “Local Git repository”. If this is your first time doing this, you’ll be prompted to create a username and password. Once you’re done, you should have a URL like this:

The uppercase parts of that URL will contain your user and site names. Just copy whatever URL they show to your clipboard so you can register it with git like this:

git remote add azure <the-URL-you-copied>
git push azure master

If all goes well, you should see the output from npm downloading Express and friends, along with a lot of other diagnostic messages. When it’s done, you should be able to test out your application by going back to the dashboard and clicking on the site URL, located halfway down the page on the right-hand side.

As you can see, building a web app using Node.js, Express and Windows Azure is very simple. In fact, you can watch me go through the steps in this video. The Node.js runtime is very efficient and includes a terrific package management tool in npm. Like most cloud services nowadays, code can be quickly uploaded to Azure using Git. Finally, Azure makes it easy to deploy and manage your websites. These technologies are taking the web by storm. Contact DevelopMentor to join our crash courses on Node.js and many more modern web technologies.

Enabling Tincr on Windows 8

Some time ago I posted a blog post on Tincr and live reloading of CSS/JavaScript in Google Chrome. This works really well, with one exception: on Windows 8 it will not install. When you try, Chrome shows the following error message:

This application is not supported on this computer. Installation has been disabled.

The Chromium team has acknowledged this as a bug, but it has yet to be fixed.

The interim solution

Fortunately, Lauricio Su came up with a nice workaround and posted it in the Tincr discussion group. Basically, his solution is to run Chrome in Windows 7 compatibility mode and install Tincr. After that, Chrome can be started in the normal way and Tincr will work just fine.

This worked just fine for me, with one addition: I had to keep one instance of Chrome running before starting a new instance in compatibility mode. Without that, Chrome would not get a connection to the Internet.