Node With Docker - Continuous Integration and Delivery

Welcome.

This is a quick start guide for spinning up Docker containers that run NodeJS and Redis. We’ll walk through a basic workflow for managing the local development of an app on Mac OS X, as well as continuous integration and delivery, step by step.


This tutorial is ported from Docker in Action - Fitter, Happier, More Productive.

We’ll be using the following tools, technologies, and services in this post:

  1. NodeJS v0.12.0
  2. Express v3.4.8
  3. Redis v2.8.19
  4. Docker v1.5.0
  5. boot2docker v1.5.0
  6. Docker Compose v1.1.0
  7. Docker Hub
  8. CircleCI
  9. Digital Ocean
  10. Tutum

There are slides too! Check them out here, if interested.

Docker?

Be sure you understand the Docker basics before diving into this tutorial. Check out the official “What is Docker?” guide for an excellent intro.

In short, with Docker, you can truly mimic your production environment on your local machine. No more debugging environment-specific bugs or worrying that your app will perform differently in production.

  1. Version control for infrastructure
  2. Easily distribute/recreate your entire development environment
  3. Build once, run anywhere – aka The Holy Grail!

Docker-specific terms

  • A Dockerfile is a file that contains a set of instructions used to create an image.
  • An image is used to build and save snapshots (the state) of an environment.
  • A container is an instantiated, live image that runs a collection of processes.

Be sure to check out the Docker documentation for more info on Dockerfiles, images, and containers.

Local Setup

Let’s get your local development environment set up!

Get Docker

Follow the download instructions from the guide Installing Docker on Mac OS X to install both Docker and the official boot2docker package. boot2docker is a lightweight Linux distribution designed specifically to run Docker for Windows and Mac OS X users. In essence, it starts a small VM that’s configured to run Docker containers.

Once installed, run the following commands in your project directory to start boot2docker:

$ boot2docker init
$ boot2docker up
$ eval "$(boot2docker shellinit)"

Get the Project

Grab the base code from the repo, and add it to your project directory:

├── app
│   ├── Dockerfile
│   ├── index.js
│   ├── package.json
│   └── test
│       └── test.js
└── redis
    └── Dockerfile

Compose Up!

Docker Compose (previously known as Fig) is an orchestration framework that handles the building and running of multiple services, making it easy to link services running in separate containers. Follow the installation instructions here, and then test it out to make sure all is well:

$ docker-compose --version
docker-compose 1.1.0

Now we just need to define the services - web (NodeJS) and persistence (Redis) - in a configuration file called docker-compose.yml:

web:
  build: ./app
  volumes:
    - "./app:/src/app"
  ports:
    - "80:3000"
  links:
    - redis
redis:
  build: ./redis
  ports:
    - "6379:6379"

Here we add the services that make up our basic stack:

  1. web: First, we build the image based on the instructions in the Dockerfile - where we set up our Node environment, create a volume, install the required dependencies, and fire up the app on port 3000. Then we forward that port in the container to port 80 on the host environment - i.e., the boot2docker VM.
  2. redis: Next, the Redis service is again built from the instructions in the Dockerfile. Port 6379 is exposed and forwarded.
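For reference, the app's Dockerfile in the repo roughly follows this shape - treat the snippet below as a hedged sketch (the base image and exact paths are our guesses), not the repo's exact contents:

```dockerfile
# Sketch of what app/Dockerfile might contain - see the repo for the real file
FROM node:0.12.0

# Create the mount point used by the compose volume
RUN mkdir -p /src/app
WORKDIR /src/app

# Install the required dependencies
COPY package.json /src/app/
RUN npm install
COPY . /src/app

# Fire up the app on port 3000
EXPOSE 3000
CMD ["node", "index.js"]
```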

Profit

Run docker-compose up to build new images for the NodeJS/Express app and Redis services and then run both processes in new containers. Open your browser and navigate to the IP address associated with the boot2docker VM (run boot2docker ip to find it). You should see the text, “You have viewed this page 1 times!” in your browser. Refresh. The page counter should increment.

Once done, kill the processes (Ctrl-C). Commit your changes locally, and then push to Github.

Next Steps

So, what did we accomplish?

We set up our local environment, detailing the basic process of building an image from a Dockerfile and then creating an instance of the image called a container. We then tied everything together with Docker Compose to build and connect different containers for both the NodeJS/Express app and Redis process.

Need the updated code? Grab it from the repo.

Next, let’s talk about Continuous Integration…

Continuous Integration

We’ll start with Docker Hub.

Docker Hub

Docker Hub “manages the lifecycle of distributed apps with cloud services for building and sharing containers and automating workflows”. It’s the Github for Docker images.

  1. Sign up using your Github credentials.
  2. Set up a new automated build, adding the Github repo that you created and pushed to earlier. Accept all the default options, except for the “Dockerfile Location” - change that to “/app”. Once complete, Docker Hub will trigger an initial build.

Each time you push to Github, Docker Hub will generate a new build from scratch.

Docker Hub acts much like a continuous integration server since it ensures you do not cause a regression that completely breaks the build process when the code base is updated. That said, Docker Hub should be the last test before deployment to either staging or production so let’s use a true continuous integration server to fully test our code before it hits Docker Hub.

CircleCI

CircleCI is a CI platform that supports Docker.

Given a Dockerfile, CircleCI builds an image, starts a new container (or containers), and then runs tests inside that container.

  1. Sign up with your Github account.
  2. Create a new project using the Github repo you created.

Next we need to add a configuration file, called circle.yml, to the root folder of the project so that CircleCI can properly create the build.

machine:
  services:
    - docker

dependencies:
  override:
    - curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose

test:
  override:
    - docker-compose run -d --no-deps web
    - cd app; mocha

Here, we install Docker Compose, build a new image, run the container, and then run our unit tests.

Notice how we’re using the command docker-compose run -d --no-deps web to run the web process, instead of docker-compose up. This is because CircleCI already has Redis running and available for our tests, so we just need to run the web process.

Before we test this out, we need to change some settings on Docker Hub.

Docker Hub (redux)

Right now, each push to Github will create a new build. That’s not what we want. Instead, we want CircleCI to run tests against the master branch; then, after they pass (and only after they pass), a new build should trigger on Docker Hub.

Open your repository on Docker Hub, and make the following updates:

  1. Under Settings click Automated Build.
  2. Uncheck the Active box: “When active we will build when new pushes occur”. Save the changes.
  3. Then once again under Settings click Build Triggers.
  4. Change the status to on.
  5. Copy the example curl command – i.e., $ curl --data "build=true" -X POST https://registry.hub.docker.com/u/mjhea0/node-docker-workflow/trigger/84957124-2b85-410d-b602-b48193853b66/.

CircleCI (redux)

Back on CircleCI, let’s add that curl command as an environment variable:

  1. Within the Project Settings, select Environment variables.
  2. Add a new variable with the name “DEPLOY” and paste the curl command as the value.

Then add the following code to the bottom of the circle.yml file:

deployment:
  hub:
    branch: master
    commands:
      - $DEPLOY

This simply fires the $DEPLOY variable after our tests pass on the master branch.

Now, let’s test!

Profit!

Follow these steps…

  1. Create a new branch
  2. Make changes locally
  3. Issue a pull request
  4. Manually merge once the tests pass
  5. Once the second round of tests passes on master, a new build is triggered on Docker Hub

What’s left? Deployment! Grab the updated code, if necessary.

Deployment

Let’s get our app running on Digital Ocean.

After you’ve signed up, create a new Droplet, choose “Applications” and then select the Docker Application.

Once it’s set up, SSH into the server as the ‘root’ user:

$ ssh root@<some_ip_address>

Now you just need to clone the repo, install Docker compose, and then you can run your app:

$ git clone https://github.com/mjhea0/node-docker-workflow.git
$ curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose up -d

Sanity check. Navigate to your Droplet’s IP address in the browser. You should see your app.

Nice!

But what about continuous delivery? Instead of having to SSH into the server and clone the new code, the process should be part of our workflow so that once a new build is generated on Docker Hub, the code is updated on Digital Ocean automatically.

Enter Tutum.

Continuous Delivery

Tutum manages the orchestration and deployment of Docker images and containers. Setup is simple. After you’ve signed up (with Github), you need to add a Node, which is just a Linux host. We’ll use Digital Ocean.

Start by linking your Digital Ocean account within the “Account Info” area.

Now you can add a new Node. The process is straightforward, but if you need help, please refer to the official documentation. Just add a name, select a region, and then you’re good to go.

With a Node set up, we can now add a Stack of services - web and Redis, in our case - that make up our tech stack. Next, create a new file called tutum.yml, and add the following code:

web:
  image: mjhea0/node-docker-workflow
  autorestart: always
  ports:
    - "80:3000"
  links:
    - "redis:redis"
redis:
  image: redis
  autorestart: always
  ports:
    - "6379:6379"

Notice the difference between this file and the docker-compose.yml file: here we are not building images from Dockerfiles; we’re pulling pre-built images from Docker Hub. The end result is the same, since the most up-to-date build lives on Docker Hub.
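To make the parallel concrete, a docker-compose.yml that pulled the same pre-built images instead of building locally would simply swap build: for image: (a sketch, assuming the image names above):

```yaml
web:
  image: mjhea0/node-docker-workflow
  ports:
    - "80:3000"
  links:
    - redis
redis:
  image: redis
  ports:
    - "6379:6379"
```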

Now just create a new Stack, adding a name and uploading the tutum.yml file, and click “Create and deploy” to pull in the new images on the Node and then build and run the containers.

Once done, you can view your live app!

Note: You lose the “magic” of Tutum when running everything on a single host, as we’re currently doing. In a real-world scenario, you’d want to deploy multiple web containers, load balance across them, and have them live on different hosts, sharing a single Redis cache. We may look at this in a future post, focusing solely on delivery.

Before we call it quits, we need to sync Docker Hub with Tutum so that when a new build is created on Docker Hub, the services are rebuilt and redeployed on Tutum - automatically!

Tutum makes this simple.

Under the Services tab, click the web service, and, finally, click the Webhooks tab. To create a new hook, simply add a name and then click Add. Copy the URL, and then navigate back to Docker Hub. Once there, click the Webhook link and add a new hook, pasting in the URL.

Now after a build is created on Docker Hub, a POST request is sent to that URL, which, in turn, triggers a redeploy on Tutum. Boom!

Conclusion

As always, comment below if you have questions. If you manage a different workflow for continuous integration and delivery, please post the details below. Grab the final code from the repo.

See you next time!

PostgreSQL and NodeJS

Today we’re going to build a CRUD todo single page application with Node, Express, Angular, and PostgreSQL.


Technologies/Tools used - Node v0.10.36, Express v4.11.1, Angular v1.3.12.

Project Setup

Start by installing the Express generator if you don’t already have it:

$ npm install -g express-generator@4

Then create a new project and install the dependencies:

$ express node-postgres-todo
$ cd node-postgres-todo && npm install

Add Supervisor to watch for code changes:

$ npm install supervisor -g

Update the ‘start’ script in the package.json file:

"scripts": {
  "start": "supervisor ./bin/www"
},

Run the app:

$ npm start

Then navigate to http://localhost:3000/ in your browser. You should see the “Welcome to Express” text.

Postgres Setup

Need to set up Postgres? On a Mac? Check out Postgres.app.

With your Postgres server up and listening on port 5432, making a database connection is easy with the pg library:

$ npm install pg --save

Now let’s set up a simple table creation script:

var pg = require('pg');
var connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/todo';

var client = new pg.Client(connectionString);
client.connect();
var query = client.query('CREATE TABLE items(id SERIAL PRIMARY KEY, text VARCHAR(40) not null, complete BOOLEAN)');
query.on('end', function() { client.end(); });

Save this as database.js in a new folder called “models”.

Here we create a new instance of Client to interact with the database and then establish communication with it via the connect() method. We then run a SQL query via the query() method. Communication is closed via the end() method. Be sure to check out the documentation for more info.

Make sure you have a database called “todo” set up, and then run the script to create the table and subsequent fields:

$ node models/database.js

Verify the table/schema creation in psql:

michaelherman=# \c todo
You are now connected to database "todo" as user "michaelherman".
todo=# \d+ items
                                                     Table "public.items"
  Column  |         Type          |                     Modifiers                      | Storage  | Stats target | Description
----------+-----------------------+----------------------------------------------------+----------+--------------+-------------
 id       | integer               | not null default nextval('items_id_seq'::regclass) | plain    |              |
 text     | character varying(40) | not null                                           | extended |              |
 complete | boolean               |                                                    | plain    |              |
Indexes:
    "items_pkey" PRIMARY KEY, btree (id)

With the database connection and the “items” table set up, we can now configure the CRUD portion of our app.

Server-Side: Routes

Let’s keep it simple by adding all endpoints to the index.js file within the “routes” folder. Make sure to update the imports:

var express = require('express');
var router = express.Router();
var pg = require('pg');
var connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/todo';

Now, let’s add each endpoint.

Function    URL                       Action
CREATE      /api/v1/todos             Create a single todo
READ        /api/v1/todos             Get all todos
UPDATE      /api/v1/todos/:todo_id    Update a single todo
DELETE      /api/v1/todos/:todo_id    Delete a single todo

Follow along with the inline comments below for an explanation of what’s happening. Also, be sure to check out the pg documentation to learn about connection pooling. How does that differ from pg.Client?
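To build some intuition for that question: pg.Client opens one dedicated connection, while pg.connect hands out clients from a shared pool and expects them back. Here’s a toy pool - our own illustration, not pg’s actual implementation - showing the core idea:

```javascript
// Toy connection pool illustrating the idea behind pg.connect.
// This is our own sketch, NOT pg's real implementation.
function Pool(createClient) {
  this.createClient = createClient; // factory for new "connections"
  this.idle = [];                   // clients waiting to be reused
  this.created = 0;                 // how many real connections exist
}

Pool.prototype.acquire = function() {
  // Reuse an idle client if we have one; otherwise create a new one.
  if (this.idle.length > 0) return this.idle.pop();
  this.created += 1;
  return this.createClient();
};

Pool.prototype.release = function(client) {
  // This is what pg's done() callback does: return the client to the
  // pool instead of tearing the connection down.
  this.idle.push(client);
};

var pool = new Pool(function() { return { connected: true }; });
var first = pool.acquire();
pool.release(first);
var second = pool.acquire(); // reuses the released client
console.log(first === second, pool.created); // true 1
```

With pg.Client you pay the connection cost on every request; with a pool, a released client gets handed straight to the next caller.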

Create

router.post('/api/v1/todos', function(req, res) {

    var results = [];

    // Grab data from http request
    var data = {text: req.body.text, complete: false};

    // Get a Postgres client from the connection pool
    pg.connect(connectionString, function(err, client, done) {

        // Handle connection errors
        if(err) {
            console.log(err);
            return res.status(500).json({ success: false, data: err });
        }

        // SQL Query > Insert Data
        client.query("INSERT INTO items(text, complete) values($1, $2)", [data.text, data.complete]);

        // SQL Query > Select Data
        var query = client.query("SELECT * FROM items ORDER BY id ASC");

        // Stream results back one row at a time
        query.on('row', function(row) {
            results.push(row);
        });

        // After all data is returned, close connection and return results
        query.on('end', function() {
            client.end();
            return res.json(results);
        });

    });
});

Test this out via Curl in your terminal:

$ curl --data "text=test&complete=false" http://127.0.0.1:3000/api/v1/todos

Then confirm that the data was INSERT’ed correctly into the database via psql:

todo=# SELECT * FROM items ORDER BY id ASC;
 id | text  | complete
----+-------+----------
  1 | test  | f
(1 row)

Read

router.get('/api/v1/todos', function(req, res) {

    var results = [];

    // Get a Postgres client from the connection pool
    pg.connect(connectionString, function(err, client, done) {

        // Handle connection errors
        if(err) {
            console.log(err);
            return res.status(500).json({ success: false, data: err });
        }

        // SQL Query > Select Data
        var query = client.query("SELECT * FROM items ORDER BY id ASC;");

        // Stream results back one row at a time
        query.on('row', function(row) {
            results.push(row);
        });

        // After all data is returned, close connection and return results
        query.on('end', function() {
            client.end();
            return res.json(results);
        });

    });

});

Add a few more rows of data via Curl, and then test the endpoint out in your browser at http://localhost:3000/api/v1/todos. You should see an array of JSON objects:

[
    {
        id: 1,
        text: "test",
        complete: false
    },
    {
        id: 2,
        text: "test2",
        complete: false
    },
    {
        id: 3,
        text: "test3",
        complete: false
    }
]

Update

router.put('/api/v1/todos/:todo_id', function(req, res) {

    var results = [];

    // Grab data from the URL parameters
    var id = req.params.todo_id;

    // Grab data from http request
    var data = {text: req.body.text, complete: req.body.complete};

    // Get a Postgres client from the connection pool
    pg.connect(connectionString, function(err, client, done) {

        // Handle connection errors
        if(err) {
            console.log(err);
            return res.status(500).json({ success: false, data: err });
        }

        // SQL Query > Update Data
        client.query("UPDATE items SET text=($1), complete=($2) WHERE id=($3)", [data.text, data.complete, id]);

        // SQL Query > Select Data
        var query = client.query("SELECT * FROM items ORDER BY id ASC");

        // Stream results back one row at a time
        query.on('row', function(row) {
            results.push(row);
        });

        // After all data is returned, close connection and return results
        query.on('end', function() {
            client.end();
            return res.json(results);
        });

    });

});

Again, test via Curl:

$ curl -X PUT --data "text=test&complete=true" http://127.0.0.1:3000/api/v1/todos/1

Navigate to http://localhost:3000/api/v1/todos to make sure the data has been updated correctly.

[
    {
        id: 1,
        text: "test",
        complete: true
    },
    {
        id: 2,
        text: "test2",
        complete: false
    },
    {
        id: 3,
        text: "test3",
        complete: false
    }
]

Delete

router.delete('/api/v1/todos/:todo_id', function(req, res) {

    var results = [];

    // Grab data from the URL parameters
    var id = req.params.todo_id;

    // Get a Postgres client from the connection pool
    pg.connect(connectionString, function(err, client, done) {

        // Handle connection errors
        if(err) {
            console.log(err);
            return res.status(500).json({ success: false, data: err });
        }

        // SQL Query > Delete Data
        client.query("DELETE FROM items WHERE id=($1)", [id]);

        // SQL Query > Select Data
        var query = client.query("SELECT * FROM items ORDER BY id ASC");

        // Stream results back one row at a time
        query.on('row', function(row) {
            results.push(row);
        });

        // After all data is returned, close connection and return results
        query.on('end', function() {
            client.end();
            return res.json(results);
        });

    });

});

Final Curl test:

$ curl -X DELETE http://127.0.0.1:3000/api/v1/todos/3

And you should now have:

[
    {
        id: 1,
        text: "test",
        complete: true
    },
    {
        id: 2,
        text: "test2",
        complete: false
    }
]

Refactoring

Before we jump to the client-side to add Angular, be aware that our code should be refactored to address a few issues. We’ll handle this later on in this tutorial, but this is an excellent opportunity to refactor the code on your own. Good luck!

Client-Side: Angular

Let’s dive right into Angular.

Keep in mind that this is not meant to be an exhaustive tutorial. If you’re new to Angular I suggest following my “AngularJS by Example” tutorial - Building a Bitcoin Investment Calculator.

Module

Create a file called app.js in the “public/javascripts” folder. This file will house our Angular module and controller:

angular.module('nodeTodo', [])

.controller('mainController', function($scope, $http) {

    $scope.formData = {};
    $scope.todoData = {};

    // Get all todos
    $http.get('/api/v1/todos')
        .success(function(data) {
            $scope.todoData = data;
            console.log(data);
        })
        .error(function(error) {
            console.log('Error: ' + error);
        });
});

Here we define our module as well as the controller. Within the controller we are using the $http service to make an AJAX request to the '/api/v1/todos' endpoint and then updating the scope accordingly.

What else is going on?

Well, we’re injecting the $scope and $http services. Also, we’re defining and updating $scope to handle binding.

Update / Route

Let’s update the main route in index.js within the “routes” folder:

router.get('/', function(req, res, next) {
  res.sendFile(path.join(__dirname, '../views', 'index.html'));
});

So when the end user hits the main endpoint, we send the index.html file. This file will contain our HTML and Angular templates.

Make sure to add the following dependency as well:

var path = require('path');

View

Now, let’s add our basic Angular view within index.html:

<!DOCTYPE html>
<html ng-app="nodeTodo">
  <head>
    <title>Todo App - with Node + Express + Angular + PostgreSQL</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link href="http://netdna.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet" media="screen">
  </head>
  <body ng-controller="mainController">
    <div class="container">
      <ul ng-repeat="todo in todoData">
        <li>{{ todo.text }}</li>
      </ul>
    </div>
    <script src="http://code.jquery.com/jquery-1.11.2.min.js" type="text/javascript"></script>
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.3.1/js/bootstrap.min.js" type="text/javascript"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.12/angular.min.js"></script>
    <script src="javascripts/app.js"></script>
  </body>
</html>

This should all be straightforward. We bootstrap Angular - ng-app="nodeTodo", define the scope of the controller - ng-controller="mainController" - and then use ng-repeat to loop through the todoData array, adding each individual todo to the page.

Module (round two)

Next, let’s update the module to handle the Create and Delete functions:

// Create a new todo
$http.post('/api/v1/todos', $scope.formData)
    .success(function(data) {
        $scope.formData = {};
        $scope.todoData = data;
        console.log(data);
    })
    .error(function(error) {
        console.log('Error: ' + error);
    });

// Delete a todo
$http.delete('/api/v1/todos/' + todoID)
    .success(function(data) {
        $scope.todoData = data;
        console.log(data);
    })
    .error(function(data) {
        console.log('Error: ' + data);
    });

Now, let’s update our view…

View (round two)

Simply update each list item like so:

<li><input type="checkbox" ng-click="deleteTodo(todo.id)">&nbsp;{{ todo.text }}</li>

This uses the ng-click directive to call the deleteTodo() function - which we still need to define - that takes a unique id associated with each todo as an argument.

Module (round three)

Update the controller:

// Delete a todo
$scope.deleteTodo = function(todoID) {
    $http.delete('/api/v1/todos/' + todoID)
        .success(function(data) {
            $scope.todoData = data;
            console.log(data);
        })
        .error(function(data) {
            console.log('Error: ' + data);
        });
};

We simply wrapped the delete functionality in the deleteTodo() function. Test this out. Make sure that when you click a check box the todo is removed.

View (round three)

To handle the creation of a new todo, we need to add an HTML form:

<div class="container">

  <form>
    <div class="form-group">
      <input type="text" class="form-control input-lg" placeholder="Add a todo..." ng-model="formData.text">
    </div>
    <button type="submit" class="btn btn-primary btn-lg" ng-click="createTodo()">Add Todo</button>
  </form>

  <ul ng-repeat="todo in todoData">
    <li><input type="checkbox" ng-click="deleteTodo(todo.id)">&nbsp;{{ todo.text }}</li>
  </ul>

</div>

Again, we use ng-click to call a function in the controller.

Module (round four)

// Create a new todo
$scope.createTodo = function(todoID) {
    $http.post('/api/v1/todos', $scope.formData)
        .success(function(data) {
            $scope.formData = {};
            $scope.todoData = data;
            console.log(data);
        })
        .error(function(error) {
            console.log('Error: ' + error);
        });
};

Test this out!

View (round four)

With the main functionality done, let’s update the front-end to make it look, well, presentable.

HTML:

<!DOCTYPE html>
<html ng-app="nodeTodo">
  <head>
    <title>Todo App - with Node + Express + Angular + PostgreSQL</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootstrap/3.3.1/css/bootstrap.min.css" rel="stylesheet" media="screen">
    <link href="stylesheets/style.css" rel="stylesheet" media="screen">
  </head>
  <body ng-controller="mainController">

    <div class="container">

      <div class="header">
        <h1>Todo App</h1>
        <hr>
        <h1 class="lead">Node + Express + Angular + PostgreSQL</h1>
      </div>

      <div class="todo-form">
        <form>
          <div class="form-group">
            <input type="text" class="form-control input-lg" placeholder="Enter text..." ng-model="formData.text">
          </div>
          <button type="submit" class="btn btn-primary btn-lg btn-block" ng-click="createTodo()">Add Todo</button>
        </form>
      </div>

      <br>

      <div class="todo-list">
        <ul ng-repeat="todo in todoData">
          <li><h3><input class="lead" type="checkbox" ng-click="deleteTodo(todo.id)">&nbsp;{{ todo.text }}</h3><hr></li>
        </ul>
      </div>

    </div>

    <!-- scripts -->
    <script src="http://code.jquery.com/jquery-1.11.2.min.js" type="text/javascript"></script>
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.3.1/js/bootstrap.min.js" type="text/javascript"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.12/angular.min.js"></script>
    <script src="javascripts/app.js"></script>
  </body>
</html>

CSS:

body {
  padding: 50px;
  font: 14px "Lucida Grande", Helvetica, Arial, sans-serif;
}

a {
  color: #00B7FF;
}

ul {
  list-style-type: none;
  padding-left: 10px;
}

.container {
  max-width: 400px;
  background-color: #eeeeee;
  border: 1px solid black;
}

.header {
  text-align: center;
}

How’s that? Not up to par? Continue working on it on your end.

Refactoring (for real)

Now that we added the front-end functionality, let’s update our application’s structure and refactor parts of the code.

Structure

Since our application is logically split between the client and server, let’s do the same for our project structure. So, make the following changes to your folder structure:

├── app.js
├── bin
│   └── www
├── client
│   ├── public
│   │   ├── javascripts
│   │   │   └── app.js
│   │   └── stylesheets
│   │       └── style.css
│   └── views
│       └── index.html
├── config.js
├── package.json
└── server
    ├── models
    │   └── database.js
    └── routes
        └── index.js

Now, we need to make a few updates to the code:

server/routes/index.js:

res.sendFile(path.join(__dirname, '../', '../', 'client', 'views', 'index.html'));

app.js:

var routes = require('./server/routes/index');

app.js:

app.use(express.static(path.join(__dirname, './client', 'public')));

Configuration

Next, let’s move the connectionString variable - which specifies the database URI (process.env.DATABASE_URL || 'postgres://localhost:5432/todo';) - to a configuration file, since we are reusing the same connection throughout our application.

Create a file called config.js in the root directory, and then add the following code to it:

var connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/todo';

module.exports = connectionString;

Then update the connectionString variable in both server/models/database.js and server/routes/index.js:

var connectionString = require(path.join(__dirname, '../', '../', 'config'));

And make sure to add var path = require('path'); to the former file as well.

Utility Function

Did you notice in our routes that we are reusing the same code in each of the CRUD functions:

// SQL Query > Select Data
var query = client.query("SELECT * FROM items ORDER BY id ASC");

// Stream results back one row at a time
query.on('row', function(row) {
    results.push(row);
});

// After all data is returned, close connection and return results
query.on('end', function() {
    client.end();
    return res.json(results);
});

We should abstract that out into a utility function so we’re not duplicating code. Do this on your own, and then post a link to your code in the comments for review.

Conclusion and next steps

That’s it! Now, since there’s a number of moving pieces here, please review how each piece fits into the overall process and whether each is part of the client or server-side. Comment below with questions. Grab the code from the repo.




Finally, this app is far from finished. What else do we need to do?

  1. Handle Permissions via passport.js
  2. Add a task runner - like Gulp
  3. Test with Mocha and Chai
  4. Check test coverage with Istanbul
  5. Add promises
  6. Use Bower for managing client-side dependencies
  7. Utilize Angular Routing, form validation, Services, and Templates
  8. Handle updates/PUT requests
  9. Update the Express View Engine to HTML
  10. Better manage the database layer by adding an ORM - like Sequelize - and a means of managing migrations

What else? Comment below.

Sublime Text for Web Developers

Sublime Text 3 (ST3) is a powerful editor just as it is. But if you want to step up your game, you need to take advantage of all that ST3 has to offer by learning the keyboard shortcuts and customizing the editor to meet your individual needs…

NOTE: This tutorial is meant for Mac OS X users, utilizing HTML, CSS, and JavaScript/jQuery.

Be sure to set up the subl command line tool, which can be used to open a single file or an entire project directory of files and folders, before moving on.

Keyboard Shortcuts

Goal: Never take your hands off the keyboard!

  1. Command Palette (CMD-SHIFT-P) - Accesses the all-powerful Command Palette, where you can run toolbar actions - setting the code syntax, accessing package control, renaming a file, etc.

    Command Palette

  2. Goto Anything (CMD-P) - Searches for a file within the current project or a line or definition in the current file. It’s fuzzy so you don’t need to match the name exactly.

    • @ - Definition - class, method, function
    • : - Line #
  3. Distraction Free Mode (CMD-CTRL-SHIFT-F) - Eliminates distractions!

    Command Palette

  4. Hide/Show the Sidebar (CMD-K, CMD-B) - Toggles the sidebar.

  5. Comment Your Code (CMD-/) - Highlight the code you want to comment out, then press CMD-/ to comment it out. If nothing is highlighted, this command comments out the current line.
  6. Highlight an entire line (CMD-L)
  7. Delete an entire line (CMD-SHIFT-K)
  8. Multi-Edit (CMD-D) - Simply select the word you want to edit, then press CMD-D repeatedly until you have selected all the instances you want to change/update/etc.

Grab the cheat sheet in PDF.

Configuration

You can customize almost anything in ST3 by updating the config settings.

Config settings can be set at the global/default level or by user, project, package, and/or syntax. Settings files are loaded in the following order:

  • Packages/Default/Preferences.sublime-settings
  • Packages/User/Preferences.sublime-settings
  • Packages/<syntax>/<syntax>.sublime-settings
  • Packages/User/<syntax>.sublime-settings

Always apply your custom configuration settings at the User level, since they will not get overridden when you update Sublime and/or a specific package.

  1. Base User Settings: Sublime Text 3 > Preferences > Settings – User
  2. Package User Specific: Sublime Text 3 > Preferences > Package Settings > PACKAGE NAME > Settings – User
  3. Syntax User Settings: Sublime Text 3 > Preferences > Settings – More > Syntax Specific - User
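For example, a syntax-specific settings file that forces four-space indentation for Python files only (a sketch - open it via Settings – More > Syntax Specific - User while a Python file is active) could look like:

```json
{
  "tab_size": 4,
  "translate_tabs_to_spaces": true
}
```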

Base User Settings

Don’t know where to start?

{
  "draw_white_space": "all",
  "rulers": [80],
  "tab_size": 2,
  "translate_tabs_to_spaces": true,
  "trim_trailing_white_space_on_save": true,
  "word_wrap": true
}

Add this to Sublime Text 3 > Preferences > Settings – User.

What’s happening?

  1. We convert tabs to two spaces. Now when you press tab, it actually indents two spaces. This is perfect for HTML, CSS, and JavaScript. This creates cleaner, easier to read code.
  2. The ruler is a simple reminder to keep your code concise (for readability).
  3. We added white space markers and trimmed any trailing (err, unnecessary) white space on save.
  4. Finally, word wrapping is automatically applied.

What else can you update? Start with the theme.

For example -

"color_scheme": "Packages/User/Flatland Dark (SL).tmTheme",

Simply add this to that same file.

You can find and test themes online before applying them here.

Advanced users should look into customizing key bindings, macros, and code snippets.

Packages

Want more features? There’s a ton of community-written packages used to, well, extend ST3’s functionality. “There’s a package for that”.

Package Control

Package Control must be installed manually; once installed, you can use it to install all other ST3 packages. To install it, copy the Python code found here, open your console (CTRL-`), paste the code, and press ENTER. Then restart ST3.

Command Palette

Now you can easily install packages by entering the Command Palette (remember the keyboard shortcut?).

  1. Type “install”. Press ENTER when Package Control: Install Package is highlighted
  2. Search for a package. Boom!

Let’s look at some packages…

Sublime Linter

SublimeLinter is a framework for Sublime Text linters.

After you install the base package, you need to install linters separately via Package Control, which are easily searchable as they adhere to the following naming syntax - SublimeLinter-[linter_name]. You can view all the official linters here.

Start with the following linters:

  1. SublimeLinter-jshint
  2. SublimeLinter-csslint
  3. SublimeLinter-html-tidy
  4. SublimeLinter-json

Sidebar Enhancements

Sidebar Enhancements extends the number of menu options in the sidebar, adding file explorer actions - i.e., Copy, Cut, Paste, Delete, Rename. This package also adds the same commands/actions to the Command Palette.

Command Palette

JsFormat

JsFormat beautifies your JavaScript/jQuery Code!

Press CTRL-ALT-F to turn this mess…

function peopleFromBoulder(arr) {return arr.filter(function(val) {return val.city == 'Boulder';})
    .map(function(val) {return val.name + ' is from Boulder';});}

…into…

function peopleFromBoulder(arr) {
    return arr.filter(function(val) {
            return val.city == 'Boulder';
        })
        .map(function(val) {
            return val.name + ' is from Boulder';
        });
}

DocBlockr

DocBlockr creates comment blocks based on the context.

Try it!

function refactorU (student) {
    if (student === "Zach") {
        var str = student + " is awesome!";
    } else {
        var str = student + " is NOT awesome!";
    }
    return str;
}

Now add an opening comment block - /** - and as soon as you press tab, it will create a dummy-documentation-comment automatically.

/**
 * [refactorU description]
 * @param  {[type]}
 * @return {[type]}
 */
function refactorU (student) {
    if (student === "Zach") {
        var str = student + " is awesome!";
    } else {
        var str = student + " is NOT awesome!";
    }
    return str;
}

Yay!

GitGutter

GitGutter displays icons in the “gutter” area (next to the line numbers) indicating whether an individual line has been modified since your last commit.

GitGutter

Emmet

With Emmet you can turn a symbol or code abbreviation into an HTML or CSS code snippet. It’s by far the best plugin for increasing your productivity and efficiency as a web developer.

Try this out: Once installed, start a new HTML file, type a bang, !, and then press tab.

<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Document</title>
</head>
<body>

</body>
</html>

Boom!

Check the official docs to see all the expressions/symbols/abbreviations that can be used for generating snippets.
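For instance, the abbreviation ul.todo>li.item*3 (the class names here are just an example) expands, after pressing tab, into:

```html
<ul class="todo">
  <li class="item"></li>
  <li class="item"></li>
  <li class="item"></li>
</ul>
```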

Conclusion

Go pimp your editor.

Want a package? It’s just Python. Hire me!

Comment below. Check out the repo for my Sublime dotfiles. Cheers!

Additional Resources

  1. Sublime Text Tips Newsletter - awesome tips, tricks
  2. Community-maintained documentation
  3. Package Manager documentation
  4. Unofficial documentation reference
  5. Setting Up Sublime Text 3 for Full Stack Python Development - my other ST3 post