This is a quick start guide for spinning up Docker containers that run NodeJS and Redis. We’ll walk through a basic development workflow for managing the local development of an app on Mac OS X, as well as continuous integration and delivery, step by step.
This tutorial is ported from Docker in Action - Fitter, Happier, More Productive.
We’ll be using the following tools, technologies, and services in this post:
- NodeJS v0.12.0
- Express v3.4.8
- Redis v2.8.19
- Docker v1.5.0
- boot2docker v1.5.0
- Docker Compose v1.1.0
- Docker Hub
- Digital Ocean
There are slides too! Check them out here, if interested.
Be sure you understand the Docker basics before diving into this tutorial. Check out the official “What is Docker?” guide for an excellent intro.
In short, with Docker, you can truly mimic your production environment on your local machine. No more having to debug environment specific bugs or worrying that your app will perform differently in production.
- Version control for infrastructure
- Easily distribute/recreate your entire development environment
- Build once, run anywhere – aka The Holy Grail!
- A Dockerfile is a file that contains a set of instructions used to create an image.
- An image is used to build and save snapshots (the state) of an environment.
- A container is an instantiated, live image that runs a collection of processes.
Let’s get your local development environment set up!
Follow the download instructions from the guide Installing Docker on Mac OS X to install both Docker and the official boot2docker package. boot2docker is a lightweight Linux distribution designed specifically to run Docker for Windows and Mac OS X users. In essence, it starts a small VM that’s configured to run Docker containers.
Once installed, run the following commands in your project directory to start boot2docker:
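Assuming a standard boot2docker install, the startup sequence typically looks like this:

```sh
$ boot2docker init          # create the boot2docker VM (first run only)
$ boot2docker up            # boot the VM
$ $(boot2docker shellinit)  # export DOCKER_HOST and related variables into your shell
```

boot2docker shellinit prints the export statements needed to point your Docker client at the VM; wrapping it in $( ) evaluates them in the current shell.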
Get the Project
Grab the base code from the repo, and add it to your project directory:
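Assuming the base code lives in the mjhea0/node-docker-workflow repo referenced later in this post, grabbing it looks something like:

```sh
$ git clone https://github.com/mjhea0/node-docker-workflow.git
$ cd node-docker-workflow
```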
Docker Compose (Previously known as fig) is an orchestration framework that handles the building and running of multiple services, making it easy to link multiple services together running in different containers. Follow the installation instructions here, and then test it out to make sure all is well:
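A quick sanity check - if the install worked, a version check should succeed:

```sh
$ docker-compose --version
```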
Now we just need to define the services - web (NodeJS) and persistence (Redis) - in a configuration file called docker-compose.yml:
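A sketch of what docker-compose.yml might contain, based on the service descriptions below (the volume path and the Redis build location are assumptions):

```yaml
web:
  build: .              # build the web image from the project's Dockerfile
  volumes:
    - ./app:/src/app    # assumed path: mount the app code into the container
  ports:
    - "80:3000"         # forward container port 3000 to port 80 on the boot2docker VM
  links:
    - redis             # make the redis container reachable from the web app
redis:
  build: ./redis        # assumed location of the Redis Dockerfile
  ports:
    - "6379:6379"
```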
Here we add the services that make up our basic stack:
- web: First, we build the image based on the instructions in the Dockerfile - where we setup our Node environment, create a volume, install the required dependencies, and fire up the app running on port 3000. Then we forward that port in the container to port 80 on the host environment - i.e., the boot2docker VM.
- redis: Next, the Redis service is again built from the instructions in the Dockerfile. Port 6379 is exposed and forwarded.
Run docker-compose up to build new images for the NodeJS/Express app and Redis services and then run both processes in new containers. Open your browser and navigate to the IP address associated with the boot2docker VM (run boot2docker ip to find it). You should see the text, “You have viewed this page 1 times!” in your browser. Refresh. The page counter should increment.
Once done, kill the processes (Ctrl-C). Commit your changes locally, and then push to Github.
So, what did we accomplish?
We set up our local environment, detailing the basic process of building an image from a Dockerfile and then creating an instance of the image called a container. We then tied everything together with Docker Compose to build and connect different containers for both the NodeJS/Express app and Redis process.
Need the updated code? Grab it from the repo.
Next, let’s talk about Continuous Integration…
We’ll start with Docker Hub.
Docker Hub “manages the lifecycle of distributed apps with cloud services for building and sharing containers and automating workflows”. It’s the Github for Docker images.
- Sign up using your Github credentials.
- Set up a new automated build, and add the Github repo that you created and pushed to earlier. Accept all the default options, except for the “Dockerfile Location” - change that to “/app”. Once complete, Docker Hub will trigger an initial build.
Each time you push to Github, Docker Hub will generate a new build from scratch.
Docker Hub acts much like a continuous integration server, since it ensures you do not cause a regression that completely breaks the build process when the code base is updated. That said, Docker Hub should be the last test before deployment to either staging or production, so let’s use a true continuous integration server to fully test our code before it hits Docker Hub.
CircleCI is a CI platform that supports Docker.
Given a Dockerfile, CircleCI builds an image, starts a new container (or containers), and then runs tests inside that container.
- Sign up with your Github account.
- Create a new project using the Github repo you created.
Next we need to add a configuration file, called circle.yml, to the root folder of the project so that CircleCI can properly create the build.
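A sketch of what circle.yml might look like (the test command is an assumption - use whatever runs your suite):

```yaml
machine:
  services:
    - docker     # CircleCI provides Docker on the build machine
    - redis      # Redis is already running and available for tests

dependencies:
  override:
    - sudo pip install docker-compose

test:
  override:
    - docker-compose build
    - docker-compose run -d --no-deps web
    - npm test   # assumed test command
```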
Here, we install Docker Compose, build a new image, and run the container along with our unit tests.
Notice how we’re using the command docker-compose run -d --no-deps web to run the web process, instead of docker-compose up. This is because CircleCI already has Redis running and available for our tests, so we only need to run the web process.
Before we test this out, we need to change some settings on Docker Hub.
Docker Hub (redux)
Right now, each push to Github will create a new build. That’s not what we want. Instead, we want CircleCI to run tests against the master branch; then, after they pass (and only after they pass), a new build should be triggered on Docker Hub.
Open your repository on Docker Hub, and make the following updates:
- Under Settings click Automated Build.
- Uncheck the Active box: “When active we will build when new pushes occur”. Save the changes.
- Then once again under Settings click Build Triggers.
- Change the status to on.
- Copy the example curl command - i.e., $ curl --data "build=true" -X POST https://registry.hub.docker.com/u/mjhea0/node-docker-workflow/trigger/84957124-2b85-410d-b602-b48193853b66/.
Back on CircleCI, let’s add that curl command as an environment variable:
- Within the Project Settings, select Environment variables.
- Add a new variable with the name “DEPLOY” and paste the curl command as the value.
Then add the following code to the bottom of the circle.yml file:
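A sketch of that addition (the deployment section name, “hub”, is arbitrary):

```yaml
deployment:
  hub:
    branch: master
    commands:
      - $DEPLOY
```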
This simply runs the curl command stored in the $DEPLOY variable after our tests pass on the master branch.
Now, let’s test!
Follow these steps…
- Create a new branch
- Make changes locally
- Issue a pull request
- Manually merge once the tests pass
- Once the second round passes, a new build is triggered on Docker Hub
What’s left? Deployment! Grab the updated code, if necessary.
Let’s get our app running on Digital Ocean.
After you’ve signed up, create a new Droplet, choose “Applications” and then select the Docker Application.
Once it’s set up, SSH into the server as the ‘root’ user:
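Substituting your Droplet’s IP address:

```sh
$ ssh root@<your-droplet-ip>
```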
Now you just need to clone the repo, install Docker Compose, and then you can run your app:
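A sketch of those steps (the Compose release pinned here matches the version used in this post):

```sh
$ git clone https://github.com/mjhea0/node-docker-workflow.git
$ cd node-docker-workflow
$ curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
$ docker-compose up -d
```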
Sanity check. Navigate to your Droplet’s IP address in the browser. You should see your app.
But what about continuous delivery? Instead of having to SSH into the server and clone the new code, the process should be part of our workflow so that once a new build is generated on Docker Hub, the code is updated on Digital Ocean automatically.
Tutum manages the orchestration and deployment of Docker images and containers. Setup is simple. After you’ve signed up (with Github), you need to add a Node, which is just a Linux host. We’ll use Digital Ocean.
Start by linking your Digital Ocean account within the “Account Info” area.
Now you can add a new Node. The process is straightforward, but if you need help, please refer to the official documentation. Just add a name, select a region, and then you’re good to go.
With a Node setup, we can now add a Stack of services - web and Redis, in our case - that make up our tech stack. Next, create a new file called tutum.yml, and add the following code:
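A sketch of tutum.yml, mirroring docker-compose.yml but pulling prebuilt images (the image names are assumptions based on the Docker Hub repo used earlier):

```yaml
web:
  image: mjhea0/node-docker-workflow   # pull the prebuilt image from Docker Hub
  ports:
    - "80:3000"
  links:
    - redis
redis:
  image: redis
  ports:
    - "6379:6379"
```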
Notice the difference between this file and the docker-compose.yml file: instead of building images from local Dockerfiles, here we pull prebuilt images from Docker Hub. The end result is essentially the same, since the most up-to-date build lives on Docker Hub.
Now just create a new Stack, adding a name and uploading the tutum.yml file, and click “Create and deploy” to pull in the new images on the Node and then build and run the containers.
Once done, you can view your live app!
Note: You lose the “magic” of Tutum when running everything on a single host, as we’re currently doing. In a real-world scenario you’d want to deploy multiple web containers, load balance across them, and have them live on different hosts, sharing a single Redis cache. We may look at this in a future post, focusing solely on delivery.
Before we call it quits, we need to sync Docker Hub with Tutum so that when a new build is created on Docker Hub, the services are rebuilt and redeployed on Tutum - automatically!
Tutum makes this simple.
Under the Services tab, click the web service, and, finally, click the Webhooks tab. To create a new hook, simply add a name and then click Add. Copy the URL, and then navigate back to Docker Hub. Once there, click the Webhook link and add a new hook, pasting in the URL.
Now after a build is created on Docker Hub, a POST request is sent to that URL, which, in turn, triggers a redeploy on Tutum. Boom!
As always, comment below if you have questions. If you manage a different workflow for continuous integration and delivery, please post the details below. Grab the final code from the repo.
See you next time!