# Building a RESTful API With Node, Flow, and Jest

This tutorial details how to develop a RESTful API with NodeJS, ExpressJS, and Flow using test-driven development (TDD).

We’ll be going full-Facebook with this application (FaceStack), utilizing:

• Flow for type checking
• Babel for transpilation
• Jest for our testing framework
• (Optionally) Yarn to replace NPM

## Contents

1. Project Setup
2. Server Setup
3. Test Setup
4. First Endpoint
5. Rounding out CRUD
6. Conclusion

## Project Setup

Create a new directory to hold the project:
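For example, assuming we call the project flow-api (any name works), with yarn:

```shell
mkdir flow-api
cd flow-api
yarn init -y
```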

### Transpilation

To start, let’s get Babel transpilation up and running. We’ll use Gulp to automate the build process.

NOTE: We’ll use yarn here to download and manage dependencies, but you can use npm just as easily if you wish. Anytime you see a yarn add or yarn remove, just substitute an npm install or npm rm. The only difference is that yarn does the --save part for you, while with npm you must be explicit.

Go ahead and add gulp, gulp-babel, and gulp-sourcemaps to your project, and create a gulpfile.js to start writing our Gulp tasks in:
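With yarn, that looks like:

```shell
yarn add gulp gulp-babel gulp-sourcemaps --dev
touch gulpfile.js
```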

Also, install Gulp globally (if necessary), so you can run Gulp tasks from the command line:
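One way to do that (npm install -g gulp-cli works just as well):

```shell
yarn global add gulp-cli
```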

We’re going to write all of our source code in the “src” directory, so the first Gulp task will need to:

1. Grab all of the JavaScript files inside of “src”
2. Pipe the files through Babel
3. Deliver them to the “build” directory
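Those three steps translate into a task along these lines (a sketch using the Gulp 3-style API that gulp-babel setups of this era typically assume):

```javascript
const gulp = require('gulp');
const babel = require('gulp-babel');
const sourcemaps = require('gulp-sourcemaps');

gulp.task('scripts', () => {
  return gulp.src('src/**/*.js')   // 1. grab all of the JS files inside "src"
    .pipe(sourcemaps.init())
    .pipe(babel())                 // 2. pipe the files through Babel
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('build'));     // 3. deliver them to the "build" directory
});
```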

Pretty straightforward. Time to test!

Add a “src” directory, and then create a file inside of it called index.js:

Put some kind of JavaScript statement inside of your newly created file, like:
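For example:

```javascript
const greeting = 'Hello from the build!';
console.log(greeting);
```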

Run the Gulp task, and then run the transpiled version of index.js:
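Assuming the task is named scripts and outputs to “build”:

```shell
gulp scripts
node build/index.js
```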

Nice! However, do you really want to manually run gulp scripts to build the project every time a change is made? Of course not. So, let’s set up a watch and a default task with Gulp to make this easier.

Add the following to gulpfile.js:
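Something like the following (again Gulp 3-style, where a task can list dependencies):

```javascript
gulp.task('watch', () => {
  // Re-run the scripts task whenever a file in "src" changes
  gulp.watch('src/**/*.js', ['scripts']);
});

gulp.task('default', ['watch', 'scripts']);
```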

Now you can just run gulp from the command line, and it will listen for changes to our JavaScript files inside of “src” and re-run the scripts task whenever it detects changes.

### Flow

Moving on, we’ll use the gulp-flowtype plugin to interface with Flow. Download the dependency and head back over to gulpfile.js.
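After a yarn add gulp-flowtype --dev, a type-checking task might look like this (the task name is our choice; you’d also add it to the default task’s dependency list):

```javascript
const flow = require('gulp-flowtype');

gulp.task('typecheck', () => {
  return gulp.src('src/**/*.js')
    .pipe(flow());
});
```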

This is all well and good, but we’re going to configure a few more parts before we move forward. We need to tell Babel to strip out all of our Flow type annotations. While we’re doing that, we might as well install the other Babel dependencies:
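A plausible set for this era of Babel (the exact preset is a matter of taste):

```shell
yarn add babel-preset-es2015 babel-plugin-transform-flow-strip-types --dev
```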

With those installed create a .babelrc file in the root of the project, and add these settings:
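Assuming the es2015 preset and the Flow strip-types plugin from above:

```json
{
  "presets": ["es2015"],
  "plugins": ["transform-flow-strip-types"]
}
```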

Finally, we need a .flowconfig to tell Flow that this is a project with Flow-annotated code. If you have the Flow CLI installed, you can do this with flow init. If you don’t, just create a file called .flowconfig and paste this in:
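At the time, flow init generated a file with empty sections, roughly:

```
[ignore]

[include]

[libs]

[options]
```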

Whew. Now that we’ve done all that configuring, let’s make sure it’s all working by testing out some Flow type annotations. If you’re familiar with TypeScript, this syntax will look very familiar. There are some notable differences, but in general TypeScript and Flow look pretty similar. Let’s start with a simple function that adds two numbers together.

Run the default gulp task:

Replace the contents of src/index.js with the following:
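Something along these lines, with a deliberate type error in the last line:

```javascript
// @flow

function testFunc(a: number, b: number): number {
  return a + b;
}

// Flow flags this call: a string is incompatible with number
testFunc('banana');
```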

Since Gulp is watching for changes, you should automatically see the output from Flow as soon as you save the file:

Excellent! Flow is doing its job. Here, it’s telling us that when we try to call testFunc('banana') we’re going to run into issues because testFunc is clearly expecting its argument to be a number, not a string. Notice the // @flow comment that’s now at the top of the file. This tells Flow that this file should be typechecked. If you don’t put this comment at the top of the file you’re working on, Flow will ignore it. Keep this in mind as you develop your application.

If you read the post on TypeScript (Developing a RESTful API With Node and TypeScript), you may already be wondering how we can use types with third-party libraries. Well, with Flow there’s a command line tool called flow-typed that is used to manage libdefs (library definitions) for Flow.

First, install flow-typed globally:
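```shell
yarn global add flow-typed
```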

The nice thing about flow-typed is that we don’t really have to manage it too much. It reads package.json and automatically downloads the libdefs for our dependencies and stores them in “flow-typed”.

To install the libdefs for the packages we’re using so far just run:
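```shell
flow-typed install --flowVersion=0.36.0
```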

For packages that have no official libdef in the flow-typed repository, a stub is generated. Unfortunately, if you want to omit the --flowVersion=0.36.0 flag, you’ll need to install flow-bin and have it listed as a dependency in package.json.

Before moving forward, we need to make one more change to our Gulp task for Flow. Now that we’ve got flow-typed, tell Flow where we’re keeping these definitions:
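Assuming gulp-flowtype’s declarations option points at the libdef directory (double-check the option name against the plugin’s README):

```javascript
gulp.task('typecheck', () => {
  return gulp.src('src/**/*.js')
    .pipe(flow({
      declarations: './flow-typed' // where flow-typed stores the libdefs
    }));
});
```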

Great! We’ve got Flow type checking our code, and Babel is stripping out our type annotations and transpiling.

Let’s construct the basics of the server.

## Server Setup

We’re going to use src/index.js as the entry point for our Express API along with the debug module to set up simple logging. Install it with yarn (yarn add debug@2.4.5) or npm (npm install debug@2.4.5 --save), and then wipe everything out of index.js and replace it with the following:
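Here’s a sketch of what index.js might contain, following the structure described below (the exact wording of the log messages and the flow-api:index debug namespace are our choices):

```javascript
// @flow

import http from 'http';
import Debug from 'debug';
import Api from './Api';

// The shape of the errors Node's HTTP server emits
declare interface ErrnoError extends Error {
  errno?: number;
  code?: string;
  path?: string;
  syscall?: string;
}

const debug = Debug('flow-api:index');
const port = normalizePort(process.env.PORT);
const app = new Api();
const server = http.createServer(app.express);

server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

// Use $PORT if it's set; otherwise fall back to the default, 3000
function normalizePort(val: ?string): number {
  const fallback = 3000;
  if (!val) return fallback;
  const parsed = parseInt(val, 10);
  return isNaN(parsed) ? fallback : parsed;
}

// Basic error handler for the HTTP server
function onError(error: ErrnoError): void {
  if (error.syscall !== 'listen') throw error;
  if (error.code === 'EACCES' || error.code === 'EADDRINUSE') {
    debug(`cannot listen on port ${port}: ${error.code}`);
    process.exit(1);
  } else {
    throw error;
  }
}

// Let us know the server has actually started
function onListening(): void {
  debug(`Listening on port ${port}`);
}
```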

Alright. This looks like a lot of code, but it’s mostly boilerplate with some fancy type annotations added. Let’s quickly break it down anyway:

• At the top we’ve got our Flow comment, imports, and our first bit of strictly Flow-enabled code - the ErrnoError interface declaration. This error type is used by Express. When using the flow check command from the official command line tool, Flow will not flag this as an error. For whatever reason, gulp-flowtype does. If you get a strange type check error, it may be worth it to install the Flow CLI and double check using flow check.
• After the ErrnoError definition, we set up some data and instantiate the server by attaching our future Express app with http.createServer.
• normalizePort looks for the $PORT environment variable and sets the app’s port to its value. If it doesn’t exist, it sets the port to the default value - 3000.
• onError is just our basic error handler for the HTTP server.
• onListening simply lets us know that our application has actually started and is listening for requests.

Run gulp. Right now, you should see Flow complaining about trying to import the API:

This makes sense because we don’t even have a file called Api.js, so let’s create it and set up the basic structure for the API. In this file, the third-party libraries we’ll be using are:

With the dependencies and libdefs acquired, we’re ready to build out the Api.js file:

Most of this file ends up just loading and initializing the libraries that we’re using. There are a few things to note though:

• First, we create a field reference for the Api.express property, and tell Flow that it will be an object of type express$Application from Express.
• The constructor initializes an instance of Express, and attaches it to the instance of Api. Then it calls the other two methods, Api.middleware and Api.routes.
• Api.middleware - Initializes and attaches our middleware modules to the app.
• Api.routes - Right now, it attaches a single route handler that returns some JSON. However, notice the Flow annotations on the parameters of the anonymous function. These correspond to the base arguments for an Express route handler: $Request and $Response. These refer to Express' extended versions of Node’s IncomingMessage and ServerResponse objects, respectively.
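Putting that description together, a sketch of src/Api.js might look like this (body-parser and morgan are assumptions standing in for the middleware list; only express itself is confirmed by the text):

```javascript
// @flow

import express from 'express';
import bodyParser from 'body-parser'; // assumed middleware
import morgan from 'morgan';          // assumed middleware

class Api {
  // Tell Flow this property is an Express application
  express: express$Application;

  constructor() {
    this.express = express();
    this.middleware();
    this.routes();
  }

  // Initialize and attach our middleware modules to the app
  middleware(): void {
    this.express.use(morgan('dev'));
    this.express.use(bodyParser.json());
    this.express.use(bodyParser.urlencoded({ extended: false }));
  }

  // For now, a single route handler that returns some JSON
  routes(): void {
    this.express.get('/', (req: $Request, res: $Response) => {
      res.json({ message: 'Hello Flow!' });
    });
  }
}

export default Api;
```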

At this point, you may start to see a Flow error in your terminal that looks something like this:

It would appear that Flow doesn’t get the memo that when app.express is accessed, it does return a request handler. This seems to be an issue with the libdef for Express, because it declares that the express$Application constructor has a return type of void.

NOTE: After unsuccessfully messing with the libdef for a while, I decided I knew better than Flow that it worked, and moved on. If the terminal output bugs you, go ahead and add this comment to the line above where http.createServer is called: //$FlowFixMe: express libdef issue


Let’s go ahead and fire up the app and make sure everything is working as intended thus far. To run the app from the command line, you can run node build/index.js. However, we really should have a start script so we can just type npm start to run the server. Open up package.json and add the following:
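The scripts block is described just below; it amounts to:

```json
"scripts": {
  "start": "DEBUG=flow-api:* node build/index.js"
}
```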

The first part of the command just sets the DEBUG environment variable to flow-api:*, so that the debug module writes our logs to stdout. Now you can run npm start, and you should see:

Awesome! The server is listening. Now, if we hit any endpoint, it should send back our { message: "Hello Flow!" } payload. You can use HTTPie for this kind of thing. If you’re on a Mac, you can install it with Homebrew: brew install httpie. Then within a new terminal window run:
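```shell
http localhost:3000
```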

And you should see:

And we’re up and running! At this point, we’ve got the base Express application up and running. Now we just need to build out a router that does something useful!

## Test Setup

Not so fast! Rather than jumping straight into the RESTful router, we’re going to set up our testing environment so that as we create endpoints and handlers we can test that they work as we expect. Since we’re using the FaceStack, we’ll use Jest as well as supertest-as-promised to interface with our Express API.

Install the packages:
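```shell
yarn add jest supertest-as-promised --dev
```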

Open up package.json again and add a few lines to configure Jest:

This just tells Jest to use Babel and our Babel configuration to interpret our test files and the files they test. To run our tests from the command line, we just need to add a test script to package.json:
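For example:

```json
"scripts": {
  "test": "jest"
}
```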

Right now, if you run it, Jest is just going to tell you it couldn’t find any tests. So, let’s fix that. Create a directory called __tests__ in the project root, and inside of it add a file to hold our first test:
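A sketch of what __tests__/first.test.js might contain, given the assertions described below:

```javascript
import request from 'supertest-as-promised';
import Api from '../src/Api';

// Interface to the API - no server to start, no network involved
const app = new Api().express;

describe('FaceStack API', () => {
  it('responds to GET / with a friendly message', () => {
    return request(app)
      .get('/')
      .expect(200)
      .then((res) => {
        expect(res.body).toHaveProperty('message');
        expect(typeof res.body.message).toBe('string');
        expect(res.body.message).toEqual('Hello Flow!');
      });
  });
});
```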

This is a pretty simple test, but it should at least demonstrate the basic structure of what we’re doing here. If you’re saying to yourself, “Hey, this looks a lot like Jasmine!”, you’re right - Jest is built on top of Jasmine. Here’s a quick breakdown of this first test file:

• We import the Api class and supertest-as-promised to create the interface to the API. This way we don’t have to manage starting and stopping the server or actually sending requests over a network connection.
• We assert that we’re expecting a 200 status code.
• When the response comes back, we assert that the payload should have a property called message, whose value is a string, and that string should equal: “Hello Flow!”

Go ahead and run the tests, npm test, and you should see this output:

With the test environment set up, let’s build out our first endpoint!

## First Endpoint

Now, we’re going to implement CRUD with a single resource - produce. You can use any resource you want or grab the fake data we used here. In case you’re blanking on what CRUD means, we’re going to implement 4 actions that the API will support for the produce resource:

1. Create a produce item.
2. Read produce item(s).
3. Update a produce item.
4. Delete a produce item.

We’ll start by implementing the GET handler that returns all the produce in our inventory, with the following shape:
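For example (the field values here are illustrative):

```json
{
  "id": 1,
  "name": "apple",
  "quantity": 10,
  "price": 0.5
}
```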

NOTE: The id property will not be supplied by the user, but assigned when an item is created by the API.

Let’s start by first writing some tests that we can test our implementation against as we write it. Rename first.test.js to ProduceRouter.test.js, and replace the current describe block with these tests for the GET all endpoint:
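A sketch of those tests (assuming the same request/app setup as the first test file):

```javascript
import request from 'supertest-as-promised';
import Api from '../src/Api';

const app = new Api().express;

describe('ProduceRouter', () => {
  describe('GET /api/v1/produce', () => {
    it('responds with an array', () => {
      return request(app)
        .get('/api/v1/produce')
        .expect(200)
        .then((res) => {
          expect(Array.isArray(res.body)).toBe(true);
        });
    });

    it('contains items with the required properties', () => {
      return request(app)
        .get('/api/v1/produce')
        .then((res) => {
          res.body.forEach((item) => {
            expect(item).toHaveProperty('id');
            expect(item).toHaveProperty('name');
            expect(item).toHaveProperty('quantity');
            expect(item).toHaveProperty('price');
          });
        });
    });

    it('contains items with no extra properties', () => {
      return request(app)
        .get('/api/v1/produce')
        .then((res) => {
          res.body.forEach((item) => {
            expect(Object.keys(item).sort())
              .toEqual(['id', 'name', 'price', 'quantity']);
          });
        });
    });
  });
});
```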

Inside of the outer describe, we’ve added a nested block to indicate that all of the tests inside of it are related and, thus, testing the same feature. These three tests are pretty basic and check that:

• We get an array back.
• The objects in the array have the required properties.
• The objects in the array do not have extra properties.

Run the tests from the terminal with npm test and you should see them all fail:

Now, let’s get rid of all those errors and failed tests, and implement the endpoint.

Create a new directory inside of “src” called “routers” and add a file called ProduceRouter.js. This is where we’ll implement the handler functions for all of the endpoints designated for the produce resource.

NOTE: Remember - For Flow to type check the file, you have to add the @flow comment at the very top of the file!
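Here’s a sketch of ProduceRouter.js matching the description below (the path to the fake data file is hypothetical):

```javascript
// @flow

import { Router } from 'express';
// Hypothetical location of the fake produce data
import inventory from '../data/produce.json';

class ProduceRouter {
  router: express$Router;
  path: string;

  // The mount point is the constructor's only argument
  constructor(path: string) {
    this.router = Router();
    this.path = path;
    this.init();
  }

  // Respond with the full inventory list
  getAll(req: $Request, res: $Response): void {
    res.status(200).json(inventory);
  }

  // Attach each handler to its endpoint on the router
  init(): void {
    this.router.get('/', this.getAll);
  }
}

export default ProduceRouter;
```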

The ProduceRouter holds fields for an Express Router instance, and a path property that holds its mount point to the application. The constructor takes this mount point as its only argument and then attaches the endpoint handlers to their endpoints.

NOTE: The field type annotations for router and path are not strictly required (as far as I can tell). You can get rid of them, and Flow will not complain. But you can’t have field declarations without types. It doesn’t like that at all. I tend to use them because they’re a useful quick reference to the properties on an object.

The getAll function has the basic function signature of an Express route handler, and it simply responds to requests with the full inventory list. Notice that the return type is void. This is because of the middleware architecture that Express is built on. Each middleware function is run in sequence, rather than returning a value from the handler.

Finally, in init we will take each of our route handlers, and attach it to a mount path on the router. Each endpoint will be prefixed with the overall Router mount path that is passed to the ProduceRouter constructor. Right now, our ProduceRouter is responding to GET requests at the /api/v1/produce endpoint.

We’re done in this file for now, but we’ll have to hop back over to Api.js in order to finish linking these things up.

Add an import statement for ProduceRouter at the top:
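That is:

```javascript
import ProduceRouter from './routers/ProduceRouter';
```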

And then replace the routes function with:
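A sketch of the replacement, given the description that follows:

```javascript
routes(): void {
  // Instantiate the router and mount it at its path property
  const produceRouter = new ProduceRouter('/api/v1/produce');
  this.express.use(produceRouter.path, produceRouter.router);
}
```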

Here, we simply create an instance of the ProduceRouter class, and attach it to the Express application path specified by its path property. Now cross your fingers and run npm test:

Victory! Go ahead and pat yourself on the back, maybe stretch the legs or get a snack. We’ll work out the rest of the endpoints in the next section.

## Rounding out CRUD

We’ve already got one aspect of the “Read” part of CRUD complete. Let’s knock the other one out now. Rather than only being able to get the full list of items, we need to enable requesting items by their ids. First, we need some tests. Start by making sure that this getById handler will:

• Return an object of the correct type.
• Return the record that lines up with the id sent with the request.
• Reject out-of-bounds ids.
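Those checks might look like this (400 as the rejection status is our choice; the ids used are illustrative):

```javascript
describe('GET /api/v1/produce/:id', () => {
  it('responds with a single object, not an array', () => {
    return request(app)
      .get('/api/v1/produce/1')
      .expect(200)
      .then((res) => {
        expect(typeof res.body).toBe('object');
        expect(Array.isArray(res.body)).toBe(false);
      });
  });

  it('returns the record matching the requested id', () => {
    return request(app)
      .get('/api/v1/produce/1')
      .then((res) => {
        expect(res.body.id).toEqual(1);
      });
  });

  it('rejects out-of-bounds ids', () => {
    return request(app)
      .get('/api/v1/produce/99999')
      .expect(400);
  });
});
```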

Run those new tests and make sure they fail like they should:

Good. Now we can work on making them pass. This one isn’t so bad. We just need to parse the ID number from the request params, and find an item in the inventory array with the same ID.
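A sketch of the handler, as a method on ProduceRouter (the 400 status and error message are our choices):

```javascript
// Respond with the single item matching the requested id
getById(req: $Request, res: $Response): void {
  const id = parseInt(req.params.id, 10);
  const record = inventory.find((item) => item.id === id);
  if (record) {
    res.status(200).json(record);
  } else {
    res.status(400).json({
      status: res.statusCode,
      message: 'No item found with the given id',
    });
  }
}
```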

Not particularly exciting, but it works for now! Just make sure to add the handler as well:
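In init:

```javascript
this.router.get('/:id', this.getById);
```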

That does it for the “R” in CRUD.

### POST - Create a New Item

Let’s knock out the “C” now. We’re going to allow POSTs to the endpoint /api/v1/produce to be used for creating new items for the inventory. In addition, we’ll require that the quantity, price, and name properties are passed.

Tests:

Verify that the tests fail with npm test, then add another method to ProduceRouter called postOne.

NOTE: I ended up also writing functions to parse the payload from the request, as well as one to re-write our JSON “database” file. You can either include those as helper methods somewhere in the same file as ProduceRouter, or define them in a different file and import them. If you go the import route, make sure that you type annotate the functions so that Flow can work with their types. I chose to define them in separate files and export them from there.

Create a new folder within “src” called “util”. Then add a parsers.js file:
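A sketch of the payload parser (the function name parseNewItem is hypothetical; the required fields come from the tests above):

```javascript
// @flow

type ProduceFields = {
  name: string,
  quantity: number,
  price: number,
};

// Validate the payload for a new item; return null if a
// required field is missing
export function parseNewItem(body: Object): ?ProduceFields {
  const { name, quantity, price } = body;
  if (typeof name !== 'string' || quantity == null || price == null) {
    return null;
  }
  return { name, quantity: Number(quantity), price: Number(price) };
}
```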

…and save.js:
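A sketch of save.js (the location of the JSON “database” file is hypothetical):

```javascript
// @flow

import fs from 'fs';
import path from 'path';

// Hypothetical location of the JSON "database" file
const dbPath = path.join(__dirname, '..', 'data', 'produce.json');

// Persist the in-memory inventory back to the JSON file
export function saveInventory(inventory: Object[]): void {
  fs.writeFile(dbPath, JSON.stringify(inventory, null, 2), (err) => {
    if (err) throw err;
  });
}
```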

We don’t have tests written currently for these, but they’re pretty simple functions. Most importantly, now we have a couple utility functions that we can reuse. We’ll definitely need to reuse saveInventory whenever we need to persist changes to the JSON file holding the inventory.
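With those in hand, the postOne handler itself might look like this sketch (parseNewItem is a hypothetical payload-validating helper of the kind described in the note above; the id-assignment strategy is our choice):

```javascript
// Create a new item from the request payload and persist it
postOne(req: $Request, res: $Response): void {
  const fields = parseNewItem(req.body);
  if (!fields) {
    res.status(400).json({ status: res.statusCode, message: 'Invalid item' });
    return;
  }
  // Assign the next available id - the user never supplies one
  const id = inventory.reduce((max, item) => Math.max(max, item.id), 0) + 1;
  const item = Object.assign({ id }, fields);
  inventory.push(item);
  saveInventory(inventory);
  res.status(201).json(item);
}
```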

Add the imports to ProduceRouter.js:
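Assuming the helper names sketched above (parseNewItem is hypothetical; saveInventory is the persistence helper):

```javascript
import { parseNewItem } from '../util/parsers';
import { saveInventory } from '../util/save';
```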

Then update the init():
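```javascript
init(): void {
  this.router.get('/', this.getAll);
  this.router.get('/:id', this.getById);
  this.router.post('/', this.postOne);
}
```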

With this code filled in, run npm test again and when you’ve got all green check marks, head on to the next section.

### PUT - Update an Item

This route will allow requests to update the properties of a single item. We need to make sure that a user is unable to change the id property of the item so that they can’t create collisions. To solve this issue, we need to strip out all invalid keys from the submitted payload. But first, a few tests:

Add the new handler to ProduceRouter:
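A sketch of the handler, using the parseId and parseUpdate helpers described just below (status codes and messages are our choices):

```javascript
// Update a single item's properties; the id itself cannot be changed
updateOneById(req: $Request, res: $Response): void {
  const id = parseId(req.body) || parseInt(req.params.id, 10);
  const update = parseUpdate(req.body);
  const record = inventory.find((item) => item.id === id);
  if (record && update) {
    Object.assign(record, update); // apply only the allowed keys
    saveInventory(inventory);
    res.status(200).json(record);
  } else {
    res.status(400).json({
      status: res.statusCode,
      message: 'Invalid update or id',
    });
  }
}
```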

Then, within parsers.js, add the parseId() and parseUpdate() helpers, which are used to clean the payload and requested item ID:
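A sketch of the two helpers, following the description below:

```javascript
// @flow

// Strip out any keys that are not name, quantity, or price;
// return null if nothing valid is left
export function parseUpdate(body: Object): ?Object {
  const update = {};
  ['name', 'quantity', 'price'].forEach((key) => {
    if (body[key] !== undefined) update[key] = body[key];
  });
  return Object.keys(update).length > 0 ? update : null;
}

// Look for an id property on the payload and coerce it to a number
export function parseId(body: Object): ?number {
  if (body.id === undefined) return null;
  return typeof body.id === 'number' ? body.id : parseInt(body.id, 10);
}
```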

These are fairly straightforward. parseUpdate takes in the payload from the request, and strips out any keys that are not name, quantity, or price. Then it returns the trimmed object if there are still keys left, and null if not. parseId is even simpler: it looks for an id property on the payload, converts it to a number (if necessary), and returns it.

Update the import in ProduceRouter.js:

Then update the init():
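```javascript
this.router.put('/:id', this.updateOneById);
```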

Run the tests again and ensure they pass. One more route to go!

### DELETE - Remove an Item

This route will allow for deleting an item from the inventory by passing a valid id as a URL parameter. It uses the same route string that the getById and updateOneById functions handle, but with the DELETE HTTP method. Here are a few basic tests:

Ensure those fail, and then add the implementation for the handler to ProduceRouter as removeById:
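A sketch of the handler, following the description below (status codes and messages are our choices):

```javascript
// Remove a single item from the inventory by id
removeById(req: $Request, res: $Response): void {
  const id = parseInt(req.params.id, 10);
  const record = inventory.find((item) => item.id === id);
  if (record) {
    const index = inventory.indexOf(record);
    inventory.splice(index, 1); // splice the item out of the array
    saveInventory(inventory);   // persist the change
    res.status(200).json({ status: res.statusCode, message: 'Item removed' });
  } else {
    res.status(400).json({
      status: res.statusCode,
      message: 'No item found with the given id',
    });
  }
}
```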

This obviously looks pretty similar to most of the other handlers. The only difference being that once we get a valid id, we search for the object it matches in the inventory, get its index, and then splice it out of the inventory array.

Don’t forget the handler:
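```javascript
this.router.delete('/:id', this.removeById);
```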

Run the tests one last time:

Congratulations! You just built an Express API type checked with Flow!

## Conclusion

All in all, working with Flow is interesting, at the very least.

After using both it and TypeScript, Flow’s type checking tends to be more strict, but you also spend more time trying to figure out what Flow is getting at and how to fix errors. Part of this is probably that the tooling support for TypeScript is vastly superior. Flow offers a lot of the same functionality that TypeScript does, but there’s a TypeScript tool for every single thing you could ever want. It simply isn’t the same for Flow. The community doesn’t seem to have embraced it with as much enthusiasm. The number of libdefs in the flow-typed repository versus DefinitelyTyped for TypeScript is tiny. This is probably the biggest problem you’d have to face in choosing to use Flow for static type analysis over TypeScript.

That being said, Flow also offers some distinct advantages.

It’s plug-n-play with Babel, so adding Flow to a project using Babel would probably be much less painful than converting it to use TypeScript. Both allow you to do so bit by bit, but Flow handles this more gracefully. TypeScript generally wants you to pass everything through the compiler and deal with the type errors as you go, while Flow allows you to annotate only the files you want to type check, so adding it to an existing project is much easier. In fact, this is probably the best use case for Flow. It would be cumbersome to start a brand new project with such strict type checking; it definitely slows down the rapid iteration needed at the beginning of a project’s life. However, once a project gets to a certain size, it’s easy to drop in Flow and clean up the errors file by file as you move forward.

You can grab the code from the flow-node-api repo. Best!