In my previous blog post I gave a high-level overview of our MEAN-stack delivery solution. This time I will walk through the actual environment preparation, and the upcoming Part 3 blog post will focus on integrating with Jenkins and making all the tools work nicely together.

Infrastructure Preparation

GitHub

For demo purposes I created a sample project that has two parts: api and static web. You can find the source code under this GitHub project.

If you are interested in running the app and seeing how it works in real life, clone the repo and run npm install in both the api and staticweb folders. To get api up and running, type nodemon within the api folder. To spin up staticweb, use grunt server within the staticweb folder.

Build Machine

Our Jenkins is installed on an Ubuntu 14.10 instance. Installing and upgrading Jenkins on Ubuntu takes very little effort with the help of the instructions on the official wiki page.

The same Ubuntu instance will also be used as the build agent, so Git is required to fetch source code from GitHub. To install Git on Ubuntu, simply ssh into the server and type sudo apt-get install git-core in the terminal.

Jenkins Plugins

Once Jenkins is set up, we will need to install the following plugins:

  • AWS Elastic Beanstalk Deployment Plugin
  • S3 Publisher Plugin
  • Git Client Plugin
  • Git Plugin
  • GitHub Plugin
  • GitHub API Plugin
  • NodeJS Plugin
  • Build Pipeline Plugin

Most of these plugins are self-explanatory except for the Build Pipeline Plugin. This powerful plugin allows us to visualize connected Jenkins jobs and chain them into a build pipeline.

AWS S3

As mentioned in the previous blog post, the static website will be hosted on Amazon S3. After setting up two S3 buckets, one for qa and one for production, we’ll need to enable static website hosting for both buckets so their content is publicly accessible. The official instructions are a good resource.
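For reference, making a bucket’s content publicly readable typically comes down to attaching a bucket policy along these lines (the bucket name below is a placeholder, and the sample project may handle this differently):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::your-qa-bucket/*"
        }
      ]
    }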

AWS Elastic Beanstalk

We will also need to create an application on EB to host our backend api. Similar to S3, we’ll need separate environments for qa and production. If you have not created EB applications before, this walkthrough is a good starting point. Once the EB environments are set up, no additional configuration is needed for now.

AWS Identity and Access Management

In order for Jenkins to interact with both S3 & EB, it is necessary to create a power user (service account) within AWS Identity and Access Management.

Once the user is created, take good care of the Access Key and Secret Key, since they will be used by both the S3 and EB plugins within Jenkins. Last but not least, remember to attach the “AWS Elastic Beanstalk Full Access” policy to the user, as it grants all the permissions our pipeline requires.

Code Walkthrough

At this point our infrastructure is pretty much in place. Let’s go through some key areas of the sample project and see what was done to the code in preparation for continuous deployment.

API

The api project is built on top of Sails 0.10’s generated project. However, we did make two main additions:

  • Added Separate Unit Tests

Sails doesn’t provide a testing framework out of the box, so we had to manually add Mocha into our sample project. We simply added grunt-test-mocha to package.json, then registered and configured a tests task in Sails’ tasks folder. Note that we specified in the configuration that all our tests will be located under the tests folder.

Once this is set up, we can verify that the unit tests are working by typing grunt test in the terminal.
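To give a rough idea of what that wiring looks like, here is a minimal sketch of a Mocha task registered in a Sails 0.10 tasks folder. It assumes the widely used grunt-mocha-test plugin and hypothetical file paths, so the sample project’s actual task files may well differ:

    // tasks/config/mochaTest.js (hypothetical path) — configure the Mocha task
    module.exports = function (grunt) {
      grunt.config.set('mochaTest', {
        tests: {
          options: { reporter: 'spec' },
          src: ['tests/**/*.test.js']   // assumes all tests live under the tests folder
        }
      });
      grunt.loadNpmTasks('grunt-mocha-test');
    };

    // tasks/register/test.js (hypothetical path) — expose it as `grunt test`
    module.exports = function (grunt) {
      grunt.registerTask('test', ['mochaTest:tests']);
    };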

  • Modified Environments and Connections

In Sails, environmental configurations and data source adapters are handled through Environments and Connections.

An environment in Sails is the same concept as in other development frameworks. Connections are a way to specify which data sources will be used and what parameters are needed to connect to them. Each environment can contain multiple connections.

In the sample project we created three environments: development (local), qa and production. As you will discover in the sample project setup, we chose to persist data locally for the development environment and use MongoDB for the qa and production environments.

If you look closer at the mongo connection, you can see that we purposely do not put any actual values in the connection parameters. Instead we rely on the hosting environment’s parameters (which will eventually be provided by Elastic Beanstalk).

The main benefit of this approach is that it removes the need to transform configuration at build time, and it also avoids storing credentials in source control.
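As an illustration, a connection set up this way in Sails 0.10’s config/connections.js might look roughly like the sketch below. The environment variable names are assumptions made for the example; the sample project and the Elastic Beanstalk environment may use different ones:

    // config/connections.js — sketch only; values are injected by the host environment
    module.exports.connections = {

      // local development: persist to disk, no credentials needed
      localDiskDb: {
        adapter: 'sails-disk'
      },

      // qa / production: MongoDB, with all parameters supplied as environment variables
      mongo: {
        adapter: 'sails-mongo',
        host: process.env.MONGO_HOST,
        port: process.env.MONGO_PORT || 27017,
        user: process.env.MONGO_USER,
        password: process.env.MONGO_PASSWORD,
        database: process.env.MONGO_DATABASE
      }
    };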

Static Web

The static web component is built off ngbp. We made a small number of changes to the default code base and its Gruntfile. Below are the main additions:

  • Added local express server

ngbp does not come with a build task that starts up a local web server. In our case we need a way to monitor local file changes and run unit tests against them constantly, so we borrowed a solution from other projects and brought it in. With this addition in place, you can spin up a local server by simply typing grunt server.
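In spirit, the borrowed solution is just a grunt task that wraps a small Express server, along the lines of the sketch below (the served folder and port are assumptions, not the sample project’s actual values):

    // Gruntfile excerpt — a rough sketch of a `server` task backed by Express
    module.exports = function (grunt) {
      grunt.registerTask('server', 'Serve the built app locally', function () {
        var express = require('express');
        var app = express();

        this.async();                        // keep the task (and the server) running
        app.use(express.static('build'));    // serve whatever the build produced
        app.listen(9000, function () {
          grunt.log.writeln('Local server listening on http://localhost:9000');
        });
      });
    };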

  • Added common angular module for configurations

Since we adopted an SPA architecture, the static web app needs to know which api end points it will talk to, and these end points change per environment. We created an angular module that acts as the single source of end point configuration, so we only need to modify this one file when deploying to different environments.
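Such a module can be as small as a single constant holding the api base URL, roughly like this sketch (the module name, constant name and file path are made up for illustration):

    // src/common/config.js (hypothetical) — single source of end point configuration
    angular.module('app.config', [])
      .constant('ENV', {
        // this value is swapped out per environment at build time
        apiEndpoint: 'http://localhost:1337'
      });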

We also introduced the grunt plugin grunt-string-replace and created build tasks that replace our configuration with the corresponding environment’s values during the build process.
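A grunt-string-replace target for the qa environment might then look roughly like the following excerpt (the file paths and end point URL are placeholders):

    // Gruntfile excerpt — swap the local end point for the qa one in the build output
    grunt.config.set('string-replace', {
      qa: {
        files: {
          'build/src/common/config.js': 'src/common/config.js'
        },
        options: {
          replacements: [{
            pattern: 'http://localhost:1337',
            replacement: 'http://qa-api.example.elasticbeanstalk.com'
          }]
        }
      }
    });
    grunt.loadNpmTasks('grunt-string-replace');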

  • Created build tasks for each environment

ngbp comes with a series of comprehensive build tasks, but we added additional ones that perform the different end point transformations. For example, if you want to build the QA version of the static web app, use grunt build:qa and the output of this task will be placed under the build folder.
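Under the hood this boils down to registering environment-specific aliases on top of the default build task, in the spirit of this sketch (the exact task lists in the sample project will differ):

    // Gruntfile excerpt — environment-specific build aliases (illustrative only)
    grunt.registerTask('build:qa',         ['build', 'string-replace:qa']);
    grunt.registerTask('build:production', ['build', 'string-replace:production']);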

If the code is ready for deployment, simply type grunt compile and grunt will pick up the code under the build folder, uglify and optimize it, and then drop it under the bin folder.

To sum up, in this article I provided a short walkthrough of the infrastructure preparation and some key code changes that need to happen before moving forward. In the next blog post I’ll focus on Jenkins and demonstrate how to construct a delivery pipeline.

Find Part I of this series here.

*This post was originally posted on Eric’s blog.
