Authored by Eric Huang

In this last part of a three-part series, I will demonstrate how I configure Jenkins and use various plugins to construct the deployment pipeline. You can find background information on this solution and some infrastructure preparation guidance in Part 1 and Part 2 of this series.

Breakdown

The goal of this series is to end up with a deployment pipeline that looks like this.

Every deployment activity will be visible in one place, and a commit to Github will automatically trigger a deployment to the QA environment. Each deployment activity is set up as a Jenkins job. Here is a breakdown of each job type:

Kickoff job: Serves as the single starting point of the deployment process. It gets notified every time a commit is pushed to the Github repository, then fetches the source code and kicks off the subsequent test jobs.

Test jobs: Execute the Grunt unit test tasks we set up in the previous post. If the tests pass, they kick off the QA deployment; otherwise the job fails and stops the deployment process.

Deployment jobs: Take the source code from the Kickoff job and use the S3 deployment plugin and the Elastic Beanstalk (EB) deployment plugin to push the application to the QA and production environments. QA deployment jobs are triggered automatically by the test jobs; production deployment jobs still require a manual trigger.

Jenkins Configuration

Assuming you have installed the plugins and set up Git (as per Part 2), let’s jump right into the Jenkins configuration.

Git: First of all, we need to tell Jenkins to use Git.
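Jenkins needs to know where the Git executable lives on the build machine. If you are not sure of the path to enter, a quick check from the shell will tell you (the path in the comment is just an example):

```
# Locate the git executable to enter in Jenkins' Git configuration
which git
# e.g. /usr/bin/git
```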

Node: We will also need to specify which version of Node to use, and which global packages to install. In this case we need grunt-cli (the Grunt command line) and bower (for front-end package management).
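If you were setting this up by hand rather than through the Node configuration in Jenkins, installing the two global packages would be the equivalent of:

```
# Global packages the Jenkins Node installation needs
npm install -g grunt-cli bower
```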

Setting up S3 profile: In Part 2 of this series we created a special user role for deployment purposes. The access key ID and secret access key come into use here: Jenkins will use them to communicate with AWS S3.

Wiring up Github and Jenkins: There are a couple of ways to integrate Jenkins with Github so that Github activity will trigger Jenkins jobs. In our case we will use the manual webhook option.

Then go to Github and add the webhook.
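If you prefer to script this step, the webhook can also be created through the Github API. A rough sketch, where the repository owner, repo name, and Jenkins host are placeholders, and the payload URL assumes the Jenkins Github plugin’s default /github-webhook/ endpoint:

```
# Create a push webhook pointing at Jenkins via the Github API
curl -u "$GITHUB_USER:$GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{
        "name": "web",
        "active": true,
        "events": ["push"],
        "config": {
          "url": "http://your-jenkins-host/github-webhook/",
          "content_type": "json"
        }
      }'
```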

Setting up Jenkins Jobs

Now it’s time to configure the three types of Jenkins jobs: Kickoff, Test and Deployment.

Setting up the Kickoff Job

This job serves as the single entry point of the process. We’ll need to set the build trigger so the job is kicked off by commits pushed to Github.

The main responsibility of this job is to fetch source code from Github.

We will also need a post-build action so it starts the API and static web test jobs once the code is fetched.

Setting up the Test Job

We created unit test tasks in Grunt for both the static web and the API. Before we can run the tests, we need to fetch the source code from the Kickoff job. We can accomplish this with the “copy artifact” function within Jenkins. (We could fetch the code from Github again, as in the Kickoff job, but that provides no additional benefit.)

API

Static Web

Since we’ll be using npm and Grunt to run the tests, we need Node available in the build environment. We can reference the installation we configured previously.

Then we can execute the unit tests in a bash shell build step.

API

Static Web
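As a rough sketch, the shell steps for the two test jobs might look something like this (the Grunt task names are assumptions; use whatever names you gave the tasks in the previous post):

```
# API test job
npm install
grunt test          # API unit test task (task name is an assumption)

# Static Web test job
npm install
bower install
grunt test          # static web unit test task (task name is an assumption)
```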

If all tests pass, create the post-build actions to start the QA deployment jobs.

Setting up the Deployment Jobs

Similar to the Test jobs, the Deployment jobs grab the source code from the Kickoff job using the “copy artifact” task. Right after the code-fetch step, we’ll add a task for the actual deployment.

API deployment will rely on the EB plugin.

Static Web deployment will use the S3 plugin. Before we deploy the static web, we first need to compile and optimize the front-end code.
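A minimal sketch of that build step, assuming the compile and optimization work is wrapped in a Grunt task (the task name is an assumption):

```
# Build the optimized front-end bundle before the S3 upload step
npm install
bower install
grunt build         # compile/optimize task (task name is an assumption)
```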

Since we already entered the S3 credentials in the system configuration, we simply need to pick the profile we set up previously and specify which bucket to push our code to.

Once you finish setting up the QA Deployment jobs, repeat the same steps to set up the Production Deployment jobs, which are very similar except for the EB and S3 destinations.

After the Production Deployment jobs are set up, go back to the QA jobs and let Jenkins know that there are downstream projects. This can also be done with post-build actions. However, these differ from the ones in the Test jobs: they are set up as a manual step, so they won’t trigger automatically.

Setting up the Build Pipeline

At this point we already have a sequence of jobs forming a deployment pipeline. It would be great if we could track all activities visually in one place, and that’s where the Build Pipeline plugin comes into play. Go ahead and add a new view with the “plus” button on the dashboard.

Configure the view as needed.

Voila! The deployment pipeline is up and running, and you will be able to see statuses in real time. From this point forward, whenever you commit a change to the master branch on Github, it will be unit tested and deployed to the QA environment automatically.

Conclusion

We have been using a similar setup and have successfully performed more than 1,500 QA and production deployments. Like any other software project, this solution is constantly improving; it gets better and more mature as we use it and tweak it.

The deployment pipeline we covered in this blog post is simplified for demonstration purposes; however, it is easily expandable. In real life you may need to handle additional activities such as database migrations, environment preparation, integration tests, and so on. You just need to follow the same principles and set them up as Grunt tasks and Jenkins jobs.

This series may seem long, and there is a lot of setup required. However, once it is set up, you and your team will benefit from your creation.

If you have any questions regarding the setup or see room for improvement, please let me know in the comments section.

*This post was originally published on Eric’s blog.

 
