Authored by Eric Huang
Recently I have been involved in a project that uses the MEAN stack and AWS Elastic Beanstalk. After some trial and error we managed to set up a simple deployment pipeline that has proven to work pretty well. Surprisingly, there isn’t much information out there on setting this up, so I decided to write a short summary of what we have done.
This is not a short journey, so I plan to split the post into multiple parts. The first part provides some context and the high-level technical architecture, and in the later parts we will dive into the actual implementation.
What are we trying to accomplish?
Coming from a DevOps background in the financial services industry, I have personally experienced the benefits of Continuous Delivery. There are already countless articles and books on how to implement Continuous Delivery; however, most of them are too complex for small and mid-size projects. Trying to be pragmatic and follow the KISS principle, we set several goals when we built our deployment pipeline and gradually evolved them based on real-life usage:
- Continuous Integration is a must
- Automatic deployment to the DEV/QA environment for every commit on the master branch
- Production deployment must be triggered manually
Technology and Tools
We set out to build the application with an SPA architecture. The app consists of four main building blocks: Static Web, API, Worker & Database.
The front-end is based on ng-boilerplate, which provides a very clean and structured Angular and Bootstrap template to jump-start the project. It also ships with Grunt as the build tool and Bower as the package manager.
We also have a dedicated server that constantly processes data in the background. Waterline ORM is used here as well so that we can reuse some of the code base from the API; we also use a library called Agenda to handle task scheduling.
The worker server is also hosted on EB; however, we use EB’s worker environment tier rather than the web server tier. By default an EB worker is configured to fetch and process tasks from Amazon Simple Queue Service, but it works just fine processing recurring tasks from memory.
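To illustrate the “recurring tasks from memory” idea, here is a toy scheduler sketch. It only models the pattern; Agenda’s real API (for example `agenda.every('5 minutes', 'job name')`) is backed by MongoDB and real timers, whereas this self-contained sketch uses a virtual clock and hypothetical job names:

```javascript
// Minimal in-memory recurring-task scheduler. The `every` method is
// modeled loosely on Agenda's API, but this is a simplified sketch
// for illustration, not the Agenda library itself.
class Scheduler {
  constructor() {
    this.jobs = [];
  }
  // Register a handler to run every `intervalMs` milliseconds.
  every(intervalMs, name, handler) {
    this.jobs.push({ name, intervalMs, handler, nextRun: intervalMs });
  }
  // Advance a virtual clock; a real worker would drive this with timers.
  tick(elapsedMs) {
    for (const job of this.jobs) {
      job.nextRun -= elapsedMs;
      while (job.nextRun <= 0) {
        job.handler();
        job.nextRun += job.intervalMs;
      }
    }
  }
}

const scheduler = new Scheduler();
let runs = 0;
// Hypothetical job name, just to show the shape of a recurring task.
scheduler.every(1000, 'refresh stats', () => { runs += 1; });

scheduler.tick(3500); // simulate 3.5 seconds passing
console.log(runs);    // the 1-second job has fired 3 times
```

Because the schedule lives entirely in process memory, this style of worker does not need SQS at all, which is exactly why the EB worker tier’s default queue wiring can be left unused.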
We also take advantage of a couple of other AWS services such as CloudFront and Route 53, but those are outside the scope of this article. To recap, the diagram below should sum up most of our infrastructure.
We borrowed the concept of a Deployment Pipeline from Continuous Delivery and created our own simple version. From development to production, our application goes through the following steps:
- Developers complete coding and check the code into source control (GitHub, Bitbucket, and so on); once the code is checked in, a message is sent to Jenkins to kick off the process.
- Jenkins receives the message, checks out the code from source control, and starts building and testing it based on pre-configured steps.
- If the build succeeds and the tests pass, Jenkins makes the necessary configuration changes and then pushes the code or binaries to the DEV/QA environment.
- After the code is deployed to the DEV/QA environment, developers or testers perform some testing. If everything goes well, they can deploy the same code base to production simply by clicking a button in Jenkins.
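The four steps above can be sketched as a chain of stages with an explicit manual gate before production. This is only a toy model of the flow (the stage names are illustrative, not our actual Jenkins job names):

```javascript
// Toy model of the deployment pipeline: each stage runs in order, and
// the production stage only runs when explicitly approved, mirroring
// the manual "click a button" step in Jenkins.
// Stage names are hypothetical, for illustration only.
const stages = [
  { name: 'checkout',       run: () => true },
  { name: 'build-and-test', run: () => true },
  { name: 'deploy-dev-qa',  run: () => true },
  { name: 'deploy-prod',    run: () => true, manual: true }
];

function runPipeline(stages, { approveProd = false } = {}) {
  const log = [];
  for (const stage of stages) {
    if (stage.manual && !approveProd) {
      log.push(`${stage.name}: waiting for manual trigger`);
      break;
    }
    if (!stage.run()) { // a failing stage stops the whole pipeline
      log.push(`${stage.name}: failed`);
      break;
    }
    log.push(`${stage.name}: ok`);
  }
  return log;
}

// A normal commit runs up to DEV/QA and then waits:
console.log(runPipeline(stages));
// Approving the manual gate pushes the same build to production:
console.log(runPipeline(stages, { approveProd: true }));
```

The key property this models is that production receives the exact same artifact that was already tested in DEV/QA; approval only opens the gate, it never rebuilds.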
In our setup, all of these steps are configured as jobs within Jenkins, with dependencies set up between them. We rely heavily on Grunt to compile and test the front-end code and to unit-test the back-end code, and we use Jenkins’ powerful plugins to push the output to the correct environments. Last but not least, we visualize all of these steps on a dashboard within Jenkins:
We have covered a lot here, from the context behind this setup to the technologies we use to make it happen. In the following parts I’ll provide a tutorial on how to implement it with Grunt, Jenkins & AWS Elastic Beanstalk.