Written by Jonathas Hortense, Dalia’s Systems Infrastructure Architect
In an earlier post, we gave a bird’s-eye view of Dalia’s systems and architecture to give you an impression of the robustness of our environment. In this post, we’ll show you how we deploy our Apps and introduce the technology we use to do so.
As adopters of the microservices movement, we’ve built Dalia’s architecture around more than 20 applications that work together and communicate via APIs and queue systems.
Only a year ago, we were running all our Apps on a single box with Nginx + Passenger + Ruby. This static setup couldn’t keep up with our exponentially growing traffic, so we started looking for an auto-scaling solution that would grow seamlessly with us. Out of this need, we decided to give AWS Elastic Beanstalk a try.
Elastic Beanstalk is an Amazon Web Services (AWS) product that enables a smoother deployment and scaling process by combining other AWS products like EC2 instances, Elastic Load Balancer, S3, and SQS into one central environment. Elastic Beanstalk is useful because it automatically manages capacity provisioning and auto-scaling so that we don’t have to do it manually.
The Elastic Beanstalk environment can be customized via configuration files, a feature that we’ve put to great use on several occasions here at Dalia. For example, we’ve used it to:
- Update and configure Rsyslog to send the logs to our ELK stack;
- Install the Git package in order to download Gems from Github;
- Install and configure Collectd to send system metrics to Graphite;
- Customize Logrotate configuration;
- Customize Nginx;
- Tag the EC2 instances;
- Run pre- and post-deploy scripts.
To apply a customization, we just create a hidden folder called .ebextensions inside the application’s repository and put the configuration files there in YAML format.
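As a small illustration, here’s what one of these files might look like for the Git example above. The filename is hypothetical; the structure follows the standard .ebextensions format, and Beanstalk applies these files in alphabetical order on each deployment:

```yaml
# .ebextensions/01_packages.config (illustrative filename)
# Installs the Git package via yum so Bundler can fetch gems from GitHub.
packages:
  yum:
    git: []
```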
Puma & Custom Configuration
We started on Elastic Beanstalk with the Ruby 2.3 (Passenger) platform, which comes as an Amazon Linux image with Nginx and Passenger Standalone preinstalled. For security reasons, we needed to customize Nginx to redirect HTTP to HTTPS and to set a custom log format to feed our centralized log system. The Ruby 2.3 (Puma) platform allowed for a more organized Nginx setup, so we decided to migrate from Passenger to Puma.
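As a rough sketch of the kind of Nginx customization involved — assuming the usual setup where the load balancer terminates TLS and sets the X-Forwarded-Proto header — the redirect can look something like this (not our exact production config):

```nginx
# Hypothetical server-block snippet. The ELB terminates TLS, so we check
# the X-Forwarded-Proto header rather than the local connection scheme.
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
```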
After updating Puma to the latest version, we decided to use our own configuration: we created another .ebextensions file to customize Puma’s initialization and point it to our custom configuration file instead of Beanstalk’s default one.
Our new configuration lets us tune Puma’s threads and workers. Right now we are focusing on optimising memory usage: we’re looking for the sweet spot between giving the application enough memory headroom without blowing it up and reducing the number of instances, which saves us money and avoids over-provisioning:
Before Puma tuning / After Puma tuning
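A custom puma.rb along these lines might look like the following sketch; the worker and thread counts here are placeholders, not our production values:

```ruby
# config/puma.rb — illustrative sketch; values come from the environment
# so each Beanstalk environment can be tuned independently.
workers Integer(ENV.fetch("PUMA_WORKERS", 2))

threads_count = Integer(ENV.fetch("PUMA_THREADS", 5))
threads threads_count, threads_count

# Load the app before forking workers so they share memory via copy-on-write.
preload_app!
```

Fewer workers with more threads tends to keep memory down, while more workers with fewer threads favors CPU-bound throughput — hence the tuning.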
Adapting our Applications and Deploying in Elastic Beanstalk
In order for our Applications to function within the EB environment, they too required fundamental changes. A key characteristic of our new deployment process is that our code must be flexible enough to run on whichever instance picks it up for execution. For example, consecutive requests from the same user can be served by different instances, so we could no longer rely on process identifiers or local storage.
Before (Local storage)
After (External shared services)
In order to fully integrate with EB without any operational interruptions (such as auto-scaling killing instances), we also re-engineered our Cron Jobs and Background Processes. The Beanstalk Worker tier and SQS in combination manage this orchestration nicely.
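In a Beanstalk Worker environment, periodic tasks are declared in a cron.yaml file at the root of the application; the worker daemon reads it and, via SQS, POSTs to the given URL on schedule. A minimal sketch (the job name and endpoint are hypothetical):

```yaml
# cron.yaml — periodic tasks for the Beanstalk Worker tier.
version: 1
cron:
  - name: "nightly-cleanup"    # hypothetical job name
    url: "/jobs/cleanup"       # hypothetical endpoint in the app
    schedule: "0 3 * * *"      # standard cron syntax: every day at 03:00
```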
Thanks to the Beanstalk CLI, we can upload new versions of our code to the live environments directly from the shell. This process is pretty straightforward until it comes to handling different deployments for the different environments; that’s where we had to get creative with the configuration files.
We created our own deployment script triggered by Jenkins jobs, so all we need to do is ‘git push’ to the ‘production’ or ‘staging’ GitHub branches and the new code comes to life.
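A stripped-down sketch of what such a deploy step might look like — the environment naming scheme is hypothetical, and in the real Jenkins job the command is executed rather than printed:

```shell
#!/bin/bash
set -euo pipefail

branch="production"                # Jenkins passes "production" or "staging"
app_env="myapp-${branch}"          # hypothetical environment naming scheme

# Build the EB CLI invocation; `eb deploy` pushes the current code
# to the named Elastic Beanstalk environment.
deploy_cmd="eb deploy ${app_env} --timeout 20"
echo "${deploy_cmd}"
```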
Learning How to Navigate Elastic Beanstalk
Elastic Beanstalk has helped us a lot in launching and maintaining new applications, but adjusting to our new environment has been a learning experience. There are a few particularities that took some time to figure out, for example, why it’s not possible to simply change the InstanceType (t2.small, m3.medium, m4.large, etc.) using the .ebextensions files.
We discovered in the AWS documentation that there are default ‘Recommended values’ set at the API level that can only be changed during the creation of the environment or in the Web Console. In other words, even if you change the values in the .ebextensions files, they will be overridden during the next deployment. To change the InstanceType via .ebextensions, the default value must first be removed from the environment using the AWS CLI:
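A sketch of that CLI call, based on the AWS documentation (“my-env” is a placeholder for the environment name):

```shell
# Remove the API-level default so the InstanceType set in
# .ebextensions takes effect and survives subsequent deployments.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --options-to-remove \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType
```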
The learning saga continues at Dalia DevOps, and we are always open to trying new things like functions (AWS Lambda) and Docker. In fact, for anyone considering joining our team: Docker is already a longtime friend that we use to run a cluster of MongoDB and Redis, as well as all the other satellite technologies in our architecture (such as the ELK stack to store and query our logs, Graphite and Grafana for performance monitoring, and Druid and Kafka to help speed up the ETL calculations). We’ve even dabbled in some Kubernetes experiments! 🙂
Dalia’s international reach extends to all major continents, so what we do at the infrastructure level, and our choices of which technology to use, have a massive impact on our performance and efficiency at a global scale.
People who accessed our platform in a span of five minutes.
If you have any questions about how we use Elastic Beanstalk, don’t hesitate to reach out! And if you’re looking for your next adventure and want to dig into our microservice architecture, you should consider joining our team. We’re looking for enthusiastic, problem-solving developers to join Dalia here in Berlin. Visit our career site!