Replicating Heroku's Review Apps on AWS

I'm currently working for a client that is using Heroku and migrating to AWS. However, we're having trouble understanding how the Review Apps feature can be replicated on AWS.
Specifically, we want a Jenkins job that will allow us to specify a branch name and a set of environment variables. That job will then spin up our entire stack, so that the developer can test their changes in isolation, before moving to staging.
Our stack consists of five different Ruby on Rails applications, all of which must know each other's URLs, which complicates things.
I'm told that tools like AWS Fargate or EKS might be suitable, but I'm not sure.
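For illustration, the cross-app URL wiring is easiest to see in something like a Compose file, where each app receives the others' URLs as environment variables; a Jenkins job could template a file like this per branch, and the same idea carries over to ECS task definitions on Fargate or Kubernetes manifests on EKS (the service and image names below are made up):

# docker-compose.yml -- a minimal sketch showing two of the five apps
version: "3"
services:
  accounts:
    image: mycompany/accounts:feature-branch   # hypothetical image built from the branch
    environment:
      BILLING_URL: http://billing:3000         # reach the sibling app by service name
  billing:
    image: mycompany/billing:feature-branch
    environment:
      ACCOUNTS_URL: http://accounts:3000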

Related

On-premise Docker Central

We have a team of J2EE Spring and Angular developers. We develop small applications in short spans. As of now we don't have the luxury of a DevOps team to maintain staging and QA environments.
I am checking the feasibility of letting developers who want their application tested build a Docker image and float it on an on-premise central Docker server (at times they work from remote locations as well). We are in the process of adopting CI, but it may take some time.
Due to cost pressure we cannot use AWS except for production.
Any pointers will be helpful.
Thanks in advance.
Since you plan on using Docker, you can in fact set up a simple build flow that makes life easier in the long run.
Use Docker Hub for building and storing Docker images (this saves build time and also provides an easy way of rolling back, which simplifies shipping and DevOps). It takes a few minutes to connect your GitHub/Bitbucket repository to Docker Hub and tell it to build an image for each branch/tag upon PR merge or push. The cost of the service is also minimal.
Use these images for your local environment as well as the production environment (guaranteeing that you are referring to the correct versions).
For production, use AWS Elastic Beanstalk or AWS ECS (I prefer ECS due to its container orchestration capabilities) to simplify deployments and DevOps; most of the configuration can be done from the AWS web console. The cost is only for the underlying EC2 instances.
For Dockerizing your Java application, this video might be helpful for insights into the JVM.
Note: later on you can connect these dots using your CI environment, reducing manual effort further.
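As a concrete sketch of the "same image everywhere" idea, local and production machines only ever pull a tagged image that Docker Hub built (the image name and tag below are made up):

# Docker Hub builds mycompany/orders-service on every push;
# every environment then pulls the exact same artifact:
docker pull mycompany/orders-service:v1.4.2
docker run -d -p 8080:8080 mycompany/orders-service:v1.4.2

Rolling back is then just re-running the same commands with the previous tag.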

How could Travis CI prepare the test environment for Ruby on Rails and its backend?

My infrastructure is based on AWS: 3 EC2 instances for the Rails app servers, 1 instance for the database (MongoDB), and 1 EC2 instance as a Redis server.
Will Travis CI launch similar services (e.g. MongoDB, Redis) to run the RSpec tests?
If not, what's the logic behind Travis CI?
Would it be more practical to run the tests on my real infrastructure rather than in Travis CI?
Yes! Travis CI fully supports Ruby on Rails and can launch the same services you need for the tests, so I expect you'd be all set. When you create your .travis.yml file, you'll be able to configure your build environment, including services such as MongoDB and/or Redis. Here's some sample code showing how that looks:
services:
  - mongodb
  - redis
From a practical standpoint, using a separate environment makes it easier to ensure test integrity, though you do have to do the additional software setup. The main benefit is that you get a clean slate at each build for all your tests, and it's well away from your production code in case there's a problem.
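For a Rails project, a fuller .travis.yml might look something like this (the Ruby version and rake task are assumptions about your project):

language: ruby
rvm:
  - 2.6
services:
  - mongodb
  - redis
before_script:
  - bundle exec rake db:setup   # hypothetical setup task for your app
script:
  - bundle exec rspec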

Rails deployment to production the right way

I have a Rails application that I develop on my local workstation and want to deploy to my Amazon AWS VPC following best practices. Currently, I give my web server and database server public IPs and SSH into these boxes to configure them. I am pretty sure this is nasty and want to explore better ways of doing this.
How should one correctly deploy code and database migrations to servers that sit within a private subnet on an AWS VPC? I have read that automation is key and that people should disable SSH and port 22 altogether, but I have no idea where to start configuring without logging in via SSH.
There is no right answer.
Rails via Elastic Beanstalk is great; it can be automated with CI.
Ansible, Puppet, or any configuration manager would be an improvement.
The only thing that is safe to say: manual deployment is never the best practice. It's prone to error and creates "user-specific knowledge". Best practice would say to do anything that removes a manual process, even if that's doing it via SSH from CI.
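For instance, a first step away from manual deployment could be as small as an Ansible playbook run from CI (the host group, repository URL, and paths below are assumptions):

# deploy.yml -- a minimal sketch, not a production-ready pipeline
- hosts: app_servers
  tasks:
    - name: Pull the application code
      git:
        repo: https://example.com/myapp.git   # hypothetical repository
        dest: /var/www/myapp
        version: main
    - name: Run database migrations
      command: bundle exec rake db:migrate
      args:
        chdir: /var/www/myapp
      environment:
        RAILS_ENV: production

Ansible still connects over SSH, but the knowledge now lives in a reviewable file instead of someone's shell history.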

Sustainable Solution To Configuring Rails, Sidekiq, Redis All On AWS Elastic Beanstalk

We have an AWS Elastic Beanstalk Rails app that needs a Sidekiq worker process running alongside Puma/Passenger. Getting the Sidekiq process to run has resulted in hours of failed attempts. Also, getting the Rails app and Sidekiq to talk to my AWS ElastiCache cluster apparently needs some security rule changes.
Background
We started out with an extremely simple Rails app that was easily deployed to AWS Elastic Beanstalk. Since those early times we've evolved the app to now use the worker framework Sidekiq. Sidekiq in turn likes to use Redis to pull its jobs. Anyway, getting all these puzzle pieces assembled in the AWS world is a little challenging.
Solutions From The Web...with some sustainability problems
The AWS ecosystem goes through updates and upgrades, many of which aren't documented with clarity. For example, environment settings change regularly; a script you have written may break in subsequent versions.
I used the following smattering of solutions to try to solve this:
http://blog.noizeramp.com/2013/04/21/using-sidekiq-with-elastic-beanstalk/ (please note that the comments in this blog post contain a number of helpful gists). Many thanks to the contributor and commenters on this post.
http://qiita.com/sawanoboly/items/d28a05d3445901cf1b25 (starting Sidekiq with upstart/initctl seems like the simplest and most sustainable approach). This page is in Japanese, but the Sidekiq startup code makes complete sense. Thanks!
Use AWS's ElastiCache for Redis. Make sure to configure your security groups accordingly: this AWS document was helpful...
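For reference, the post-deploy-hook approach from the links above boils down to an .ebextensions file along these lines (a sketch for the older Amazon Linux Elastic Beanstalk platforms; the ElastiCache endpoint and the webapp user are assumptions, and newer Sidekiq versions have removed the -d daemonize flag):

# .ebextensions/sidekiq.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    REDIS_URL: redis://my-cache.abc123.0001.use1.cache.amazonaws.com:6379   # hypothetical ElastiCache endpoint
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/50_restart_sidekiq.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      cd /var/app/current
      su -s /bin/bash -c "bundle exec sidekiq -e production -d -L log/sidekiq.log" webapp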

AWS OpsWorks vs AWS Beanstalk vs AWS CloudFormation? [closed]

I would like to know what are the advantages and disadvantages of using AWS OpsWorks vs AWS Beanstalk and AWS CloudFormation?
I am interested in a system that can be auto-scaled to handle any high number of simultaneous web requests (from 1,000 requests per minute to 10 million rpm), including a database layer that can be auto-scaled as well.
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
The stack will host some high-traffic Ruby on Rails apps that we are migrating from Heroku, as well as some Python/Django apps and some PHP apps.
I would like to know what are the advantages and disadvantages of using AWS OpsWorks vs AWS Beanstalk and AWS CloudFormation?
The answer is: it depends.
AWS OpsWorks and AWS Beanstalk are (I've been told) simply different ways of managing your infrastructure, depending on how you think about it. CloudFormation is a way of templatizing your infrastructure.
Personally, I'm more familiar with Elastic Beanstalk, but to each their own. I prefer it because it can do deployments via Git. It is public information that Elastic Beanstalk uses CloudFormation under the hood to launch its environments.
For my projects, I use both in tandem. I use CloudFormation to construct a custom-configured VPC environment, S3 buckets and DynamoDB tables that I use for my app. Then I launch an Elastic Beanstalk environment inside of the custom VPC which knows how to speak to the S3/DynamoDB resources.
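To give a flavor of that split, the CloudFormation side might declare resources like these, with the Elastic Beanstalk app then reading their names from its environment (the table schema below is an assumption):

# resources.yaml -- a minimal CloudFormation sketch
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
  AppTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5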
I am interested in a system that can be auto-scaled to handle any high number of simultaneous web requests (from 1,000 requests per minute to 10 million rpm), including a database layer that can be auto-scaled as well.
Under the hood, OpsWorks and Elastic Beanstalk use EC2 + CloudWatch + Auto Scaling, which is capable of handling the loads you're talking about. RDS provides support for scalable SQL-based databases.
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
Depending on what you mean by "some hardware resources", you can always launch standalone EC2 instances alongside OpsWorks or Elastic Beanstalk environments. At present, Elastic Beanstalk supports one webapp per environment. I don't recall what OpsWorks supports.
The stack will host some high-traffic Ruby on Rails apps that we are migrating from Heroku, as well as some Python/Django apps and some PHP apps.
All of this is fully supported by AWS. OpsWorks and Elastic Beanstalk have optimized themselves for an array of development environments (Ruby, Python and PHP are all on the list), while EC2 provides raw servers where you can install anything you'd like.
OpsWorks is an orchestration tool like Chef (in fact, it's derived from Chef), Puppet, Ansible, or SaltStack. You use OpsWorks to specify the state that you want your network to be in, by specifying the state that you want each resource (server instances, applications, storage) to be in. And you specify the state that you want each resource to be in by specifying the value that you want for each attribute of that state. For example, you might want the Apache service to always be up and running and to start on boot-up, with apache as the user and apache as the Linux group.
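In Chef terms (which OpsWorks recipes are written in), that Apache example would look roughly like this (a sketch; the package/service name assumes a Red Hat-style distribution):

# recipes/apache.rb -- minimal Chef sketch
package 'httpd'

service 'httpd' do
  action [:enable, :start]   # start on boot-up and keep it running
end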
CloudFormation is a JSON template (**) that specifies the state of the resource(s) that you want to deploy, e.g. you want to deploy an AWS EC2 t2.micro instance in us-east-1 as part of VPC 192.168.1.0/24. In the case of an EC2 instance, you can specify what should run on that resource through your custom bash script in the user-data section of the EC2 resource. CloudFormation is just a template; the template gets fleshed out as a running resource only if you run it, either through the AWS Management Console for CloudFormation or via the AWS CLI command for CloudFormation, i.e. aws cloudformation ...
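A minimal template of that shape might look like this (the AMI ID is a placeholder):

# template.yaml -- a bare-bones CloudFormation sketch
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # hypothetical AMI ID
      UserData:
        Fn::Base64: |
          #!/bin/bash
          yum install -y httpd
          service httpd start

You would then run it with, for example, aws cloudformation create-stack --stack-name web --template-body file://template.yaml.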
Elastic Beanstalk is a PaaS: you can upload, specifically, Ruby/Rails, Node.js, Python/Django, or Python/Flask apps. If you're running anything else, like Scala or Haskell, create a Docker image for it and upload that Docker image into Elastic Beanstalk (*).
You can upload your app into Elastic Beanstalk either by running the AWS CLI for CloudFormation or by creating a recipe for OpsWorks to upload your app into Elastic Beanstalk. You can also run the AWS CLI for CloudFormation through OpsWorks.
(*) In fact, AWS's documentation on its Ruby app example was so poor that I lost patience and embedded the example app into a Docker image and uploaded the Docker image into Elastic Beanstalk.
(**) As of September 2016, CloudFormation also supports YAML templates.
AWS Beanstalk:
Deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs your web applications.
No need to worry about EC2 instances or other installations.
AWS OpsWorks
AWS OpsWorks is an application management service that makes it easy for new DevOps users to model and manage their entire application.
In OpsWorks you can share "roles" of layers across a stack to use fewer resources by combining the specific jobs an underlying instance may be doing.
Layer Compatibility List (as long as security groups are properly set):
HA Proxy : custom, db-master, and memcached.
MySQL : custom, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Java : custom, db-master, and memcached.
Node.js : custom, db-master, memcached, and monitoring-master.
PHP : custom, db-master, memcached, monitoring-master, and rails-app.
Rails : custom, db-master, memcached, monitoring-master, php-app.
Static : custom, db-master, memcached.
Custom : custom, db-master, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Ganglia : custom, db-master, memcached, php-app, rails-app.
Memcached : custom, db-master, lb, monitoring-master, nodejs-app, php-app, rails-app, and web.
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/layers.html
AWS CloudFormation - create and update your environments.
AWS OpsWorks - manage the systems inside those environments, as you would with Chef or Puppet.
AWS Elastic Beanstalk - create, manage, and deploy.
But personally I like using CloudFormation and OpsWorks together, each at full power for what it is meant for.
Use CloudFormation to create your environment; then you can call OpsWorks from CloudFormation templates to launch your machines, and you will have an OpsWorks stack to manage them. For example, add a user to a Linux box using OpsWorks, or patch your boxes using Chef recipes. You can also write Chef recipes for deployment. Alternatively, you can use CodeDeploy, which is built specifically for deployment.
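CloudFormation can declare the OpsWorks stack itself; a bare-bones sketch (the IAM ARNs are placeholders):

Resources:
  RailsStack:
    Type: AWS::OpsWorks::Stack
    Properties:
      Name: rails-stack
      ServiceRoleArn: arn:aws:iam::123456789012:role/aws-opsworks-service-role   # hypothetical
      DefaultInstanceProfileArn: arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role   # hypothetical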
AWS OpsWorks - part of AWS's application management services. It helps configure the application using scripting, and uses Chef as the DevOps framework for application management and operations.
There are templates that can be used to configure servers, databases, and storage. The templates can also be customized to perform other tasks. DevOps engineers have control over the application's dependencies and infrastructure.
AWS Elastic Beanstalk - provides environments for languages like Java, Node.js, Python, Ruby, and Go. Elastic Beanstalk provides the resources to run the application; developers don't need to worry about the infrastructure, and they don't have control over it.
AWS CloudFormation - provides sample templates for managing your AWS resources.
As many others have commented, AWS Elastic Beanstalk, AWS OpsWorks, and AWS CloudFormation offer different solutions for different problems.
In order to accomplish the following:
I am interested in a system that can be auto-scaled to handle any high number of simultaneous web requests (from 1,000 requests per minute to 10 million rpm), including a database layer that can be auto-scaled as well.
and taking into consideration that you are in a migration process, I strongly recommend you start taking a look at an AWS Lambda & AWS DynamoDB solution (or a hybrid one).
Both are designed for simple auto scaling and may be a very cheap solution.
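As a taste of how small such a stack can be, a SAM template wiring a Lambda function to an HTTP endpoint looks roughly like this (the handler name and runtime are assumptions):

# template.yaml -- minimal AWS SAM sketch
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # hypothetical handler
      Runtime: python3.9
      Events:
        HttpGet:
          Type: Api
          Properties:
            Path: /
            Method: get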
You should use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. If your application uses a lot of AWS resources and services, including EC2, use a combination of CloudFormation and OpsWorks.
If your application will need other AWS resources, such as a database or storage service, use CloudFormation to deploy Elastic Beanstalk along with those resources.
Just use Terraform and ECS or EKS.
OpsWorks, Elastic Beanstalk, and CloudFormation are old tech now. :-)
