AWS OpsWorks vs AWS Beanstalk vs AWS CloudFormation? [closed] - ruby-on-rails

I would like to know the advantages and disadvantages of using AWS OpsWorks vs. AWS Elastic Beanstalk vs. AWS CloudFormation.
I am interested in a system that can auto scale to handle a very high number of simultaneous web requests (from 1,000 requests per minute up to 10 million rpm), including a database layer that can auto scale as well.
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
The stack will host some high-traffic Ruby on Rails apps that we are migrating from Heroku, as well as some Python/Django apps and some PHP apps.

I would like to know the advantages and disadvantages of using AWS OpsWorks vs. AWS Elastic Beanstalk vs. AWS CloudFormation.
The answer is: it depends.
AWS OpsWorks and AWS Elastic Beanstalk are (I've been told) simply different ways of managing your infrastructure, depending on how you think about it. CloudFormation is simply a way of templating your infrastructure.
Personally, I'm more familiar with Elastic Beanstalk, but to each their own. I prefer it because it can do deployments via Git. It is public information that Elastic Beanstalk uses CloudFormation under the hood to launch its environments.
For my projects, I use both in tandem. I use CloudFormation to construct a custom-configured VPC environment, S3 buckets and DynamoDB tables that I use for my app. Then I launch an Elastic Beanstalk environment inside of the custom VPC which knows how to speak to the S3/DynamoDB resources.
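To make that workflow concrete, here is a hedged sketch in Python with boto3 (not part of the original answer): read the networking stack's outputs and launch a Beanstalk environment inside the custom VPC. The stack name, output keys, application/environment names and the DYNAMO_TABLE variable are all made-up placeholders.

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    # Read the outputs exported by the (hypothetical) networking stack:
    # VPC id, subnet list, DynamoDB table name.
    stack = cfn.describe_stacks(StackName="my-app-network")["Stacks"][0]
    outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}

    # Pick whatever Ruby platform is currently available rather than hard-coding one.
    ruby_stack = next(s for s in eb.list_available_solution_stacks()["SolutionStacks"]
                      if "Ruby" in s)

    # Launch the Beanstalk environment inside the custom VPC and hand the app
    # the DynamoDB table name as an environment variable.
    eb.create_environment(
        ApplicationName="my-app",
        EnvironmentName="my-app-env",
        SolutionStackName=ruby_stack,
        OptionSettings=[
            {"Namespace": "aws:ec2:vpc", "OptionName": "VPCId",
             "Value": outputs["VpcId"]},
            {"Namespace": "aws:ec2:vpc", "OptionName": "Subnets",
             "Value": outputs["SubnetIds"]},
            {"Namespace": "aws:elasticbeanstalk:application:environment",
             "OptionName": "DYNAMO_TABLE", "Value": outputs["TableName"]},
        ],
    )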
I am interested in a system that can auto scale to handle a very high number of simultaneous web requests (from 1,000 requests per minute up to 10 million rpm), including a database layer that can auto scale as well.
Under the hood, OpsWorks and Elastic Beanstalk use EC2 + CloudWatch + Auto Scaling, which is capable of handling the loads you're talking about. RDS provides support for scalable SQL-based databases.
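For a flavour of what that means in practice, here is a minimal sketch (assumed resource names, not the poster's setup) of a target-tracking Auto Scaling policy created with boto3; this is the kind of policy that lets a group ride out swings from roughly 1,000 rpm to millions of rpm by keeping average CPU near a target.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-rails-asg",   # placeholder Auto Scaling group name
        PolicyName="keep-cpu-around-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,               # add instances above ~50% average CPU
        },
    )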
Instead of having a separate instance for each app, ideally I would like to share some hardware resources efficiently. In the past I have mostly used an EC2 instance + RDS + CloudFront + S3.
Depending on what you mean by "some hardware resources", you can always launch standalone EC2 instances alongside OpsWorks or Elastic Beanstalk environments. At present, Elastic Beanstalk supports one webapp per environment. I don't recall what OpsWorks supports.
The stack will host some high-traffic Ruby on Rails apps that we are migrating from Heroku, as well as some Python/Django apps and some PHP apps.
All of this is fully supported by AWS. OpsWorks and Elastic Beanstalk are optimized for an array of development environments (Ruby, Python and PHP are all on the list), while EC2 provides raw servers where you can install anything you'd like.

OpsWorks is an orchestration tool in the same family as Chef (in fact, it's built on Chef), Puppet, Ansible and SaltStack. You use OpsWorks to specify the state you want your network to be in by specifying the state you want each resource (server instances, applications, storage) to be in, and you specify the state of each resource by specifying the value you want for each attribute of that state. For example, you might want the Apache service to always be running and to start on boot, running with apache as the user and apache as the Linux group.
CloudFormation is a JSON template (**) that specifies the state of the resource(s) you want to deploy, e.g. you want to deploy an AWS EC2 t2.micro instance in us-east-1 as part of the VPC 192.168.1.0/24. In the case of an EC2 instance, you can specify what should run on that resource through a custom bash script in the user-data section of the EC2 resource. CloudFormation is just a template; the template only gets fleshed out into running resources if you launch it, either through the AWS Management Console for CloudFormation or via the AWS CLI, i.e. aws cloudformation ...
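As a hedged illustration of that point (not from the original answer), here is the t2.micro example expressed as a minimal template and launched with boto3 in Python; the stack name, AMI id and subnet id are placeholders you would substitute for your own account and region.

    import boto3

    # Placeholder template: one t2.micro bootstrapped via a user-data script.
    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t2.micro
          ImageId: ami-00000000000000000   # placeholder AMI id
          SubnetId: subnet-00000000        # a subnet inside the 192.168.1.0/24 VPC
          UserData:
            Fn::Base64: |
              #!/bin/bash
              yum install -y httpd
              systemctl enable --now httpd
    """

    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(StackName="demo-ec2-stack", TemplateBody=TEMPLATE)

    # Nothing actually runs until the template is launched; wait for the stack.
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-ec2-stack")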
Elastic Beanstalk is a PaaS: you can upload Ruby/Rails, Node.js, Python/Django or Python/Flask apps specifically. If you're running anything else, like Scala, Haskell or whatever, create a Docker image for it and upload that Docker image into Elastic Beanstalk (*).
You can upload your app into Elastic Beanstalk either by running the AWS CLI or by creating an OpsWorks recipe that uploads the app into Elastic Beanstalk for you. You can also run the AWS CLI for CloudFormation through OpsWorks.
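As a rough sketch of that upload step (bucket, application and environment names are assumptions, and the zip is assumed to be a ready-made source bundle), the same thing can be scripted with boto3: push the bundle to S3, register it as an application version, and point the environment at it.

    import boto3

    s3 = boto3.client("s3")
    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    # 1. Push the source bundle (or Docker image definition) to S3.
    s3.upload_file("app-v42.zip", "my-deploy-bucket", "bundles/app-v42.zip")

    # 2. Register it as a new application version.
    eb.create_application_version(
        ApplicationName="my-app",
        VersionLabel="v42",
        SourceBundle={"S3Bucket": "my-deploy-bucket",
                      "S3Key": "bundles/app-v42.zip"},
    )

    # 3. Roll the environment onto the new version.
    eb.update_environment(EnvironmentName="my-app-env", VersionLabel="v42")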
(*) In fact, AWS's documentation on its Ruby app example was so poor that I lost patience and embedded the example app into a Docker image and uploaded the Docker image into Elastic Beanstalk.
(**) As of Sep 2016, CloudFormation also supports YAML templates.

AWS Elastic Beanstalk:
Elastic Beanstalk lets you deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs your web applications.
There is no need to worry about EC2 or other installations.
AWS OpsWorks
AWS OpsWorks is an application management service that makes it easy for new DevOps users to model and manage their entire application.

In OpsWorks you can share "roles" of layers across a stack to use fewer resources, by combining the specific jobs an underlying instance may be doing (see the sketch after the compatibility list below).
Layer Compatibility List (as long as security groups are properly set):
HAProxy: custom, db-master, and memcached.
MySQL: custom, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Java: custom, db-master, and memcached.
Node.js: custom, db-master, memcached, and monitoring-master.
PHP: custom, db-master, memcached, monitoring-master, and rails-app.
Rails: custom, db-master, memcached, monitoring-master, php-app.
Static: custom, db-master, memcached.
Custom: custom, db-master, lb, memcached, monitoring-master, nodejs-app, php-app, rails-app, and web.
Ganglia: custom, db-master, memcached, php-app, rails-app.
Memcached: custom, db-master, lb, monitoring-master, nodejs-app, php-app, rails-app, and web.
reference : http://docs.aws.amazon.com/opsworks/latest/userguide/layers.html
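To make the "shared roles" idea concrete, here is a hypothetical boto3 sketch (not from the original answer) that registers a single OpsWorks instance in both a Rails layer and a Memcached layer, a pairing the table above allows; the stack and layer ids are placeholders.

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # One instance serving two layers, so a single box runs both jobs.
    opsworks.create_instance(
        StackId="11111111-2222-3333-4444-555555555555",   # placeholder stack id
        LayerIds=[
            "aaaaaaaa-0000-0000-0000-000000000000",        # rails-app layer id
            "bbbbbbbb-0000-0000-0000-000000000000",        # memcached layer id
        ],
        InstanceType="t2.medium",
    )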

AWS CloudFormation - Create and Update your environments.
AWS OpsWorks - Manage the systems inside those environments, as you would with Chef or Puppet.
AWS Elastic Beanstalk - Create, manage and deploy.
But personally I like using CloudFormation and OpsWorks together, each at its full power for what it is meant for.
Use CloudFormation to create your environment; you can then call OpsWorks from your CloudFormation templates to launch your machines, and you will have an OpsWorks stack to manage them. For example, add a user to a Linux box using OpsWorks, or patch your boxes using Chef recipes. You can write Chef recipes for deployment too, or use CodeDeploy, which is built specifically for deployment.
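As a small, hedged sketch of the "run Chef recipes through OpsWorks" step (the stack id and the users::add cookbook are invented for illustration), you could trigger a custom recipe across a stack with boto3:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Run a custom recipe (e.g. to add a user) on the stack's instances.
    opsworks.create_deployment(
        StackId="11111111-2222-3333-4444-555555555555",    # placeholder stack id
        Command={
            "Name": "execute_recipes",
            "Args": {"recipes": ["users::add"]},           # hypothetical cookbook::recipe
        },
    )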

AWS OpsWorks - This is an application management service. It helps configure applications through scripting, and uses Chef as the DevOps framework for application management and operations.
There are templates that can be used to configure servers, databases and storage, and the templates can also be customized to perform other tasks. DevOps engineers keep control over the application's dependencies and infrastructure.
AWS Elastic Beanstalk - It provides environments for languages such as Java, Node.js, Python, Ruby and Go. Elastic Beanstalk provisions the resources to run the application, so developers don't have to worry about the infrastructure, but they also don't have control over it.
AWS CloudFormation - CloudFormation provides sample templates for provisioning and managing AWS resources in an orderly, repeatable way.

As many others have commented, AWS Elastic Beanstalk, AWS OpsWorks and AWS CloudFormation offer different solutions for different problems.
In order to meet this requirement:
I am interested in a system that can auto scale to handle a very high number of simultaneous web requests (from 1,000 requests per minute up to 10 million rpm), including a database layer that can auto scale as well.
and taking into consideration that you are in a migration process, I strongly recommend you start taking a look at an AWS Lambda & AWS DynamoDB solution (or a hybrid one).
Both are designed for auto scaling in a simple way and may be a very cheap solution.
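As a hedged sketch of that direction (the table name and payload shape are invented for illustration), a Lambda handler that writes to DynamoDB scales with no servers to manage:

    import json
    import boto3

    # Placeholder table; Lambda and DynamoDB scale this path without capacity planning.
    table = boto3.resource("dynamodb").Table("requests")


    def handler(event, context):
        # Assume an API Gateway proxy event carrying a JSON body.
        body = json.loads(event.get("body") or "{}")
        table.put_item(Item={"id": body["id"], "payload": body})
        return {"statusCode": 200, "body": json.dumps({"ok": True})}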

You should use OpsWorks in place of CloudFormation if you need to deploy an application that requires updates to its EC2 instances. If your application uses a lot of AWS resources and services, including EC2, use a combination of CloudFormation and OpsWorks.
If your application will need other AWS resources, such as a database or storage service, use CloudFormation to deploy Elastic Beanstalk along with those other resources.

Just use Terraform and ECS or EKS.
OpsWorks, Elastic Beanstalk and CloudFormation are old tech now. :-)

Related

Replicating Heroku's Review Apps on AWS

I'm currently working for a client that is using Heroku and migrating to AWS. However, we're having trouble understanding how the Review Apps feature can be replicated on AWS.
Specifically, we want a Jenkins job that will allow us to specify a branch name and a set of environment variables. That job will then spin up our entire stack, so that the developer can test their changes in isolation, before moving to staging.
Our stack is 5 different Ruby on Rails applications, all of which must know each other's URLs, which does complicate things.
I'm told that tools like AWS Fargate or EKS might be suitable, but I'm not sure.

Feasibility of choosing EC2 + Docker as a production deployment option

I am trying to deploy my microservice on an EC2 machine. I have already launched my EC2 machine with an Ubuntu 16.04 LTS AMI, found that I can install Docker and run containers on it, and tried a sample service deployment using Docker on my Ubuntu machine. I can also successfully run images in the background using the -d option.
Can I choose EC2 + Docker for deploying my microservices in an actual production environment? If so, I can deploy all my Spring Boot microservices this way.
I know that ECS is another option. To be frank, I am trying to avoid ECR, the ECS-optimized AMI and their burdens; I am looking for a machine under my full control that belongs only to me.
Still, I need to know about the feasibility of choosing EC2 + Docker on my Ubuntu machine. I am also planning to deploy my Angular 2 app. I wouldn't need to install, deploy and manage any application server for either Spring Boot or Angular, since this would give me something like a serverless production environment.
What you are describing is a "traditional" single-server environment and does not have much in common with a microservices deployment. However, keep in mind that this may be OK if it is only you, or a small team, working on the whole application. The microservices architectural style was introduced to handle huge, complex applications with large development teams that need to scale out immensely due to fast business growth. Here is an example story from Uber.
Please read this for more information about how and why the microservices architectural style was introduced as well as the benefits/drawbacks. Now about your question:
"Can I choose this EC2 + Docker for deployment of my microservice for actual production environment? "
Your question can be simply answered: You can, but it is probably not a good idea assuming you have a large enough project to require a microservices architecture.
You would have to implement all of the following deployment aspects yourself, which are typically covered by an orchestration system like Kubernetes:
Service Discovery and Load Balancing
Horizontal Scaling
Multi-Container Application Deployment
Container Health-Management / Self-Healing
Virtual Networking
Rolling Updates
Storage Orchestration
"Since It will gives me about a serverless production environment to
me."
EC2 is by definition not serverless, of course. You will have to maintain your EC2 instances, including OS updates, security patches etc. And if you only have a single server you will have service outages because of it.
You can do it. I have had Docker on standard EC2 instances running without problem. By "my microservice" you mean a single microservice, right?
You don't need service discovery or routing rules?
Can I choose EC2 + Docker for deploying my microservices in an actual production environment?
Yes, this is totally possible, although I suggest using Kubernetes as the container orchestrator, as it manages the lifecycle of the containers for you:
Running Kubernetes on AWS EC2
Amazon Elastic Container Service for Kubernetes
Manage Kubernetes Clusters on AWS Using Kops
Amazon EKS

best approach to deploy dockerized rails app on AWS?

I dockerized an existing Rails application and it runs properly in development. I want to deploy the app to a production environment. I used docker-compose locally.
Application Stack is as follows:
Rails app
Background Workers for mails and cleanup
Relational DB - Postgres
NoSQL DB - DynamoDB
SQS queues
Action Cable - Redis
Caching - Memcached
As far as I know, options for deployment are as below:
ECS (Trying this, but having difficulties in relating concepts such as Task and Task Definitions with docker-compose concepts)
ECS with Elastic Beanstalk
With Docker Machine according to this docker documentation: https://docs.docker.com/machine/drivers/aws/
I do not have experience with Capistrano and haven't used it on this project, so I am not aiming to use it for the Docker deployment either. I am planning to use some CI/CD solution for easy deployment. I would like advice on the options available, and on how to deploy the stack in a way that is easy to maintain and lets me push updates with minimal deployment effort.

Jenkins deployment to elastic beanstalk

Hello everyone, I am new here and have a question regarding Jenkins deployments to AWS Elastic Beanstalk.
Our app currently consists of 3 components: the front-end, the API and an admin tool, all of which run on Node.js. I am trying to cut down on our EC2 instances and would like all 3 components dockerized and running on the same Elastic Beanstalk instance for our dev environment.
My question is: is it possible to do 3 separate Jenkins deployments (API, front-end & admin) to a single AWS Elastic Beanstalk instance?
Our current Elastic Beanstalk application is running the Multi-container Docker platform, and the containers are defined using Dockerrun.aws.json (v2) and docker-compose.
If I deploy the API from Jenkins to our Elastic Beanstalk instance it works as expected, but if I then deploy the front-end it overwrites the API container, and so on. Is it possible for each separate deployment to create a new container on the instance?
1. Is it possible to do 3 separate Jenkins deployments (API, front-end & admin) to a single AWS Elastic Beanstalk instance?
2. Is it possible for each separate deployment to create a new container on the instance?
If you are asking whether it is possible to have three separate "modules" deployed independently via 3 different Jenkins deployment pipelines, then yes.
Yes, this can be done without overwriting your containers.
This article is a good place to start if you want to deploy a multi-container Docker setup with Elastic Beanstalk; a rough sketch of the idea follows the link below.
Deploying Dockerized multi-container applications to aws with jenkins
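For a rough sketch of the idea (image names, ports and memory values are placeholders, and this is not taken from the linked article): keep one Dockerrun.aws.json (v2) that declares all three containers, and have each Jenkins job rebuild only its own image and then redeploy that single bundle, so one deployment does not wipe out the other containers. Generating the file from Python keeps the three pipelines in sync:

    import json

    # Single v2 bundle describing all three containers; redeploying it updates
    # the whole set instead of replacing one container with another.
    dockerrun = {
        "AWSEBDockerrunVersion": 2,
        "containerDefinitions": [
            {"name": "api", "image": "myrepo/api:latest", "essential": True,
             "memory": 256,
             "portMappings": [{"hostPort": 3000, "containerPort": 3000}]},
            {"name": "frontend", "image": "myrepo/frontend:latest", "essential": True,
             "memory": 256,
             "portMappings": [{"hostPort": 80, "containerPort": 80}]},
            {"name": "admin", "image": "myrepo/admin:latest", "essential": True,
             "memory": 256,
             "portMappings": [{"hostPort": 4000, "containerPort": 4000}]},
        ],
    }

    with open("Dockerrun.aws.json", "w") as f:
        json.dump(dockerrun, f, indent=2)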

Setup SVN repository on AWS for asp.net MVC 2 website

I am new to Amazon Web Services (AWS). I have an ASP.NET MVC 2 website and I want to set up an SVN repository on AWS. What do I need to do for this? I installed the AWS Toolkit for Visual Studio (http://aws.amazon.com/visualstudio/). I used Amazon RDS (http://aws.amazon.com/rds/) to set up a database instance and create a database. I then took a backup script of the database from my local system and ran it against the newly created Amazon database. I then changed the connection strings in the code and deployed it to Amazon using AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/). This is all related to deployment. Is there a better way to deploy an ASP.NET MVC 2 website on AWS? Do I need to use EC2? What is its purpose? How can I set up an SVN repository on AWS?
Please suggest
I wouldn't bother setting up SVN yourself; there are plenty of preconfigured Amazon Machine Images that already have it done, for example:
https://aws.amazon.com/marketplace/pp/B007I9KE5C
https://aws.amazon.com/amis/bitnami-subversion-stack-1-6-6-0-ebs
Just install these, and boom - it works.
As far as deployment goes, if you're new to AWS, I'd stay with Elastic Beanstalk at this point. Personally, I usually have a build server, create a zip file and copy that to S3 to bootstrap my server installation (see the sketch at the end of this answer). You need the following to make it scalable and available:
Load balancer
2 or more web servers
Scaling group
Security configuration
CloudFormation script to tie it all together
I wrote about bootstrapping IIS on AWS, which provides a bit more detail, but you'd need to add in your own RDS CloudFormation, VPC and other requirements. Not the easiest task, though.
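As a small, hedged sketch of the "zip it and copy it to S3" bootstrap step mentioned above (bucket and key names are placeholders), using boto3:

    import boto3

    s3 = boto3.client("s3")

    # Upload the build artifact that the launching instance will pull down.
    s3.upload_file("site-build.zip", "my-bootstrap-bucket", "releases/site-build.zip")

    # Optionally hand the instance a time-limited URL to fetch the build from
    # its user-data or CloudFormation init script.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bootstrap-bucket", "Key": "releases/site-build.zip"},
        ExpiresIn=3600,
    )
    print(url)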
