Why does Amazon Elastic Beanstalk take so long to update my deployment? - ruby-on-rails

I have an Amazon EB environment running 64bit Amazon Linux 2014.09 v1.0.9 with Ruby 2.1 (Puma, Nginx).
Suddenly, when I deploy my project, the following error appears in my terminal:
ERROR: Timed out while waiting for command to Complete
Note: this didn't happen before.
I can see the event in the console, and this is the log:
Update environment operation is complete, but with command timeouts. Try increasing the timeout period. For more information, see troubleshooting documentation.
I've already increased the timeout, without success:
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 1800
The health status takes a long time to turn green (approx. 20 min), and then it takes another long time to update the instance with the new changes (approx. another 20 min). I have only one instance.
How can I see other logs?

This seems like a rather common problem with Elastic Beanstalk. In short, your EC2 instance is going haywire. What you can do is terminate the EC2 instance on the EC2 dashboard, and the load balancer will start a new instance, which may solve your problem. To minimise any downtime, you may start the new instance first and then terminate the older one. Just be wary that you will lose any ephemeral data, and you may have to reinstall certain dependencies (if they are not in your ebextensions).
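For completeness, a minimal sketch of pulling the logs and replacing the instance with the EB and AWS CLIs (assumes both are installed and configured; the instance ID is a placeholder):

    eb logs --all    # pull the full log bundle from the environment's instances
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0    # the environment's Auto Scaling group then launches a replacement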
Let me know if you need any more help. Do check out the AWS Elastic Beanstalk forum.
Cheers,
biobirdman

The problem was the RAM on the instance, so I had to replace it with a bigger instance type.
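For reference, a minimal sketch of setting the instance type from .ebextensions (the t2.medium value is just an example; pick a size with enough memory for your app):

    option_settings:
      - namespace: aws:autoscaling:launchconfiguration
        option_name: InstanceType
        value: t2.medium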

Related

AWS EC2 becomes unreachable (Rails, Phusion_Passenger)

My production Rails app is running on an AWS EC2 instance, on Apache through Phusion Passenger, and I am facing a couple of problems with it.
Sometimes the instance becomes unreachable via SSH and my Rails application cannot be accessed from the browser, probably due to memory issues. I have created swap space, but it is not helping.
Sometimes my Rails app gets shut down, and the application/current folder gets removed.
The first can be fixed only by stopping the AWS instance and starting it again, and the second by redeploying the application.
Any suggestions on what could be causing this? Or, even more importantly, how can I fix it once and for all?
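For reference, swap of the kind mentioned above is typically created like this on EC2 (a sketch; the 1 GiB size is just an example):

    sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # allocate a 1 GiB file
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots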

AWS Fargate startup time

Currently I'm researching how our dockerised microservices could be orchestrated on AWS.
The Fargate launch type of ECS looks promising, as it eliminates the need to manage EC2 instances.
However, it takes a surprisingly long time to start a "task" in Fargate, even for a simple one-container setup: 60 to 90 seconds is typical for our Docker app images, and I've heard it may take even longer, on the order of minutes.
So the question is: while Docker containers typically start in, say, seconds, what exactly is the reason for such an overhead in the Fargate case?
P.S. Searching related questions turns up these candidate causes:
Docker image load/extract time
Load Balancer influence - registering, health-check grace period, etc.
But even in the simplest possible config, with no Load Balancer deployed and assuming the Docker image is not cached in ECS, it is still at least ~2 times slower to start a task with a single Docker image in Fargate (~60 sec) than to launch the same Docker image on a bare EC2 instance (~25 sec).
Yes, it takes a little longer, but we can't generalise the startup time for Fargate. You can reduce it by tweaking some settings.
vCPU directly impacts the startup time, so keep in mind that on a bare EC2 instance you have the complete vCPU at your disposal, while with Fargate you may be assigning only a portion of it.
Since AWS manages the servers for you, it has to do a few things under the hood: attaching the VM to your VPC, downloading/extracting the Docker images, assigning IPs, and running the container can take this much time.
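For context, the vCPU/memory allocation mentioned above is set on the task definition; a minimal sketch (family, image name, and sizes are hypothetical examples):

    {
      "family": "my-app",
      "requiresCompatibilities": ["FARGATE"],
      "networkMode": "awsvpc",
      "cpu": "1024",
      "memory": "2048",
      "containerDefinitions": [
        { "name": "app", "image": "my-repo/my-app:latest", "essential": true }
      ]
    }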
The following is a nice blog post, and at the end of it you can find good practices:
Analyzing AWS Fargate

Jenkins: Cloud or AMI instance cap would be exceeded for: <name>

Using the ec2 plugin v1.39 to start worker nodes on EC2, I am faced with this error (and a huge stack trace) every time I start a new node.
Cloud or AMI instance cap would be exceeded for: <name>
I have set the (previously unset) Instance Cap to 10 in both fields in Configure System. This did not fix it.
Can anyone suggest what might be the problem? Thanks
EDIT 1:
I have tried changing the instance size, with no change (I went M3Medium -> M4Large).
See full stack trace here.
I can also launch an m4.large from the console. It turns out the m3.medium doesn't exist in Sydney... Hmm
Setting all the log levels to ALL might give you extra information about the error; the endpoint is at /log/levels.
Anyway, it looks like an issue we had previously where the private SSH key was not set properly, so the slave can't connect and keeps counting toward the cap.

AWS OpsWorks - setup_failed and eternal pending logs

I'm trying to create a QA stack on OpsWorks. My knowledge of OpsWorks is very superficial, so I began by creating a stack with 1 layer and 1 instance. I used only AWS recipes to create a PHP Application layer:
[IMG]
When I try to boot my first instance, I get the error "start_failed". My problem is that I can't see any logs to find out what is going on, because they stay in pending status forever:
[IMG]
I have already tried to access it via SSH and the AWS CLI, but I still can't get any logs.
If your instance is in a start_failed state, this can indicate quite a few possible issues; many of them are covered in this specific troubleshooting documentation.
Since you appear to be able to SSH into the instance, you're going to want to check the OpsWorks agent logs for errors. These are available (with elevated privileges) in:
/var/log/aws/opsworks
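For example, a minimal sketch (exact log file names can vary by agent version):

    sudo ls /var/log/aws/opsworks/                              # list the agent log files
    sudo tail -n 100 /var/log/aws/opsworks/opsworks-agent.log   # inspect the most recent agent activity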

Error when importing large csv file to Rails app on AWS ECS container

I tried uploading a CSV with 2700 rows, and my service (running in a Docker container on AWS) stopped running after a few seconds, although the upload completed (all the data is present in my database). The logs (CloudWatch) do not show any error; instead, the service is stopped and restarted (sometimes successfully and sometimes not).
I found a similar issue with Heroku here, where the answer says Heroku has a 30-second timeout on all requests. Does AWS have something similar? If not, how can I overcome this, as CSV uploads are frequent at my workplace?
Thank you.
I would suggest using Sidekiq for uploading data in bulk; it does the job in the background. Moreover, I have run into an issue where even Sidekiq stopped; if you face a similar issue, I would recommend the God gem for monitoring Sidekiq.
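As a rough sketch, the controller can enqueue a background job instead of importing inline, so the request returns before any load-balancer timeout (the worker and model names here are hypothetical):

    # app/workers/csv_import_worker.rb
    require 'sidekiq'
    require 'csv'

    class CsvImportWorker
      include Sidekiq::Worker

      # file_path points at the uploaded CSV, persisted somewhere the worker can read
      def perform(file_path)
        CSV.foreach(file_path, headers: true) do |row|
          Record.create!(row.to_h)   # Record is a hypothetical ActiveRecord model
        end
      end
    end

    # In the controller: CsvImportWorker.perform_async(saved_file_path)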
Regarding AWS timeout information, kindly have a look at this: Elastic Load Balancing
