VMC push error on Cloud Foundry

I have installed Cloud Foundry on Ubuntu and tried to push a sample hello-world application. I am getting the exception below. Has anyone faced the same issue? Please let me know how to resolve this problem. Spring applications are pushed correctly, but this exception is raised when I push Rails or Sinatra applications.
root@CFDemo1:~/helloworld# vmc push myapp03
Would you like to deploy from the current directory? [Yn]:
Application Deployed URL: 'myapp03.vcap.me'?
Detected a Sinatra Application, is this correct? [Yn]:
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M, 1G or 2G)
Creating Application: OK
Would you like to bind any services to 'myapp03'? [yN]:
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: .
Error: Application [myapp03] failed to start, logs information below.
====> /logs/staging.log <====
Logfile created on 2011-08-02 16:56:28 +0530 by logger.rb/25413
Adding rack-1.3.1.gem to app...
Adding sinatra-1.2.6.gem to app...
Adding tilt-1.3.2.gem to app...
Adding bundler-1.0.10.gem to app...
====> logs/stderr.log <====
/usr/local/rvm/rubies/ruby-1.8.7-p352/bin/ruby:
No such file or directory -- ./rubygems/ruby/1.8/bin/bundle (LoadError)

This issue is resolved in the latest Cloud Foundry source at http://github.com/cloudfoundry
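(If you are running from a source install, pulling the latest tree and restarting the components is one way to pick up the fix. A minimal sketch, assuming a default checkout of the cloudfoundry/vcap repo in your home directory; adjust the paths to your own install:)
cd ~/vcap              # wherever you cloned cloudfoundry/vcap
git pull origin master
bin/vcap restart       # restart so staging picks up the fixed gem paths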

Related

Dokku DigitalOcean droplet: remote rejected error... how to solve it?

I recently heard about Dokku, and wanted to deploy a dockerized Rails application on a DigitalOcean droplet. I followed these guidelines, and everything seemed to work fine... till I tried to push to Dokku :'( I always get the same "remote rejected" error, with no explicit information that could help me solve my problem... So if anyone could help, it would be really great!
Here are my steps:
Created the droplet with the $5 plan. On setup I left the fields as they were (hostname, ...).
Added a swap file (as recommended in the tutorial)
Created the Dokku app, and linked it to the PG plugin I had installed just before
Added the remote with git remote add dokku dokku@my-droplet-ip:myapp
Updated the DB URL in my Rails configuration
Pushed my branch using git push dokku <branchname> (for a branch other than master, you have to configure Dokku... ;)
Dokku push logs: https://gist.github.com/soykje/1ddeb5f04fd85e8bd2d2b1f46e63da1e
Dokku app report: https://gist.github.com/soykje/f5192775742848f96437705c6608080f
Thx in advance
I had to include a file named Procfile in the root of the project with the following contents:
release: bundle exec rails db:migrate
web: bundle exec rails s
https://dokku.com/docs/deployment/builders/dockerfiles/#procfiles-and-multiple-processes
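As an aside on the branch note in the question: you don't necessarily have to reconfigure Dokku for non-master branches, since git's refspec syntax lets you push any local branch to the branch Dokku deploys from (master on older Dokku versions):
git push dokku <branchname>:master
Newer Dokku releases can also change the deploy branch itself; see the dokku git:set command in the Dokku docs.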

AWS Beanstalk - Worker env is not processing background jobs after nginx force SSL config changes

I am working on a Ruby on Rails application and it is deployed on AWS Beanstalk. My Beanstalk application has two environments:
- Web Env
  - config:
    - Ruby 2.4.3
    - Rails 5.1.4
    - Puma as app server
    - Nginx as web server
    - Uses active_elastic_job
- Worker Env
  - config:
    - Ruby 2.4.3
    - Rails 5.1.4
    - Puma as app server
    - Nginx as web server
    - Uses Amazon SQS
    - Uses active_elastic_job
Both envs use the same repo/codebase, and my app was fully configured.
Last week, I learned that my application was not force-redirecting to HTTPS. I was able to access my site over HTTPS, but accessing it via plain HTTP or directly via the domain name did not redirect me to the secure site.
I came across this gist, https://gist.github.com/petelacey/e35c98f9a35063a89fa9, and after deploying that file via .ebextensions on the Web env, the redirect to HTTPS works. Up to here, no problem.
But when I deployed the same running version to my Worker env, my background jobs stopped working.
To troubleshoot, I SSHed into my Worker env and inspected the files below:
/var/log/nginx/error.log -- Nothing suspicious found
/var/log/puma/puma.log -- Nothing suspicious found
/var/log/aws-sqsd/default.log -- I see lots of http-err
/var/log/amazon/ssm/errors.log
2018-05-08 11:28:19 ERROR [HandleAwsError # awserr.go.48] [instanceID=i-YYYYYYYYYY] [MessagingDeliveryService] [Association] error when calling AWS APIs. error details - AccessDeniedException: User: arn:aws:sts::XXXXXXXXXX:assumed-role/role/i-YYYYYYYYYY is not authorized to perform: ssm:ListInstanceAssociations on resource: arn:aws:ec2:us-east-1:XXXXXXXXXX:instance/i-YYYYYYYYYY
status code: 400, request id: ''
2018-05-08 11:28:19 ERROR [HandleAwsError # awserr.go.48] [instanceID=i-YYYYYYYYYY] [MessagingDeliveryService] [Association] error when calling AWS APIs. error details - AccessDeniedException: User: arn:aws:sts::XXXXXXXXXX:assumed-role/aws-elasticbeanstalk-ec2-role/i-YYYYYYYYYY is not authorized to perform: ssm:ListAssociations on resource: arn:aws:ssm:us-east-1:XXXXXXXXXX:*
status code: 400, request id: ''
2018-05-08 11:28:19 ERROR [ProcessAssociation # processor.go.157] [instanceID=i-YYYYYYYYYY] [MessagingDeliveryService] [Association] Unable to load instance associations, unable to retrieve associations unable to retrieve associations AccessDeniedException: User: arn:aws:sts::XXXXXXXXXX:assumed-role/aws-elasticbeanstalk-ec2-role/i-YYYYYYYYYY is not authorized to perform: ssm:ListAssociations on resource: arn:aws:ssm:us-east-1:XXXXXXXXXX:*
status code: 400, request id: ''
Before rolling out this nginx proxy file, everything was working fine. I am not sure what I did wrong.
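(A likely mechanism, for anyone hitting the same thing: redirect configs in the style of the linked gist generally return a 301 for any request whose X-Forwarded-Proto header is not https, roughly:
if ($http_x_forwarded_proto != 'https') {
  return 301 https://$host$request_uri;
}
On a worker env there is no load balancer in front of the app; the sqsd daemon delivers each job by POSTing to it over plain HTTP on localhost, so every delivery gets a 301 back, which is what the http-err entries in /var/log/aws-sqsd/default.log record. The ssm AccessDeniedException entries are SSM agent permission noise and most likely unrelated.)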
Two things I am trying immediately:
Override /etc/nginx/conf.d/proxy.conf on my Worker env manually with the old proxy.conf file I have.
Restart nginx to see if the jobs are back to normal.
A few points I would like to raise here:
Aren't both envs supposed to use the same running version?
If my above approach works, that means I will have two different proxy files on the two envs. In the future, if I deploy to my Worker env, it will override the custom one. Can this be avoided?
Thanks for the help in advance!
I got the solution for this. A friend told me to handle it the following way:
Step 1: Inside config/environments/production.rb, change config.force_ssl = true to config.force_ssl = 'web'.eql?(ENV.fetch('EB_ENV', 'web'))
Step 2: Define an EB_ENV environment variable set to web for the Web env and to worker (or whatever you like) for the Worker env.
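For context, a minimal sketch of the resulting file (the comments are my reading of why this works; EB_ENV is just a variable name this answer chose, not something Beanstalk defines):
# config/environments/production.rb
Rails.application.configure do
  # Force SSL only on the web environment. On the worker, sqsd delivers
  # jobs to the app over plain HTTP on localhost, so a forced HTTPS
  # redirect would turn every job delivery into a 301 (the http-err entries).
  config.force_ssl = 'web'.eql?(ENV.fetch('EB_ENV', 'web'))
end
The variable can then be set per environment, e.g. with eb setenv EB_ENV=worker from the EB CLI, or via environment properties in the Beanstalk console.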
Thanks friend! Much appreciated.

400 Bad Request

I am trying to use Cloud Foundry for the first time.
I created a simple application in Grails and installed the cloud-foundry plugin. The plugin installed correctly. I tried to run the grails prod cf-push command but received this error. Any ideas about this error?
org.cloudfoundry.client.lib.CloudFoundryException: 400 Bad Request (Invalid application description)
It may be worth trying with VMC (if you have it installed).
Package your Grails app:
grails prod war
then deploy with vmc:
vmc push [app name here] --path target/
Follow the interactive prompts; VMC should recognise the war file as a Grails app.

TorqueBox deployment not honoring context?

I am trying out TorqueBox and having issues with my deployment descriptor. I'm using 2.0-beta2 with jruby-1.6.5. When I deploy using the torquebox deploy command, the application gets deployed within the application server; however, it is always at the root context (/) instead of the context I am specifying within my config. Here is my config/torquebox.rb:
TorqueBox.configure do |cfg|
  cfg.environment do
    RACK_ENV "qa"
  end
  cfg.web do |web|
    web.host "localhost"
    web.context "/my_application"
  end
  cfg.ruby do |ruby|
    ruby.version "1.9"
  end
end
I tried it with and without the host defined, and nothing changed. It's interesting, because I know it's reading my config; I see the following in the run log:
14:53:00,497 INFO [org.torquebox.core] (MSC service thread 1-2) evaling: "/Users/ejlevin1/Documents/Workspace/my_application/config/torquebox.rb"
However, the line a few lines further down in the log suggests it isn't honoring my context:
14:53:01,499 INFO [org.torquebox.core.runtime] (Thread-95) Creating ruby runtime (ruby_version: RUBY1_9, compile_mode: JIT, app: my_application, context: web)
Does anyone know what I am doing wrong? I tried deploying two applications to see if the server only honored this when multiple applications were running; however, that just gave me an error that seemed to be because they were both mounted at root (/).
I think what's happening is that your "external" descriptor is overriding your "internal" one. Your internal one is what you have above, but the 'torquebox deploy' command generates an external descriptor that tries to deploy your app at the root by default. Try running 'torquebox deploy /path/to/your/app --context-path=/my_application'
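If you'd rather keep the context in a descriptor instead of passing --context-path: the external descriptor that torquebox deploy writes is a *-knob.yml file, and setting web: context there should have the same effect. A rough sketch from memory of the TorqueBox 2.x docs (verify against the descriptor generated in your deployments directory):
application:
  root: /path/to/your/app
web:
  context: /my_application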

Why are certain aspects of my Rails app throwing an exception publicly when I deploy to Heroku?

I am working my way through Michael Hartl's Ruby on Rails Tutorial (on Mac OS X 10.7.2 / Ruby 1.9.2 / Rails 3.1.1) and just finished Chapter 2, which concludes with deploying a demo Twitter app to Heroku.
Everything appears to work properly when I run the app locally, and I was able to deploy the app to Heroku in some capacity, because it is available here: http://rich-twitter-baby.heroku.com/
However, what I can't figure out is why the /users and /microposts pages aren't showing up publicly (with lists of users and microposts respectively) as they do locally. I migrated my database to Heroku and pushed the data up there, and everything seemed to work properly, but I get an error message when I try to view those pages publicly.
I've tried running "heroku console" but get this error:
Unable to attach to a dyno to open a console session.
Your application may have crashed.
Check the output of "heroku ps" and "heroku logs" for more information.
And the logs say error H13, while the ps looks like this:
Process State Command
------------ ------------------ ------------------------------
web.1 idle for 1h thin -p $PORT -e $RACK_ENV -R $HER..
Let me know if anyone has any ideas or if more info would help.
Thanks!
I would contact Heroku support on this. Dynos can crash and become 'zombified', which means they just sit there idle.
Normally these will clear themselves out within a few hours, but it shouldn't happen often, if at all.
Doing a new deploy will also normally restart everything back to a clean state.
If it's consistently happening, have you tried spinning up the application locally in production mode to try to reproduce the problem:
rails server -e production
or adding something like the Airbrake add-on to your app to capture the error?
Check your log using
heroku logs
at the command line of the development system you used to push to Heroku.
Post the log here if you can't figure it out from that.
I contacted Heroku Support about this issue and it turns out that the answer had to do with which stack my app was being deployed to. I did their workaround and everything is now up and running. Here's the full info from them:
It looks like the problem is that you're using Rails 3.1 and our Bamboo stack; we have full asset pipeline support on our Cedar stack [1]. Since this is just a demo app, an easy workaround is to precompile locally and commit the files:
rake assets:precompile
git add -A
git commit -m "precompiling assets"
git push heroku master
To get full asset pipeline support, you need to create your app on the Cedar stack and then repeat the process you did to get your Bamboo app to work.
[1]: http://devcenter.heroku.com/articles/rails31_heroku_cedar
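(For reference, creating an app on Cedar at the time was done with something like heroku create --stack cedar; stack names and flags have changed since, so check heroku help create on a current CLI.)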
