Rake spec fails with: command terminated with exit code 137 - ruby-on-rails

I have a Rails app running in Kubernetes and I'm trying to run bin/rake spec
via exec in my running pod:
kubectl exec backend-xxxxx -- sh
bin/rake -T # lists my tasks (and works)
Other commands like rake db:migrate are working.
Now I try to run my specs, but whatever I try (the whole suite, models only, or a single file) my pod crashes:
bin/rake spec
bin/rake spec:models
bin/rake spec spec/models/user_spec.rb
Output of rake task:
Running via Spring preloader in process 119
/usr/local/bin/ruby -I/usr/local/bundle/ruby/2.7.0/gems/rspec-core-3.10.1/lib:/usr/local/bundle/ruby/2.7.0/gems/rspec-support-3.10.2/lib /usr/local/bundle/ruby/2.7.0/gems/rspec-core-3.10.1/exe/rspec --pattern spec/\*\*\{,/\*/\*\*\}/\*_spec.rb
command terminated with exit code 137
This is my complete log:
Puma starting in single mode...
* Version 3.12.6 (ruby 2.7.2-p137), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
Rails Version & environment: 6.1.4.1 - development
Elastic URL credentials provided
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
Exiting with code 137 # <<<<<<----- this line appears when running rake spec
I've no idea why that happens. My Minikube has 12 GB of assigned RAM (minikube config set memory 12288).
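Exit code 137 is 128 + 9, i.e. the process received SIGKILL; in Kubernetes that usually means the container hit its memory limit and was OOM-killed, which is separate from how much RAM the Minikube node itself has. A minimal sketch of how to check, reusing the placeholder pod name from above:
kubectl describe pod backend-xxxxx | grep -A 5 'Last State'   # look for Reason: OOMKilled
kubectl get pod backend-xxxxx -o jsonpath='{.spec.containers[*].resources}'   # shows requests/limits, if any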

Related

AWS elastic beanstalk is not getting the environment variables

I'm trying to run a Rails 6 app on AWS Elastic Beanstalk, but I get the following from the puma log (it repeats every few seconds):
[21776] + Gemfile in context: /var/app/current/Gemfile
[21776] Early termination of worker
The version numbers:
Rails 6.0.3.3
puma 4.3.5
ElasticBeanstalk Ruby 2.7 running on 64bit Amazon Linux 2/3.1.1
ruby 2.7.1p83
The server is unresponsive from outside the instance, and there's nothing in log/production.log.
Running in production mode on a dev machine there are no errors, and the database is reachable (no migration failure).
Running bundle exec puma -p 3000 -e production on the AWS instance I get
Puma starting in single mode...
Version 4.3.5 (ruby 2.7.1-p83), codename: Mysterious Traveller
Min threads: 5, max threads: 5
Environment: production
Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
so there's no obvious error that may cause the worker to halt.
How can I find out what's causing the workers to fail?
Edit 1:
I ran Rails console on the instance and found that the environment variables are missing - e.g. the production database user/pass/host. Once I hardcoded them I could connect to the database.
I suspect the absence of other environment variables is making the app crash.
A user on the AWS forum had the answer.
Setting this in my Gemfile
gem "nio4r", "= 2.5.2"
fixed the issue.
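For completeness, two hedged follow-ups that are not part of the original answer: the pin has to make it into Gemfile.lock before redeploying, and if environment properties really are missing they can be set through the EB CLI (the variable names below are hypothetical placeholders):
bundle update nio4r
eb setenv DATABASE_HOST=example-host DATABASE_USERNAME=example-user DATABASE_PASSWORD=example-pass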

AppEngine Flexible Ruby environment, application startup error: /usr/bin/env: 'ruby2.5': No such file or directory

I'm trying to deploy an API-only Rails 5 application to AppEngine Flex w/ the standard Ruby runtime, and I'm getting the following error at the very end:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
/usr/bin/env: 'ruby2.5': No such file or directory
I'm specifying ruby '2.5.1' in my Gemfile and I've added an explicit .ruby-version file to the root of my project set to 2.5.1 as well.
I have no other debugging information available to me in the logs, no other fanciness. My entrypoint command is:
bundle exec rails server Puma -p $PORT
I can provide more details if needed; not sure what else might be relevant. Any pointers? As far as I can tell, nothing on my side is asking for that specific version of Ruby at execution time.
Thanks!
EDIT: Here's my app.yaml file
entrypoint: bundle exec rails server Puma -p $PORT
env: flex
runtime: ruby
UPDATE:
I can verify that I'm having similar problems when trying to exec rake tasks like db:migrate:
--------- EXECUTE COMMAND ----------
bundle exec rake db:migrate
/usr/bin/env: 'ruby2.5': No such file or directory
ERROR
ERROR: build step 0 "gcr.io/google-appengine/exec-wrapper:latest" failed: exit status 127
--------------------------------------------------------------------------------------------------------------------------------------------------------
OK, I now see what has happened after debugging the Docker image locally. Because I was on Ubuntu and had used the system Ruby to install gems, it had embedded /usr/bin/env ruby2.5 into every executable script that was bundled into my app. I deleted all gems and switched to rbenv for managing the Ruby version, which mitigated this odd behavior between Ubuntu's Ruby and my app.
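A minimal sketch of that kind of cleanup (the exact versions and the binstub regeneration step are assumptions about what "deleted all gems and switched to rbenv" involved):
# pick a Ruby via rbenv instead of Ubuntu's system ruby2.5
rbenv install 2.5.1
rbenv local 2.5.1
gem install bundler
# reinstall gems and regenerate binstubs so nothing keeps the /usr/bin/env ruby2.5 shebang
rm -rf vendor/bundle
bundle install
bundle exec rails app:update:bin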

OpsWorks deploy failed after bundle exec rake assets:precompile

I'm trying to deploy an application using AWS OpsWorks with Chef. I have run the deploy before and it never failed, but this time I got the following message in the log. When I run the command bundle exec rake assets:precompile locally, everything works fine. What can it be?
[2018-03-01T18:50:54+00:00] INFO: Processing execute[cd /srv/www/my_project/releases/20180301185045 && RAILS_ENV=production bundle exec rake assets:precompile] action run (/opt/aws/opsworks/releases/20160504095744_3437-20160504095744/vendor/bundle/ruby/2.0.0/gems/chef-11.10.4/lib/chef/provider/deploy.rb line 63)
Error executing action 'run' on resource 'execute[cd /srv/www/my_project/releases/20180301185045 && RAILS_ENV=production bundle exec rake assets:precompile]'
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '137'
---- Begin output of cd /srv/www/my_project/releases/20180301185045 && RAILS_ENV=production bundle exec rake assets:precompile ----
STDOUT:
Agreed that OpsWorks will always run into an out-of-memory state during deployment, especially on small instances (micro/small). SSH to the instance and create swap memory (e.g. 2 GB / 4 GB); it will help reduce the issue a lot.
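A minimal sketch of adding a 2 GB swap file on such an instance (size and path are just examples):
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# optionally persist it across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab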
Sometimes the instances just run out of memory. Simply stop and start the instance where you are doing the deploy.

Running Standard Notes on self-hosted Standard File Server with Ruby Implementation

I'm trying to run Standard Notes on my self-hosted Ubuntu 16.04 server. I've followed the basic instructions given here on GitHub, which is to say: install Ruby 2.2+, Rails 5, and a MySQL 5.6+ database. All done, and running.
After that, in a subdirectory, I pulled the Standard File Server git clone and set up the .env file with this content:
RAILS_ENV=production
SECRET_KEY_BASE=use "bundle exec rake secret"
RAILS_SERVE_STATIC_FILES=true
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=db_name
DB_USERNAME=db_user
DB_PASSWORD="db_password"
SALT_PSEUDO_NONCE=use "bundle exec rake secret"
Initialized the project with:
bundle install
bower install
rails db:create db:migrate
Again, all fine. When I start the server with rails s, this is the output:
/usr/local/rvm/gems/ruby-2.4.1/gems/activesupport-5.0.1/lib/active_support/xml_mini.rb:51: warning: constant ::Fixnum is deprecated
/usr/local/rvm/gems/ruby-2.4.1/gems/activesupport-5.0.1/lib/active_support/xml_mini.rb:52: warning: constant ::Bignum is deprecated
=> Booting Puma
=> Rails 5.0.1 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
/usr/local/rvm/gems/ruby-2.4.1/gems/activesupport-5.0.1/lib/active_support/core_ext/numeric/conversions.rb:138: warning: constant ::Fixnum is deprecated
Puma starting in single mode...
* Version 3.10.0 (ruby 2.4.1-p111), codename: Russell's Teapot
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
I think the two warnings can safely be ignored. The thing that doesn't make sense is that the .env file is not read, as it's running in dev instead of prod, and on port 3000 instead of 3306. Any idea why?
typo at:
AILS_ENV=production
should be:
RAILS_ENV=production
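As an aside (not part of the original answer), the environment can also be forced directly on the command line as a sanity check; note that the app still listens on 3000 regardless, since 3306 is the MySQL port set via DB_PORT:
RAILS_ENV=production bundle exec rails s -p 3000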

Opsworks - Chef: Rake in opsworks chef hook running sidekiq

I'm using a before_restart.rb hook in OpsWorks and I have a problem when it runs "rake i18n:js:export". I don't know why Sidekiq is running with this rake task. It fails only in the setup stage of OpsWorks; when I deploy, the error disappears.
[2015-01-09T18:52:17+00:00] INFO: deploy[/srv/www/XXX] queueing checkdeploy hook /srv/www/XXX/releases/20150109185157/deploy/before_restart.rb
[2015-01-09T18:52:17+00:00] INFO: Processing execute[rake i18n:js:export] action run (/srv/www/XXXX/releases/20150109185157/deploy/before_restart.rb line 3)
Error executing action `run` on resource 'execute[rake i18n:js:export]'
Mixlib::ShellOut::ShellCommandFailed
Expected process to exit with [0], but received '1'
---- Begin output of bundle exec rake i18n:js:export ----
STDOUT: 2015-01-09T18:52:30Z 1808 TID-92c6g INFO: Sidekiq client with redis options {}
STDERR: /home/deploy/.bundler/XXXX/ruby/2.1.0/gems/redis-3.1.0/lib/redis/client.rb:309:in `rescue in establish_connection': Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED) (Redis::CannotConnectError)
The Sidekiq client (NOT the Sidekiq server) is running because it is configured in an initializer. When rake runs, it loads the entire Rails app environment. So either allow for an environment variable that disables the Sidekiq client in config/initializers/sidekiq.rb, or make sure redis-server is properly configured on the instance you're running this on.
unless ENV['DISABLE_SIDEKIQ']
  # Sidekiq.configure...
end
DISABLE_SIDEKIQ=true bundle exec rake do:stuff
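For reference, a fuller sketch of what that guarded initializer might look like (the Redis URL fallback is an assumption, not from the original answer):
# config/initializers/sidekiq.rb
unless ENV['DISABLE_SIDEKIQ']
  redis_config = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }
  Sidekiq.configure_server { |config| config.redis = redis_config }
  Sidekiq.configure_client { |config| config.redis = redis_config }
end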
