So I have my Rails backend running on port 3001 and my React frontend running on port 3000.
I want to set up a simple rake start task to start both.
To do so, I use the foreman gem, which works perfectly when I run: foreman start -f Procfile.dev.
However, when I run my task (rake start), I get the following error:
Running via Spring preloader in process 36257
15:56:57 web.1 | started with pid 36258
15:56:57 api.1 | started with pid 36259
15:56:57 api.1 | /usr/local/opt/rbenv/versions/2.3.4/lib/ruby/gems/2.3.0/gems/foreman-0.64.0/bin/foreman-runner: line 41: exec: PORT=3001: not found
15:56:57 api.1 | exited with code 127
15:56:57 system | sending SIGTERM to all processes
15:56:57 web.1 | terminated by SIGTERM
Here is my start.rake file:
namespace :start do
  desc 'Start dev server'
  task :development do
    exec 'foreman start -f Procfile.dev'
  end

  desc 'Start production server'
  task :production do
    exec 'NPM_CONFIG_PRODUCTION=true npm run postinstall && foreman start'
  end
end

task :start => 'start:development'
and my Procfile.dev file:
web: cd client && PORT=3000 npm start
api: PORT=3001 && bundle exec rails s
Any idea?
I faced the same issue. I am not sure why, but when foreman is run from rake, it is unable to handle multiple commands on the same line, e.g.
web: cd client && PORT=3000 npm start
To solve the issue, I changed my Procfile.dev to
web: npm start --prefix client
api: bundle exec rails s -p 3001
and in my package.json, I changed
"scripts": {
"start": "react-scripts start",
...
}
to
"scripts": {
"start": "PORT=3000 react-scripts start",
...
}
This allows you to specify the ports for both the React and Rails servers, and it works fine with both
foreman start -f Procfile.dev and rake start.
I don't know Foreman, but every morning I start my dev environment with teamocil. Here is an example file.
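A minimal sketch of such a layout, with the session name and paths assumed to match the alias below (teamocil reads layouts from ~/.teamocil by default), so adjust the commands to your own app:
# ~/.teamocil/schnell.yml (hypothetical; the name matches the alias below)
windows:
  - name: servers
    root: /home/manuel/chipotle/schnell
    layout: even-horizontal
    panes:
      - bundle exec rails s
      - npm start --prefix client
  - name: db
    root: /home/manuel/chipotle/schnell
    panes:
      - bundle exec rails dbconsole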
Add an alias to your .bash_alias file:
alias s2="cd /home/manuel/chipotle/schnell && tmux new-session -d 'teamocil schnell' \; attach"
so you just need to type "s2" in the console and everything, including the database prompt, is up and ready.
I know I'm a little bit late, but essentially you shouldn't chain multiple commands on one line; always try to avoid that. You can change your syntax to a single command with flags, as follows:
web: npm start --port 3000 --prefix client
api: bundle exec rails s -p 3001
I hope this helps anyone facing the same issue.
Happy Coding
Related
I can log into the console from one of the pods (on Kubernetes) and run this command:
RAILS_ENV=production bin/delayed_job start
The jobs run correctly when I do that. However, when the pods are deleted or restarted, the jobs stop running.
I also tried adding the command above in an initializer file (e.g. config/initializers/delayed_jobs_runner.rb), but I get a recursive loop when starting the app.
Another thing I tried is creating a new file called my-jobs.yaml with this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
      - name: job
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        args: ["RAILS_ENV=production bundle exec rake jobs:work"]
      restartPolicy: Never
  backoffLimit: 4
I then do kubectl apply -f my-jobs.yaml, but the jobs aren't running.
Any idea how to run delayed_job correctly in Kubernetes?
EDIT: Here's my Dockerfile:
FROM gcr.io/google_appengine/ruby
# Install 2.5.1 if not already preinstalled by the base image
RUN cd /rbenv/plugins/ruby-build && \
    git pull && \
    rbenv install -s 2.5.1 && \
    rbenv global 2.5.1 && \
    gem install -q --no-rdoc --no-ri bundler
# --version 1.11.2
ENV RBENV_VERSION 2.5.1
# Copy the application files.
COPY . /app/
# Install required gems.
RUN bundle install --deployment && rbenv rehash
# Set environment variables.
ENV RACK_ENV=production \
    RAILS_ENV=production \
    RAILS_SERVE_STATIC_FILES=true
# Run asset pipeline.
RUN bundle exec rake assets:precompile
CMD ["setup.sh"]
# Reset entrypoint to override base image.
ENTRYPOINT ["/bin/bash"]
################### setup.sh ############################
cd /app && RAILS_ENV=production bundle exec script/delayed_job -n 2 start
bundle exec foreman start --formation "$FORMATION"
#########################################################
Running multiple processes in one Docker container is problematic, because you cannot easily observe the lifetime of a particular process: every container needs one "main" process, and when that process exits, the container exits too.
Looking at the delayed_job docs on GitHub (https://github.com/collectiveidea/delayed_job#user-content-running-jobs), I would strongly suggest changing your start command slightly so that the worker runs in the foreground. Right now you are starting the Kubernetes Job with a daemonized process, so the Job ends immediately: a Docker container's lifetime is tied directly to its main foreground process, so when you only start a background process, the main process (and with it the container) exits right away.
Change your command to:
RAILS_ENV=production script/delayed_job run
This starts the worker in the foreground, so your Kubernetes Job won't exit. Please note also that Kubernetes Jobs are not intended for such open-ended tasks (a Job should have a start and an end), so I would suggest using a ReplicaSet for that.
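For illustration, here is a minimal sketch of that approach using a Deployment (which manages a ReplicaSet under the hood). The image and the foreground command come from the question above; the names, replica count, and working directory are assumptions to adapt:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: delayed-job-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: delayed-job-worker
  template:
    metadata:
      labels:
        app: delayed-job-worker
    spec:
      containers:
      - name: worker
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        # "run" keeps the worker in the foreground, so the container stays alive
        args: ["cd /app && RAILS_ENV=production bundle exec script/delayed_job run"]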
Now I am doing this:
# Remember the PID of this (main) script so the watcher can signal its process group later
this_pid=$$
# Background watcher: while any delayed_job process is still alive, sleep; once they are all gone, kill this process group (and with it the container)
(while [[ $(ps -ef | grep delayed_job | grep -v -e grep -e tail | head -c1 | wc -c) -ne 0 ]]; do sleep 10; done; kill -- -$this_pid) &
after starting multiple workers. After this I tail -f the logs so that they go to the standard output of the container. I am quite crazy, so I am also running logrotate to keep the logs in check. The Rails environment is pretty big anyway, so the container needs to be pretty big, and since we need to be able to run many jobs, I don't want many pods doing so. This seems to be efficient, and the container will stop and restart if the workers die for some reason.
I'm struggling to start Sidekiq remotely with a custom Capistrano v2 task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm the environment (rbenv & bundler) is loaded correctly, as the first run command shows. But unexpectedly, the Sidekiq task starts and then vanishes into oblivion: 1) tmp/pids/sidekiq.pid gets initialized but the process does not exist, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch, the process runs perfectly, but of course the Capistrano deploy task never ends, and when I press CTRL+C Sidekiq shuts down.
If I just SSH into the remote host and execute the command (replacing current_path, obviously), it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" as far as I can tell to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12
Solved it by adding && sleep 1 at the end as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html.
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
Thanks @user3309314 for pointing me in the correct direction.
If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
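As a rough sketch of the Foreman route (the app name, user, and log path below are assumptions based on the question; the Sidekiq wiki linked above has complete systemd examples): add a worker entry to the Procfile and export it once on the server, so the init system starts, stops, and respawns Sidekiq for you.
# Procfile entry (assumed)
worker: bundle exec sidekiq --environment production --config config/sidekiq.yml

# One-time export on the server (needs root to write to /etc/init); use "systemd" instead of "upstart" on newer Ubuntu releases
bundle exec foreman export upstart /etc/init -a myapp -u deploy -l /home/deploy/applications/xxx/shared/log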
To avoid issues in development with Foreman, I need to wrap the Puma server in a SOCKS proxy for Quotaguard, to get a static IP in production and production only.
Using a link to this article that I found from this question, I implemented the following code in bin/web, called from the Procfile.
#!/bin/sh
if [ "$RACK_ENV" == "production" ]; then
  echo Using qgsocksify to wrap Puma
  bin/qgsocksify bundle exec puma -C config/puma.rb
else
  echo Using straight Puma
  bundle exec puma -C config/puma.rb
fi
Although RACK_ENV is set to production, the unwrapped "straight Puma" server launches. Testing this locally works, but it fails on Heroku.
What's the syntax error that would cause it to fail on Heroku?
Apparently the sh on Heroku doesn't accept == for equality, while the one on my Mac does, which is why it works in development. Googling revealed that standard sh uses = for equality. In this case, however, I just changed #!/bin/sh to #!/bin/bash, as Heroku supports bash as well.
#!/bin/bash
if [ "$RACK_ENV" == "production" ]; then
  echo Using qgsocksify to wrap Puma
  bin/qgsocksify bundle exec puma -C config/puma.rb
else
  echo Using straight Puma
  bundle exec puma -C config/puma.rb
fi
I used foreman to export my Procfile to an upstart task.
Procfile:
web: bundle exec rails server
websocket: bundle exec rails runner websocket-server/em_websocket.rb
One of the upstart jobs (they are very similar and fail with the same error):
start on starting app-web
stop on stopping app-web
respawn
env PORT=5000
setuid app
chdir /var/www/app
exec bundle exec rails server
And the error (I got it via dmesg):
[35207.676836] init: Failed to spawn app-websocket-1 main process: unable to execute: No such file or directory
[35207.679577] init: Failed to spawn app-web-1 main process: unable to execute: No such file or directory
When I switch to the app user, I am actually able to run bundle exec rails server from the given directory.
Is there any way to pin down the error a little more? I didn't find any related logs in /var/log/upstart/.
If you installed Ruby via RVM, it may be that the upstart job runs before the RVM script has been loaded. Did you try using an absolute reference to the bundle binary? Use
whereis bundle
to obtain it.
RVM was apparently not initialized or is not available in the upstart environment. Luckily RVM has wrappers for this case: https://rvm.io/integration/init-d
You can run bundle in another way.
Instead of:
web: bundle exec rails server
You need to run:
web: bash -c '~/.rvm/bin/rvm default do bundle exec rails server'
Note: ~/.rvm/bin/rvm can be replaced with the actual path of the RVM installation on your server.
Upstart commands require sudo privileges for the underlying user. Have you considered defining some form of passwordless sudo privileges for your app user to run the Rails application service restarts?
e.g. in Ubuntu, creating a new sudoer definition under /etc/sudoers.d/?
username ALL=(ALL) NOPASSWD:ALL
Once defined, 'username' should be able to run the Rails app via sudo service 'appname' stop|start|restart.
Here is an explanation for providing the sudo privileges to the user. My Capistrano deployment contains a Foreman export definition as below:
namespace :foreman do
  desc 'Export the Procfile to Ubuntu upstart scripts'
  task :export do
    on roles(:app) do |host|
      log_path = shared_path.join('log')
      within release_path do
        execute :mv, ".env .envbkup"
        execute :echo, "'RACK_ENV=#{fetch(:deploy_env)}' >> .env"
        execute :echo, "'RAILS_ENV=#{fetch(:deploy_env)}' >> .env"
        execute :bundle, "exec foreman export upstart #{shared_path}/init -a #{fetch(:application)} -u #{host.user} -l #{log_path}"
        execute :rm, ".env"
        execute :mv, ".envbkup .env"
        as :root do
          execute :cp, "#{shared_path}/init/* /etc/init/"
        end
      end
    end
  end
end
This capistrano definition is invoked from the deploy_env.rb 'after' action.
I can successfully run a rails application on my server using Puma as the application server. I start Puma like this:
bundle exec puma -e production -b unix:///var/run/my_app.sock
That is a Unix command that starts Puma in production mode, bound to the specified socket. However, if I need to reboot my VPS, I'll need to go through all of my apps and run that command over and over to start the Puma server for each one.
What's the best way to go about doing this? I'm a bit of an Ubuntu noob, but would the best way be this:
Every time I install a new Rails application on my VPS, I run
sudo vi /etc/rc.local
and append the command to rc.local? So that rc.local looks like this after a while:
#!/bin/sh -e
#
# rc.local
#
bundle exec puma -e production -b unix:///var/run/app_1.sock
bundle exec puma -e production -b unix:///var/run/app_2.sock
bundle exec puma -e production -b unix:///var/run/app_3.sock
bundle exec puma -e production -b unix:///var/run/app_4.sock
bundle exec puma -e production -b unix:///var/run/app_5.sock
exit 0
Ubuntu uses upstart to manage services. Puma actually provides upstart scripts that make it incredibly easy to do what you want. Have a look at the scripts in their repo:
https://github.com/puma/puma/tree/master/tools/jungle/upstart
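For reference, a heavily trimmed sketch of what such an upstart job looks like (the file name, user, and app directory below are hypothetical; the scripts in the Puma repo also handle rbenv/rvm setup and managing several apps from one master job):
# /etc/init/puma-app_1.conf (hypothetical)
description "Puma server for app_1"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid deploy
chdir /var/www/app_1
exec bundle exec puma -e production -b unix:///var/run/app_1.sock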
Ubuntu makes this very difficult. The simplest solution I've seen so far is with OpenBSD. To make sure your apps start on reboot, add this to your /etc/rc.conf.local:
pkg_scripts="myapp myapp2 myapp3"
Each app would need a startup script like this (/etc/rc.d/myapp):
#!/bin/sh
# OPENBSD PUMA STARTUP SCRIPT
# Remember to `chmod +x` this file
# http://www.openbsd.org/cgi-bin/cvsweb/ports/infrastructure/templates/rc.template?rev=1.5
puma="/usr/local/bin/puma"
pumactl="/usr/local/bin/pumactl"
puma_state="-S /home/myapp/tmp/puma.state"
puma_config="-C /home/myapp/config/puma.rb"
. /etc/rc.d/rc.subr
rc_start() {
  ${rcexec} "${pumactl} ${puma_state} start ${puma_config}"
}

rc_reload() {
  ${rcexec} "${pumactl} ${puma_state} restart ${puma_config}"
}

rc_stop() {
  ${rcexec} "${pumactl} ${puma_state} stop"
}

rc_check() {
  ${rcexec} "${pumactl} ${puma_state} status"
}

rc_cmd $1
Then you can do things like:
% /etc/rc.d/myapp start
% /etc/rc.d/myapp reload
% /etc/rc.d/myapp stop
% /etc/rc.d/myapp status