I am using a project based on Bullet Train. Bullet Train is based on Ruby on Rails.
Its Procfile.dev file looks like this:
web: bin/rails server -p 3000
worker: bundle exec sidekiq -t 25
js: yarn build --watch
light-js: yarn light:build --watch
light-css: yarn light:build:css --watch
light-mailer-css: yarn light:build:mailer:css --watch
So when I run bin/dev, it starts OK. Whenever I need to make a change, I hit Ctrl+C, but instead of just shutting down, it shuts down and starts again; only after a second Ctrl+C does the script stop.
Any ideas on what could be causing the restart?
I'm trying to get a Rails 7 app going with Bootstrap. At the end of the Dockerfile, if I change
CMD ["bin/rails", "s", "-b", "0.0.0.0"]
to
CMD ["./bin/dev"]
or
CMD ["bin/dev"]
so that foreman spins up the css, js, and web processes, I get:
[WARNING] Could not load command "rails/commands/server/server_command"
If I run the container with bin/rails, it loads and the Bootstrap CSS is there, but the JavaScript popovers are absent. Help?
The Procfile.dev is
web: bin/rails server -p 3000 -b 0.0.0.0
css: yarn build:css --watch
js: yarn build --watch
I'm trying to figure out how Docker handles commands presented to it.
For example, if I run this, the JS app starts fine:
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; stunnel; nginx;"
However, if I run the commands in a different order,
"stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app does not run.
What behavior is Docker looking for to continue to the next command?
Similarly, if I use this in my Dockerfile:
ENTRYPOINT stunnel && nginx -g 'daemon off;' && bash
and then do a
docker run ...name etc.. /bin/bash -c "cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup;"
The JS app never runs.
With either && or ; between commands, the shell executes them in order: the first command must finish before the subsequent command runs. (With &&, the next command also runs only if the previous one succeeded.)
BUT calling nginx -g 'daemon off;' makes nginx run in the foreground, so it never finishes, and the commands that follow won't run.
However, I am still not sure why stunnel; nginx; cd /video_recordings/voddirectory; pm2 start app.js; pm2 startup; does not work, since nginx's normal behaviour is to daemonize and go to the background.
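To make the ordering rules concrete, here is a minimal sketch (the echo strings are placeholders) showing how ; and && differ, and how a foreground process blocks everything after it:
# with ';' each command runs after the previous one exits, regardless of status
/bin/bash -c "false; echo 'this still runs'"
# with '&&' the chain stops at the first failing command
/bin/bash -c "false && echo 'this never runs'"
# a command that stays in the foreground never exits, so nothing after it runs
/bin/bash -c "nginx -g 'daemon off;' && echo 'never reached'"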
I'm building Docker containers for a simple Rails/Postgres app. The Rails app has started and is listening on port 3000. I have exposed port 3000 for the Rails container. However, http://localhost:3000 is responding with ERR_EMPTY_RESPONSE. I assumed that the Rails container should be accessible on port 3000. Is there something else I need to do?
greg@MemeMachine ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eed45208bbda realestate_web "entrypoint.sh bash …" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp realestate_web_1
a9cb8cae310e postgres "docker-entrypoint.s…" About a minute ago Up About a minute 5432/tcp realestate_db_1
greg@MemeMachine ~ $ docker logs realestate_web_1
=> Booting Puma
=> Rails 6.0.2.2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.4 (ruby 2.6.3-p62), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
greg@MemeMachine ~ $ curl http://localhost:3000
curl: (52) Empty reply from server
Dockerfile
FROM ruby:2.6.3
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN gem install bundler -v 2.0.2
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    env_file:
      - '.env'
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    env_file:
      - '.env'
entrypoint.sh
#!/bin/bash
# Compile the assets
bundle exec rake assets:precompile
# Start the server
bundle exec rails server
When you provide both an ENTRYPOINT and a CMD, Docker combines them together into a single command. If you just docker run your image as it's built, the entrypoint script gets passed the command part rails server -b 0.0.0.0 as command-line parameters; but it ignores them and just launches the Rails server itself (in this case, without the important -b 0.0.0.0 option).
The usual answer to this is to not try to run the main process directly in the entrypoint, but instead to end the script with exec "$@" to run the command from the additional arguments.
In this case, there are two additional bits. The command: in the docker-compose.yml file indicates that there's some additional setup that needs to be done in the entrypoint (you should not need to override the image's command to run the same server). You also need the additional environment setup that bundle exec provides. Moving this all into the entrypoint script, you get
#!/bin/sh
# ^^^ this script only uses POSIX shell features
# Compile the assets
bundle exec rake assets:precompile
# Clean a stale pid file
rm -f tmp/pids/server.pid
# Run the main container process, inside the Bundler context
exec bundle exec "$@"
Your Dockerfile can stay as it is; you can remove the duplicate command: from the docker-compose.yml file.
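With this pattern, the ENTRYPOINT ["entrypoint.sh"] and CMD ["rails", "server", "-b", "0.0.0.0"] from the Dockerfile combine, so the container effectively starts with
entrypoint.sh rails server -b 0.0.0.0
and the script's final exec "$@" line expands to
exec bundle exec rails server -b 0.0.0.0
which makes the Rails server the container's main process.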
* Listening on tcp://localhost:3000
This log line makes me think Rails is binding only to the localhost IP. This means that Rails will only listen to requests from within the container. To make Rails bind to all IPs, and listen to requests from outside the container, you use the rails server -b parameter. The last line in your entrypoint.sh should change to:
bundle exec rails server -b 0.0.0.0
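After rebuilding the image, the Puma log should show the server binding to all interfaces instead of localhost, and the curl check from the host should get a real response (a sketch of the expected check, using the container name from the docker ps output above):
docker logs realestate_web_1
# => * Listening on tcp://0.0.0.0:3000
curl http://localhost:3000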
I'm deploying a Rails application using Google App Engine, and it takes a lot of time to reinstall libraries like rbenv, Ruby, ...
Is there any way to prevent this? I want to install only the new libraries.
Yeah... we're actively working on making this faster. In the interim, here's how you can speed things up. At the end of the day, all we're really doing with App Engine Flex is creating a Dockerfile for you, and then doing a docker build. With Ruby, we try to play some fancy tricks like letting you tell us what version of rbenv or Ruby you want to run. If you're fine with hard-coding all of that, you can just use our base image.
To do that, first open the terminal and cd into the dir with your code. Then run:
gcloud beta app gen-config --custom
Follow along with the prompts. This is going to create a Dockerfile in your CWD. Go ahead and edit that file, and check out what it's doing. In the simplest form, you can delete most of it and end up with something like this:
FROM gcr.io/google_appengine/ruby
COPY . /app/
RUN bundle install --deployment && rbenv rehash;
ENV RACK_ENV=production \
    RAILS_ENV=production \
    RAILS_SERVE_STATIC_FILES=true
RUN if test -d app/assets -a -f config/application.rb; then \
      bundle exec rake assets:precompile; \
    fi
ENTRYPOINT []
CMD bundle exec rackup -p $PORT
Most of the heavy lifting is already done in gcr.io/google_appengine/ruby, so you can essentially just add your code, perform any gem installs you need, and then set the entrypoint. You could also fork our base Docker image and create your own. After you have this file, you should do a build to test it:
docker build -t myapp .
Now go ahead and run it, just to make sure:
docker run -it -p 8080:8080 myapp
Visit http://localhost:8080 to make sure it's all looking good. Now when you run gcloud app deploy the next time, we're going to use this Dockerfile. It should be much, much faster.
Hope this helps!
I'm looking at http://progrium.viewdocs.io/dokku/process-management/ and trying to work out how to get several services running from a single project.
I have a repo with a Dockerfile:
FROM wjdp/flatcar
ADD . app
RUN /app/bin/install.sh
EXPOSE 8000
CMD /app/bin/run.sh
run.sh starts up a single threaded web server. This works fine but I'd like to run several services.
I tried making a Procfile with a single line:
web: /app/bin/run.sh
and removing the CMD line from the Dockerfile. This doesn't work: without a command to run, the Docker container doesn't stay alive, and dokku gets sad:
remote: Error response from daemon: Cannot kill container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e: Container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e is not running
remote: Error: failed to kill containers: [ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e]
Your best bet is probably to use supervisord. Supervisord is a very lightweight process manager.
You would launch supervisord with your CMD, and then put all the processes you want to launch into the supervisord.conf file.
For more information, look at the Docker documentation about this: https://docs.docker.com/articles/using_supervisord/. The most relevant excerpts (taken from that page, but reworded):
You would put this into your Dockerfile:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
And the supervisord.conf file would contain something like this:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Obviously, you will also need to make sure that supervisord is installed in your image to begin with. It's part of most distros, so you can probably use yum or apt-get to install it.
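For example, on a Debian-based image the relevant Dockerfile lines might look like this (a sketch assuming an apt-based distro; swap in yum for RHEL-family images):
# install supervisord from the distro package repos
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]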