I'm trying to get a Rails 7 app going with Bootstrap. At the end of the Dockerfile, if I change
CMD ["bin/rails", "s", "-b", "0.0.0.0"]
to
CMD ["./bin/dev"]
or
CMD ["bin/dev"]
so that foreman spins up the css, js, and web processes, I get
[WARNING] Could not load command "rails/commands/server/server_command"
If I run the container with bin/rails, it loads and the Bootstrap CSS is there, but the JavaScript popovers are absent. Help?
The Procfile.dev is
web: bin/rails server -p 3000 -b 0.0.0.0
css: yarn build:css --watch
js: yarn build --watch
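For context, the bin/dev script that the Rails 7 css/js bundling gems generate is normally just a thin wrapper that installs foreman if needed and hands off to it. A rough sketch (your generated script may differ slightly):
#!/usr/bin/env sh
# Install foreman on first use if it isn't already present
if ! gem list foreman -i --silent; then
  echo "Installing foreman..."
  gem install foreman
fi
# Run every process listed in Procfile.dev
exec foreman start -f Procfile.dev "$@"
So for CMD ["bin/dev"] to work as the container command, the image needs everything foreman will launch: the bundled gems, yarn, and the node_modules used by the build scripts.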
Related
I am using a project based on Bullet Train, which is itself based on Ruby on Rails.
Its Procfile.dev file looks like this:
web: bin/rails server -p 3000
worker: bundle exec sidekiq -t 25
js: yarn build --watch
light-js: yarn light:build --watch
light-css: yarn light:build:css --watch
light-mailer-css: yarn light:build:mailer:css --watch
So when I run bin/dev, it starts OK. Whenever I need to make a change, I hit Ctrl+C, but instead of just shutting down, it shuts down and starts again; only after a second Ctrl+C does the script stop.
Any ideas on what could be causing the restart?
I am unable to run rails g commands in the docker CLI.
It is throwing the following error, even though everything is already installed and running.
Could not find rake-12.3.2 in any of the sources
Run `bundle install` to install missing gems.
rails db:create and rails db:migrate are fine.
I have tried running the commands from inside the docker CLI and via docker-compose run, and they throw the same error.
My Dockerfile, named Dockerfile.dev, is as follows:
# syntax=docker/dockerfile:1
FROM ruby:2.6.2-stretch
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN gem install bundler && bundle install
RUN rails db:create db:migrate
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose file is as follows:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
web:
build:
context: .
dockerfile: Dockerfile.dev
image: project-x-image-annotator:v1
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- db
Another development is that I have two copies of rake, but only one rails:
xxxx@yyyy project-x % docker-compose run web whereis rails
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rails: /usr/local/bundle/bin/rails
xxxx@yyyy project-x % docker-compose run web whereis rake
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rake: /usr/local/bin/rake /usr/local/bundle/bin/rake
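The rake under /usr/local/bin appears to be the one that ships with the Ruby base image, while the copy under /usr/local/bundle/bin is the one Bundler installed. One way to see which version each invocation resolves to, and to force the Gemfile.lock version, is bundle exec (a sketch; the versions printed will differ):
docker-compose run web rake --version              # first rake on the PATH
docker-compose run web bundle exec rake --version  # the rake pinned in Gemfile.lock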
I finally solved it.
I think the Gemfile.lock had conflicts in it that affected my container but not my buddy's.
I removed the Gemfile.lock and ran bundle install. This fixed the issue of rails g not working.
Would love to hear from a rails expert on why the bundle install did not make an entirely new lock file when run inside the container.
I'm trying to deploy my Rails app to Heroku using heroku.yml. This is my yml setup:
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: development
    DATABASE_URL: mysql2://abcdef:1234567@somewhere-someplace-123.cleardb.net/heroku_abcdefg123456
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
and I'm using MySQL as the database. Here's the Dockerfile that Heroku uses to build the image:
FROM ruby:2.6.5-alpine
ARG DATABASE_URL
ARG RAILS_ENV
# Adding the required dependencies
# Installing Required Gems
# Other Configs...
# Copying Gem Files
COPY Gemfile Gemfile.lock ./
# Installing Gems
RUN bundle install --jobs=4 --retry=9
# Copying package.json and yarn.lock
COPY package.json yarn.lock ./
# Installing node_modules
RUN yarn
# Copy everything from the current dir to the container root
COPY . ./
# Compiling assets
RUN bundle exec rake assets:precompile # this precompilation step needs DATABASE_URL
CMD ["rails", "server", "-b", "0.0.0.0"]
This works as expected, but the problem is that I have to pass the database connection string directly in the heroku.yml file. Is there a way I can reference the config vars that are declared in Heroku?
I tried the following, but it doesn't work; the docs also say that config vars declared in Heroku are not available at build time.
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: $RAILS_ENV
    DATABASE_URL: $DATABASE_URL
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
What could be the possible workaround for this issue?
Without using heroku.yml, a possible solution is to include the env variable in the Dockerfile.
Example with Java code:
CMD java -Dserver.port=$PORT -DmongoDbUrl=$MONGODB_SRV $JAVA_OPTS -jar /software/myjar.jar
You need to build the image locally, then push and release it to the Heroku Container Registry: when the container runs, the config vars (e.g. MONGODB_SRV) are injected.
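The same shell-form trick should apply to the Rails image in this question; a minimal sketch (shell form rather than a JSON array, so the shell expands $PORT when the container starts):
# Shell form: $PORT is resolved at runtime, not build time
CMD bundle exec rails server -p $PORT -b 0.0.0.0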
I'm building docker containers for a simple rails/postgres app. The rails app has started and is listening on port 3000. I have exposed port 3000 for the rails container. However, http://localhost:3000 is responding with ERR_EMPTY_RESPONSE. I assumed that the rails container should be accessible on port 3000. Is there something else I need to do?
greg@MemeMachine ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eed45208bbda realestate_web "entrypoint.sh bash …" About a minute ago Up About a minute 0.0.0.0:3000->3000/tcp realestate_web_1
a9cb8cae310e postgres "docker-entrypoint.s…" About a minute ago Up About a minute 5432/tcp realestate_db_1
greg@MemeMachine ~ $ docker logs realestate_web_1
=> Booting Puma
=> Rails 6.0.2.2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.4 (ruby 2.6.3-p62), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
greg@MemeMachine ~ $ curl http://localhost:3000
curl: (52) Empty reply from server
Dockerfile
FROM ruby:2.6.3
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN gem install bundler -v 2.0.2
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    env_file:
      - '.env'
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    env_file:
      - '.env'
entrypoint.sh
#!/bin/bash
# Compile the assets
bundle exec rake assets:precompile
# Start the server
bundle exec rails server
When you provide both an ENTRYPOINT and a CMD, Docker combines them into a single command. If you just docker run your image as it's built, the entrypoint script gets passed the command part rails server -b 0.0.0.0 as command-line parameters; but it ignores these and just launches the Rails server itself (in this case, without the important -b 0.0.0.0 option).
The usual answer to this is to not try to run the main process directly in the entrypoint, but instead to end the script with exec "$@" to run the command from the additional arguments.
In this case, there are two additional bits. The command: in the docker-compose.yml file indicates that there's some additional setup that needs to be done in the entrypoint (you should not need to override the image's command to run the same server). You also need the additional environment setup that bundle exec provides. Moving this all into the entrypoint script, you get
#!/bin/sh
# ^^^ this script only uses POSIX shell features
# Compile the assets
bundle exec rake assets:precompile
# Clean a stale pid file
rm -f tmp/pids/server.pid
# Run the main container process, inside the Bundler context
exec bundle exec "$@"
Your Dockerfile can stay as it is; you can remove the duplicate command: from the docker-compose.yml file.
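With the setup moved into the entrypoint, the web service in docker-compose.yml can shrink to something like this sketch:
web:
  build: .
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  depends_on:
    - db
  env_file:
    - '.env'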
* Listening on tcp://localhost:3000
This log line makes me think Rails is binding only to the localhost IP. That means Rails will only listen to requests from within the container. To make Rails bind to all IPs and listen to requests from outside the container, use the rails server -b parameter. The last line in your entrypoint.sh should change to:
bundle exec rails server -b 0.0.0.0
So I have pre-built a docker image of a rails app. When the image is built, the migration is run. When I run the image with docker run, everything works fine. But when I bring it up via a docker-compose file and visit the app, it tells me I need to run the migration, even though the migration was run in the build step.
Folder structure:
root/
  my_app/
  Dockerfile
  docker-compose
Here are the steps I took:
I run docker build -t my_app . on the Dockerfile:
FROM ruby:2.4-jessie
WORKDIR /usr/src/app
COPY ./my_app/Gemfile* ./
RUN bundle install
COPY ./my_app .
EXPOSE 3000
RUN rails db:migrate
CMD ["rails", "server", "-b", "0.0.0.0"]
It builds fine and I can see that the migration runs successfully.
Next I run it with docker run -p 3000:3000 my_app
I visit it in the browser and everything is fine.
Next I run docker-compose up on the docker-compose file:
version: '3'
services:
  my-app-container:
    image: my_app
    volumes:
      - ./my_app:/usr/src/app
    ports:
      - 3000:3000
The container starts fine, but when I visit it in the browser I get:
Migrations are pending. To resolve this issue, run: bin/rails db:migrate RAILS_ENV=development
# Raises <tt>ActiveRecord::PendingMigrationError</tt> error if any migrations are pending.
def check_pending!(connection = Base.connection)
  raise ActiveRecord::PendingMigrationError if ActiveRecord::Migrator.needs_migration?(connection)
end
You added the command to the Dockerfile, but you must also add it to docker-compose.yml or call it manually.
After docker-compose up, you can send commands to the running container:
docker exec -it <container name/id> rails db:migrate, or any other command.
The -it flags give you an interactive terminal inside the container.
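With the compose file above, that could look like this (using the service name from the example; docker-compose exec runs the command inside the already-running service container):
docker-compose exec my-app-container rails db:migrate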
Ok, so the solution was to have the docker-compose file handle both the migration and the starting of the rails app. Note that a service can only have one command: key, so the two steps are chained:
version: '3'
services:
  my-app-run-container:
    image: my_app_run_container
    volumes:
      - ./my_app:/usr/src/app
    ports:
      - 3000:3000
    command: bash -c "rails db:migrate && rails server -b 0.0.0.0"
I guess if I were going to use docker-compose for everything, I could remove the migration and server-start commands from the Dockerfile, as sketched below.
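A trimmed Dockerfile along those lines might look like this sketch, based on the one above:
FROM ruby:2.4-jessie
WORKDIR /usr/src/app
COPY ./my_app/Gemfile* ./
RUN bundle install
COPY ./my_app .
EXPOSE 3000
# Migrations and the server start now live in docker-compose.yml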