I'm trying to run my Rails app in production locally as part of a platform migration. I'm using Docker with Docker Compose.
I've run into issues with rake assets:precompile. It looks as if Docker deletes the generated files during the build.
Here's my Dockerfile
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy mysql-client vim
RUN mkdir /myapp
ENV RAILS_ENV production
ENV RACK_ENV production
WORKDIR /tmp
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --without development test
ADD . /myapp
WORKDIR /myapp
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile --trace
And here's my docker-compose.yml
db:
  image: postgres:9.4.1
  ports:
    - "5432:5432"
  environment:
    RACK_ENV: production
    RAILS_ENV: production
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  volumes:
    - .:/myapp
  environment:
    RACK_ENV: production
    RAILS_ENV: production
The docker-compose build command runs fine. I've also inserted RUN ls -l /myapp/public/assets into the Dockerfile before and after the rake assets:precompile step, and everything looks fine. However, if I run docker-compose run web ls -l /myapp/public/assets after the build, with docker-compose up running in a different tab, all the asset files are gone.
It's unlikely that the container is read-only during the build, so what could be happening?
You are hiding the container's /myapp folder with the volume you mount from your local folder (.).
You need to make sure the required files are inside the local folder when you mount it. If you do not mount that folder, the files from the image remain available.
The effect is similar to a plain Linux system: if you have files in a folder /my/folder and you mount a disk at the same path, the original files are hidden and the files from that disk are visible instead.
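As a sketch of the two usual fixes (assuming the setup above stays otherwise unchanged): either drop the bind mount in production so the image's precompiled assets stay visible, or keep the mount and generate the assets into the host folder before starting.
# Fix 1: remove the bind mount from the web service so the image's /myapp is used
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  # volumes:        <- removed in production; the assets baked into the image stay visible
  environment:
    RACK_ENV: production
    RAILS_ENV: production
# Fix 2: keep the mount, but precompile into the mounted host folder instead:
#   docker-compose run web bundle exec rake assets:precompile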
I am unable to run rails g commands in the Docker CLI.
It throws the following error, even though everything is already installed and running:
Could not find rake-12.3.2 in any of the sources
Run `bundle install` to install missing gems.
rails db:create and rails db:migrate are fine.
I have tried running the commands from inside the docker CLI and via docker-compose run, and they throw the same error.
My Dockerfile, named Dockerfile.dev, is as follows:
# syntax=docker/dockerfile:1
FROM ruby:2.6.2-stretch
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN gem install bundler && bundle install
RUN rails db:create db:migrate
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose file is as follows:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: project-x-image-annotator:v1
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - db
Another finding is that I have two copies of rake, but only one rails.
xxxx#yyyy project-x % docker-compose run web whereis rails
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rails: /usr/local/bundle/bin/rails
xxxx#yyyy project-x % docker-compose run web whereis rake
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rake: /usr/local/bin/rake /usr/local/bundle/bin/rake
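For what it's worth, in the official ruby images /usr/local/bin/rake is the default rake that ships with Ruby, while /usr/local/bundle/bin/rake is the binstub Bundler installs, so the double entry is normal. A quick way to see which version each invocation resolves to (a sketch, run against the web service above):
docker-compose run web rake --version              # the rake first on PATH
docker-compose run web bundle exec rake --version  # the rake pinned in Gemfile.lock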
I finally solved it.
I think the Gemfile.lock had conflicts that affected my container but not my buddy's.
I removed the Gemfile.lock and ran bundle install. This fixed the issue of rails g not working.
Would love to hear from a Rails expert on why bundle install did not write an entirely new lock file when run inside the container.
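For reference, a sketch of the fix described above (the generator invocation is only a hypothetical example):
rm Gemfile.lock                              # discard the conflicted lockfile
docker-compose run web bundle install        # regenerate it; with the .:/app mount the new lock lands on the host
docker-compose run web rails g model Widget  # hypothetical generator call; should now resolve the right rake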
I'm trying to deploy my Rails app to Heroku using heroku.yml. This is my yml setup:
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: development
    DATABASE_URL: mysql2://abcdef:1234567@somewhere-someplace-123.cleardb.net/heroku_abcdefg123456
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
I'm using MySQL as the database. And here's the Dockerfile that Heroku uses to build the image:
FROM ruby:2.6.5-alpine
ARG DATABASE_URL
ARG RAILS_ENV
# Adding the required dependencies
# Installing Required Gems
# Other Configs...
# Copying Gem Files
COPY Gemfile Gemfile.lock ./
# Installing Gems
RUN bundle install --jobs=4 --retry=9
# Copying package and package.lock
COPY package.json yarn.lock ./
# Installing node_modules
RUN yarn
# Copy everything to the from current dir to container_root
COPY . ./
#Compiling assets
RUN bundle exec rake assets:precompile # this precompilation step needs DATABASE_URL
CMD ["rails", "server", "-b", "0.0.0.0"]
This works as expected, but the problem is that I have to hard-code the database connection string in the heroku.yml file. Is there a way to reference the config vars that are declared in Heroku?
I tried the following, but it's not working; the docs also say that config vars declared in Heroku are not available at build time.
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: $RAILS_ENV
    DATABASE_URL: $DATABASE_URL
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
What could be the possible workaround for this issue?
Without using heroku.yml, a possible solution is to reference the env variable in the Dockerfile's CMD, which is evaluated at run time.
Example with Java code:
CMD java -Dserver.port=$PORT -DmongoDbUrl=$MONGODB_SRV $JAVA_OPTS -jar /software/myjar.jar
You need to build the image locally, then push and release it to the Heroku Container Registry: when the container runs, the Config Vars (e.g. MONGODB_SRV) are injected.
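A sketch of that manual build-and-release flow (the app name my-app is hypothetical; the --build-arg matches the ARG DATABASE_URL declared in the Dockerfile above, for steps that genuinely need it at build time):
heroku container:login                                # authenticate against the Heroku Container Registry
docker build -t registry.heroku.com/my-app/web --build-arg DATABASE_URL=$DATABASE_URL .
docker push registry.heroku.com/my-app/web
heroku container:release web -a my-app                # at run time the dyno gets the Config Vars injected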
TL;DR - yarn install installs node_modules in an 'intermediate container' and the packages disappear after the build step.
I'm trying to get webpacker going with our dockerized rails 5.0 app.
Dockerfile
FROM our_company_centos_image:latest
RUN yum install wget -y
RUN wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
RUN yum install sqlite-devel yarn -y
RUN mkdir -p $APP_HOME/node_modules
COPY Gemfile Gemfile.lock package.json yarn.lock $APP_HOME/
RUN bundle install --path /bundle
RUN yarn install --pure-lockfile
ADD . $APP_HOME
When yarn install runs, it installs the packages, followed immediately by
Removing intermediate container 67bcd62926d2
Outside the container, running ls node_modules shows an empty directory, and the docker-compose up process eventually fails when webpack_dev_server exits because the modules are not present.
I've tried various things, like adding node_modules as a volume in docker-compose.yml, to no effect.
The only thing that HAS worked is running yarn install locally to populate the directory and then building again, but then I've got OS X versions of the packages, which may eventually cause a problem.
What am I doing wrong here?
docker-compose.yml
version: '2'
services:
  web:
    build: .
    network_mode: bridge
    environment:
      WEBPACK_DEV_SERVER_HOST: webpack_dev_server
    links:
      - webpack_dev_server
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - ./node_modules:/app/node_modules
      - .:/app
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
  webpack_dev_server:
    image: myapp_web
    network_mode: bridge
    command: bin/webpack-dev-server
    environment:
      NODE_ENV: development
      RAILS_ENV: development
      WEBPACK_DEV_SERVER_HOST: 0.0.0.0
    volumes:
      - .:/app
    ports:
      - "3035:3035"
The last step is ADD . $APP_HOME. You also mention that the node_modules folder is empty in your local tree. Does that mean node_modules still exists as an empty folder?
If so, that empty node_modules folder is likely being copied over during the ADD step, overwriting everything that was done in the previous yarn step.
One solution I found is to add node_modules as a volume.
For example, if your node_modules directory is located at /usr/src/app/node_modules, just add:
volumes:
  - /usr/src/app/node_modules
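Adapted to the compose file above (a sketch; it assumes the app lives at /app as in the Dockerfile), the anonymous volume shadows node_modules inside the bind mount, so the packages installed during the image build survive:
web:
  volumes:
    - .:/app             # bind mount of the project tree
    - /app/node_modules  # anonymous volume; keeps the image's node_modules visible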
I have a Rails 5.2.0.rc1 app with webpacker working at https://github.com/archonic/limestone. It's not 100% right yet, but I've found that running docker-compose run webpacker yarn install --pure-lockfile gets things up and running on a new environment before docker-compose up --build. I'm not entirely sure yet why that's required, since it's in the Dockerfile.
Also, as far as I know, your volume for web should just be - '.:/app'; the node_modules entry is redundant.
Below is the Dockerfile in the project's root directory:
FROM ruby:2.2
MAINTAINER technologies.com
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y libxml2-dev libxslt1-dev
RUN apt-get install -y libqt4-webkit libqt4-dev xvfb
RUN apt-get install -y nodejs
ENV INSTALL_PATH /as_app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
Below are the contents of the docker-compose.yml file in the project's root directory:
as_web:
  build: .
  environment:
    - RAILS_ENV=development
    - QUEUE=*
    - REDIS_URL=redis://redis:6379
  volumes:
    - .:/as_app
  ports:
    - "3000:3000"
  links:
    - as_mongo
    - as_redis
  command: rails server -b 0.0.0.0
as_mongo:
  image: mongo:latest
  ports:
    - "27017:27017"
as_redis:
  image: redis
  ports:
    - "6379:6379"
as_worker:
  build: .
  environment:
    - QUEUE=*
    - RAILS_ENV=development
    - REDIS_URL=redis://redis:6379
  volumes:
    - .:/as_app
  links:
    - as_mongo
    - as_redis
  command: bundle exec rake environment resque:work
Docker version 1.11.2, docker-machine version 0.8.0-rc1, docker-compose version 1.8.0-rc1, Ruby 2.2.5, Rails 4.2.4.
My problem is as follows:
1) When I build the image with "docker-compose build" from the project root directory, the image builds successfully, with gems installed.
2) But when I do "docker-compose up", the as_web and as_worker services exit with codes 1 and 10 respectively, with an error that no Gemfile or .bundler was found. When I log into the image through bash and look at the working directory, no project files are visible.
3) What I want to know is:
i) When I start a terminal, I start the VirtualBox instance manually with "docker-machine start default".
ii) Then I execute "eval $(docker-machine env dev)" to point the current shell to the VirtualBox Docker daemon. After this, when I do "docker build -t as_web .", the terminal prints a message like "sending current build context to docker daemon".
a) Does this message mean that the build is being done in VirtualBox?
If I do "docker-compose build", no such "sending..." message appears.
b) Does docker-compose also point to the Docker daemon in VirtualBox, or is the build happening on localhost (my Ubuntu OS)? I'm a little confused.
Hopefully you understood the details; if you need any extra info, let me know. Thank you all, and happy coding.
docker-compose build and docker build do the same thing. They both use the Docker Engine API to build an image in the VirtualBox VM. The output messages are just a little different.
Your problem is because of this:
volumes:
  - .:/as_app
You're overriding the app directory with the project directory from the host. If you haven't run bundle install on the host, the files won't be in the container when it starts.
You can fix this by running docker-compose run as_web bundle install
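A sketch of the recovery steps (using the machine started in the question):
eval $(docker-machine env default)         # point the shell at the VM's Docker daemon
docker-compose run as_web bundle install   # install gems against the mounted project directory
docker-compose up                          # as_web and as_worker should now find the Gemfile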
Currently I'm setting up my app using Docker. I've got a minimal Rails app with one controller. You can get my setup by running these:
rails new app --database=sqlite3 --skip-bundle
cd app
rails generate controller --skip-routes Home index
echo "Rails.application.routes.draw { root 'home#index' }" > config/routes.rb
echo "gem 'foreman'" >> Gemfile
echo "web: rails server -b 0.0.0.0" > Procfile
echo "port: 3000" > .foreman
And I have the following setup:
Dockerfile:
FROM ruby:2.3
# Install dependencies
RUN apt-get update && apt-get install -y \
nodejs \
sqlite3 \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
# Configure bundle
RUN bundle config --global frozen 1
RUN bundle config --global jobs 7
# Expose ports and set entrypoint and command
EXPOSE 3000
CMD ["foreman", "start"]
# Install Gemfile in different folder to allow caching
WORKDIR /tmp
COPY ["Gemfile", "Gemfile.lock", "/tmp/"]
RUN bundle install --deployment
# Set environment
ENV RAILS_ENV production
ENV RACK_ENV production
# Add files
ENV APP_DIR /app
RUN mkdir -p $APP_DIR
COPY . $APP_DIR
WORKDIR $APP_DIR
# Compile assets
RUN rails assets:precompile
VOLUME "$APP_DIR/public"
Here VOLUME "$APP_DIR/public" creates a volume that's shared with the Nginx container, which has this in its Dockerfile:
FROM nginx
ADD nginx.conf /etc/nginx/nginx.conf
And then docker-compose.yml:
version: '2'
services:
  web:
    build: config/docker/web
    volumes_from:
      - app
    links:
      - app:app
    ports:
      - 80:80
      - 443:443
  app:
    build: .
    environment:
      SECRET_KEY_BASE: 'af3...ef0'
    ports:
      - 3000:3000
This works, but only the first time I build it. If I change any assets and build the images again, they're not updated, presumably because volumes are not refreshed on image rebuild, given how Docker handles them.
I want the assets to be updated every time I run docker-compose build && docker-compose up. Any idea how to accomplish this?
Compose preserves volumes on recreate.
You have a couple of options:
1) don't use volumes for the assets; instead, build the assets and ADD or COPY them into the web container during the build (see the sketch below)
2) docker-compose rm -v app before running up, to remove the old container and its volumes.
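A sketch of option 1 (the paths are assumptions: /usr/share/nginx/html is the nginx image's default docroot, your nginx.conf may point elsewhere, and it presumes public/assets already exists in the web image's build context, e.g. compiled on the host with RAILS_ENV=production rails assets:precompile):
FROM nginx
ADD nginx.conf /etc/nginx/nginx.conf
COPY public /usr/share/nginx/html
And option 2 as commands:
docker-compose rm -v app                   # discard the stale container and its anonymous volumes
docker-compose build && docker-compose up  # the recreated volume now carries the new assets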