Currently I'm setting up my app using Docker. I've got a minimal Rails app with one controller. You can reproduce my setup by running these commands:
rails new app --database=sqlite3 --skip-bundle
cd app
rails generate controller --skip-routes Home index
echo "Rails.application.routes.draw { root 'home#index' }" > config/routes.rb
echo "gem 'foreman'" >> Gemfile
echo "web: rails server -b 0.0.0.0" > Procfile
echo "port: 3000" > .foreman
And I have the following setup:
Dockerfile:
FROM ruby:2.3
# Install dependencies
RUN apt-get update && apt-get install -y \
    nodejs \
    sqlite3 \
    --no-install-recommends \
  && rm -rf /var/lib/apt/lists/*
# Configure bundle
RUN bundle config --global frozen 1
RUN bundle config --global jobs 7
# Expose ports and set entrypoint and command
EXPOSE 3000
CMD ["foreman", "start"]
# Install Gemfile in different folder to allow caching
WORKDIR /tmp
COPY ["Gemfile", "Gemfile.lock", "/tmp/"]
RUN bundle install --deployment
# Set environment
ENV RAILS_ENV production
ENV RACK_ENV production
# Add files
ENV APP_DIR /app
RUN mkdir -p $APP_DIR
COPY . $APP_DIR
WORKDIR $APP_DIR
# Compile assets
RUN rails assets:precompile
VOLUME "$APP_DIR/public"
The VOLUME "$APP_DIR/public" instruction creates a volume that's shared with the Nginx container, whose Dockerfile contains:
FROM nginx
ADD nginx.conf /etc/nginx/nginx.conf
And then docker-compose.yml:
version: '2'
services:
  web:
    build: config/docker/web
    volumes_from:
      - app
    links:
      - app:app
    ports:
      - 80:80
      - 443:443
  app:
    build: .
    environment:
      SECRET_KEY_BASE: 'af3...ef0'
    ports:
      - 3000:3000
This works, but only the first time I build it. If I change any assets and build the images again, they aren't updated, presumably because volumes aren't refreshed on image build and because of how Docker handles caching.
I want the assets to be updated every time I run docker-compose build && docker-compose up. Any idea how to accomplish this?
Compose preserves volumes when containers are recreated.
You have a couple of options:
- Don't use a volume for the assets; instead, build the assets and ADD or COPY them into the web container during its build.
- Run docker-compose rm app before up to remove the old container and its volumes (see the sketch below).
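A sketch of the second option, using the service name app from the compose file above; the -v flag on docker-compose rm also removes the container's anonymous volumes, so the rebuilt image repopulates them on the next up:
docker-compose stop app
docker-compose rm -f -v app
docker-compose build
docker-compose up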
What could be the reason for a Deployment not being able to see config files?
This is part of the Deployment:
command: ["bundle", "exec", "puma", "-C", "config/puma.rb"]
I've already tried with ./config/.. and using args instead of command.
I'm getting Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb
Everything used to work fine with docker-compose.
When I keep the last line (CMD) of the Dockerfile below and omit command: in the Deployment, everything works fine. But to reuse the image for Sidekiq, I need to provide config files.
Dockerfile
FROM ruby:2.7.2
RUN apt-get update -qq && apt-get install -y build-essential ca-certificates libpq-dev nodejs postgresql-client yarn vim
ENV APP_ROOT /var/www/app
RUN mkdir -p $APP_ROOT
WORKDIR $APP_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
COPY public public/
RUN gem install bundler
RUN bundle install
# tried this
COPY config config/
COPY . .
EXPOSE 9292
# used to have this line but I want to reuse the image
# CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
error message
bundler: failed to load command: puma (/usr/local/bundle/bin/puma)
Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb
Update:
It seems the issue was related to wrong paths and a misunderstanding of the command and args fields. The following config worked for me. It's also possible there were caching issues with Docker (that has happened to me before).
command:
  - bundle
  - exec
  - puma
args:
  - "-C"
  - "config/puma.rb"
For some reason, providing the command inside values.yaml doesn't seem to work properly, but it does work when the command is provided through the template.
There's the following section in app/templates/deployment.yaml of my app, and everything works fine now.
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.container.image }}
    command:
      - bundle
      - exec
      - puma
    args:
      - "-C"
      - "config/puma.rb"
I have also found this Rails-on-k8s demo: https://github.com/lewagon/rails-k8s-demo/blob/master/helm/templates/deployments/sidekiq.yaml
As you can see, the command section is provided through templates/../name.yaml rather than values.yaml.
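Building on that demo, the same image can be reused for Sidekiq by overriding only command and args in a second Deployment template. A sketch, where the .Values.sidekiq.* keys and the config/sidekiq.yml path are assumptions:
containers:
  - name: {{ .Values.sidekiq.name }}
    image: {{ .Values.app.container.image }}
    command:
      - bundle
      - exec
      - sidekiq
    args:
      - "-C"
      - "config/sidekiq.yml"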
I'm using Rails in a Docker container, and every once in a while I run into this issue and have no idea how to solve it. When adding a new gem to the Gemfile, upon rebuilding the Docker image and container, the build fails with the common Bundler error Could not find [GEM_NAME] in any of the sources; Run 'bundle install' to install missing gems. This only happens when I try to build the image in Docker; if I run a regular bundle install on my local machine, the gems install correctly and everything works as expected.
I have a fairly standard Dockerfile & docker-compose file.
Dockerfile:
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
  && echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml:
version: '3'
services:
  backend:
    build:
      dockerfile: Dockerfile
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        MODE: development
    tty: true
    stdin_open: true
    volumes:
      - ./[REDACTED]:/usr/src/app
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    user: root
I've tried docker system prune -a, docker builder prune -a, reinstalling Docker, multiple rebuilds in a row, restarting my machine, and so on, to no avail. The weird part is that it doesn't happen with every new gem I decide to add, only with some specific gems. For example, I got this issue again when trying to add gem 'sendgrid-ruby' to my Gemfile. This is the repo for the gem for reference, and the specific error I get with sendgrid-ruby is Could not find ruby_http_client-3.5.1 in any of the sources. I tried specifying ruby_http_client in my Gemfile, and I also tried sshing into the Docker container and running gem install ruby_http_client, but I get the same errors.
What might be happening here?
You're mounting a named volume over the container's /usr/local/bundle directory. The named volume will get populated from the image, but only the very first time you run the container. After that the old contents of the named volume will take precedence over the content of the image: using a volume this way will cause Docker to completely ignore any changes you make in the Gemfile.
You should be able to delete that volumes: line from the docker-compose.yml file. I'm not clear what benefit you would get from keeping the installed gems in a named volume.
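A minimal sketch of that change, assuming everything else in the backend service stays the same and only the named gem volume is dropped:
volumes:
  - ./[REDACTED]:/usr/src/app
After editing, rebuild with docker-compose build backend. If a stale volume is still around, docker volume rm <project>_gem_data_api clears it (the exact name depends on your Compose project name).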
I am trying to create my Rails application in a Docker environment. I have used volumes to mount the source directory from the host at a target path inside the container. The application is in the development phase, and I need to continuously add new gems to it. I can install a gem from the bash of my running container, and it installs the gem and the required dependencies. But when I remove the running containers (docker-compose down) and then instantiate them again (docker-compose up), my Rails web image shows errors about missing gems. I know rebuilding the image will add the gems, but IS THERE ANY WAY TO ADD GEMS WITHOUT REBUILDING THE IMAGE?
I followed the docker-compose docs for setting up the Rails app:
https://docs.docker.com/compose/rails/#define-the-project
DOCKERFILE
FROM ruby:2.7.1-slim-buster
LABEL MAINTAINER "Prayas Arora" "<prayasa@mindfiresolutions.com>"
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update \
  && apt-get install -qq -y --no-install-recommends \
    build-essential \
    libpq-dev \
    netcat \
    postgresql-client \
    nodejs \
  && rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/repository/repository_api
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY ./repository_api/Gemfile $APP_HOME/Gemfile
COPY ./repository_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install
# Copy the main application.
COPY ./repository_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./repository_docker/development/repository_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
docker-compose.yml
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
networks:
  - proxy
  - internal
depends_on:
  - repository_db
A simple solution is to cache the gems in a Docker volume. You can create a named volume and mount it at the path where Bundler installs the gems. This maintains shared state, and you won't have to reinstall the gems in every container you spin up.
container_name: repository_api
build:
  context: ../..
  dockerfile: repository_docker/development/repository_api/Dockerfile
user: $UID
env_file: .env
stdin_open: true
environment:
  DB_NAME: ${POSTGRES_DB}
  DB_PASSWORD: ${POSTGRES_PASSWORD}
  DB_USER: ${POSTGRES_USER}
  DB_HOST: ${POSTGRES_DB}
volumes:
  - ../../repository_api:/var/www/repository/repository_api
  - bundle_cache:/usr/local/bundle
networks:
  - proxy
  - internal
.
.
volumes:
  bundle_cache:
Also, according to bundler.io, the official Docker images for Ruby assume that you will run only one application, with one Gemfile, and that no other gems or Ruby applications will be installed or run in your container. So once you have added all the gems required for your application's development, you can remove this bundle_cache volume and rebuild your image with your final Gemfile.
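As a usage sketch, assuming the service is named repository_api: after adding a gem to the Gemfile, install it once inside the running container and it will persist in the shared volume:
docker-compose up -d repository_api
docker-compose exec repository_api bundle install
# gems are written to /usr/local/bundle, i.e. the bundle_cache volume,
# so they survive docker-compose down && docker-compose up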
I'm trying to run my Rails app in production locally as part of a platform migration. I'm using Docker with Docker Compose.
I've run into issues with rake assets:precompile. It looks as if Docker deletes the generated files during the build.
Here's my Dockerfile
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy mysql-client vim
RUN mkdir /lunchiatto
ENV RAILS_ENV production
ENV RACK_ENV production
WORKDIR /tmp
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --without development test
ADD . /myapp
WORKDIR /myapp
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile --trace
And here's my docker-compose.yml
db:
  image: postgres:9.4.1
  ports:
    - "5432:5432"
  environment:
    RACK_ENV: production
    RAILS_ENV: production
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  volumes:
    - .:/myapp
  environment:
    RACK_ENV: production
    RAILS_ENV: production
The docker-compose build command runs fine. I've also inserted RUN ls -l /myapp/public/assets into the Dockerfile before and after rake assets:precompile, and everything looks fine there. However, if I run docker-compose run web ls -l /myapp/public/assets after the build, with docker-compose up running in a different tab, all the asset files are gone.
It's unlikely that the container is read-only during the build, so what could it be?
You're hiding the container's /myapp folder with the volume you mount from your local folder (.).
You need to make sure the required files are inside the local folder when you mount it. If you don't mount that folder, the files from the image remain available.
The effect is similar on a plain Linux system: if you have files in /my/folder and mount a disk at the same path, the original files are hidden and the files from that disk are visible instead.
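One hedged fix along those lines: for a production run, drop the bind mount from the web service so the assets compiled into the image stay visible (everything else as in your file):
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  environment:
    RACK_ENV: production
    RAILS_ENV: production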
I tried to make a simple application with Yesod and PostgreSQL using Docker Compose but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build output the following:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it shows nothing, while this command works as expected:
docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml Dockerfile myApp settings.yml
root@31e94428de37:/code# ls myApp/
app config Handler Model.hs Settings.hs test
Application.hs dist Import myApp.cabal static
cabal.sandbox.config Foundation.hs Import.hs Settings templates
How can I fix it?
I'd really appreciate any feedback, thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile into the /code folder of the image. I guess this step is useless:
ADD . /code
Then, if you run a container without the --volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content that was generated during the build of your image.
There are two possibilities:
- Don't use the volume instruction in the docker-compose.yml file
- Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is. But if you are trying to access the files built inside the container from the host machine, this might do what you are looking for (see the sketch after these steps):
- Remove the build steps from your Dockerfile
- Run a shell inside a "web" container: docker-compose run web bash
- Launch the build commands; this way you will have built your application while the volume was mounted, and you will see the files on the host machine
- Exit the shell
- Launch Docker Compose normally
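A sketch of those steps, reusing the build commands from the Dockerfile above:
docker-compose run web bash
# inside the container, with the host folder mounted at /code:
yesod init -n myApp -d postgresql
cd myApp
cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
cabal configure && cabal build && cabal install
exit
docker-compose up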
If you just want to be able to back up the content of the /code/myApp/ folder, maybe you should omit the path on the host machine from the volumes section of docker-compose.yml:
volumes:
  - /code/
And follow this section of the documentation.
I hope it helps.