I'm trying to deploy my Rails app to Heroku using heroku.yml. This is my heroku.yml setup:
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: development
    DATABASE_URL: mysql2://abcdef:1234567@somewhere-someplace-123.cleardb.net/heroku_abcdefg123456
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
I'm using MySQL as the database. Here's the Dockerfile that Heroku uses to build the image:
FROM ruby:2.6.5-alpine
ARG DATABASE_URL
ARG RAILS_ENV
# Adding the required dependencies
# Installing Required Gems
# Other Configs...
# Copying Gem Files
COPY Gemfile Gemfile.lock ./
# Installing Gems
RUN bundle install --jobs=4 --retry=9
# Copying package.json and yarn.lock
COPY package.json yarn.lock ./
# Installing node_modules
RUN yarn
# Copy everything from the current directory to the container root
COPY . ./
# Compiling assets
RUN bundle exec rake assets:precompile # this precompilation step needs DATABASE_URL
CMD ["rails", "server", "-b", "0.0.0.0"]
This works as expected, but the problem is that I have to put the database connection string directly in the heroku.yml file. Is there a way I can reference the config vars that are declared in Heroku?
I tried the following, but it doesn't work; the docs also say that config vars declared in Heroku are not available at build time.
setup:
  addons:
    - plan: cleardb-mysql
      as: DATABASE
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: $RAILS_ENV
    DATABASE_URL: $DATABASE_URL
run:
  web: bin/rails server -p $PORT -b 0.0.0.0
What could be a possible workaround for this issue?
Without using heroku.yml, a possible solution is to reference the env variable in the Dockerfile.
Example with Java code:
CMD java -Dserver.port=$PORT -DmongoDbUrl=$MONGODB_SRV $JAVA_OPTS -jar /software/myjar.jar
You need to build the image locally, then push and release it to the Heroku Container Registry: when the container runs, the config vars (i.e. MONGODB_SRV) are injected.
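Applied to the Rails app from the question, that would look roughly like this (a sketch; <your-app> is a placeholder, and it assumes DATABASE_URL is only needed when the server boots, not during asset precompilation):
# Dockerfile: shell-form CMD so $PORT is expanded; Rails reads DATABASE_URL from the runtime environment
CMD bundle exec rails server -p $PORT -b 0.0.0.0

# build, push and release through the Heroku Container Registry
heroku container:login
heroku container:push web -a <your-app>
heroku container:release web -a <your-app>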
I am trying to set the RAILS_ENV var so that the bundle install command within my Dockerfile installs the desired gems, but I can't seem to use CLI output as the value.
# heroku.yml
build:
  docker:
    web: Dockerfile
  config:
    RAILS_ENV: $(heroku config:get RAILS_ENV -a APPNAME)
Is there a way to set that without hardcoding the value?
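For comparison, the same substitution does work when the image is built locally, because the local shell evaluates it before Docker sees it (a sketch; APPNAME is a placeholder and it assumes the Dockerfile declares ARG RAILS_ENV):
# evaluated by the local shell, then passed into the image as a build arg
docker build \
  --build-arg RAILS_ENV="$(heroku config:get RAILS_ENV -a APPNAME)" \
  -t myapp .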
I am unable to run rails g commands inside the Docker container.
They throw the following error, even though everything is already installed and running:
Could not find rake-12.3.2 in any of the sources
Run `bundle install` to install missing gems.
rails db:create and rails db:migrate work fine.
I have tried running the commands both from a shell inside the container and via docker-compose run, and they throw the same error.
My Dockerfile, named Dockerfile.dev, is as follows:
# syntax=docker/dockerfile:1
FROM ruby:2.6.2-stretch
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN gem install bundler && bundle install
RUN rails db:create db:migrate
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose file is as follows:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
web:
build:
context: .
dockerfile: Dockerfile.dev
image: project-x-image-annotator:v1
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- db
Another finding is that I have two copies of rake, but only one rails:
xxxx@yyyy project-x % docker-compose run web whereis rails
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rails: /usr/local/bundle/bin/rails
xxxx@yyyy project-x % docker-compose run web whereis rake
[+] Running 1/0
⠿ Container project-x-db_1 Running 0.0s
rake: /usr/local/bin/rake /usr/local/bundle/bin/rake
I finally solved it.
I think the Gemfile.lock had conflicts in it that affected my container but not my buddy's.
I removed the Gemfile.lock and ran bundle install. This fixed the issue of rails g not working.
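In command form the fix looked roughly like this (a sketch; it assumes entrypoint.sh simply execs the given command):
# the .:/app bind mount means the regenerated lock file lands back on the host
rm Gemfile.lock
docker-compose run web bundle install
# rebuild so the image's installed gems match the new Gemfile.lock
docker-compose build web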
I would love to hear from a Rails expert why bundle install did not create an entirely new lock file when run inside the container.
What could be the reason that the Deployment is not able to see the config files?
This is part of the Deployment:
command: ["bundle", "exec", "puma", "-C", "config/puma.rb"]
I already tried with ./config/.. and using args instead of command.
I'm getting Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb
Everything used to work fine with docker-compose.
When I keep the last line (CMD) of the Dockerfile below and omit command: in the Deployment, everything works fine, but to reuse the image for Sidekiq I need to be able to provide the config files.
Dockerfile
FROM ruby:2.7.2
RUN apt-get update -qq && apt-get install -y build-essential ca-certificates libpq-dev nodejs postgresql-client yarn vim -y
ENV APP_ROOT /var/www/app
RUN mkdir -p $APP_ROOT
WORKDIR $APP_ROOT
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
COPY public public/
RUN gem install bundler
RUN bundle install
# tried this
COPY config config/
COPY . .
EXPOSE 9292
# used to have this line but I want to reuse the image
# CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
error message
bundler: failed to load command: puma (/usr/local/bundle/bin/puma)
Errno::ENOENT: No such file or directory @ rb_sysopen - config/puma.rb
Update
It seems the issue was related to wrong paths and a misunderstanding of the command and args fields. The following config worked for me. It's also possible there were caching issues with Docker (that has happened to me before).
command:
  - bundle
  - exec
  - puma
args:
  - "-C"
  - "config/puma.rb"
For some reason, providing the command inside values.yaml doesn't seem to work properly, but it does work when the command is provided through the template.
The following section is in app/templates/deployment.yaml of my app. Everything works fine now:
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.container.image }}
    command:
      - bundle
      - exec
      - puma
    args:
      - "-C"
      - "config/puma.rb"
I have also found this Rails-on-Kubernetes demo: https://github.com/lewagon/rails-k8s-demo/blob/master/helm/templates/deployments/sidekiq.yaml
As you can see, the command section is provided through templates/../name.yaml rather than values.yaml.
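If you do want to drive the command from values.yaml, the usual pattern is to render it in the template with toYaml rather than pasting it in directly. This is only a sketch, not taken from the demo above; the app.command and app.args keys are assumptions, and the nindent value must match your actual nesting depth:
# values.yaml (hypothetical keys)
app:
  command: ["bundle", "exec", "puma"]
  args: ["-C", "config/puma.rb"]

# app/templates/deployment.yaml, same container block as above
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.container.image }}
    command: {{- toYaml .Values.app.command | nindent 6 }}
    args: {{- toYaml .Values.app.args | nindent 6 }}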
Currently I'm setting up my app using Docker. I've got a minimal Rails app with one controller. You can reproduce my setup by running these commands:
rails new app --database=sqlite3 --skip-bundle
cd app
rails generate controller --skip-routes Home index
echo "Rails.application.routes.draw { root 'home#index' }" > config/routes.rb
echo "gem 'foreman'" >> Gemfile
echo "web: rails server -b 0.0.0.0" > Procfile
echo "port: 3000" > .foreman
And I have the following setup:
Dockerfile:
FROM ruby:2.3
# Install dependencies
RUN apt-get update && apt-get install -y \
    nodejs \
    sqlite3 \
    --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*
# Configure bundle
RUN bundle config --global frozen 1
RUN bundle config --global jobs 7
# Expose ports and set entrypoint and command
EXPOSE 3000
CMD ["foreman", "start"]
# Install gems in a different folder to allow caching
WORKDIR /tmp
COPY ["Gemfile", "Gemfile.lock", "/tmp/"]
RUN bundle install --deployment
# Set environment
ENV RAILS_ENV production
ENV RACK_ENV production
# Add files
ENV APP_DIR /app
RUN mkdir -p $APP_DIR
COPY . $APP_DIR
WORKDIR $APP_DIR
# Compile assets
RUN rails assets:precompile
VOLUME "$APP_DIR/public"
Here VOLUME "$APP_DIR/public" creates a volume that's shared with the Nginx container, which has this in its Dockerfile:
FROM nginx
ADD nginx.conf /etc/nginx/nginx.conf
And then docker-compose.yml:
version: '2'
services:
  web:
    build: config/docker/web
    volumes_from:
      - app
    links:
      - app:app
    ports:
      - 80:80
      - 443:443
  app:
    build: .
    environment:
      SECRET_KEY_BASE: 'af3...ef0'
    ports:
      - 3000:3000
This works, but only the first time I build it. If I change any assets and build the images again, they're not updated, possibly because volumes are not updated on image build, which I think is due to how Docker handles caching.
I want the assets to be updated every time I run docker-compose build && docker-compose up. Any idea how to accomplish this?
Compose preserves volumes on recreate.
You have a couple of options:
don't use volumes for the assets; instead build the assets and ADD or COPY them into the web container during the build
run docker-compose rm app before running up, to remove the old container and its volumes
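For the second option, the sequence would look roughly like this (a sketch; -f skips the confirmation prompt, and depending on your Compose version you may also need to remove the web container, since volumes_from makes it reference the same volume):
# remove the old app container and its anonymous volumes, then rebuild and start fresh
docker-compose rm -f -v app
docker-compose build
docker-compose up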
I'm trying to run my Rails app in production mode locally as part of a platform migration. I'm using Docker with Docker Compose.
I've run into issues with rake assets:precompile. It looks as if Docker deletes the generated files during the build.
Here's my Dockerfile
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy mysql-client vim
RUN mkdir /lunchiatto
ENV RAILS_ENV production
ENV RACK_ENV production
WORKDIR /tmp
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install --without production test
ADD . /myapp
WORKDIR /myapp
RUN bundle exec rake assets:clobber
RUN bundle exec rake assets:precompile --trace
And here's my docker-compose.yml
db:
  image: postgres:9.4.1
  ports:
    - "5432:5432"
  environment:
    RACK_ENV: production
    RAILS_ENV: production
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  volumes:
    - .:/myapp
  environment:
    RACK_ENV: production
    RAILS_ENV: production
The docker-compose build command runs fine. I've also inserted RUN ls -l /myapp/public/assets into the Dockerfile before and after the rake assets:precompile step, and everything looks fine. However, if I run docker-compose run web ls -l /myapp/public/assets after the build (with docker-compose up running in a different tab), all the asset files are gone.
It's unlikely that the container is read-only during the build, so what could it be?
You are hiding the container's folder /myapp with the volume that you mount from your local folder (.).
You need to make sure that the required files are inside the local folder when you mount it. When you do not mount that folder, the files from the image are visible.
The effect is similar to a plain Linux system: when you have files in a folder /my/folder and you mount a disk onto the same folder, the original files are hidden and the files from that disk are visible instead.
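In this setup that means either making sure the precompiled public/assets exist in the local folder, or not mounting . over /myapp for the production run. A sketch of the second option, based on the compose file above:
web:
  build: .
  command: bundle exec puma -C config/puma.rb
  ports:
    - "3000:3000"
  links:
    - db
  # volumes:
  #   - .:/myapp   # removed: this bind mount hides the assets baked into the image
  environment:
    RACK_ENV: production
    RAILS_ENV: production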