I have an issue with Paperclip when using it with Rails in a Docker development environment.
I followed all the steps in the README to add an image to an existing model. Everything runs without errors, but the image never gets uploaded locally. I also tried uploading directly to S3 and hit the same issue: no errors at all, yet the image is missing and the folders are empty.
My code is clean; I tried it outside of Docker and it works. Any suggestions?
I should mention that I even tried CarrierWave and it works very well, but I'd love to use Paperclip, since I find it more lightweight and powerful.
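For context, the attachment follows the usual shape from the Paperclip README, roughly (not my exact model):
class User < ActiveRecord::Base
  has_attached_file :avatar, styles: { medium: "300x300>", thumb: "100x100>" }
  validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\z/
end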
This is my Dockerfile
# Use the barebones version of Ruby 2.3.
FROM ruby:2.3.1-slim
# Optionally set a maintainer name to let people know who made this image.
MAINTAINER Chris de Bruin <chris@studytube.nl>
# Install dependencies:
# - build-essential: To ensure certain gems can be compiled
# - nodejs: Compile assets
# - imagemagick: converting images
# - file: needed by paperclip
# - wkhtmltopdf: generating pdf from html
# - libxml2: needed for nokogiri
RUN apt-get update && apt-get install -qq -y --no-install-recommends \
build-essential libmysqlclient-dev git-core imagemagick wkhtmltopdf \
libxml2 libxml2-dev libxslt1-dev nodejs file
# Set an environment variable to store where the app is installed to inside
# of the Docker image. The name matches the project name out of convention only.
ENV INSTALL_PATH /backend
RUN mkdir -p $INSTALL_PATH
# This sets the context where commands will be run and is documented
# on Docker's website extensively.
WORKDIR $INSTALL_PATH
# Ensure gems are cached and only get updated when they change. This will
# drastically speed up builds when your gems do not change.
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install
# Copy in the application code from your work station at the current directory
# over to the working directory.
COPY . .
# Ensure the static assets are exposed through a volume so that nginx can
# read them later.
VOLUME ["$INSTALL_PATH/public"]
# The default command that gets run will be to start the Puma server.
CMD bundle exec puma -C config/puma.rb
So you should have your own Dockerfile along these lines and reference it from your docker-compose.yml.
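For example, a minimal docker-compose.yml for the Dockerfile above could look like this (a sketch; the service name, port, and mount are placeholders):
version: '2'
services:
  web:
    build: .
    command: bundle exec puma -C config/puma.rb
    ports:
      - "3000:3000"
    volumes:
      - .:/backend
Also note that Paperclip saves files under public/system by default, and the VOLUME on $INSTALL_PATH/public means those files live in a Docker volume rather than in the image or on your host, so uploads can easily look like they vanished if you inspect the wrong side of the mount.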
I'm trying to upgrade a Dockerized Rails app to Ruby 3+ and Rails 7. Due to other project dependencies, I've landed on this Ruby Docker image: ruby:3.0.4-alpine. Here's my Dockerfile:
FROM ruby:3.0.4-alpine
RUN apk --update add --no-cache build-base bash git vim curl postgresql-dev openssl-dev nodejs npm yarn tzdata less \
imagemagick postgresql-client gcompat
RUN mkdir /app
WORKDIR /app
COPY Gemfile Gemfile.lock package.json yarn.lock ./
RUN yarn set version berry
RUN bundle update --bundler
RUN bundle install --jobs 5
ADD . /app
RUN yarn install
RUN yarn install --frozen-lockfile \
&& RAILS_SECRET_KEY_BASE=secret_key_base RAILS_ENV=production bundle exec rails assets:precompile apipie:cache \
&& rm -rf node_modules
EXPOSE 5000
VOLUME ["/app/public", "/usr/local/bundle"]
CMD bash -c "bundle exec puma -C config/puma.rb"
At this point the only dependency in my Rails app that doesn't work with this Dockerfile is wkhtmltopdf.
I would prefer to install wkhtmltopdf with the Alpine Package Keeper (apk) as part of RUN apk --update add --no-cache.
But, it appears that the latest version of Alpine that has the wkhtmltopdf package available is Alpine 3.14. The release notes for Alpine 3.15 state "qt5-qtwebkit, kdewebkit, wkhtmltopdf, and py3-pdfkit have been removed due to known vulnerabilities and lack of upstream support for qtwebkit."
So, other than switching off of Alpine Linux entirely, which would present other dependency issues and which I would prefer not to do (Alpine seems to be the base used by most Dockerized Rails apps for the foreseeable future), what are my options?
I also tried using the wkhtmltopdf_binary gem, but that was last updated in 2016. When I try to export a PDF using wicked_pdf with wkhtmltopdf installed by that gem, I get this error:
/usr/local/bundle/gems/wkhtmltopdf-binary-0.12.6.5/bin/wkhtmltopdf:61:in `<top (required)>': Invalid platform, must be running on Ubuntu 16.04/18.04/20.04 CentOS 6/7/8, Debian 9/10, archlinux amd64, or intel-based Cocoa macOS (missing binary: /usr/local/bundle/gems/wkhtmltopdf-binary-0.12.6.5/bin/wkhtmltopdf_alpine_3.16.0_i386). (RuntimeError)
from /usr/local/bundle/bin/wkhtmltopdf:25:in `load'
from /usr/local/bundle/bin/wkhtmltopdf:25:in `<main>'
I believe I'm getting this error because the docker image ruby:3.0.4-alpine is running Alpine 3.16 and Alpine dropped support for wkhtmltopdf on version 3.14. The wkhtmltopdf_binary gem hasn't been updated since 2016, so it wouldn't target more recent versions of Alpine.
So, what are my options to work around this? Is there a way to script a compatible installation of wkhtmltopdf in my Dockerfile? If so, how would I do that?
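For example, would pinning the package to the Alpine 3.14 repositories work? Something like this (untested on my end, and I realize mixing packages across Alpine releases can cause its own problems):
RUN apk add --no-cache wkhtmltopdf \
    --repository=https://dl-cdn.alpinelinux.org/alpine/v3.14/community \
    --repository=https://dl-cdn.alpinelinux.org/alpine/v3.14/main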
I haven't been able to find another gem that installs wkhtmltopdf and targets more recent versions of Alpine. Is there one out there that I'm missing?
This has to be a common problem (Rails app running on Docker Alpine image with Ruby 3+ that needs to export PDFs using wkhtmltopdf).
Or, is there another way to export PDFs in ruby that doesn't depend on wkhtmltopdf that I haven't found? (I've found other NodeJS options, but that introduces a whole other set of dependencies, so a Docker/Ruby solution is preferable.)
I am trying to deploy my Rails project to Google App Engine for the first time and I'm having a lot of trouble.
I wanted to deploy my project with a custom runtime app.yaml (because I would like yarn to install the dependencies as well), but the deployment command fails with this error:
Error Response: [4] Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
PS: the app runs locally (development and production env).
My app.yaml looks like this:
entrypoint: bundle exec rails s -b '0.0.0.0' --port $PORT
env: flex
runtime: custom
env_variables:
  # My environment variables
beta_settings:
  cloud_sql_instances: ekoma-app:us-central1:ekoma-db
readiness_check:
  path: "/_ah/health"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 120
And my Dockerfile looks like this:
FROM l.gcr.io/google/ruby:latest
RUN apt-get update -qq && apt-get install -y apt-transport-https
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev imagemagick yarn
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN gem install pkg-config -v "~> 1.1"
RUN bundle install && npm install
COPY . /app
When deploying with the ruby runtime, I realized that the generated Dockerfile was much more complex and probably more complete, and that Google provides a repo for generating it.
So I tried to look into the public ruby-docker repo that Google shared, but I don't know how to use their generated Docker images and therefore fix my Dockerfile issue:
https://github.com/GoogleCloudPlatform/ruby-docker
Could someone help me figure out what's wrong in my setup and how to use these ruby-docker images (they seem very useful!)?
Thank you!
The "entrypoint" field in app.yaml is not used when a custom runtime is in play. Instead, set the CMD in your Dockerfile. e.g.:
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]
That probably will get your application running. (Remember that environment variables are not interpolated in exec form, so I replaced your $PORT with the hard-coded port 8080, which is the port App Engine expects.)
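To make the difference concrete (a minimal sketch):
# Exec form: no shell is involved, so $PORT would never be expanded.
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]
# Shell form: runs via /bin/sh -c, so $PORT is expanded at runtime.
CMD bundle exec rails s -b '0.0.0.0' --port $PORT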
As an alternative:
It may be possible to use the Ruby runtime images in the ruby-docker repo, and not have to use a custom runtime (i.e. you may not need to write your own Dockerfile), even if you have custom build steps like doing yarn installs. Most of the build process in runtime: ruby is customizable, but it's not well-documented. If you want to try this path, the TL;DR is:
Use runtime: ruby in your app.yaml and don't provide your own Dockerfile. (And reinstate the entrypoint of course.)
If you want to install Ubuntu packages not normally present in runtime: ruby, list them in app.yaml under runtime_config:packages. For example:
runtime_config:
  packages:
    - libgeos-dev
    - libproj-dev
If you want to run custom build steps, list them in app.yaml under runtime_config:build. They get executed in the Dockerfile after the bundle install step (which cannot itself be modified). For example:
runtime_config:
  build:
    - npm install
    - bundle exec rake assets:precompile
    - bundle exec rake setup_my_stuff
Note that by default, if you don't provide custom build steps, the ruby runtime behaves as if there is one build step: bundle exec rake assets:precompile || true. That is, by default, runtime: ruby will attempt to compile your assets during app engine deployment. If you do modify the build steps and you want to keep this behavior, make sure you include that rake task as part of your custom build steps.
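For example, custom build steps that keep the default asset compilation might look like this (a sketch):
runtime_config:
  build:
    - npm install
    - bundle exec rake assets:precompile || true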
Why would you copy Gemfile.lock, run bundle install (which can write a new Gemfile.lock), and then immediately copy the current directory, which contains the original Gemfile.lock, overwriting the one Bundler just created inside the Docker container?
Also why can you get away with not having EXPOSE 3000?
https://docs.docker.com/compose/rails/#define-the-project
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
That isn't the only place it does that. It's also done here, which seems pretty official. Maybe I'm missing a fundamental aspect of Docker?
https://hub.docker.com/_/ruby/
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
More a guess than an answer, but you often order the steps in a Dockerfile this way to take advantage of the build cache. Changes to your application code rarely touch the Gemfiles, so copying the Gemfiles and running bundle install before copying the rest of the app lets Docker reuse the cached bundle install layer for any change that doesn't affect the Gemfiles.
Documentation on build caching: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#build-cache
Regarding the second part of this question:
Also why can you get away with not having EXPOSE 3000?
The full Dockerfile that you referenced does contain this line:
EXPOSE 3000
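Note also that EXPOSE on its own is mostly documentation: it does not publish the port. Publishing happens at run time, via docker run -p or a ports: mapping in docker-compose.yml, and that works whether or not the Dockerfile contains an EXPOSE line. A minimal compose sketch (names are placeholders):
services:
  web:
    build: .
    ports:
      - "3000:3000"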
I'm trying to move our Rails app over to a Docker deployment; however, I can't manage to get Bundler to install from a GitHub reference.
With the below Dockerfile:
FROM ruby:2.3.0-slim
MAINTAINER Chris Jewell <chrisjohnjewell@gmail.com>
# Install dependencies:
# - build-essential: To ensure certain gems can be compiled
# - nodejs: Compile assets
# - libpq-dev: Communicate with postgres through the postgres gem
# - postgresql-client-9.4: In case you want to talk directly to postgres
RUN apt-get update && apt-get install -qq -y build-essential nodejs libpq-dev postgresql-client-9.4 --fix-missing --no-install-recommends
# Set an environment variable to store where the app is installed to inside
# of the Docker image.
ENV INSTALL_PATH /ventbackend
RUN mkdir -p $INSTALL_PATH
# This sets the context where commands will be run and is documented
# on Docker's website extensively.
WORKDIR $INSTALL_PATH
# Ensure gems are cached and only get updated when they change. This will
# drastically speed up builds when your gems do not change.
COPY Gemfile Gemfile
RUN bundle install
# Copy in the application code from your work station at the current directory
# over to the working directory.
COPY . .
# Provide dummy data to Rails so it can pre-compile assets.
RUN bundle exec rake RAILS_ENV=production DATABASE_URL=postgresql://user:pass@127.0.0.1/dbname SECRET_TOKEN=pickasecuretoken assets:precompile
# Expose a volume so that nginx will be able to read in assets in production.
VOLUME ["$INSTALL_PATH/public"]
# The default command that gets run will be to start the Unicorn server.
CMD bundle exec unicorn -c config/unicorn.rb
I get the following error when trying to run docker-compose up:
You need to install git to be able to use gems from git repositories. For help
installing git, please refer to GitHub's tutorial at
https://help.github.com/articles/set-up-git
I'm assuming that this is because of lines in the Gemfile like:
gem 'logstasher', github: 'MarkMurphy/logstasher', ref: 'be3e871385bde7b1897ec2a1831f868a843d8000'
However, we also use some private Gems as well.
Is installing Git on the container the way to go? How will this authenticate with GitHub?
Is installing Git on the container the way to go?
In that case, yes: you can see an example at "Using Docker to maintain a Ruby gem". Its Dockerfile does include:
# ~~~~ OS Maintenance ~~~~
RUN apt-get update && apt-get install -y git
How will this authenticate with GitHub?
It does not have to authenticate with GitHub in order to read, i.e. clone, a public repository.
If you needed to push a gem back (to publish it), then you would need, for instance, your SSH keys (mounted through a volume).
But that is not needed here.
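For the private gems mentioned in the question, Bundler can also pick up credentials from environment variables such as BUNDLE_GITHUB__COM. A sketch (GITHUB_TOKEN is a hypothetical build argument; plain build args can leak into image history, so treat this as illustrative rather than a secure pattern):
# Pass a token at build time: docker build --build-arg GITHUB_TOKEN=... .
ARG GITHUB_TOKEN
RUN BUNDLE_GITHUB__COM="${GITHUB_TOKEN}:x-oauth-basic" bundle install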
I have been trying to dockerize my Rails application on Elastic Beanstalk. There are a lot of examples out there but most don't fit my specific use case. That is:
Running under a single container Docker environment (so no need for docker-compose/fig)
Run on Amazon Elastic Beanstalk.
Make use of passenger-docker as the base image (one of the Ruby variants).
Pass environment variables set by Elastic Beanstalk (either through the CLI or the console).
Nginx and Passenger in the container.
Ability to install custom packages (extend it).
Reasonable .dockerignore file.
The question here is not the deployment process itself, but rather the right Docker configuration that satisfies the above criteria on Amazon Elastic Beanstalk.
What's the right configuration to get this running?
This is what worked for me ...
Dockerfile
In this example I use phusion/passenger-ruby22:0.9.16 as the base image because:
Your Dockerfile can be smaller.
It reduces the time needed to write a correct Dockerfile. You won't have to worry about the base system and the stack; you can focus on just your app.
It sets up the base system correctly. It's very easy to get the base system wrong, but this image does everything correctly. Learn more.
It drastically reduces the time needed to run docker build, allowing you to iterate your Dockerfile more quickly.
It reduces download time during redeploys. Docker only needs to download the base image once: during the first deploy. On every subsequent deploy, only the changes you make on top of the base image are downloaded.
You can learn more about it here ... anyway, onto the Dockerfile.
# The FROM instruction sets the Base Image for subsequent instructions. As such,
# a valid Dockerfile must have FROM as its first instruction. We use
# phusion/baseimage as a base image. To make our builds reproducible, we make
# sure we lock down to a specific version, not to `latest`!
FROM phusion/passenger-ruby22:0.9.16
# The MAINTAINER instruction allows you to set the Author field of the generated
# images.
MAINTAINER "Job King'ori Maina" <yo#kingori.co> (#itsmrwave)
# The RUN instructions will execute any commands in a new layer on top of the
# current image and commit the results. The resulting committed image will be
# used for the next step in the Dockerfile.
# === 1 ===
# Prepare for packages
RUN apt-get update --assume-yes && apt-get install --assume-yes build-essential
# For a JS runtime
# http://nodejs.org/
RUN apt-get install --assume-yes nodejs
# For Nokogiri gem
# http://www.nokogiri.org/tutorials/installing_nokogiri.html#ubuntu___debian
RUN apt-get install --assume-yes libxml2-dev libxslt1-dev
# For RMagick gem
# https://help.ubuntu.com/community/ImageMagick
RUN apt-get install --assume-yes libmagickwand-dev
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# === 2 ===
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# === 3 ===
# By default Nginx clears all environment variables (except TZ). Tell Nginx to
# preserve these variables. See nginx-env.conf.
COPY nginx-env.conf /etc/nginx/main.d/rails-env.conf
# Nginx and Passenger are disabled by default. Enable them (start Nginx/Passenger).
RUN rm -f /etc/service/nginx/down
# Expose Nginx HTTP service
EXPOSE 80
# === 4 ===
# Our application should be placed inside /home/app. The image has an app user
# with UID 9999 and home directory /home/app. Our application is supposed to run
# as this user. Even though Docker itself provides some isolation from the host
# OS, running applications without root privileges is good security practice.
RUN mkdir -p /home/app/myapp
WORKDIR /home/app/myapp
# Run Bundler in a cache-efficient way. Before copying the whole app, copy just
# the Gemfile and Gemfile.lock into the app directory and run bundle install
# from there. If neither file has changed, both COPY instructions are cached,
# and because they are cached, the subsequent bundle install command is also
# eligible for the cache. Why? How? See ...
# http://ilikestuffblog.com/2014/01/06/how-to-skip-bundle-install-when-deploying-a-rails-app-to-docker/
COPY Gemfile /home/app/myapp/
COPY Gemfile.lock /home/app/myapp/
RUN chown -R app:app /home/app/myapp
RUN sudo -u app bundle install --deployment --without test development doc
# === 5 ===
# Adding our web app to the image ... only after bundling do we copy the rest of
# the app into the image.
COPY . /home/app/myapp
RUN chown -R app:app /home/app/myapp
# === 6 ===
# Remove the default site. Add a virtual host entry to Nginx which describes
# where our app is, and Passenger will take care of the rest. See nginx.conf.
RUN rm /etc/nginx/sites-enabled/default
COPY nginx.conf /etc/nginx/sites-enabled/myapp.conf
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/home/app/myapp/log"
}
.dockerignore
/.bundle
/.DS_Store
/.ebextensions
/.elasticbeanstalk
/.env
/.git
/.yardoc
/log/*
/tmp
!/log/.keep
nginx-env.conf
Please note that rails-env.conf only preserves these variables for Nginx and its child processes; it doesn't set any environment variables outside Nginx, so you won't be able to see them in a shell (e.g. in the Dockerfile's RUN steps). You will have to use a different method to set environment variables for the shell too; see the note after the config below.
# By default Nginx clears all environment variables (except TZ) for its child
# processes (Passenger being one of them). That's why any environment variables
# we set with docker run -e, Docker linking and /etc/container_environment,
# won't reach Nginx. To preserve these variables, place an Nginx config file
# ending with *.conf in the directory /etc/nginx/main.d, in which we tell Nginx
# to preserve these variables.
# Set by Passenger Docker
env RAILS_ENV;
env RACK_ENV;
env PASSENGER_APP_ENV;
# Set by AWS Elastic Beanstalk (examples, change accordingly)
env AWS_ACCESS_KEY_ID;
env AWS_REGION;
env AWS_SECRET_KEY;
env DB_NAME;
env DB_USERNAME;
env DB_PASSWORD;
env DB_HOSTNAME;
env DB_PORT;
env MAIL_USERNAME;
env MAIL_PASSWORD;
env MAIL_SMTP_HOST;
env MAIL_PORT;
env SECRET_KEY_BASE;
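As noted above, these directives only cover Nginx and the processes it spawns. To make a variable visible to other processes launched through baseimage-docker's my_init as well, one option (a sketch) is to write it to /etc/container_environment in the Dockerfile:
RUN echo "production" > /etc/container_environment/RAILS_ENV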
nginx.conf
server {
    listen 80;
    server_name _;
    root /home/app/myapp/public;

    # The following deploys your app on Passenger.
    # Not familiar with Passenger, and used (G)Unicorn/Thin/Puma/pure Node before?
    # Yes, this is all you need to deploy on Passenger! All the reverse proxying,
    # socket setup, process management, etc. are taken care of automatically for
    # you! Learn more at https://www.phusionpassenger.com/.
    passenger_enabled on;
    passenger_user app;

    # Ensures that RAILS_ENV, RACK_ENV, PASSENGER_APP_ENV, etc. are set to
    # "production" when your application is started.
    passenger_app_env production;

    # Since this is a Ruby app, specify a Ruby version:
    passenger_ruby /usr/bin/ruby2.2;
}