Problem Loading Ruby Rails Unicorn in ECS Fargate When Building Image in CircleCI (Works Locally)

I am having issues deploying a Ruby on Rails app to ECS Fargate. When I build the image locally (the same way it is built in the pipeline), I can start the web service with the command ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]. However, running the same image in Fargate fails with: 2022-07-27 15:46:33 bundler: failed to load command: unicorn (/usr/local/bundle/bin/unicorn)
Here is my Docker file:
FROM ruby:2.6.7
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client sudo
WORKDIR /app
ENV RAILS_ENV="staging"
ENV NODE_ENV="staging"
ENV LANG="en_US.UTF-8"
ENV RACK_ENV="staging"
ENV BUNDLE_WITHOUT='development:test'
ARG GITHUB_TOKEN
RUN gem install bundler -v '2.2.28'
RUN bundle config https://github.com/somename/somerepo someuser:"${GITHUB_TOKEN}"
COPY . /app
RUN rm -rf /app/tmp
RUN mkdir -p /app/tmp
RUN bundle install
RUN bundle exec rails assets:precompile
EXPOSE 3000
The CMD is missing from the Dockerfile because it's added in the ECS task definition. Has anyone run into this issue? I've tried a number of different approaches, but I'm unsure what differs between running locally and running in Fargate.
Update
Looking into this further, I found some more information worth adding here.
I am using CircleCI to build and push this image to ECR. The issue seems to be that when the image is built in CircleCI, the artifacts created by bundle install are unreachable at run time: the gem path is not accessible, even to the root user, so no gems can be run. I pulled the CircleCI-built image locally and confirmed the same errors. Exec-ing into the container, running chmod 755 -R /usr/local/bundle/bin/, and then running the bundle exec command properly starts unicorn.
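Roughly, the manual check against the pulled image looked like this (the registry and tag here are placeholders):
# open a shell in the image pulled from ECR
docker run -it --rm <account>.dkr.ecr.<region>.amazonaws.com/myapp:latest bash
# inside the container: fix permissions on the bundler binstubs, then start the service
chmod 755 -R /usr/local/bundle/bin/
bundle exec unicorn -c config/unicorn.rb    # unicorn now boots as expected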
Next Steps
As a result, I added those changes to the Dockerfile at build time, but the same behavior persists:
RUN bundle install
RUN chmod 755 -R /usr/local/bundle/bin/
Then I tried changing the permissions in an entrypoint script instead, and the container won't start at all.
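For reference, the entrypoint approach was roughly this shape (a sketch only; the real script isn't reproduced here):
#!/bin/bash
# hypothetical bin/entrypoint.sh: relax the binstub permissions, then hand off to the CMD
set -e
chmod 755 -R /usr/local/bundle/bin/
exec "$@"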

Finally figured this out a few days ago. The answer is to add VOLUME instructions at the end of your Dockerfile; this preserves the changes you have made to those paths during the build. My final Dockerfile:
FROM ruby:2.6.7
ARG NPM_TOKEN
RUN apt-get update -qq && apt-get install -y \
build-essential \
curl \
postgresql-client \
software-properties-common \
sudo
RUN curl -fsSL https://deb.nodesource.com/setup_9.x | sudo -E bash - && \
sudo apt-get install -y nodejs
RUN apt-get install -y \
npm \
yarn
ENV BUNDLE_WITHOUT='development:test'
ENV RAILS_ENV="staging"
ENV RACK_ENV="staging"
ENV NPM_TOKEN="${NPM_TOKEN}"
RUN mkdir -p /dashboard
WORKDIR /dashboard
RUN mkdir -p /dashboard/tmp/pids
COPY Gemfile* ./
COPY gems/rails_admin_history_rollback /dashboard/gems/rails_admin_history_rollback
ARG GITHUB_TOKEN
RUN sudo gem install bundler -v '2.2.28' && \
bundle config https://github.com/some-company/some-repo some-name:"${GITHUB_TOKEN}"
RUN bundle install
COPY . /dashboard
RUN bundle exec rails assets:precompile
RUN chmod +x bin/entrypoint.sh
VOLUME /dashboard/
VOLUME /usr/local/bundle
EXPOSE 3000
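A quick sanity check of the built image before pushing (the tag and build-arg values below are placeholders):
docker build -t dashboard:staging --build-arg GITHUB_TOKEN=... --build-arg NPM_TOKEN=... .
docker run --rm dashboard:staging ls -l /usr/local/bundle/bin
docker run --rm -p 3000:3000 dashboard:staging bundle exec unicorn -c config/unicorn.rb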

Related

Different conditionals on the same Dockerfile

I have a Dockerfile with some commands I would like to use conditionally:
FROM + image name (I'm on an M1 MacBook, so I need to add --platform=linux/amd64, but I deploy to an AWS EC2 Linux instance that doesn't need it)
In production I would like to run my project behind nginx, so I want the Dockerfile to end with RUN mkdir -p tmp/sockets. But for testing I have no need of nginx, so I would like my Dockerfile to end with this instead:
# Expose port
EXPOSE 3000
# Start rails
CMD ["rails", "server", "-b", "0.0.0.0"]
I thought of using a multi-stage Dockerfile to solve the FROM image problem, but the resulting Dockerfile is quite lengthy, since both stages are basically the same except for the FROM line.
For the nginx part I wanted to use a shell script, but I am not sure how to express the exposed port and the final command to start Rails.
These are the files:
run_dockerfile.sh
#!/bin/bash
if [ ${RUN_DOCKERFILE} = "PROD" ]; then
mkdir -p tmp/sockets
else
????
fi
My Dockerfile looks like this:
# Start from the official ruby image
# To run Dockerfile with arm64 architecture (M1 chip MacOS for example)
FROM --platform=linux/amd64 ruby:2.6.6 AS ARM64
# Set environment
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set RAILS_ENV to 'development' or set to null otherwise.
ENV RAILS_ENV=${BUILD_DEVELOPMENT:+development}
# if RAILS_ENV is null, set it to 'production' (or leave as is otherwise).
ENV RAILS_ENV=${RAILS_ENV:-production}
# Update and install JS & DB
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
# Create a directory for the application and use it
RUN mkdir /myapp
WORKDIR /myapp
# Gemfile and lock file need to be present, they'll be overwritten immediately
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Install gem dependencies
RUN gem install bundler:2.2.32
RUN bundle install
RUN curl https://deb.nodesource.com/setup_12.x | bash
ADD https://dl.yarnpkg.com/debian/pubkey.gpg /tmp/yarn-pubkey.gpg
RUN apt-key add /tmp/yarn-pubkey.gpg && rm /tmp/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn && apt-get install -y npm
RUN yarn add bootstrap
COPY . /myapp
# So that webpacker compiles
RUN yarn config set ignore-engines true
RUN rm -rf bin/webpack*
RUN rails webpacker:install
RUN bundle exec rails webpacker:compile
RUN bundle exec rake assets:precompile
# This script runs every time the container is created, necessary for rails
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Run run_dockerfile.sh
COPY run_dockerfile.sh run_dockerfile.sh
RUN chmod u+x run_dockerfile.sh && ./run_dockerfile.sh
##################################################
# Start from the official ruby image
# To run Dockerfile without arm64 architecture
FROM ruby:2.6.6 AS AMD64
# Set environment
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set RAILS_ENV to 'development' or set to null otherwise.
ENV RAILS_ENV=${BUILD_DEVELOPMENT:+development}
# if RAILS_ENV is null, set it to 'production' (or leave as is otherwise).
ENV RAILS_ENV=${RAILS_ENV:-production}
# Update and install JS & DB
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
# Create a directory for the application and use it
RUN mkdir /myapp
WORKDIR /myapp
# Gemfile and lock file need to be present, they'll be overwritten immediately
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Install gem dependencies
RUN gem install bundler:2.2.32
RUN bundle install
RUN curl https://deb.nodesource.com/setup_12.x | bash
ADD https://dl.yarnpkg.com/debian/pubkey.gpg /tmp/yarn-pubkey.gpg
RUN apt-key add /tmp/yarn-pubkey.gpg && rm /tmp/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y yarn && apt-get install -y npm
RUN yarn add bootstrap
COPY . /myapp
# So that webpacker compiles
RUN yarn config set ignore-engines true
RUN rm -rf bin/webpack*
RUN rails webpacker:install
RUN bundle exec rails webpacker:compile
RUN bundle exec rake assets:precompile
# This script runs every time the container is created, necessary for rails
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Run run_dockerfile.sh
COPY run_dockerfile.sh run_dockerfile.sh
RUN chmod u+x run_dockerfile.sh && ./run_dockerfile.sh
Is there any way I could do this with the .sh script, or are there any recommendations on the proper way to do it? Thank you!
From the way you've described the problem, you don't really need very many special cases at all.
The one important detail is that it's very easy to override the image's CMD when you run a container. If you have two Compose files, for example, you can just set the service's command:
# docker-compose.yml
version: '3.8'
services:
  myapp:
    image: registry.example.com/myapp:${MYAPP_TAG:-latest}
    ports: ['3000:80']

# docker-compose.override.yml
# for developer use
version: '3.8'
services:
  myapp:
    build: .
    command: rails server -b 0.0.0.0 -p 80
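Compose reads docker-compose.yml plus docker-compose.override.yml automatically, so the two setups differ only in which files you pass:
# developer machine: base + override are merged automatically
docker compose up --build
# production-like run: only the base file, so the registry image and its built-in CMD are used
docker compose -f docker-compose.yml up -d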
The other variations you list shouldn't matter. You should get consistent results if you build your image FROM --platform=linux/amd64 on an x86-64 host, since you're just explicitly specifying the native platform, and a RUN mkdir for a directory you won't use is harmless. The one real inconsistency seems to be the container port, but you can explicitly tell rails server which port to use so it matches. I'd use the same image in all environments.
FROM --platform=linux/amd64 ruby:2.6.6 # even on an Intel/AMD host system
...
RUN mkdir tmp/sockets # even if it's unused
CMD ["nginx", "-g", "daemon off;"] # can be overridden when the container runs

Docker image stuck on bundling

I've been trying to build my Docker image:
docker build -t <tag> -f Dockerfile.production .
However, this hangs while bundling.
I have tried bundling with:
DEBUG_RESOLVER=true bundle install --verbose
Running bundle on my host machine works fine - only the Docker image has this problem.
Attached is my Dockerfile:
FROM cimg/ruby:2.7.4-node
LABEL maintainer=budgeneration#gmail.com
SHELL ["/bin/bash", "-c"]
USER root
RUN sudo apt-get update && \
apt-get install -y nodejs npm libvips-tools libsodium-dev \
apt-transport-https ca-certificates curl software-properties-common \
librocksdb-dev \
libsnappy-dev \
python3-distutils \
rsyslog --no-install-recommends
# Other tools not related to building but still required.
RUN sudo apt-get install -y ffmpeg gifsicle
USER circleci
# Install all gems first.
# This hits the warm cache if unchanged so bundling is faster.
COPY Gemfile* /tmp/
WORKDIR /tmp
RUN gem install sassc-rails -v 2.1.2
RUN gem install bulma-rails -v 0.9.1
RUN bundle config set without 'development test' \
&& bundle install --verbose \
&& bundle binstubs railties
# Remove yarn (the other yarn)
RUN apt-get purge cmdtest
RUN yarn global add mjml
WORKDIR /sapco
# First we copy just Yarn files, to run yarn install
COPY package.json /sapco
COPY yarn.lock /sapco
RUN yarn install
WORKDIR /sapco
# Now copy everything
COPY . /sapco
EXPOSE 3000
Any tips to try to debug this further?
I have managed to resolve this issue, though I'm not sure why it works:
I changed my base image to FROM circleci/ruby:2.7.4-buster-node and the bundling step now completes fine.
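For anyone debugging a similar hang, forcing an uncached build with plain BuildKit output makes the bundle install --verbose log visible during the image build (a sketch using the same build command as above):
DOCKER_BUILDKIT=1 docker build --progress=plain --no-cache -t <tag> -f Dockerfile.production .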

Gcloud meanjs build failing via docker install

I'm trying to deploy the following container on Google Cloud App Engine using gcloud app deploy. It's the vanilla meanjs.org image and it uses a Dockerfile. I'm new to Docker and learning it on the fly, so any help would be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically compared to what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this question shortly.
The errors are shown in the attached screenshots: first Docker error, second error, and the build fail error.
The docker file..
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and utilities docker container cache to not needing to rebuild
# and install node_modules/ everytime we build the docker, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much unsuccessful wrestling trying to install Docker on Windows, I went back to the Dockerfile to try to identify the core issue. Fortunately I found a solution, as follows.
Node.js is attempting to install on Ubuntu. In the Dockerfile at the root of the app, the Ubuntu version is configured as:
FROM ubuntu:latest
simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully. Please feel free to amend or recommend an alternative solution. I'm new to Docker, so please be kind.
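To verify the change locally before running gcloud app deploy again, a quick sketch using the build/run commands from the comments at the top of the Dockerfile:
docker build -t meanjs/mean .
docker run --rm meanjs/mean node --version   # should print a Node 6.x version if the nodesource setup worked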

Docker: Reverse Engineering of an Image

With Docker it's very easy to push and pull images to a public repository on https://hub.docker.com, but that repository is free only for public images (only one image can be private).
Is it currently possible to reverse engineer a public image from that repository and read the project's source code?
You can check how an image was created using docker history <image-name> --no-trunc
Update:
Check dive, which is a very nice tool that allows you to view image layers.
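For example, against any image you have pulled locally (dive is installed separately):
docker pull ruby:2.6.7
docker history ruby:2.6.7 --no-trunc   # prints the command that created each layer
dive ruby:2.6.7                        # browse each layer's filesystem changes interactively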
As yamenk said, docker history is the key to this.
Since https://github.com/CenturyLinkLabs/dockerfile-from-image is broken, you can use the more recent
https://hub.docker.com/r/dduvnjak/dockerfile-from-image/
Extract from the site:
Note that the script only works against images that exist in your local image repository (the stuff you see when you type docker images). If you want to generate a Dockerfile for an image that doesn't exist in your local repo, you'll first need to docker pull it.
For example, you can run it against itself to see the code:
$ docker run --rm -v /run/docker.sock:/run/docker.sock centurylink/dockerfile-from-image ruby
FROM buildpack-deps:latest
RUN useradd -g users user
RUN apt-get update && apt-get install -y bison procps
RUN apt-get update && apt-get install -y ruby
ADD dir:03090a5fdc5feb8b4f1d6a69214c37b5f6d653f5185cddb6bf7fd71e6ded561c in /usr/src/ruby
WORKDIR /usr/src/ruby
RUN chown -R user:users .
USER user
RUN autoconf && ./configure --disable-install-doc
RUN make -j"$(nproc)"
RUN make check
USER root
RUN apt-get purge -y ruby
RUN make install
RUN echo 'gem: --no-rdoc --no-ri' >> /.gemrc
RUN gem install bundler
ONBUILD ADD . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN [ ! -e Gemfile ] || bundle install --system
You can use laniksj/dfimage to reverse engineer an image.
For example:
# docker run -v /var/run/docker.sock:/var/run/docker.sock laniksj/dfimage <YOUR_IMAGE_ID>
FROM node:12.4.0-alpine
RUN /bin/sh -c apk update
RUN /bin/sh -c apk -Uuv add groff less python py-pip
RUN /bin/sh -c pip install awscli
RUN /bin/sh -c apk --purge -v del py-pip
RUN /bin/sh -c rm /var/cache/apk/*
RUN /bin/sh -c apk add --no-cache curl
ADD dir:4afc740ff29e4a32a34617d2715e5e5dc8740f357254bc6d3f9362bb04af0253 in /app
COPY file:b57abdb61ae72f3a25be67f719b95275da348f9dfb63fb4ff67410a595ae1dfd in /usr/local/bin/
WORKDIR /app
RUN /bin/sh -c npm install
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node" "app.js"]
dfimage and dockerfile-from-image are broken, but dedockify works:
imageName=ruby:latest
docker pull $imageName
docker images # -> get imageId
imageId=xxxxxxxxxxxx
# write to Dockerfile
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock mrhavens/dedockify $imageId >Dockerfile

Dockerfile - how to run script?

I created a Dockerfile, but I'm unable to get the Rails setup script (./bin/setup) to execute.
What am I doing wrong? RUN /bin/bash -C "/usr/src/app/bin/setup" does not work.
I also tried RUN ./bin/setup (this also does not work!).
Dockerfile
FROM ruby:2.3
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV RAILS_VERSION 5
RUN gem install rails --version "$RAILS_VERSION"
WORKDIR /usr/src/app
COPY . .
# setup does not run, why?
RUN /bin/bash -C "/usr/src/app/bin/setup"
...
I was facing a similar DOS/Unix line-ending issue: I had checked a file out with git on Windows and added it to a Linux Docker image. If that is your case, sed is your friend. Just add the following to your Dockerfile:
RUN /bin/sed s/\\r//g -i /usr/src/app/bin/setup
This might save you from installing an additional package. Hope it helps!
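If installing one extra package is acceptable, dos2unix does the same job; a sketch of the equivalent Dockerfile line, using the path from the question:
RUN apt-get update && apt-get install -y dos2unix \
    && dos2unix /usr/src/app/bin/setup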
