Running Rails CLI commands through Docker bash in Azure

I have a containerized Rails application deployed on an App Service in Azure. I've enabled SSH for my Docker container in order to manually run some rake tasks and execute Rails CLI commands.
The issue:
Logging in through SSH in the Azure portal does not let me run any commands (rake tasks, migrations, etc.).
I always run into a "command not found" error, even though the application is successfully deployed and running, which must mean Rails and all the gems are installed somewhere. Bundler is installed in the Docker container, along with Ruby.
My dockerfile:
FROM ruby:2.6.3
....
WORKDIR /app
COPY . /app
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
ADD Gemfile /app
ADD Gemfile.lock /app
RUN gem install bundler
RUN bundle config set --local without 'test'
RUN bundle config set --local with 'runtime'
RUN bundle config set --local deployment 'true'
RUN bundle install
EXPOSE 3000 80 2222
RUN ["chmod","+x","entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
Any help is highly appreciated!
I've tried executing which ruby and looking in the gems folder, but I've only found bundler there. I've tried setting GEM_HOME and GEM_PATH to point to my local app, but once again only bundler is installed there and all the other gems are missing.
Executing which/locate rails does not find the installation.
When I try to run bin/rails, it complains that the other gems are not installed.
What is the issue here? Is there another way I should be doing this through azure?

I think environment variables are not automatically passed on to SSH sessions. I had to add this to my Docker image start script:
# This makes env variables available in the SSH session too.
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//=\"/' | sed 's/$/\"/' >> /etc/profile)
Check this link.
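For context, a minimal sketch of a start script that applies this, assuming sshd is already configured for App Service per Azure's docs (the service name, port, and app command are illustrative, not from the question):
#!/bin/sh
# Make App Service app settings visible in SSH sessions too
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//=\"/' | sed 's/$/\"/' >> /etc/profile)
# Start the SSH daemon the Azure portal connects to (port 2222)
service ssh start
# Then start the app as usual
bundle exec rails server -b 0.0.0.0 -p 3000
After that, an SSH session should see the same environment the app sees, so something like cd /app && bundle exec rake db:migrate should work.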

Related

Dockerfile can't find output on Linux

I have the following Dockerfile
ARG JEKYLL_VERSION=4
FROM jekyll/jekyll:$JEKYLL_VERSION as BUILD
COPY --chown=jekyll:jekyll . /srv/jekyll
RUN ls -lah /srv/jekyll
RUN jekyll build
RUN ls /srv/jekyll/_site/
FROM nginxinc/nginx-unprivileged:alpine
ADD build/nginx-site.conf /etc/nginx/conf.d/default.conf
COPY --chown=101:101 --from=BUILD /srv/jekyll/_site/ /var/www
Which does build perfectly locally, but not on the Linux Jenkins Buildslave:
Bundle complete! 1 Gemfile dependency, 28 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux-musl]
Configuration file: /srv/jekyll/_config.yml
Source: /srv/jekyll
Destination: /srv/jekyll/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.186 seconds.
Auto-regeneration: disabled. Use --watch to enable.
Removing intermediate container 2b064db0ccaa
---> 1e19e78f593a
Step 6/9 : RUN ls /srv/jekyll/_site/
---> Running in 194d35c3f691
ls: /srv/jekyll/_site/: No such file or directory
The command '/bin/sh -c ls /srv/jekyll/_site/' returned a non-zero code: 1
Changing RUN jekyll build to RUN jekyll build && ls /srv/jekyll/_site/ lists the jekyll output as expected.
Why is the output of the jekyll build not stored? If I remove the ls command, the output can't be found in the later stage.
Any hints?
That happens because the image declares that directory as a VOLUME. Anything a RUN step writes into a declared VOLUME goes into a temporary anonymous volume and is discarded when that step's container exits, which is why _site is gone by the next step. This is probably a bug in the image and I'd suggest reporting it as a GitHub issue; removing that line wouldn't affect any of the documented uses of the image.
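A minimal reproduction of the mechanics, assuming the classic (non-BuildKit) builder (BuildKit doesn't mount anonymous volumes during builds, which may be why the same Dockerfile builds fine locally):
FROM alpine
VOLUME /data
RUN touch /data/hello   # written into a temporary anonymous volume, not the image layer
RUN ls /data            # empty again: the previous step's write was discarded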
At the end of the day Jekyll is just a Ruby gem (library). So while that image installs a lot of things, for your use it might be enough to start from a ruby image, install the gem, and use it:
FROM ruby:3.1
RUN gem install jekyll
WORKDIR /site
COPY . .
RUN jekyll build
If your directory tree contains a Ruby Gemfile and Gemfile.lock, you could also COPY those into the image and RUN bundle install instead.
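A sketch of that variant, assuming your Gemfile pins jekyll:
FROM ruby:3.1
WORKDIR /site
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
RUN bundle exec jekyll build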
Alternatively, since Jekyll is a static site generator, you should get the same results whether you build the site on the host or in a container. Another good path could be to install Ruby and Jekyll on your host (maybe in an rbenv gemset), then COPY the built site from the host into the Nginx image and skip the first stage entirely.

Deploying Ruby on Rails 6 - AWS Elastic Beanstalk - Docker: ArgumentError: Missing `secret_key_base`

I can't seem to figure out a proper way to set secret_key_base for a Ruby on Rails 6 / AWS Elastic Beanstalk / Docker deploy, so the deploy keeps failing. I've been trying to follow this tutorial: https://dev.to/fdoxyz/elastic-beanstalk-apps-using-docker-containers-56l8
System:
Ubuntu 18.04
ruby 2.6.5p114 (2019-10-01 revision 67812) [x86_64-linux]
Bundler version 2.1.4
Rails 6.0.2.1
Docker version 18.09.7, build 2d0083d
Node v12.16.1
Here are the steps I take from empty dir to deploy:
mkdir new_project && cd new_project
eb init
2) us-west-1 : US West (N. California)
2) [ Create new Application ]
(default is "new_project")
8) Docker
Do you want to set up SSH for your instances? Y
Select a keypair.
eb create
Enter Environment Name (default is new-project-dev)
Enter DNS CNAME prefix (default is new-project-dev)
Select a load balancer type: 2) application
enable Spot Fleet? n
download the sample application into the current directory? n
eb setenv SECRET_KEY_BASE=$(ruby -e "require 'securerandom';puts SecureRandom.hex(64)")
eb setenv RAILS_ENV=production
cat .gitignore
rails new .
vim .gitignore (paste old contents of gitignore)
touch Dockerfile
vim Dockerfile
===============
FROM ruby:2.6.5
# Install NodeJS & Yarn
RUN apt-get update && \
apt-get install apt-transport-https && \
curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
apt-get purge nodejs && \
apt-get update && \
apt-get install nodejs -y && \
npm install yarn -g && \
gem install bundler -v 2.1.4
# Workdir and add dependencies
WORKDIR /app/
ADD Gemfile Gemfile.lock /app/
# Throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
# Install dependencies
ARG RAILS_MASTER_KEY
ENV RAILS_ENV=production NODE_ENV=production RAILS_SERVE_STATIC_FILES=1
RUN bundle install --without development test
# Add the app code, precompile assets and use non-root user
ADD . /app/
RUN rake assets:precompile DISABLE_SPRING=1 && \
chown -R nobody:nogroup /app
USER nobody
ENV HOME /app
# Make sure to explicitly bind to port & interface
CMD ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"]
===============
vim config/environments/production.rb
insert at the top of the file:
config.secret_key_base = ENV["SECRET_KEY_BASE"]
git add . && git commit -m "Initial commit"
eb use new_project-dev
eb deploy
Here's a full log from ssh-ing to the instance at '/var/log/eb-activity.log':
https://raw.githubusercontent.com/maxtocarev/eb-log/master/eb-activity.log
The secret_key_base is a value stored in Rails encrypted credentials. It looks like the rake assets:precompile command inside the Dockerfile fails when building the Docker image because it needs the secret_key_base value, and the build doesn't have access to the config/master.key from your local project. I would recommend passing it to docker build with something like this:
docker build --build-arg RAILS_MASTER_KEY=${RAILS_MASTER_KEY} ...
I definitely don't recommend including config/master.key in the project itself, for security reasons; that's why I would use the ENV variable instead.
In this case it looks like you're using the Elastic Beanstalk "auto build" (meaning the Docker image is built from your source code on each deploy), so you're not building the image manually. This can be fixed by adding the RAILS_MASTER_KEY env variable using eb setenv RAILS_MASTER_KEY=XXXXXXXX or from the AWS web console.
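For example, a one-liner that copies the key from your local project into the environment (a sketch; it assumes config/master.key exists in your checkout):
eb setenv RAILS_MASTER_KEY=$(cat config/master.key)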

bundle exec rake assets:precompile takes incredible time with docker and rails_admin

How can I speed up the assets step (rake assets:precompile) when building a Docker image of a Rails app that uses rails_admin?
Issue :
I'm trying to build a Docker image of a Rails application (Dockerfile below), but the step RUN bundle exec rake assets:precompile hangs and takes an enormous amount of time, up to 20 minutes, and sometimes Docker stops the image build due to an execution timeout. I think it is related to rails_admin: when I disabled it, the image was built without all this waiting. With rails_admin enabled, the Docker log shows the task generating my own assets, then blocking for a considerable time before displaying the rails_admin assets and continuing the build.
Should I build the assets before running the build process, or is there something related to rails_admin that should be fixed?
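One way to narrow this down is rake's --trace flag, which prints each task as it starts (a debugging aid, not a fix):
RUN bundle exec rake assets:precompile --trace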
Os / Docker
Os : Windows 10
Docker : Version 18.03.1-ce-win65
Dockerfile
FROM ruby:2.3.3
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Set an environment variable where the Rails app is installed to inside of Docker image
ENV RAILS_ROOT /var/www/unk-web-app
RUN mkdir -p $RAILS_ROOT
# Set working directory
WORKDIR $RAILS_ROOT
# Setting env up
#ENV RAILS_ENV='production'
#ENV RACK_ENV='production'
# Adding gems
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
# https://stephencodes.com/upgrading-ruby-dockerfiles-to-use-bundler-2-0-1/
ENV BUNDLER_VERSION 2.0.2
#set the version in Gemfile
RUN bundle install --jobs 20 --retry 5 --without development test
# Adding project files
COPY . .
# get the database url from docker image building
ARG DATABASE_URI
ARG RAILS_ENV
# Set env
ENV DOCKER 1
RUN bundle exec rake db:migrate
RUN bundle exec rake assets:precompile
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]

ConnectionBad issue connecting a Rails 5 app running on Google Cloud Run to a Google Cloud SQL instance via socket

When I use the Google Cloud Run service, my Docker container returns the error:
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've enabled the Cloud SQL Admin API on the relevant project. I SSH'ed into the instance I was running, with GCP services available in the Google Cloud Shell, and checked /var/run/postgresql/.s.PGSQL.5432; there was nothing there. The Google Cloud Run docs say to set the socket path to something under /cloudsql/, but no socket appears to exist there either.
Nothing in cloud sql/run open issues or the issue tracker suggests that this should be an issue.
Deploy command uses the --add-cloudsql-instances flag without error, so I believe there should be no issue there.
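For reference, a sketch of what that deploy command looks like (the service and image names here are placeholders, not my real ones):
gcloud run deploy my-service \
  --image gcr.io/my-project-name/my-image \
  --add-cloudsql-instances my-project-name:asia-northeast1:project-database-name \
  --region asia-northeast1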
Relevant database.yml section:
staging:
  adapter: postgresql
  encoding: utf8
  pool: 5
  timeout: 5000
  database: project_staging
  username: project_staging
  password: <%= Rails.application.credentials[:db_password] %>
  socket: "/cloudsql/my-project-name:asia-northeast1:project-database-name/"
Dockerfile to set up the container -
FROM ruby:2.6.2
ARG environment
# Bunch of env code
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /usr/src/app
RUN gem install bundler
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install
COPY . .
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
ENV RAILS_LOG_TO_STDOUT=true
Do I need to install more than just postgresql-client here?
Almost certainly irrelevant, but the start script:
cd /usr/src/app
bundle exec rake db:create
bundle exec rake db:migrate
# Do some protective cleanup
> log/${RAILS_ENV}.log
rm -f tmp/pids/server.pid
bundle exec rails server -e ${RAILS_ENV} -b 0.0.0.0 -p $PORT
I'm honestly baffled here. Is it a configuration issue? A cloud run issue? Am I missing some kind of package? I expected it to just connect to the socket without issue on boot.
I have followed this Medium guide (parts 1, 2, 3 and 4) to create a Cloud Run service with Ruby and connect it to a Cloud SQL instance with no problem at all. Can you try comparing it to your deploy, or even follow its steps, to see where what you did differs from what they explain there?
Also, in case it helps, I've found a similar case in another post where they were facing the same issue, even though it's not deployed on Cloud Run. Another Medium post addresses this same issue too and gives a set of solutions.

Google cloud ruby deployment and ruby-docker

I am trying to put my Rails project on Google App Engine for the first time and I'm having a lot of trouble.
I wanted to upload my project with a custom runtime app.yaml (because I would like yarn to install the dependencies as well), but the deployment command fails with this error:
Error Response: [4] Your deployment has failed to become healthy in the allotted time and therefore was rolled back. If you believe this was an error, try adjusting the 'app_start_timeout_sec' setting in the 'readiness_check' section.
PS: the app runs locally (development and production env).
My app.yaml looks like this:
entrypoint: bundle exec rails s -b '0.0.0.0' --port $PORT
env: flex
runtime: custom
env_variables:
  # (my environment variables)
beta_settings:
  cloud_sql_instances: ekoma-app:us-central1:ekoma-db
readiness_check:
  path: "/_ah/health"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 120
And my Dockerfile looks like this:
FROM l.gcr.io/google/ruby:latest
RUN apt-get update -qq && apt-get install apt-transport-https
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev imagemagick yarn
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN gem install pkg-config -v "~> 1.1"
RUN bundle install && npm install
COPY . /app
When deploying with the ruby runtime, I realized that the generated Dockerfile was much more complex and probably more complete, and that Google provides a repo for generating it.
So I tried to look into the public ruby-docker repo that Google shared, but I don't know how to use their generated Docker images and thereby fix my Dockerfile issue:
https://github.com/GoogleCloudPlatform/ruby-docker
Could someone help me figure out what's wrong in my setup and how to use these ruby-docker images (they seem very useful!)?
Thank you!
The "entrypoint" field in app.yaml is not used when a custom runtime is in play. Instead, set the CMD in your Dockerfile. e.g.:
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "--port", "8080"]
That probably will get your application running. (Remember that environment variables are not interpolated in exec form, so I replaced your $PORT with the hard-coded port 8080, which is the port App Engine expects.)
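(If you do want the variable expanded at container start, a shell-form CMD would interpolate it, since shell form runs the command through /bin/sh -c:
CMD bundle exec rails s -b '0.0.0.0' --port $PORT
But on App Engine flex, hard-coding 8080 is fine.)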
As an alternative:
It may be possible to use the Ruby runtime images in the ruby-docker repo, and not have to use a custom runtime (i.e. you may not need to write your own Dockerfile), even if you have custom build steps like doing yarn installs. Most of the build process in runtime: ruby is customizable, but it's not well-documented. If you want to try this path, the TL;DR is:
Use runtime: ruby in your app.yaml and don't provide your own Dockerfile. (And reinstate the entrypoint of course.)
If you want to install ubuntu packages not normally present in runtime: ruby, list them in app.yaml under runtime_config:packages. For example:
runtime_config:
  packages:
  - libgeos-dev
  - libproj-dev
If you want to run custom build steps, list them in app.yaml under runtime_config:build. They get executed in the Dockerfile after the bundle install step (which cannot itself be modified). For example:
runtime_config:
  build:
  - npm install
  - bundle exec rake assets:precompile
  - bundle exec rake setup_my_stuff
Note that by default, if you don't provide custom build steps, the ruby runtime behaves as if there is one build step: bundle exec rake assets:precompile || true. That is, by default, runtime: ruby will attempt to compile your assets during app engine deployment. If you do modify the build steps and you want to keep this behavior, make sure you include that rake task as part of your custom build steps.
