permission denied while trying to start rails server in docker - ruby-on-rails

I'm trying to run a Rails server in a Docker image, along with a MySQL image and a Vue frontend image. I'm using Ruby 3 and Rails 6. The MySQL and frontend images both start without problems. However, the Rails image doesn't start.
I'm on a MacBook Pro with macOS Monterey and Docker Desktop 4.5.0.
This is my docker-compose.yml:
version: "3"
services:
mysql:
image: mysql:8.0.21
command:
- --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=nauza_backend_development
ports:
- "3307:3306"
volumes:
- mysql:/var/lib/mysql
backend:
build:
context: nauza-backend
args:
UID: ${UID:-1001}
tty: true
stdin_open: true
command:
bundle exec rails s -p 8080 -b '0.0.0.0'
volumes:
- ./nauza-backend:/usr/src/app
# attach a volume at /bundle to cache gems
- bundle:/bundle
# attach a volume at ./node_modules to cache node modules
- node-modules:/usr/src/app/node_modules
# attach a volume at ./tmp to cache asset compilation files
- tmp:/usr/src/app/tmp
environment:
- RAILS_ENV=development
ports:
- "8080:8080"
depends_on:
- mysql
user: rails
environment:
- RAILS_ENV=development
- MYSQL_HOST=mysql
- MYSQL_USER=root
- MYSQL_PASSWORD=root
frontend:
build:
context: nauza-frontend
args:
UID: ${UID:-1001}
volumes:
- ./nauza-frontend:/usr/src/app
ports:
- "3000:3000"
user: frontend
volumes:
bundle:
driver: local
mysql:
driver: local
tmp:
driver: local
node-modules:
driver: local
and this is my Dockerfile:
FROM ruby:3.0.2
ARG UID
RUN adduser rails --uid $UID --disabled-password --gecos ""
ENV APP /usr/src/app
RUN mkdir $APP
WORKDIR $APP
ENV EDITOR=vim
RUN apt-get update \
    && apt-get install -y \
    nmap \
    vim
COPY Gemfile* $APP/
RUN bundle install -j3 --path vendor/bundle
COPY . $APP/
CMD ["rails", "server", "-p", "8080", "-b", "0.0.0.0"]
When I try to start this with docker-compose up on my Mac, I get the following error:
/usr/local/lib/ruby/3.0.0/fileutils.rb:253:in `mkdir': Permission denied @ dir_s_mkdir - /usr/src/app/tmp/cache (Errno::EACCES)
Any ideas on how to fix this?

Remove the line - tmp:/usr/src/app/tmp from your docker-compose.yml.
You don't need to access the temp files of your container, I would say. 🙂
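Alternatively, if you want to keep the tmp volume for caching, another option (a sketch, assuming the Dockerfile and rails user shown above) is to create the mount points in the image and give them to the rails user, so the named volumes pick up that ownership when they are first created:

# addition to the Dockerfile (sketch): pre-create the mount points and
# hand the whole app directory to the rails user, then run as that user
RUN mkdir -p $APP/tmp $APP/node_modules \
    && chown -R rails:rails $APP
USER rails

If a root-owned tmp volume already exists from earlier runs, it may need to be recreated (docker-compose down -v) before this takes effect.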

Related

'Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound)' while creating my api service

docker-compose.yml
version: "3.7"
services:
courseshine_redis:
container_name: courseshine_redis
image: redis:latest
command: redis-server --requirepass ${POSTGRES_PASSWORD}
restart: always
env_file: .env
stdin_open: true
ports:
- ${REDIS_PORT}:${REDIS_PORT}
volumes:
- courseshine_redis_data:/data
networks:
- internal
courseshine_db:
container_name: courseshine_db
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_db/Dockerfile
restart: always
env_file: .env
environment:
- POSTGRES_MULTIPLE_DATABASES=${POSTGRES_DEV_DB},${POSTGRES_TEST_DB}
ports:
- ${COURSESHINE_DB_PORT}:${COURSESHINE_DB_PORT}
volumes:
- courseshine_postgres_data:/var/lib/postgresql/data
- ./courseshine_db:/dockerfile-entrypoint-initdb.d
networks:
- internal
courseshine_pgadmin:
container_name: courseshine_pgadmin
image: dpage/pgadmin4:4.21
restart: unless-stopped
env_file: .env
environment:
- PGADMIN_DEFAULT_EMAIL=${POSTGRES_USER}
- PGADMIN_DEFAULT_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- pgadmin:/var/lib/pgadmin
- courseshine_postgres_data:/var/lib/postgresql/data
depends_on:
- courseshine_db
networks:
- internal
courseshine_api: &api_base
container_name: courseshine_api
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_api/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_api:/var/www/courseshine/courseshine_api
- /var/run/docker.sock:/var/run/docker.sock
- bundle_cache:/usr/local/bundle
depends_on:
- courseshine_redis
- courseshine_db
networks:
- internal
courseshine_ui:
container_name: courseshine_ui
build:
context: ../../
dockerfile: courseshine_docker/development/courseshine_ui/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_ui:/var/www/courseshine_ui
depends_on:
- courseshine_api
networks:
- internal
networks:
internal:
volumes:
courseshine_redis_data:
courseshine_postgres_data:
pgadmin:
bundle_cache:
My Dockerfile for the courseshine_api service:
FROM ruby:2.7.1-slim-buster
RUN apt-get update -qq && apt-get install -y build-essential nodejs libpq-dev postgresql-client && rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/courseshine/courseshine_api
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY ./courseshine_api/Gemfile $APP_HOME/Gemfile
COPY ./courseshine_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install --path vendor/cache
# Copy the main application.
COPY ./courseshine_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./courseshine_docker/development/courseshine_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
entrypoint.sh
#!/bin/bash
set -e
rm -f $APP_HOME/tmp/pids/server.pid
exec "$@"
When I run docker-compose up, the courseshine_api service does not start and throws Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound). Why does this problem occur and how do I fix it?
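One possible cause, judging from the files above (an assumption, not a confirmed diagnosis): bundle install --path vendor/cache installs the gems under the app directory inside the image, but that directory is then hidden by the bind mount ../../courseshine_api:/var/www/courseshine/courseshine_api, while the bundle_cache volume mounted at /usr/local/bundle stays empty, so Bundler cannot see rake at runtime. A minimal sketch of a Dockerfile change that keeps the gems where the volume is mounted:

# sketch: install gems into Bundler's default location (/usr/local/bundle),
# which is where docker-compose.yml mounts the bundle_cache volume,
# instead of vendor/cache under the bind-mounted app directory
RUN bundle install

After rebuilding, the stale bundle_cache volume may also need to be removed (docker-compose down -v) so it is reseeded from the image.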

Rails image not running from docker-compose.yml

I have a Rails application that runs on Docker. My source code has the following files:
Dockerfile
FROM ruby:2.6.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
CMD bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
    volumes:
      - .:/myapp
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
It runs normally using docker-compose up, since I'm running it alongside the source code.
Now, when I build this app and push it to Docker Hub
docker build -t myusername/rails-app .
docker push myusername/rails-app
I'm expecting that I can run the rails app from an independent docker-compose.yml separately from the source code.
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
    volumes:
      - .:/myapp
  web:
    image: myusername/rails-app:latest # <= Running the app now from the image
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
The only containers running are redis and db. The web container fails with:
Could not locate Gemfile or .bundle/ directory
In the second docker-compose.yml file, the one that should work somewhere else without the source code, you still have the volume that mounts the local folder into the container:
volumes:
  - .:/myapp
Remove that from the sidekiq and web containers and it should work.
You've also kept the build: . for the sidekiq container, which is useful only on the development box. Replace it with the image attribute, pointing to your image.
To summarise, your docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    image: myusername/rails-app:latest
    command: bundle exec sidekiq
    depends_on:
      - redis
  web:
    image: myusername/rails-app:latest # <= Running the app now from the image
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
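With that image-only compose file in place, the stack can be brought up on the other machine without any source checkout, using standard docker-compose commands (assuming the image has already been pushed as shown above):

docker-compose pull    # fetch myusername/rails-app:latest plus postgres and redis
docker-compose up -d   # start db, redis, sidekiq and web from the pulled images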

How to set up separate .env for development and production using Docker

Coming from an environment where I would manually SSH into the remote server, do a git pull, and create my .env (since it is gitignored), how do I separate a development .env from a production .env? I used docker-machine to create an AWS EC2 instance. I created a production.yml and ran docker-compose -f production.yml up -d. The container on the EC2 machine picked up my development .env, which is not what I want.
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev git jpeg-dev zlib-dev libmagic
RUN python -m pip install --upgrade pip
RUN mkdir /writer-api
COPY requirements.txt /writer-api/
RUN pip install --no-cache-dir -r /writer-api/requirements.txt
COPY . /writer-api/
WORKDIR /writer-api
production.yml
version: "3"
services:
postgres:
restart: always
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
restart: always
build: .
command: gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
SECRET_KEY: ${SECRET_KEY}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
redis:
restart: always
image: "redis:alpine"
celery:
restart: always
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
restart: always
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
You can export a shell environment variable and then pick the .env file that matches the environment. Create a dev.env and a prod.env file in the workspace.
Sample compose:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '80'
    env_file:
      - ${ENVIRON}.env
Build for DEV -
export ENVIRON=dev
docker-compose up -d
Build for PROD -
export ENVIRON=prod
docker-compose up -d
This way you will be able to use the same compose file for both DEV and PROD environments.
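To double-check which env file a given shell will pick up before starting anything, you can render the resolved configuration with the standard docker-compose config command (ENVIRON is just the example variable from the compose file above):

export ENVIRON=dev
docker-compose config   # prints the compose file with ${ENVIRON}.env resolved to dev.env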
Alternatively, set up the compose files for production and dev in separate folders and put a .env file in each of those folders.
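A minimal sketch of that layout (the folder names are assumptions; classic docker-compose reads the .env file from the directory it is invoked in):

project/
  development/
    docker-compose.yml
    .env            # development values
  production/
    docker-compose.yml
    .env            # production values

On the server you would then run, for example, cd production && docker-compose up -d.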

error running db migrate with docker for rails

I'm trying to set up a dev environment for Vue.js and a Rails API, following a tutorial.
Eventually I hit a hurdle when trying to run the following command:
docker-compose run backend rails db:create
Here is the error:
$ docker-compose run backend rails db:create
Starting am-full-stack_db_1_b7f6ee37d2e4 ... done
Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"rails\": executable file not found in $PATH": unknown
Here is my file tree:
App
  autheg-backend
  autheg-frontend
  docker-compose.yml
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432"
  backend:
    build:
      context: autheg-backend
      args:
        UID: ${UID:-1001}
    volumes:
      - ./autheg-backend:/usr/src/app
    ports:
      - "8080:8080"
    depends_on:
      - db
    user: rails
  frontend:
    build:
      context: autheg-frontend
      args:
        UID: ${UID:-1001}
    volumes:
      - ./autheg-frontend:/usr/src/app
    ports:
      - "3000:3000"
    user: frontend
Result of 'docker-compose run backend env'
PATH=/usr/local/bundle/bin:/usr/local/bundle/gems/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c88e0c72c584
TERM=xterm
RUBY_MAJOR=2.5
RUBY_VERSION=2.5.3
RUBY_DOWNLOAD_SHA256=1cc9d0359a8ea35fc6111ec830d12e60168f3b9b305a3c2578357d360fcf306f
RUBYGEMS_VERSION=2.7.8
BUNDLER_VERSION=1.17.1
GEM_HOME=/usr/local/bundle
BUNDLE_PATH=/usr/local/bundle
BUNDLE_SILENCE_ROOT_WARNING=1
BUNDLE_APP_CONFIG=/usr/local/bundle
APP=/usr/src/app
HOME=/home/rails
Thanks!
To run the db migration, use:
docker-compose run backend bin/rails db:create
# or
docker-compose run backend bundle exec rails db:create
/backend/Dockerfile
FROM ruby:2.5
ARG UID
RUN adduser rails --uid $UID --disabled-password --gecos ""
ENV APP /usr/src/app
RUN mkdir $APP
WORKDIR $APP
COPY Gemfile* $APP/
RUN bundle install -j3 --path vendor/bundle
COPY . $APP/
# Setting env up
ENV RAILS_ENV='development'
ENV RACK_ENV='development'
CMD [ "bundle", "exec", "rails", "server", "-p", "8080", "-b", "0.0.0.0"]
./docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432"
  backend:
    build:
      context: autheg-backend
      args:
        UID: ${UID:-1001}
    command: bundle exec rails s -p 8080 -b '0.0.0.0'
    volumes:
      - ./autheg-backend:/usr/src/app
    ports:
      - "8080:8080"
    depends_on:
      - db
    user: rails
  frontend:
    build:
      context: autheg-frontend
      args:
        UID: ${UID:-1001}
    volumes:
      - ./autheg-frontend:/usr/src/app
    ports:
      - "3000:3000"
    user: frontend
Then
docker-compose up
Now you should be able to see the rails site using:
localhost:8080 (instead of 3000)
Hope that helps you get started
Where is your backend Dockerfile? For Rails development it should look like this example:
# Base image:
FROM ruby:2.5
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# create application directory
RUN mkdir /myapp
# Set our working directory inside the image
WORKDIR /myapp
# Setting env up
ENV RAILS_ENV='development'
ENV RACK_ENV='development'
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
ADD . /myapp
EXPOSE 3000
CMD [ "bundle", "exec", "puma", "-C", "config/puma.rb" ]
This is caused by the rails executable not being on the PATH inside the Docker container (bundle install --path vendor/bundle puts the gem binaries under vendor/bundle, not in /usr/local/bundle/bin); it is only set up that way on your development/local machine. Use the following command:
docker-compose run backend bundle exec rails db:create
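If you would rather type plain rails commands, one further option (a sketch, assuming the backend Dockerfile shown above and that the app has the standard bin/rails binstub) is to put the app's bin directory on the PATH in the image:

# addition to /backend/Dockerfile (sketch): let `rails` resolve to the bin/rails binstub
ENV PATH="/usr/src/app/bin:${PATH}"

After rebuilding, docker-compose run backend rails db:create goes through bin/rails, which loads Bundler itself.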

Docker Postgres Ruby on Rails unable to connect

I am following this tutorial from Docker (Docker Rails). I have created a folder and added the code below to my Dockerfile.
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
And my docker compose file code is:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - .data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
I am following the tutorial, but when I run docker-compose up I just see this error:
Could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
What is wrong here? I don't know how to inspect and debug the error, or how to fix this.
You need environment variables within your web container so that it knows how to connect to the db container.
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=
    volumes:
      - ./data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - PGHOST=db
      - PGUSER=postgres
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
Please go to your database.yml, set host to db, then add the username and password, and run the command again.
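A minimal sketch of what that config/database.yml change could look like (the database names are assumptions; host, username and password match the compose file in the answer above):

# config/database.yml (sketch)
default: &default
  adapter: postgresql
  encoding: unicode
  host: db              # the compose service name, not localhost
  username: postgres
  password: ""
  pool: 5

development:
  <<: *default
  database: myapp_development

test:
  <<: *default
  database: myapp_test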
