Rails image not running from docker-compose.yml - ruby-on-rails

I have a Rails application that runs on Docker. My source code has the following files:
Dockerfile
FROM ruby:2.6.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
CMD bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
    volumes:
      - .:/myapp
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
It runs normally using docker-compose up since I'm running it alongside the source code.
Now I build the app and push it to Docker Hub:
docker build -t myusername/rails-app .
docker push myusername/rails-app
I'm expecting that I can run the Rails app from an independent docker-compose.yml, separate from the source code:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    depends_on:
      - redis
    volumes:
      - .:/myapp
  web:
    image: myusername/rails-app:latest # <= Running the app now from the image
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
The only containers running are redis and db. The web service fails with:
Could not locate Gemfile or .bundle/ directory

In the second docker-compose.yml file, the one that should work somewhere else without the source code, you still have a volume mounting the local folder into the container:
volumes:
  - .:/myapp
Remove that from the sidekiq and web services and it should work.
You've also kept build: . for the sidekiq service, which is only useful on the development box. Replace it with the image attribute pointing to your pushed image.
To summarise, your docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  sidekiq:
    image: myusername/rails-app:latest
    command: bundle exec sidekiq
    depends_on:
      - redis
  web:
    image: myusername/rails-app:latest # <= Running the app now from the image
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - sidekiq
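As a quick sanity check of the image-only setup, here is a minimal sketch, assuming the file above is saved as docker-compose.yml on a machine that has no source checkout:
# fetch myusername/rails-app, postgres and redis from their registries
docker-compose pull
# start db, redis, sidekiq and web in the background
docker-compose up -d
# confirm the Rails server boots from the code baked into the image
docker-compose logs -f web
Because the image already contains the application code and installed gems, no bind mount is needed for web or sidekiq.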

Related

When creating container into docker-compose on a server. Logs show 'exec /usr/local/bin/docker-entrypoint.sh: exec format error'

I'm using a MacBook M1 (maybe that's the cause of the problem).
When uploading my project to the server I got this error.
$ cat docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env
    restart: always
  frontend:
    image: annarulunat/foodgram_frontend:latest
    volumes:
      - ../frontend/:/app/result_build/
  backend:
    image: annarulunat/foodgram:latest
    restart: always
    volumes:
      - static_value:/app/static_backend/
      - media_value:/app/media/
    depends_on:
      - db
    env_file:
      - ./.env
    command: >
      sh -c "python manage.py collectstatic --noinput &&
      python manage.py migrate &&
      gunicorn foodgram.wsgi:application --bind 0:8000"
  nginx:
    image: nginx:1.21.3-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - ../frontend/build:/usr/share/nginx/html/
      - ../docs/:/usr/share/nginx/html/api/docs/
      - static_value:/var/html/static_backend/
      - media_value:/var/html/media/
    restart: always
volumes:
  static_value:
  media_value:
  postgres_data:
$ cat Dockerfile
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
CMD cp -r build result_build
Thank you)))
I tried to add FROM --platform=linux/amd64 <image>-<version> in the Dockerfile and rebuild.
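For reference, the exec format error usually means the image was built for a different CPU architecture (arm64 on the M1) than the server runs (typically linux/amd64). Instead of editing the Dockerfile, a hedged alternative is to cross-build and push with buildx, using the frontend image name from the compose file above:
# build the image for the server's architecture on the arm64 MacBook and push it
docker buildx build --platform linux/amd64 -t annarulunat/foodgram_frontend:latest --push .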

permission denied while trying to start rails server in docker

I'm trying to run a Rails server in a Docker image along with a MySQL image and a Vue frontend image. I'm using Ruby 3 and Rails 6. The MySQL and frontend images both start without problems. However, the Rails image doesn't start.
I'm on a Macbook Pro with MacOS Monterey and Docker Desktop 4.5.0
this is my docker-compose.yml:
version: "3"
services:
mysql:
image: mysql:8.0.21
command:
- --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=nauza_backend_development
ports:
- "3307:3306"
volumes:
- mysql:/var/lib/mysql
backend:
build:
context: nauza-backend
args:
UID: ${UID:-1001}
tty: true
stdin_open: true
command:
bundle exec rails s -p 8080 -b '0.0.0.0'
volumes:
- ./nauza-backend:/usr/src/app
# attach a volume at /bundle to cache gems
- bundle:/bundle
# attach a volume at ./node_modules to cache node modules
- node-modules:/usr/src/app/node_modules
# attach a volume at ./tmp to cache asset compilation files
- tmp:/usr/src/app/tmp
environment:
- RAILS_ENV=development
ports:
- "8080:8080"
depends_on:
- mysql
user: rails
environment:
- RAILS_ENV=development
- MYSQL_HOST=mysql
- MYSQL_USER=root
- MYSQL_PASSWORD=root
frontend:
build:
context: nauza-frontend
args:
UID: ${UID:-1001}
volumes:
- ./nauza-frontend:/usr/src/app
ports:
- "3000:3000"
user: frontend
volumes:
bundle:
driver: local
mysql:
driver: local
tmp:
driver: local
node-modules:
driver: local
and this is my Dockerfile:
FROM ruby:3.0.2
ARG UID
RUN adduser rails --uid $UID --disabled-password --gecos ""
ENV APP /usr/src/app
RUN mkdir $APP
WORKDIR $APP
ENV EDITOR=vim
RUN apt-get update \
&& apt-get install -y \
nmap \
vim
COPY Gemfile* $APP/
RUN bundle install -j3 --path vendor/bundle
COPY . $APP/
CMD ["rails", "server", "-p", "8080", "-b", "0.0.0.0"]
When I try to start this with docker-compose up on my Mac I get the following error:
/usr/local/lib/ruby/3.0.0/fileutils.rb:253:in `mkdir': Permission denied @ dir_s_mkdir - /usr/src/app/tmp/cache (Errno::EACCES)
Any ideas on how to fix this?
Remove the line - tmp:/usr/src/app/tmp from your docker-compose.yml.
You don't need to persist your container's temp files, I would say. 🙂
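If you do want to keep the tmp volume for caching, an alternative sketch, assuming the container really runs as the rails user created in the Dockerfile, is to make the volume writable by that user with a one-off root container:
# the named volume mounted at /usr/src/app/tmp starts out root-owned,
# so chown it once for the unprivileged rails user
docker-compose run --rm --user root backend chown -R rails:rails /usr/src/app/tmp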

'Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound)' while creating my api service

docker-compose.yml
version: "3.7"
services:
courseshine_redis:
container_name: courseshine_redis
image: redis:latest
command: redis-server --requirepass ${POSTGRES_PASSWORD}
restart: always
env_file: .env
stdin_open: true
ports:
- ${REDIS_PORT}:${REDIS_PORT}
volumes:
- courseshine_redis_data:/data
networks:
- internal
courseshine_db:
container_name: courseshine_db
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_db/Dockerfile
restart: always
env_file: .env
environment:
- POSTGRES_MULTIPLE_DATABASES=${POSTGRES_DEV_DB},${POSTGRES_TEST_DB}
ports:
- ${COURSESHINE_DB_PORT}:${COURSESHINE_DB_PORT}
volumes:
- courseshine_postgres_data:/var/lib/postgresql/data
- ./courseshine_db:/dockerfile-entrypoint-initdb.d
networks:
- internal
courseshine_pgadmin:
container_name: courseshine_pgadmin
image: dpage/pgadmin4:4.21
restart: unless-stopped
env_file: .env
environment:
- PGADMIN_DEFAULT_EMAIL=${POSTGRES_USER}
- PGADMIN_DEFAULT_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- pgadmin:/var/lib/pgadmin
- courseshine_postgres_data:/var/lib/postgresql/data
depends_on:
- courseshine_db
networks:
- internal
courseshine_api: &api_base
container_name: courseshine_api
build:
context: ../..
dockerfile: courseshine_docker/development/courseshine_api/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_api:/var/www/courseshine/courseshine_api
- /var/run/docker.sock:/var/run/docker.sock
- bundle_cache:/usr/local/bundle
depends_on:
- courseshine_redis
- courseshine_db
networks:
- internal
courseshine_ui:
container_name: courseshine_ui
build:
context: ../../
dockerfile: courseshine_docker/development/courseshine_ui/Dockerfile
env_file: .env
stdin_open: true
volumes:
- ../../courseshine_ui:/var/www/courseshine_ui
depends_on:
- courseshine_api
networks:
- internal
networks:
internal:
volumes:
courseshine_redis_data:
courseshine_postgres_data:
pgadmin:
bundle_cache:
My Dockerfile for the courseshine_api service:
FROM ruby:2.7.1-slim-buster
RUN apt-get update -qq && apt-get install -y build-essential nodejs libpq-dev postgresql-client && rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/www/courseshine/courseshine_api
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY ./courseshine_api/Gemfile $APP_HOME/Gemfile
COPY ./courseshine_api/Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install --path vendor/cache
# Copy the main application.
COPY ./courseshine_api $APP_HOME
# Add a script to be executed every time the container starts.
COPY ./courseshine_docker/development/courseshine_api/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
# Expose port 3000 to the Docker host, so we can access it
# from the outside.
EXPOSE 3000
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["rails","server","-b","0.0.0.0"]
entrypoint.sh
set -e
rm -f $APP_HOME/tmp/pids/server.pid
exec "$#"
When I run docker-compose up, the courseshine_api service does not start and throws Could not find rake-13.0.3 in any of the sources (Bundler::GemNotFound). Why does this problem occur, and how do I fix it?
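One hedged thing to try, assuming the gems installed at build time (under vendor/cache in the image) are being hidden by the source bind mount over the app directory, is to reinstall them from inside the container so they end up somewhere the running service can see, such as the bundle_cache volume:
# run bundle install in a one-off api container against the mounted source
docker-compose run --rm courseshine_api bundle install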

How to set up separate .env for development and production using Docker

Coming from an environment where I was manually SSHing into the remote server, doing a git pull, and creating my .env (since it is gitignored), how do I separate a development .env from a production .env? I used docker-machine to create an AWS EC2 instance. I created a production.yml and ran docker-compose -f production.yml up -d. The container on the EC2 machine picked up my development .env, which is not what I want.
Dockerfile
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev git jpeg-dev zlib-dev libmagic
RUN python -m pip install --upgrade pip
RUN mkdir /writer-api
COPY requirements.txt /writer-api/
RUN pip install --no-cache-dir -r /writer-api/requirements.txt
COPY . /writer-api/
WORKDIR /writer-api
production.yml
version: "3"
services:
postgres:
restart: always
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
restart: always
build: .
command: gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
SECRET_KEY: ${SECRET_KEY}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
redis:
restart: always
image: "redis:alpine"
celery:
restart: always
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
restart: always
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
I guess you can export an environment shell variable and then pick the .env file per environment. Create a dev.env and a prod.env file in the workspace.
Sample compose:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '80'
    env_file:
      - ${ENVIRON}.env
Build for DEV -
export ENVIRON=dev
docker-compose up -d
Build for PROD -
export ENVIRON=prod
docker-compose up -d
This way you will be able to leverage the same compose file for both DEV and PROD environments.
Set up the compose files for production and dev in separate folders and put a .env file in each folder, for example:
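A minimal sketch of that layout, assuming folders such as deploy/dev and deploy/prod (the names are only illustrative), each containing its own docker-compose.yml and .env; running docker-compose from inside a folder makes it load that folder's .env:
# start the dev stack with deploy/dev/.env
(cd deploy/dev && docker-compose up -d)
# start the prod stack with deploy/prod/.env
(cd deploy/prod && docker-compose up -d)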

Docker Postgres Ruby on Rails unable to connect

I am following this tutorial from Docker (Docker Rails). I have created a folder and added the code below to my Dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp
And my docker-compose.yml is:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - .data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
Following the tutorial, when I run docker-compose up I just see this error:
Could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
What is wrong here? I don't know how to inspect and debug the error, or how to fix it.
You need environment variables within your web container so that it knows how to connect to the db container.
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=
    volumes:
      - ./data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    environment:
      - PGHOST=db
      - PGUSER=postgres
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
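To verify that the web container can now reach Postgres with those variables, a quick hedged check, assuming a standard Rails 5+ setup, is to create the databases through the db service:
# create the development and test databases over the Docker network
docker-compose run --rm web bundle exec rails db:create
If that succeeds, the Rails server started by docker-compose up will connect the same way.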
Please go to your config/database.yml, set host to db, add the username and password that match the db container, and run the command again.
