docker web server exiting without error message - ruby-on-rails

I have my project set up with docker-compose. It was working fine until today, when I rebuilt the web container. Now whenever I start it using docker-compose up, it just exits without an error message:
web_1 | => Booting Unicorn
web_1 | => Rails 3.2.22.5 application starting in development on http://0.0.0.0:3000
web_1 | => Call with -d to detach
web_1 | => Ctrl-C to shutdown server
web_1 | Exiting
If I run it with --verbose (docker-compose --verbose up), the following lines show up after 'Exiting':
compose.cli.verbose_proxy.proxy_callable: docker wait <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/wait HTTP/1.1" 200 30
compose.cli.verbose_proxy.proxy_callable: docker wait -> 1
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'redis:6379',
u'--',
u'./wait-for-it.sh',
u'postgres:5432',
u'--',
u'bundle',
u'exec',
u'rails',
u's',
This is my docker-compose.yml:
version: '3'
services:
  memcached:
    image: memcached:1.5.2-alpine
    restart: always
    ports:
      - "11211:11211"
  postgres:
    image: postgres:9.4-alpine
    restart: always
    volumes:
      - ~/.myapp-data/postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=myapp_development
      - POSTGRES_USER=default
      - POSTGRES_PASSWORD=secret
  redis:
    image: redis:3.2.0-alpine
    restart: always
    volumes:
      - ~/.myapp-data/redis:/data
    ports:
      - "6379:6379"
  web:
    build:
      context: .
      dockerfile: "Dockerfile-dev"
    stdin_open: true
    tty: true
    command: ./wait-for-it.sh redis:6379 -- ./wait-for-it.sh postgres:5432 -- bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/opt/apps/myapp
    depends_on:
      - memcached
      - redis
      - postgres
    ports:
      - "80:3000"
    env_file:
      - .env
    extra_hosts:
      - "api.myapp:127.0.0.1"
      - "api.getmyapp:127.0.0.1"
      - "my.app:127.0.0.1"
EDIT:
Here are the contents of Dockerfile-dev, requested in the comments:
FROM ruby:2.3.7-slim
RUN apt-get update
RUN apt-get -y install software-properties-common libpq-dev build-essential \
    python-dev python-pip wget curl git-core \
    --fix-missing --no-install-recommends --allow-unauthenticated
# Set install path for reference later.
ENV INSTALL_PATH /opt/apps/engine
RUN mkdir -p $INSTALL_PATH
RUN gem install bundler
WORKDIR $INSTALL_PATH
ADD Gemfile $INSTALL_PATH
ADD Gemfile.lock $INSTALL_PATH
RUN bundle install
RUN find /tmp -type f -atime +10 -delete
ADD . $INSTALL_PATH
RUN cp config/database.docker-dev.yml config/database.yml
CMD ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"]

Docker containers exit as soon as the command run inside them finishes. From the logs you have posted, it seems like the container has 'nothing else' to do after starting the server. You need to run the process in the foreground or add some sort of sleep to keep the container alive.
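A quick way to see why it stopped is to read the container's exit code and last log lines; the container name below is a placeholder for whatever docker ps -a reports:

docker inspect --format '{{.State.ExitCode}}' myapp_web_1
docker logs --tail 50 myapp_web_1

As a debugging sketch (not a permanent fix), you can also keep the container alive after the server exits by overriding the command:

docker-compose run --rm web sh -c "bundle exec rails s -p 3000 -b 0.0.0.0; sleep infinity"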

Related

permission denied while trying to start rails server in docker

I'm trying to run a Rails server in a Docker image, along with a MySQL image and a Vue frontend image. I'm using Ruby 3 and Rails 6. The MySQL and frontend images both start without problems. However, the Rails image doesn't start.
I'm on a MacBook Pro with macOS Monterey and Docker Desktop 4.5.0.
this is my docker-compose.yml:
version: "3"
services:
mysql:
image: mysql:8.0.21
command:
- --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=nauza_backend_development
ports:
- "3307:3306"
volumes:
- mysql:/var/lib/mysql
backend:
build:
context: nauza-backend
args:
UID: ${UID:-1001}
tty: true
stdin_open: true
command:
bundle exec rails s -p 8080 -b '0.0.0.0'
volumes:
- ./nauza-backend:/usr/src/app
# attach a volume at /bundle to cache gems
- bundle:/bundle
# attach a volume at ./node_modules to cache node modules
- node-modules:/usr/src/app/node_modules
# attach a volume at ./tmp to cache asset compilation files
- tmp:/usr/src/app/tmp
environment:
- RAILS_ENV=development
ports:
- "8080:8080"
depends_on:
- mysql
user: rails
environment:
- RAILS_ENV=development
- MYSQL_HOST=mysql
- MYSQL_USER=root
- MYSQL_PASSWORD=root
frontend:
build:
context: nauza-frontend
args:
UID: ${UID:-1001}
volumes:
- ./nauza-frontend:/usr/src/app
ports:
- "3000:3000"
user: frontend
volumes:
bundle:
driver: local
mysql:
driver: local
tmp:
driver: local
node-modules:
driver: local
and this is my Dockerfile:
FROM ruby:3.0.2
ARG UID
RUN adduser rails --uid $UID --disabled-password --gecos ""
ENV APP /usr/src/app
RUN mkdir $APP
WORKDIR $APP
ENV EDITOR=vim
RUN apt-get update \
&& apt-get install -y \
nmap \
vim
COPY Gemfile* $APP/
RUN bundle install -j3 --path vendor/bundle
COPY . $APP/
CMD ["rails", "server", "-p", "8080", "-b", "0.0.0.0"]
when I try to start this with docker-compose up on my Mac I get the following error:
/usr/local/lib/ruby/3.0.0/fileutils.rb:253:in `mkdir': Permission denied @ dir_s_mkdir - /usr/src/app/tmp/cache (Errno::EACCES)
Any ideas on how to fix this?
Remove the line - tmp:/usr/src/app/tmp from your docker-compose.yml.
You don't need to access the temp files of your container, I would say. 🙂
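If you do want to keep a tmp volume, a different sketch (my assumption, not part of the answer above) is to make the mount point owned by the rails user in the Dockerfile, since a fresh named volume copies the ownership of the image directory it covers:

# Hypothetical Dockerfile addition: create tmp and hand the app tree
# to the rails user before the named volume is first mounted.
RUN mkdir -p $APP/tmp && chown -R rails:rails $APP
USER rails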

Why doesn't Docker see my entrypoint if the container includes it?

I need some help. I tried to set up the first Docker image of my Django project, but Docker doesn't see my entrypoint script. At first, I started my compose with
sudo docker-compose up -d --build, but localhost was still empty. So I ran sudo docker-compose logs -f.
Here are the logs.
Attaching to djangopetgeo_db_1, djangopetgeo_web_1
db_1 | 2020-12-02 18:56:07.093 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-12-02 18:56:07.093 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-12-02 18:56:07.102 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-12-02 18:56:07.138 UTC [23] LOG: database system was shut down at 2020-12-02 18:54:46 UTC
db_1 | 2020-12-02 18:56:07.150 UTC [1] LOG: database system is ready to accept connections
web_1 | Waiting for postgres...
web_1 | /usr/src/djangoPetGeo/entrypoint.sh: 7: /usr/src/djangoPetGeo/entrypoint.sh: nc: not found
Docker doesn't see my entrypoint, as I said. But in this screenshot, you can see that entrypoint.sh is inside my web app:
[Screenshot: terminal output showing entrypoint.sh in the project directory]
Here is my Dockerfile.
# pull official base image
FROM python:3.8.6-slim
# set work directory
WORKDIR /usr/src/djangoPetGeo
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
apt-get install -y postgresql postgresql-contrib gcc python3-dev musl-dev libgdal-dev gdal-bin
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt --no-cache-dir
# copy project
COPY . .
ENTRYPOINT ["/usr/src/djangoPetGeo/entrypoint.sh"]
And docker-compose.yml
version: '3.7'
services:
  web:
    build: ./djangoPetGeo
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./djangoPetGeo/:/usr/src/djangoPetGeo/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  db:
    image: mdillon/postgis
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=8596dbPASS
      - POSTGRES_DB=pet_geo_db
volumes:
  postgres_data:
Can someone help me, please?
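Judging by the logs, the entrypoint is actually found and runs; the real failure is the nc: not found line, which means the netcat binary the script uses to wait for Postgres is not installed in python:3.8.6-slim. A sketch of the fix, assuming the entrypoint's wait loop calls nc, is to add netcat to the existing apt-get line in the Dockerfile:

RUN apt-get update && \
    apt-get install -y netcat postgresql postgresql-contrib gcc python3-dev musl-dev libgdal-dev gdal-bin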

Unable to start and connect 3 docker services - Redis, Python 3.7 Slim (Running DRF) and Celery

I am trying to containerise my application, which is developed using technologies like DRF, Celery, and Redis (as a broker).
I want to prepare a docker-compose file that starts all three services (DRF, Celery, and Redis).
I also want to prepare a Dockerfile.prod for deployment.
Here is what I have done so far:
version: "3"
services:
redis:
container_name: Redis-Container
image: "redis:latest"
ports:
- "6379:6379"
expose:
- "6379"
command: "redis-server"
dropoff-backend:
container_name: Dropoff-Backend
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/logistics_backend
ports:
- "8080:8080"
expose:
- "8080"
restart: always
command: "python manage.py runserver 0.0.0.0:8080"
links:
- redis
depends_on:
- redis
# - celery
celery:
container_name: celery-container
build: .
command: "celery -A logistics_project worker -l INFO"
volumes:
- .:/code
links:
- redis
Dockerfile (not for deployment):
FROM python:3.7-slim
# FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install python3-dev default-libmysqlclient-dev gcc -y && \
    mkdir /logistics_backend
WORKDIR /logistics_backend
COPY ./requirements.txt /requirements.txt
COPY . /logistics_backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
RUN python manage.py makemigrations && \
    python manage.py migrate
RUN python manage.py loaddata roles businesses route_status route_type order_status service_city payment_status
CMD ["python", "manage.py", "runserver", "0.0.0.0:80"]
The problem with the existing docker-compose setup is that it returns the error below:
celery-container | [2020-10-08 16:59:25,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
celery-container | Trying again in 32.00 seconds... (16/100)
In settings.py I have defined this for the Redis connection:
REDIS_HOST = 'localhost'
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
I don't know how I should extend my Dockerfile, which is currently used for development, to form a Dockerfile.prod that could be deployed.
All three of my containers are running.
You need to change REDIS_HOST in your settings.py to be 'redis' instead of 'localhost'.
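Applied to the settings shown above, the only change is the hostname, which must match the service name from docker-compose.yml:

REDIS_HOST = 'redis'  # the compose service name instead of 'localhost'
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'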

Can't launch sphinxsearch in docker

I'm trying to dockerize a Rails application. It uses Sphinx for search, and I can't make it run through Docker.
This is what happens when I run docker-compose up and try to perform a search:
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Sphinx Query (1.9ms) SELECT * FROM `field_core` WHERE MATCH('soccer') AND `sphinx_deleted` = 0 ORDER BY `name` ASC LIMIT 0, 10000
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Completed 500 Internal Server Error in 27ms (ActiveRecord: 3.0ms)
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807]
web_1 | ThinkingSphinx::ConnectionError (Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111)):
web_1 | app/controllers/fields_controller.rb:7:in `search'
This is the result of docker-compose run sphinx rake ts:index:
sh: 1: searchd: not found
The Sphinx start command failed:
Command: searchd --pidfile --config "/app/config/development.sphinx.conf"
Status: 127
Output: See above
There may be more information about the failure in /app/log/development.searchd.log.
docker-compose.yml:
version: '3'
services:
  db:
    image: circleci/mysql:5.7
    restart: always
    volumes:
      - mysql_data:/var/lib/mysql
    ports:
      - "3309:3309"
    expose:
      - '3309'
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    ports:
      - "3000:3000"
    expose:
      - '3000'
    depends_on:
      - db
      - sphinx
    volumes:
      - app:/app
  sphinx:
    container_name: sociaball_sphinx
    image: stefobark/sphinxdocker
    restart: always
    links:
      - db
    volumes:
      - /app/config/sphinxy.conf:/etc/sphinxsearch/sphinxy.conf
      - /app/sphinx:/var/lib/sphinx
volumes:
  mysql_data:
  app:
Dockerfile:
FROM ruby:2.4.1
RUN apt-get update && apt-get install -qq -y build-essential nodejs --fix-missing --no-install-recommends
RUN curl -s \
    http://sphinxsearch.com/files/sphinxsearch_2.3.2-beta-1~wheezy_amd64.deb \
    -o /tmp/sphinxsearch.deb \
    && dpkg -i /tmp/sphinxsearch.deb \
    && rm /tmp/sphinxsearch.deb \
    && mkdir -p /var/log/sphinxsearch
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
COPY . ./
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
thinking_sphinx.yml:
development: &common
min_infix_len: 1
charset_table: "0..9, english, U+0021..U+002F"
port: 9306
address: sociaball_mysql_1
production:
<<: *common
So, rake isn't available in the sphinx container, and the Sphinx binaries aren't available in the app's container. What am I doing wrong?
Thinking Sphinx expects a copy of the Rails app when running its rake tasks, so you'll need a copy of your app within your Sphinx container. This ensures (once you've bundled the gems) that rake exists in the Sphinx container, where the Sphinx binaries are also present.
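A minimal sketch of that in compose terms, reusing the app named volume the web service already mounts (and assuming you bundle the gems inside the Sphinx container afterwards):

sphinx:
  container_name: sociaball_sphinx
  image: stefobark/sphinxdocker
  links:
    - db
  volumes:
    # same app volume as the web service, so the Rails app and its
    # bundled gems (including rake) sit next to the Sphinx binaries
    - app:/app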
So, instead of running the rake task, I just did what it does directly in the sphinx container. Like this:
docker-compose run --rm sphinx indexer \
--config "/etc/sphinxsearch/sphinxy.conf" --all --rotate
Regarding the 500 error: it was caused by an incorrect configuration in thinking_sphinx.yml. The address should have pointed to the host running Sphinx instead of the db:
development: &common
# ...
address: sociaball_sphinx_1
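To confirm the daemon answers after reindexing, one hedged check (assuming a mysql client is available in the container you run this from) is to speak SphinxQL over the port configured above:

mysql -h sociaball_sphinx_1 -P 9306 -e 'SHOW TABLES;'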

127.0.0.1:11211 is down party_manager | DalliError: No server available

I am using Docker. Whenever my application tries to read or write the cache, it gets the following error:
Cache read: send_otp_request_count_3
Dalli::Server#connect 127.0.0.1:11211
127.0.0.1:11211 failed (count: 0) Errno::ECONNREFUSED:
Connection refused - connect(2) for "127.0.0.1" port 11211
DalliError: No server available
My Gemfile has:
gem 'dalli'
My Dockerfile is:
FROM ruby:2.3.6
RUN mkdir -p /railsapp
WORKDIR /railsapp
RUN apt-get update && apt-get install -y nodejs --no-install-recommends
RUN apt-get update && apt-get install -y mysql-client --no-install-recommends
COPY Gemfile /railsapp/
COPY Gemfile.lock /railsapp/
RUN bundle install
COPY . /railsapp
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
My docker-compose.yml file is:
version: '3.3'
services:
  cache:
    image: memcached:1.4-alpine
  mysql:
    image: mysql
    restart: always
    ports:
      - "3002:3002"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=dev
  web:
    container_name: party_manager
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - RAILS_ENV=development
    ports:
      - '3000:3000'
    volumes:
      - .:/railsapp
    links:
      - mysql
I have also installed memcached in the container shell via:
docker exec -it 499e3d1efe44 bash
(499e3d1efe44 is my container ID.)
Then I installed the gem with: gem install memcached
By default Compose sets up a single network for your app. Each
container for a service joins the default network and is both
reachable by other containers on that network, and discoverable by
them at a hostname identical to the container name.
So, according to your docker-compose.yml file, you can access your cache container at cache:11211 from the web container.
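In Rails terms that means pointing Dalli at the service name instead of 127.0.0.1; a minimal sketch, assuming the usual cache-store configuration location:

# config/environments/development.rb
# 'cache' is the memcached service from docker-compose.yml, port 11211
config.cache_store = :mem_cache_store, 'cache:11211'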
