Why doesn't Docker see my entrypoint when the container includes it? - docker

I need some help. I tried to set up the first Docker image for my Django project, but Docker doesn't seem to see my entrypoint script. At first I started my compose stack with sudo docker-compose up -d --build, but localhost was still empty, so I ran sudo docker-compose logs -f.
Here are the logs:
Attaching to djangopetgeo_db_1, djangopetgeo_web_1
db_1 | 2020-12-02 18:56:07.093 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-12-02 18:56:07.093 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-12-02 18:56:07.102 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-12-02 18:56:07.138 UTC [23] LOG: database system was shut down at 2020-12-02 18:54:46 UTC
db_1 | 2020-12-02 18:56:07.150 UTC [1] LOG: database system is ready to accept connections
web_1 | Waiting for postgres...
web_1 | /usr/src/djangoPetGeo/entrypoint.sh: 7: /usr/src/djangoPetGeo/entrypoint.sh: nc: not found
Docker doesn't see my entrypoint, as I said. But in the screenshot below you can see that entrypoint.sh is present inside my web app.
[Screenshot of terminal output]
Here is my Dockerfile.
# pull official base image
FROM python:3.8.6-slim
# set work directory
WORKDIR /usr/src/djangoPetGeo
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
apt-get install -y postgresql postgresql-contrib gcc python3-dev musl-dev libgdal-dev gdal-bin
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt --no-cache-dir
# copy project
COPY . .
ENTRYPOINT ["/usr/src/djangoPetGeo/entrypoint.sh"]
And docker-compose.yml
version: '3.7'

services:
  web:
    build: ./djangoPetGeo
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./djangoPetGeo/:/usr/src/djangoPetGeo/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
  db:
    image: mdillon/postgis
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=8596dbPASS
      - POSTGRES_DB=pet_geo_db

volumes:
  postgres_data:
Someone help me pls.
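The log line nc: not found actually suggests Docker is finding and running your entrypoint just fine; what's missing is the nc binary itself, which python:3.8.6-slim doesn't ship. A minimal sketch of a fix, assuming entrypoint.sh uses nc to wait for Postgres, is to add netcat to the apt-get install line in the Dockerfile:

# sketch: install netcat so the entrypoint's `nc` wait loop can run
RUN apt-get update && \
    apt-get install -y netcat postgresql postgresql-contrib gcc python3-dev musl-dev libgdal-dev gdal-bin

After rebuilding with docker-compose up -d --build, the entrypoint should get past the "Waiting for postgres..." step.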

Related

Gunicorn Docker container only listens on `0.0.0.0`

I am trying to set up an nginx reverse proxy to a gunicorn app server serving up my flask app. The gunicorn container listens on port 5000, and nginx listens on port 80. The problem is that I can still access the app through the browser by visiting localhost:5000, even though I have set gunicorn to listen to localhost of the docker container only, and all requests should pass through the nginx container to the gunicorn container through port 80. This is my set up.
docker-compose.yml
version: "3.3"
services:
web_app:
build:
context: .
dockerfile: Dockerfile.web
restart: always
ports:
- "5000:5000"
volumes:
- data:/home/microblog
networks:
- web
web_proxy:
container_name: web_proxy
image: nginx:alpine
restart: always
ports:
- "80:80"
volumes:
- data:/flask:ro
- ./nginx/config/nginx.conf:/etc/nginx/nginx.conf:ro
networks:
- web
networks:
web:
volumes:
data:
Dockerfile.web
FROM python:3.6-alpine
# Environment Variables
ENV FLASK_APP=microblog.py
ENV FLASK_ENVIRONMENT=production
ENV FLASK_RUN_PORT=5000
# Don't copy .pyc files to container
ENV PYTHONDONTWRITEBYTECODE=1
# Security / Permissions (1/2)
RUN adduser -D microblog
WORKDIR /home/microblog
# Virtual Environment
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -U pip
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
# Install App
COPY app app
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh
# Security / Permissions (2/2)
RUN chown -R microblog:microblog ./
USER microblog
# Start Application
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh
#!/bin/sh
source venv/bin/activate
flask db upgrade
exec gunicorn --bind 127.0.0.1:5000 --access-logfile - --error-logfile - microblog:app
Even though I have set `gunicorn --bind 127.0.0.1:5000`, in the stdout of `docker-compose` I see
web_app_1 | [2021-03-02 22:54:14 +0000] [1] [INFO] Starting gunicorn 20.0.4
web_app_1 | [2021-03-02 22:54:14 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
And I am still able to see the website from port 5000 in my browser. I'm not sure why it is listening on 0.0.0.0 when I have explicitly set it to 127.0.0.1.
Your docker-compose has
ports:
  - "5000:5000"
which tells the docker-proxy to listen on port 5000 on the host machine and forward requests to the container. If you don't want port 5000 to be externally available, remove this.
Also, it's good that you didn't succeed in making gunicorn listen only to 127.0.0.1; if you did, the web_proxy container wouldn't be able to connect to it. So you may as well undo your attempt to do that.
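A sketch of the web_app service with the host mapping removed, assuming the proxy reaches gunicorn over the shared web network (the expose entry is optional documentation, not a published port):

web_app:
  build:
    context: .
    dockerfile: Dockerfile.web
  restart: always
  # no "ports:" mapping: reachable from other containers on the
  # "web" network, but not from the host
  expose:
    - "5000"
  volumes:
    - data:/home/microblog
  networks:
    - web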

Docker compose with Rails and Postgres: could not connect to server: No route to host

I'm currently having an issue with my docker-compose setup, which has these services: a Rails app and Postgres. These are my configurations:
docker-compose.yml
version: '3'

services:
  db:
    image: postgres:alpine
    restart: always
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  app:
    build: .
    restart: always
    command: bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
      - bundle_path:/bundle
    ports:
      - "3000:3000"
    depends_on:
      - db

volumes:
  bundle_path:
Dockerfile
FROM ruby:2.5.3-slim
# install rails dependencies
RUN apt-get update -qq \
&& apt-get install -y \
# Needed for certain gems
build-essential \
# Needed for postgres gem
libpq-dev \
# Others
nodejs \
vim-tiny \
# The following are used to trim down the size of the image by removing unneeded data
&& apt-get clean autoclean \
&& apt-get autoremove -y \
&& rm -rf \
/var/lib/apt \
/var/lib/dpkg \
/var/lib/cache \
/var/lib/log
# Changes localtime to Singapore
RUN cp /usr/share/zoneinfo/Asia/Singapore /etc/localtime
# create a folder /myapp in the docker container and go into that folder
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
# Run bundle install to install gems inside the gemfile
RUN bundle install
ADD . /myapp
CMD bash -c "rm -f tmp/pids/server.pid && rails s -p 3000 -b '0.0.0.0'"
database.yml
default: &default
adapter: postgresql
encoding: unicode
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
development:
<<: *default
database: myapp_development
host: db
username: postgres
password: postgres
port: 5432
I can build the app using docker-compose build, but whenever I run docker-compose up, the db service exits while my Rails app keeps running.
These are the logs I get when I run docker-compose up:
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | initdb: error: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
The error I'm getting when I access http://localhost:3000 is
could not connect to server: No route to host Is the server running on host "db" (172.18.0.2) and accepting TCP/IP connections on port 5432?
I think you should use a named volume for Postgres too:

services:
  db:
    image: postgres:alpine
    restart: always
    volumes:
      - postgres_volume:/var/lib/postgresql/data

volumes:
  postgres_volume:

I had a similar issue and fixed it this way. Also try restarting Docker.
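If you do switch from the ./tmp/db bind mount to a named volume, the stale host directory that made initdb fail can be discarded. A sketch, assuming the development data is disposable:

docker-compose down
rm -rf ./tmp/db   # remove the half-initialized cluster initdb complained about
docker-compose up --build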

Docker can't connect to localhost

I'm facing a problem of not being able to access my Docker container from my browser at localhost:8000. There is no error message. Here is what the browser says:
This page isn’t working localhost didn’t send any data. ERR_EMPTY_RESPONSE
This is my docker-compose file:
version: "3.7"
services:
postgres:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=postgres
fastapi:
build: ./backend
ports:
- "8000:8000"
volumes:
- ./backend/:/usr/src/backend/
depends_on:
- postgres
volumes:
postgres_data:
and this is my dockerfile:
# pull official base image
FROM python:3.8.3-slim-buster
# set work directory
WORKDIR /usr/src/backend
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . .
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
#Installing dependencies, remove those that are not needed after the installation
RUN pip install -r requirements.txt
CMD uvicorn main:app --reload
Here is my CLI:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [6] using statreload
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
If anyone has this problem with FastAPI: try adding --host 0.0.0.0 to your startup command.
Example: uvicorn main:app --reload --host 0.0.0.0 --port 8000
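In the Dockerfile above that would mean changing the CMD; a sketch (the exec form avoids wrapping the server in an extra shell):

# bind to all interfaces so the published port 8000 can reach uvicorn
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]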

docker web server exiting without error message

I have my project set up with docker-compose. It was working fine until today, when I rebuilt the web container. Now whenever I start it using docker-compose up, it just exits without an error message:
web_1 | => Booting Unicorn
web_1 | => Rails 3.2.22.5 application starting in development on http://0.0.0.0:3000
web_1 | => Call with -d to detach
web_1 | => Ctrl-C to shutdown server
web_1 | Exiting
If I run it with --verbose (docker-compose --verbose up), the following lines show up after 'Exiting':
compose.cli.verbose_proxy.proxy_callable: docker wait <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9')
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/wait HTTP/1.1" 200 30
compose.cli.verbose_proxy.proxy_callable: docker wait -> 1
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/a03b58f2116698d670f86155cd68605a148143b83ee3351a5e5a4808d682afc9/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {u'AppArmorProfile': u'docker-default',
u'Args': [u'redis:6379',
u'--',
u'./wait-for-it.sh',
u'postgres:5432',
u'--',
u'bundle',
u'exec',
u'rails',
u's',
This is my docker-compose.yml:
version: '3'

services:
  memcached:
    image: memcached:1.5.2-alpine
    restart: always
    ports:
      - "11211:11211"
  postgres:
    image: postgres:9.4-alpine
    restart: always
    volumes:
      - ~/.myapp-data/postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=myapp_development
      - POSTGRES_USER=default
      - POSTGRES_PASSWORD=secret
  redis:
    image: redis:3.2.0-alpine
    restart: always
    volumes:
      - ~/.myapp-data/redis:/data
    ports:
      - "6379:6379"
  web:
    build:
      context: .
      dockerfile: "Dockerfile-dev"
    stdin_open: true
    tty: true
    command: ./wait-for-it.sh redis:6379 -- ./wait-for-it.sh postgres:5432 -- bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/opt/apps/myapp
    depends_on:
      - memcached
      - redis
      - postgres
    ports:
      - "80:3000"
    env_file:
      - .env
    extra_hosts:
      - "api.myapp:127.0.0.1"
      - "api.getmyapp:127.0.0.1"
      - "my.app:127.0.0.1"
EDIT:
Here are the contents of Dockerfile-dev, requested in the comments:
FROM ruby:2.3.7-slim
RUN apt-get update
RUN apt-get -y install software-properties-common libpq-dev build-essential \
python-dev python-pip wget curl git-core \
--fix-missing --no-install-recommends --allow-unauthenticated
# Set install path for reference later.
ENV INSTALL_PATH /opt/apps/engine
RUN mkdir -p $INSTALL_PATH
RUN gem install bundler
WORKDIR $INSTALL_PATH
ADD Gemfile $INSTALL_PATH
ADD Gemfile.lock $INSTALL_PATH
RUN bundle install
RUN find /tmp -type f -atime +10 -delete
ADD . $INSTALL_PATH
RUN cp config/database.docker-dev.yml config/database.yml
CMD [ "bundle", "exec", "rails", "s", "-p", "3000", "-b" "0.0.0.0" ]
Docker containers exit as soon as the command run inside them finishes. From the logs you have posted, it seems the container has 'nothing else' to do after starting the server. You need to run the process in the foreground, or add some sort of sleep, to keep the container alive.
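For example, a sketch of the compose command with an explicit exec, so that the server replaces the shell and stays in the foreground as PID 1 (wait-for-it.sh usage as in the question):

# chain the waits, then exec the server so the container stays alive
command: bash -c "./wait-for-it.sh redis:6379 && ./wait-for-it.sh postgres:5432 && exec bundle exec rails s -p 3000 -b 0.0.0.0"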

Can't launch sphinxsearch in docker

I'm trying to dockerize a Rails application. It uses Sphinx for search, and I can't make it run through docker.
This is what happens when I run docker-compose up and try to perform a search:
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Sphinx Query (1.9ms) SELECT * FROM `field_core` WHERE MATCH('soccer') AND `sphinx_deleted` = 0 ORDER BY `name` ASC LIMIT 0, 10000
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807] Completed 500 Internal Server Error in 27ms (ActiveRecord: 3.0ms)
web_1 | [1fd79fbf-2e77-4af5-90ad-ae3637ada807]
web_1 | ThinkingSphinx::ConnectionError (Error connecting to Sphinx via the MySQL protocol. Can't connect to MySQL server on '127.0.0.1' (111)):
web_1 | app/controllers/fields_controller.rb:7:in `search'
This is the result of docker-compose run sphinx rake ts:index:
sh: 1: searchd: not found
The Sphinx start command failed:
Command: searchd --pidfile --config "/app/config/development.sphinx.conf"
Status: 127
Output: See above
There may be more information about the failure in /app/log/development.searchd.log.
docker-compose.yml:
version: '3'

services:
  db:
    image: circleci/mysql:5.7
    restart: always
    volumes:
      - mysql_data:/var/lib/mysql
    ports:
      - "3309:3309"
    expose:
      - '3309'
  web:
    build: .
    command: rails server -p 3000 -b '0.0.0.0'
    ports:
      - "3000:3000"
    expose:
      - '3000'
    depends_on:
      - db
      - sphinx
    volumes:
      - app:/app
  sphinx:
    container_name: sociaball_sphinx
    image: stefobark/sphinxdocker
    restart: always
    links:
      - db
    volumes:
      - /app/config/sphinxy.conf:/etc/sphinxsearch/sphinxy.conf
      - /app/sphinx:/var/lib/sphinx

volumes:
  mysql_data:
  app:
Dockerfile:
FROM ruby:2.4.1
RUN apt-get update && apt-get install -qq -y build-essential nodejs --fix-missing --no-install-recommends
RUN curl -s \
http://sphinxsearch.com/files/sphinxsearch_2.3.2-beta-1~wheezy_amd64.deb \
-o /tmp/sphinxsearch.deb \
&& dpkg -i /tmp/sphinxsearch.deb \
&& rm /tmp/sphinxsearch.deb \&& mkdir -p /var/log/sphinxsearch
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
COPY . ./
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
thinking_sphinx.yml:
development: &common
min_infix_len: 1
charset_table: "0..9, english, U+0021..U+002F"
port: 9306
address: sociaball_mysql_1
production:
<<: *common
So rake isn't available in the sphinx container, and the Sphinx binaries aren't available in the app container. What am I doing wrong?
Thinking Sphinx expects a copy of the Rails app when running the rake tasks, so you'll need to have a copy of your app within your Sphinx container. This will ensure (once you've bundled the gems) that rake exists in the Sphinx container, where the Sphinx binaries are also present.
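One way to do that, as a sketch, is to mount the same app volume the web service already uses into the sphinx service as well:

sphinx:
  image: stefobark/sphinxdocker
  volumes:
    - app:/app   # share the application code with the Sphinx container
    - /app/config/sphinxy.conf:/etc/sphinxsearch/sphinxy.conf

With the app code present in the container where the Sphinx binaries live, the rake task has a chance of finding everything it needs (assuming the image also provides Ruby and the bundled gems).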
So, instead of running the rake task, I just did what it does directly in the sphinx container, like this:
docker-compose run --rm sphinx indexer \
--config "/etc/sphinxsearch/sphinxy.conf" --all --rotate
Regarding the 500 error: it was caused by an incorrect configuration in thinking_sphinx.yml. It should have pointed to the container running Sphinx instead of the db:

development: &common
  # ...
  address: sociaball_sphinx_1
