Docker: Cannot connect to other container

docker-compose.yml
version: '3'
volumes:
  wp-assets:
services:
  mariadb:
    build: ./requirements/mariadb
    environment:
      - MYSQL_ROOT_HOST=${MYSQL_ROOT_HOST}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    ports:
      - "127.0.0.1:3306:3306"
      - "127.0.0.1:9999:9999" # test
  wordpress:
    environment:
      - WORDPRESS_DB_HOST=${WORDPRESS_DB_HOST}
      - WORDPRESS_DB_USER=${WORDPRESS_DB_USER}
      - WORDPRESS_DB_PASSWORD=${WORDPRESS_DB_PASSWORD}
      - WORDPRESS_DB_NAME=${WORDPRESS_DB_NAME}
      - WORDPRESS_TABLE_PREFIX=${WORDPRESS_TABLE_PREFIX}
      - WORDPRESS_AUTH_KEY=${WORDPRESS_AUTH_KEY}
      - WORDPRESS_SECURE_AUTH_KEY=${WORDPRESS_SECURE_AUTH_KEY}
      - WORDPRESS_LOGGED_IN_KEY=${WORDPRESS_LOGGED_IN_KEY}
      - WORDPRESS_NONCE_KEY=${WORDPRESS_NONCE_KEY}
      - WORDPRESS_AUTH_SALT=${WORDPRESS_AUTH_SALT}
      - WORDPRESS_SECURE_AUTH_SALT=${WORDPRESS_SECURE_AUTH_SALT}
      - WORDPRESS_LOGGED_IN_SALT=${WORDPRESS_LOGGED_IN_SALT}
      - WORDPRESS_NONCE_SALT=${WORDPRESS_NONCE_SALT}
    volumes:
      - wp-assets:/var/wp-assets
    build: ./requirements/wordpress
    ports:
      # host_port == 127.0.0.1:9000, allow only localhost
      - "127.0.0.1:9000:9000"
  nginx:
    # image: nginx:latest
    depends_on:
      - wordpress
    volumes:
      - wp-assets:/var/wp-assets
    build: ./requirements/nginx
    ports:
      # host_port == 0.0.0.0:8080, allow all interfaces
      - "8080:80"
mariadb/Dockerfile
FROM debian:buster
# install mariadb-server
RUN apt update && apt install -y mariadb-server
# allow connection from wordpress (host name)
RUN sed -e 's/127.0.0.1/wordpress/' \
    -i '/etc/mysql/mariadb.conf.d/50-server.cnf'
# used for socket
RUN mkdir -p /var/run/mysqld && \
    chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && \
    chmod 777 /var/run/mysqld && \
    touch /var/run/mysqld/mysqld.sock
# init db here
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
#ENTRYPOINT ["/docker-entrypoint.sh"]
ENTRYPOINT ["tail", "-f"]
I was trying to connect to mariadb from the wordpress container (using mariadb-client) and got an error:
Can't connect to MySQL server on 'mariadb'
So I tested whether the ports were good. While other ports like nginx:80 or wordpress:9000 can be reached from the other containers, the mariadb container's ports refuse connections.
I couldn't figure out the difference between the mariadb container and the others. What's the problem?

The ENTRYPOINT of your MariaDB Dockerfile is
ENTRYPOINT ["tail", "-f"]
so it doesn't actually run MariaDB.
You probably need to comment that out and comment back in
ENTRYPOINT ["/docker-entrypoint.sh"]

I was confused about ports. I thought containers could communicate just by opening ports, but there were no LISTENing ports because nothing was actually running a server. Compose's ports: only publishes (binds) ports on the host and has nothing to do with whether anything is LISTENing inside the container.
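A quick way to see the difference, assuming the containers are up (and that the images ship ss and the mariadb client, which isn't guaranteed):
# no LISTEN socket inside the mariadb container until mysqld actually runs
docker-compose exec mariadb ss -tln
# and from another container, the connection is refused until then
docker-compose exec wordpress mariadb -h mariadb -u root -p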

Related

How can I maintain my data and public links in an ownCloud image after docker push & pull?

I have a web server program that requires PDF files from an ownCloud server. I'm writing installation code via docker-compose and Docker Hub.
I use Ubuntu 20.04 LTS and Docker Compose v2.1.0.
Here is the process:
store PDF files and create public links in the ownCloud docker container (under /var/www/owncloud/data)
create new images (both owncloud and mariadb) and tags from the containers with the commands below
docker commit 5cba8bf76904
docker tag 9315184e23f5 DOCKERID/docker-mariadb
docker push DOCKERID/docker-mariadb
pull those images on another fresh Ubuntu server, using docker-compose up
After this process, when I connect to ownCloud running on the new fresh Ubuntu server, there are no PDF files, all the configs are reinitialized (ownCloud account, mariadb database configs),
and the ownCloud start-up page (admin account and database configuration page) opens.
My docker-compose file and Dockerfiles are below (related parts only)
docker-compose.yml
owncloud:
  #build: ./dockerfiles/owncloud/
  image: "dockerhubid/docker-owncloud"
  container_name: chatbot_owncloud
  restart: always
  networks:
    - chatbot_network
  depends_on:
    - mariadb
  volumes:
    - 'owncloud_php:/var/www/owncloud'
  command: php-fpm7.4 -F -R
mariadb:
  # build: ./dockerfiles/mariadb/
  image: dockerhubid/docker-mariadb
  container_name: mariadb
  restart: always
  expose:
    - '3306'
  networks:
    - chatbot_network
  environment:
    - MYSQL_ROOT_PASSWORD=password
    - MYSQL_USER=owncloud
    - MYSQL_PASSWORD=password
    - MYSQL_DATABASE=owncloud
  command: ["--max-allowed-packet=128M", "--innodb-log-file-size=64M"]
nginx:
  #build: ./dockerfiles/nginx/
  image: "dockerhubid/docker-nginx"
  container_name: chatbot_nginx
  restart: always
  depends_on:
    - owncloud
  volumes:
    - ./dockerfiles/certbot/conf:/etc/letsencrypt
    - ./dockerfiles/certbot/www:/var/www/certbot
  volumes_from:
    - 'owncloud:ro'
  networks:
    - chatbot_network
  ports:
    - '80:80'
    - '3000:3000'
    - '8883:8883'
    - '8884:8884'
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
  image: certbot/certbot
  container_name: chatbot_certbot
  networks:
    - chatbot_network
  volumes:
    - ./dockerfiles/certbot/conf:/etc/letsencrypt
    - ./dockerfiles/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
owncloud Dockerfile
FROM ubuntu:20.04
EXPOSE 9000
ARG DEBIAN_FRONTEND=noninteractive
# dependencies
RUN apt update && apt upgrade -y
RUN apt install -y php-bz2 php-curl php-gd php-imagick php-intl php-mbstring php-xml php-zip php-mysql php-fpm wget zip unzip vim
# owncloud
RUN wget https://download.owncloud.org/community/owncloud-10.5.0.zip
RUN unzip owncloud-10.5.0.zip -d /var/www/
RUN rm /owncloud-10.5.0.zip
WORKDIR /var/www/owncloud
RUN chown www-data:www-data -R /usr/bin/php /var/www/owncloud/
RUN chmod -R 755 /var/www/owncloud/
# php-fpm setup
RUN sed -i 's+/run/php/php7.4-fpm.sock+9000+g' /etc/php/7.4/fpm/pool.d/www.conf
ADD init.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/init.sh
mariadb Dockerfile
FROM mariadb:10.5
EXPOSE 3306
ARG DEBIAN_FRONTEND=noninteractive
USER root
ADD init.sql /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/init.sql
How can I maintain those files and public links?
Why are those things removed after the Docker Hub push & pull?
I tried it with the official ownCloud image first, but from my investigation the official image stores its data in an external docker volume.
I thought that's why my data was gone after docker push & pull,
so I'm trying a manual installation instead.
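That investigation points at the likely cause: docker commit only captures a container's writable filesystem layer, and anything stored in a volume (here, the owncloud_php volume mounted at /var/www/owncloud) is never part of the committed image. A sketch of carrying the volume's contents to the new server instead (volume names may be prefixed with your compose project name):
# on the old server: back up the named volume to a tarball
docker run --rm -v owncloud_php:/data -v "$PWD":/backup ubuntu:20.04 \
    tar -czf /backup/owncloud_php.tar.gz -C /data .
# on the new server: restore the tarball into a fresh volume
docker volume create owncloud_php
docker run --rm -v owncloud_php:/data -v "$PWD":/backup ubuntu:20.04 \
    tar -xzf /backup/owncloud_php.tar.gz -C /data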

What is the point of running supervisor on top of a docker container?

I'm inheriting an open-source project where I have this script to deploy two containers (Django and nginx) on a server:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
The docker-compose.yml file is like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm wondering what the point of sudo supervisorctl restart react-wagtail-project is.
If I put restart: always on the two containers I'm running, is it useful to run a supervisor command on top of that to check they are always up and running?
Or maybe is it there for the possibility to create logs?
Thank you
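For context, the supervisor entry being restarted presumably manages the compose stack itself on the host, something like this (hypothetical; the actual config isn't shown in the project):
; /etc/supervisor/conf.d/react-wagtail-project.conf (hypothetical sketch)
[program:react-wagtail-project]
; supervisord keeps the docker-compose process itself alive on the host,
; while restart: always only covers container restarts done by the Docker daemon
command=docker-compose -f /app/docker-compose.yml up
autostart=true
autorestart=true
stdout_logfile=/var/log/react-wagtail-project.out.log
stderr_logfile=/var/log/react-wagtail-project.err.log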

Unable to start and connect 3 docker services - Redis, Python 3.7 Slim (Running DRF) and Celery

I am trying to containerise my application, which is developed using technologies like DRF, Celery, and Redis (as a broker).
I want to prepare a docker-compose file that will start all three services (DRF, Celery, and Redis).
I also want to prepare a Dockerfile.prod for deployment.
Here is what I have done so far -
version: "3"
services:
redis:
container_name: Redis-Container
image: "redis:latest"
ports:
- "6379:6379"
expose:
- "6379"
command: "redis-server"
dropoff-backend:
container_name: Dropoff-Backend
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/logistics_backend
ports:
- "8080:8080"
expose:
- "8080"
restart: always
command: "python manage.py runserver 0.0.0.0:8080"
links:
- redis
depends_on:
- redis
# - celery
celery:
container_name: celery-container
build: .
command: "celery -A logistics_project worker -l INFO"
volumes:
- .:/code
links:
- redis
Dockerfile (not for deployment)
FROM python:3.7-slim
# FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN apt-get update && \
    apt-get install python3-dev default-libmysqlclient-dev gcc -y && \
    mkdir /logistics_backend
WORKDIR /logistics_backend
COPY ./requirements.txt /requirements.txt
COPY . /logistics_backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
RUN python manage.py makemigrations && \
    python manage.py migrate
RUN python manage.py loaddata roles businesses route_status route_type order_status service_city payment_status
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80"]
The problem with the existing docker-compose is that it returns the error below -
celery-container | [2020-10-08 16:59:25,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
celery-container | Trying again in 32.00 seconds... (16/100)
In Setting.py I have defined this for the Redis connection:
REDIS_HOST = 'localhost'
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
I don't know how I should extend my Dockerfile, which is currently used for development, into a Dockerfile.prod that could be deployed.
All three of my containers are running -
You need to change REDIS_HOST in your Setting.py to "redis" instead of "localhost".
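Inside the compose network each service is reachable by its service name, so the settings become:
REDIS_HOST = 'redis'  # the docker-compose service name, not localhost
REDIS_PORT = '6379'
BROKER_URL = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = 'redis://' + REDIS_HOST + ':' + REDIS_PORT + '/0'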

docker-compose port not exposing

When I try to access my lambda endpoint from outside, I receive this error:
curl -XPOST 127.0.0.1:3000/create-loan
Recv failure: Connection reset by peer
The endpoint works from inside docker, but port 3000 is not reachable from outside.
Any help?
Name               Command                         State   Ports
billing_db_1       docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
billing_lambda_1   /usr/local/bin/sam local s ...  Up      0.0.0.0:3000->3000/tcp
docker-compose.yml
version: '3'
services:
  lambda:
    build: .
    volumes:
      - ./:/app
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - db
    environment:
      - PYTHONPATH=${PWD}/billing
    ports:
      - "3001:3000"
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=${BILLING_USER}
      - POSTGRES_PASSWORD=${BILLING_PASSWORD}
      - POSTGRES_DB=${BILLING_DB}
      - POSTGRES_HOST=${BILLING_HOST}
volumes:
  db-data:
    driver: local
My Dockerfile:
FROM python:3.7
RUN pip3 install aws-sam-cli
EXPOSE 3000
ENTRYPOINT ["/usr/local/bin/sam"]
RUN apt-get install curl
RUN pip3 install pipenv
WORKDIR /app
RUN pipenv install --dev
CMD ["local", "start-api"]
Solved:
CMD ["local", "start-api", "--host", "0.0.0.0"]
By the compose file, you've published port 3001:
ports:
  - "3001:3000"
But you are connecting to 3000:
curl -XPOST 127.0.0.1:3000/create-loan
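So from the host, the request should target the published port (or change the mapping to "3000:3000"). The --host 0.0.0.0 fix above is also needed, because a server listening only on 127.0.0.1 inside the container is unreachable through the published port:
curl -XPOST 127.0.0.1:3001/create-loan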

Sending HTTP requests from a docker container using the same network as requests sent from the host machine

My application is dockerized. It's a Python/Django application. We are using a local SMS-sending API that is IP-restricted, so I have given the provider my EC2 IP address, and I am running my docker container on that EC2 machine. But my Python app is not able to send requests to that API, because the docker container has a different IP.
How do I solve this problem?
Dockerfile
# ToDo use alpine image
FROM python:3.6
# Build Arguments with defaults
ARG environ
ARG build_date
ARG build_version
ARG maintainer_name='Name'
ARG maintainer_email='email@email.com'
# Adding Labels
LABEL com.example.service="Service Name" \
    com.example.maintainer.name="$maintainer_name" \
    com.example.maintainer.email="$maintainer_email" \
    com.example.build.environment="$environ" \
    com.example.build.version="$build_version" \
    com.example.build.release-date="$build_date"
# Create app directory
RUN mkdir -p /home/example/app
# Install Libre Office for pdf conversion
RUN apt-get update -qq \
    && apt-get install -y -q libreoffice \
    && apt-get remove -q -y libreoffice-gnome
# Cleanup after apt-get commands
RUN apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    /var/cache/apt/archives/*.deb /var/cache/apt/*cache.bin
# Activate WORKING DIR
WORKDIR /home/example/app
# Copying requirements
COPY requirements/${environ}.txt /tmp/requirements.txt
# Install the app dependencies
# ToDo Refactor requirements
RUN pip install -r /tmp/requirements.txt
# Envs
ENV DJANGO_SETTINGS_MODULE app.settings.${environ}
ENV ENVIRONMENT ${environ}
# ADD the source code and entry point into the container
ADD . /home/example/app
ADD entrypoint.sh /home/example/app/entrypoint.sh
# Making entry point executable
RUN chmod +x entrypoint.sh
# Exposing port
EXPOSE 8000
# Entry point and CMD
ENTRYPOINT ["/home/example/app/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  postgres:
    image: onjin/alpine-postgres:9.5
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRES_USER: django
      POSTGRES_PASSWORD: django
      POSTGRES_DB: web
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build:
      context: .
      args:
        environ: local
    command: gunicorn app.wsgi:application -b 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: 'postgres://django:django@postgres/web'
      DJANGO_MANAGEPY_MIGRATE: 'on'
      DJANGO_MANAGEPY_COLLECTSTATIC: 'on'
      DJANGO_LOADDATA: 'off'
      DOMAIN: '0.0.0.0'
volumes:
  postgres_data:
You should try putting the containers on the same network stack as your EC2 instance. That means using host networking, which Compose sets per service with network_mode (note that ports: mappings are ignored in host mode, and host networking only works on Linux hosts).
suggested docker-compose file
version: '3'
services:
  postgres:
    [...]
    network_mode: "host"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    [...]
    network_mode: "host"
volumes:
  postgres_data:
Defining your own network with driver: host will not work: the host driver cannot be used for user-defined networks, so network_mode is the supported way to attach a service to the host's network stack.
Further reading about networks: the official reference.
The official networking tutorial is an interesting read too.
Publish the port from the docker container to the host machine, then configure ec2IP:port in the SMS application.
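One sanity check worth running first: outbound traffic from a container on the default bridge network is NATed through the host, so it should already appear to come from the EC2 IP. You can compare the public IP seen from the host and from inside the container (curl being available in the web image is an assumption here):
# on the EC2 host
curl -s https://checkip.amazonaws.com
# inside the web container -- with the default bridge network this
# should print the same public IP as the host
docker-compose exec web curl -s https://checkip.amazonaws.com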
