I have four containers: one for the PHP server, one for the MySQL server, one for nginx, and one for Let's Encrypt.
The problem is that the PHP server can't connect to the MySQL server.
On the host server, I can connect to the database at 127.0.0.1 (screenshot: the schema update succeeds on the host).
Inside the PHP container, I can't connect to the database at 127.0.0.1 (screenshot: the schema update fails in the container).
I think it's a network problem between the containers.
This is the docker-compose file:
version: "3.3"
services:
saas-smd-php:
build: ./html
container_name: saas-smd-php
ports:
- "80"
network_mode: bridge
env_file:
- .env
environment:
- VIRTUAL_PORT=80
nginx-proxy:
image: jwilder/nginx-proxy
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- conf:/etc/nginx/conf.d
- vhost:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- dhparam:/etc/nginx/dhparam
- certs:/etc/nginx/certs:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
network_mode: bridge
letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: nginx-proxy-le
environment:
- NGINX_PROXY_CONTAINER=nginx-proxy
env_file:
- .env
volumes:
- conf:/etc/nginx/conf.d
- vhost:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- dhparam:/etc/nginx/dhparam
- certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
network_mode: bridge
db:
container_name: saas-smd-mysql
image: mysql
ports:
- "3306:3306"
env_file:
- .env
volumes:
- data:/var/lib/mysql
network_mode: bridge
volumes:
data:
conf:
vhost:
html:
dhparam:
certs:
And the Dockerfile of the PHP server:
ARG PHP_VERSION=7.3
FROM php:${PHP_VERSION}-fpm-alpine
RUN apk update
RUN apk upgrade
RUN set -ex \
&& apk --no-cache add postgresql-libs postgresql-dev \
&& docker-php-ext-install pgsql pdo_pgsql \
&& apk del postgresql-dev
WORKDIR /var/www/html
COPY saas-api .
EXPOSE 80
EXPOSE 22
CMD ["php", "-S", "0.0.0.0:80", "-t", "./", "./web/app_dev.php"]
I would like the PHP server to be able to communicate with the MySQL server, without breaking access to the PHP server from a domain name through nginx and Let's Encrypt.
Since you do not provide any info on how you connect, I will go ahead and assume you are just not using your database container name: you're supposed to use saas-smd-mysql instead of localhost.
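As a minimal sketch (not your full stack): put the PHP and MySQL services on a shared user-defined network so they can resolve each other by name. Note that with network_mode: bridge you are on Docker's default bridge network, where containers cannot resolve each other by container name; only user-defined networks provide that DNS. The DATABASE_URL variable below is purely illustrative, since your .env is not shown — use whatever setting your application actually reads:

version: "3.3"
services:
  saas-smd-php:
    build: ./html
    networks:
      - backend
    environment:
      # hypothetical variable name; point your app's DB host at the container name
      - DATABASE_URL=mysql://user:password@saas-smd-mysql:3306/dbname
  db:
    container_name: saas-smd-mysql
    image: mysql
    networks:
      - backend
networks:
  backend:
    driver: bridge

The nginx-proxy container would likewise need to share a network with saas-smd-php (or keep its current setup) so the domain routing keeps working.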
Related
I have several Flask applications which I want to run on a server as separate Docker containers. On the server I already have several websites running with a reverse proxy and the letsencrypt-nginx-proxy-companion. Unfortunately I can't get the containers to run. I think it is because of the port mapping: when I start the containers on port 80, I get the error message "[ERROR] Can't connect to ('', 80)" from gunicorn. On all other ports it starts successfully, but then I can't access it from outside.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    image: "mysql/mysql-server:5.7"
    env_file: .env-mysql
    restart: always
  app:
    build: .
    env_file: .env
    expose:
      - "8001"
    environment:
      - VIRTUAL_HOST:example.com
      - VIRTUAL_PORT:'8001'
      - LETSENCRYPT_HOST:example.com
      - LETSENCRYPT_EMAIL:foo@example.com
    links:
      - db:dbserver
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
Dockerfile
FROM python:3.6-alpine
ARG CONTAINER_USER='flask-user'
ENV FLASK_APP run.py
ENV FLASK_CONFIG docker
RUN adduser -D ${CONTAINER_USER}
USER ${CONTAINER_USER}
WORKDIR /home/${CONTAINER_USER}
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY run.py config.py entrypoint.sh ./
# runtime configuration
EXPOSE 8001
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b :8001 --access-logfile - --error-logfile - run:app
reverse-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/www/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
      DEBUG: "true"
networks:
  default:
    external:
      name: nginx-proxy
I have a VPS on which I want to deploy multiple web applications (I've already read posts about this, and they work perfectly when each application is a single container). I want each web application to have its own nginx that routes to its subdomains, plus some static websites related to them. I have two docker-compose files. My network looks like the following (network diagram image).
My nginx reverse proxy should be responsible for routing the domains to the respective nginx containers. I'm not sure whether this is possible (that's why I'm asking for help); if someone can suggest a better approach, I'm open to suggestions. Below is my configuration for the nginx proxy container and the docker-compose files of the other web apps. I've used nginx-proxy.
NGINX PROXY Docker Compose File
version: "3.7"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
default:
external:
name: nginx-proxy
WEB APPLICATION 1 docker-compose.yml
version: "3.7"
networks:
default:
external:
name: nginx-proxy
yoda-network:
driver: bridge
services:
adminer:
container_name: ${APP_NAME}_adminer
image: adminer
depends_on:
- mysql
expose:
- 8080
environment:
VIRTUAL_HOST: yodaledger.com
VIRTUAL_PORT: 8080
networks:
- yoda-network
python:
container_name: ${APP_NAME}_python
image: python:3.6
command: bash -c "pip3 install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
volumes:
- ${APP_PATH}var/www/api.yodaledger.com/yodaledger_backend:/app
- ${APP_PATH}var/www/static.yodaledger.com:/app/static
depends_on:
- mysql
working_dir: /app
environment:
- PYTHONUNBUFFERED=1
- MYSQL_HOST=mysql
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
networks:
- yoda-network
nginx:
container_name: ${APP_NAME}_nginx
image: nginx:alpine
expose:
- 443
- 80
volumes:
- ${APP_PATH}var/www:/var/www
- ${APP_PATH}var/log:/var/log/nginx
- ${APP_PATH}var/ssl:/var/ssl
- ${APP_PATH}etc/nginx:/etc/nginx
- /tmp/${APP_NAME}/nginx:/tmp
depends_on:
- python
environment:
VIRTUAL_PORT: 443
VIRTUAL_HOST: yodaledger.com,api.yodaledger.com,app.yodaledger.com,static.yodaledger.com
networks:
- yoda-network
- default
mysql:
container_name: ${APP_NAME}_mysql
image: mariadb:latest
#ports:
# - "3306:3306"
volumes:
- ${APP_PATH}var/mysql:/var/lib/mysql
- ${APP_PATH}etc/mysql:/etc/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- TZ=Europe/Paris
networks:
- yoda-network
WEB APPLICATION 2
It looks the same as web application 1.
I have an issue I discovered while trying to run PHPUnit tests that make curl requests. I always get:
Connecting to www.example.local (www.example.local)|127.0.0.1|:80... failed: Connection refused.
I then tried to run wget and curl commands from the container command line: same problem. My Docker setup works like this.
In my computer's /etc/hosts:
127.0.0.1 www.example.local
127.0.0.1 api.example.local
and so forth, and it works when accessing the sites in my browser. In my docker-compose.yml, I have:
version: "3.4"
volumes:
  postgres_database:
    external: false
  mysql_data: {}
  schemas:
    external: false
services:
  php:
    build:
      context: ./
      dockerfile: Dockerfile
      network: host
    volumes:
      - ./:/code
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on: ["postgres"]
    ports:
      - "9000:9000"
    expose:
      - "9000"
    container_name: binge_php
  web:
    image: nginx:latest
    build:
      context: ./
      dockerfile: Dockerfile_Nginx
      network: host
    ports:
      - "80:80"
    expose:
      - "0"
    volumes:
      - ./:/code
      - ./site.conf:/etc/nginx/conf.d/site.conf
      - ./nginx_custom_settings.conf:/etc/nginx/conf.d/nginx_custom_settings.conf
    links:
      - php
    depends_on:
      - php
    container_name: binge_nginx
What might I be doing wrong that is preventing me from running curl commands from inside the PHP container?
I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
When I run docker-compose up --build, port 3000 works, but port 8080 is dead: localhost:3000 responds, while localhost:8080 does not.
I would suggest creating a container for the server and keeping it separate from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands).
In any case, here are some modifications that I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked
    # the two containers, the chat app will be able to connect to MongoDB using
    # the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
Btw what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need to have this in your chat service:
depends_on:
  - mongo
This docker-compose file works for me. Note that I am saving the database data to a local directory; you should add this directory to .gitignore.
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo:latest
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
- NODE_ENV=production
ports:
- "28017:27017"
expose:
- 28017 # you can connect to this mongodb with studio3t
volumes:
- ./mongodb-data:/data/db
restart: always
networks:
- docker-network
express:
container_name: express
environment:
- NODE_ENV=development
restart: always
build:
context: .
args:
buildno: 1
expose:
- 3000
ports:
- "3000:3000"
links:
- mongo # link this service to the database service
depends_on:
- mongo
command: "npm start" # override the default command to use nodemon in dev
networks:
- docker-network
networks:
docker-network:
driver: bridge
You may also find that, with Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
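One way to handle that wait, sketched below, is a healthcheck on the mongo service combined with a depends_on condition. Note that the condition form requires the 2.1 compose file format (or a recent Docker Compose implementing the Compose Specification), and the healthcheck command assumes the legacy mongo shell is present in the image (newer images ship mongosh instead):

version: '2.1'
services:
  mongo:
    image: mongo:latest
    healthcheck:
      # ping the server; swap "mongo" for "mongosh" on newer images
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 10
  express:
    build: .
    depends_on:
      mongo:
        condition: service_healthy   # start express only once mongo answers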
I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I tried to run the database as a separate container, my PHP application still pointed to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How can my PHP application connect to MySQL running in another container?
This is similar to the question asked here:
Connect to mysql in a docker container from the host
However, I do not want to connect to MySQL from the host machine; I need to connect from another container.
First of all, you shouldn't publish MySQL's port 3306 if you don't want to reach it from the host machine. Secondly, links are deprecated now; you can use networks instead. I'm not sure about compose file version 1, but in version 2 all containers declared in a common docker-compose file are placed on one shared network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With such a configuration, you can resolve the MySQL container by the name mysql1 from inside the web container.
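As a quick sanity check (assuming ping is installed in the web image; any client pointed at the host mysql1 works just as well):
$ docker-compose exec web ping -c 1 mysql1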
For me, the name resolution never happens. Here is my docker-compose file; I was hoping to connect from the app container to mysql, where the service name is mysql and is passed as an env variable to the other container: DB_HOST=mysql.
version: "2"
services:
app:
build:
context: ./
dockerfile: /src/main/docker/Dockerfile
image: crossblogs
environment:
- DB_HOST=mysql
- DB_PORT=3306
ports:
- 8080:8080
depends_on:
- mysql
mysql:
image: mysql:5.7.20
environment:
- MYSQL_USER=root
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=crossblogs
ports:
- 3306:3306
command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp