I have this docker-compose file:
version: "3.8"
services:
web:
image: apachephp:v1
ports:
- "80-85:80"
volumes:
- volume:/var/www/html
network_mode: bridge
ddbb:
image: mariadb:v1
ports:
- "3306:3306"
volumes:
- volume2:/var/lib/mysql
network_mode: bridge
environment:
- MYSQL_ROOT_PASSWORD=*********
- MYSQL_DATABASE=*********
- MYSQL_USER=*********
- MYSQL_PASSWORD=*********
volumes:
volume:
name: volume-data
volume2:
name: volume2-data
When I run this:
docker-compose up -d --scale web=2
it works, but I receive this warning:
WARNING: The "web" service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash.
Can somebody help me avoid this warning? Thanks in advance.
Best regards.
I suppose you want to access the web service without knowing the port of a specific container, and to distribute the requests across the containers. If I'm right, you need a load-balancing mechanism in the configuration. In the following example, I'll use NGINX as the load balancer.
version: "3.8"
services:
web:
image: apachephp:v1
expose: # change 'ports' to 'expose'
- "7483" # <- this is web running port (Change to your web port)
....
ddbb:
....
## New Start ##
nginx:
image: nginx:latest
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- web # your web service name
ports:
- "80:80"
## New End ##
volumes:
...
You don't need to map the ports 80-85:80 from the web service to host machine ports if you want to scale the service, so I removed that port mapping from your Docker Compose file and only expose the container port, as above. In the nginx service I added a port mapping to the host (80:80); NGINX listens on port 80 inside the container, which is why that is the port we publish.
nginx.conf file contents:
user nginx;
events {
    worker_connections 1000;
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://web:7483;
        }
    }
}
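With that change the host ports no longer clash, so you can scale the web service again:
docker-compose up -d --scale web=2
One caveat: NGINX resolves the service name web through Docker's built-in DNS when it starts, and Docker returns one address per replica, so requests should be distributed round-robin across whatever replicas exist at that moment. If you change the replica count later, restart the nginx container so it picks up the new addresses.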
You will find more details here: Use Docker Compose to Run Multiple Instances of a Service in Development.
I have a simple PHP Laravel docker image, ultimately built with PHP + Apache, listening on port 80 (by default).
I have a Docker Traefik installation that works very well over HTTPS (port 443).
Now, if I use the following docker-compose.yml for the Laravel installation:
version: "3.8"
services:
resumecv:
image: sineverba/resumecv-backend:0.1.0-dev
container_name: resumecv
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.resumecv-backend.entrypoints=websecure"
- "traefik.http.routers.resumecv-backend.service=resumecv-backend"
- "traefik.http.routers.resumecv-backend.rule=Host(`resumecvbackend.example.com`)"
- "traefik.http.services.resumecv-backend.loadbalancer.server.port=80"
networks:
proxy:
external: true
It works (mapped against port 80).
If I change the listening port:
version: "3.8"
services:
resumecv:
image: sineverba/resumecv-backend:0.1.0-dev
container_name: resumecv
networks:
- proxy
ports:
- "9999:80"
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.resumecv-backend.entrypoints=websecure"
- "traefik.http.routers.resumecv-backend.service=resumecv-backend"
- "traefik.http.routers.resumecv-backend.rule=Host(`resumecvbackend.example.com`)"
- "traefik.http.services.resumecv-backend.loadbalancer.server.port=9999"
networks:
proxy:
external: true
I get a Bad Gateway from Cloudflare (service not reachable).
I know that I could change the Apache port inside the container itself, but I would like to keep the out <-> in mapping with the ports definition.
Curl test
From the host, I can curl http://127.0.0.1:9999 successfully.
I can also browse the website using the host's IP (192.168.1.100:9999).
Label traefik port
I did try to add a traefik.port=9999 label, without luck.
Removing the load balancer label
If I remove the "traefik.http.services.resumecv-backend.loadbalancer.server.port=9999" label, I get a laconic 404 Not Found.
Port publishing...
ports:
  - "9999:80"
...doesn't change the port your container listens on. It simply establishes a mapping from the host into the container. Your service is still listening on port 80, and that's the port other containers, including Traefik, will need to use to contact your service.
If you're using a frontend like Traefik, you don't need the ports entry at all, because you'll be accessing the service through Traefik rather than directly through a host port.
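Put differently, a minimal sketch of a compose file for your case keeps your labels but points the load balancer back at the container port (this is essentially your first, working file):
version: "3.8"
services:
  resumecv:
    image: sineverba/resumecv-backend:0.1.0-dev
    container_name: resumecv
    networks:
      - proxy
    # no "ports:" section needed; Traefik reaches the container over the proxy network
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.resumecv-backend.entrypoints=websecure"
      - "traefik.http.routers.resumecv-backend.service=resumecv-backend"
      - "traefik.http.routers.resumecv-backend.rule=Host(`resumecvbackend.example.com`)"
      # the port Apache listens on inside the container
      - "traefik.http.services.resumecv-backend.loadbalancer.server.port=80"
networks:
  proxy:
    external: true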
I am trying to set up an nginx container that serves the nginx HTML page at the http://server_ip/ path and the tutum/hello-world container at the /app path. As a follow-up, I want to be able to reach the hello-world container only via the http://server_ip/app path and not via http://server_ip:1500.
I created the following docker-compose:
version: '3'
services:
  proxy:
    container_name: proxy
    image: nginx
    ports:
      - "80:80"
    volumes:
      - $PWD/html:/usr/share/nginx/html
      - $PWD/config/nginx.conf:/etc/nginx/conf.d/nginx.conf
    networks:
      - backend
  webapp:
    container_name: webapp
    image: tutum/hello-world
    ports:
      - "1500:80"
    networks:
      - backend
networks:
  backend:
Then I have the following nginx.conf file:
server {
    listen 80; # not really needed, but more informative
    location = / {
        root /usr/share/nginx/html;
    }
    location = /app/ {
        proxy_pass http://localhost:1500/;
    }
}
If I try to reach each of the containers via http://server_ip:PORT, I get there. If I try http://server_ip/app, I get "404 not found". What am I missing? Did I put the conf file in the wrong folder? How do I limit the availability of hello-world to the http://server_ip/app path only, and not via http://server_ip:1500?
Your containers use the "backend" Docker network, as you stated in the compose file.
Inside that network they reach each other by service name, so from the proxy service you can reach the webapp service at http://webapp (or http://webapp:80), and from the webapp service you can reach proxy at http://proxy (or http://proxy:80).
On your computer, http://localhost:1500/ reaches the webapp service and http://localhost:80/ reaches the proxy service.
The port mapping 1500:80 means that port 1500 on your computer is mapped to port 80 in the webapp container.
So in nginx.conf do this:
proxy_pass http://webapp:80/;
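Putting it together, here is a sketch of the resulting config. It also loosens the exact-match locations: location = /app/ matches only the literal path /app/, which is another reason http://server_ip/app returned 404.
server {
    listen 80;

    # prefix match instead of "= /", so the internal redirect to
    # /index.html can also be served from this root
    location / {
        root /usr/share/nginx/html;
    }

    # send /app to /app/ so the prefix location below matches
    location = /app {
        return 301 /app/;
    }

    location /app/ {
        # the trailing slash strips the /app/ prefix before proxying
        proxy_pass http://webapp:80/;
    }
}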
Also, if you want to make webapp inaccessible from your host on localhost:1500, remove the ports section from the webapp service spec:
version: '3'
services:
  proxy:
    container_name: proxy
    image: nginx:1.11
    ports:
      - "80:80"
    volumes:
      - $PWD/html:/usr/share/nginx/html
      - $PWD/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
  webapp:
    container_name: webapp
    image: tutum/hello-world
    networks:
      - backend
networks:
  backend:
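Note that this version mounts the config over /etc/nginx/conf.d/default.conf instead of adding it as a second file. The stock nginx image already ships a default.conf with its own server on port 80, and nginx loads the conf.d files in alphabetical order, so with a second file named nginx.conf the shipped default server keeps handling requests; that is most likely where your 404 came from. Overwriting default.conf avoids the conflict.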
I am trying to run a very simple docker-compose.yml file based on varnish and php7.1+apache2 services:
version: "3"
services:
cache:
image: varnish
container_name: varnish
volumes:
- ./default.vcl:/etc/varnish/default.vcl
links:
- web:webserver
depends_on:
- web
ports:
- 80:80
web:
image: benit/stretch-php-7.1
container_name: web
ports:
- 8080:80
volumes:
- ./index.php:/var/www/html/index.php
The default.vcl contains:
vcl 4.0;

backend default {
    .host = "webserver";
    .port = "8080";
}
I encountered the following error when browsing at http://localhost/:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 9
Varnish cache server
The web service works fine when I test it at http://localhost:8080/.
What's wrong?
You need to configure varnish to communicate with "web" on port "80" rather than "webserver" on port "8080".
The "web" comes from the service name in your compose file. There's no need to set a container name, and indeed that breaks the ability to scale or perform rolling updates if you transition to swarm mode. Links have been deprecated in favor of shared networks that docker compose will provide (links are very brittle, breaking if you update the web container). And depends_on does not assure that the other service is ready to receive requests. If you have a hard dependency to hold varnish from starting until the web server is ready to receive requests, then you'll want to update the entrypoint with a task to wait for the remote port to be reachable and have a plan for how to handle the web server going down.
The port 80 comes from the container port. There is no need to publish port 8080 on the docker host if you only want to access it through varnish, and doing so would be a security risk to many. Containers communicate directly with the container port, not back out to the host and mapped back into a container.
The resulting compose file could look like:
version: "3"
services:
cache:
image: varnish
container_name: varnish
volumes:
- ./default.vcl:/etc/varnish/default.vcl
ports:
- 80:80
web:
image: benit/stretch-php-7.1
volumes:
- ./index.php:/var/www/html/index.php
And importantly, your varnish config would look like:
vcl 4.0;

backend default {
    .host = "web";
    .port = "80";
}
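Also note that varnishd resolves .host = "web" when it loads the VCL and will refuse to start if the name cannot be resolved, which is one more reason the web container has to be up first.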
Hello, I'm new to the world of Docker, so I tried an installation with an NGINX reverse proxy (jwilder image) and a Docker app.
I have installed both without SSL to keep it easy. Since the Docker app seems to be installed on the root path, I want to separate the NGINX web server and the Docker app.
upstream example.com {
    server 172.29.12.2:4040;
}

server {
    server_name example.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://example.com;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /app {
        proxy_pass http://example.com:4040;
    }
}
So with http://example.com I want to be redirected to the index.html,
and with http://example.com/app I want to be redirected to the Docker app.
Furthermore, when I build the installation I use expose: "4040" in docker-compose, so when I reload the NGINX configuration with nginx -s reload, it warns me that port 4040 is not open.
With the configuration file I posted above, any path leads me to the Docker app.
I can't find a simple solution to my question.
As far as I understood, your logic is right: Docker is designed to run a single service in a single container. To reach your goal you still have a couple of things to look after. If EXPOSE 4040 was declared in your Dockerfile, that is not enough to make the service reachable. In the docker-compose file you also have to declare the ports; e.g. for nginx you let the host system listen on all interfaces by adding:
...
    ports:
      - 80:80
...
That is the first thing. You also have to think about which way you want your proxy to reach the "app": from the container network on the same node? If yes, you can add this to the compose file:
...
    depends_on:
      - app
...
where app is the declared name of your service in the docker-compose file. Like this, nginx is able to reach your app by the name app, so the proxy_pass will point to app:
location /app {
    proxy_pass http://app:4040;
}
In case you want to reach the "app" via the host network, maybe because one day it will run on another host, you can add entries to the hosts file of the container running nginx:
...
    extra_hosts:
      - "app:10.10.10.10"
      - "appb:10.10.10.11"
...
and so on.
Reference: https://docs.docker.com/compose/compose-file/
Edit 01/01/2019: happy new year!
An example using a "huge" docker-compose file:
version: '3'
services:
  app:
    build: "./app"   # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
In the above example, nginx can reach your app just by the name app, so the proxy_pass will point to http://app:4040.
systemd (start directly with docker, no compose):
[Unit]
Description=app dockerized service
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=/usr/bin/sleep 1
# pull the image you run below, or use your own built image
ExecStartPre=/usr/bin/docker pull python:3.6-alpine
ExecStart=/usr/bin/docker run --restart=always --name=app -p 4040:4040 python:3.6-alpine
ExecStop=/usr/bin/docker stop app
ExecStopPost=/usr/bin/docker rm -f app
ExecReload=/usr/bin/docker restart app

[Install]
WantedBy=multi-user.target
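Assuming you save the unit as /etc/systemd/system/app.service, load and start it with systemctl daemon-reload followed by systemctl enable --now app.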
Like the above example, you can reach the app on port 4040 of the host system (which listens for connections on port 4040 on all interfaces). To bind a specific interface, use -p 10.10.10.10:4040:4040; like this it will listen on port 4040 only on address 10.10.10.10 (the host machine).
docker-compose with extra_hosts:
version: '3'
services:
  app:
    build: "./app"   # in case your Dockerfile is in an app dir
    image: "some image name"
    restart: always
    command: "command to start your app"
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    extra_hosts:
      - "app:10.10.10.10"
Like the above example, the nginx service can reach the name app at 10.10.10.10.
Last but not least, extending services across compose files:
docker-compose.yml:
version: '2.1'
services:
  app:
    extends:
      file: /path/to/app-service.yml
      service: app
  nginx:
    extends:
      file: /path/to/nginx-service.yml
      service: nginx
app-service.yml:
version: "2.1"
service:
app:
build: "./app" # in case you docker file is in a app dir
image: "some image name"
restart: always
command: "command to start your app"
nginx-service.yml:
version: "2.1"
services:
  nginx:
    build: "./nginx" # in case your Dockerfile is in a nginx dir
    image: "some image name"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    extra_hosts:
      - "app:10.10.10.10"
I really hope the above are enough examples.
Hey guys, I have a docker container A with a domain name attached to it, on a host B with a domain name attached to it as well. How can I access container A via A's domain name, rather than via B's IP address or domain name, from a computer C on host B's local network?
Thus C -> A (via www.cname.url) rather than C -> B (www.bname.url:port) -> A.
E.g. the following is a docker-compose file with the services:
version: "3.2"
services:
php:
links:
- mysql
image: arm32v6/php:7.1.24-fpm-alpine3.8-lavalite
networks:
- backend
working_dir: /var/www/html
volumes:
- ./website/:/var/www/html/
privileged: true
node:
domainname: docker.local
hostname: node
networks:
frontend:
aliases:
- node.docker.local
links:
- "apache:dev.docker.local"
depends_on:
- apache
image: arm32v7/node:latest
entrypoint: yarn
command: twill-dev
volumes:
- ./website:/usr/src/app
working_dir: /usr/src/app
ports:
- "3000:3000"
- "3001:3001"
apache:
domainname: docker.local
hostname: dev
image: arm32v7/httpd:2.4
depends_on:
- php
- mysql
networks:
frontend:
aliases:
- apache
- dev.docker.local
backend:
aliases:
- apache
privileged: true
ports:
- "8880:80"
working_dir: /var/www/html
volumes:
- ./website/:/var/www/html/
- ./httpd.conf:/usr/local/apache2/conf/httpd.conf
- ./fpm.conf:/usr/local/apache2/conf/extra/httpd-vhosts.conf
mysql:
image: yobasystems/alpine-mariadb:arm32v7
volumes:
- ./datadir:/var/lib/mysql
networks:
- backend
environment:
- MYSQL_VERSION=5.7
- MYSQL_ROOT_PASSWORD=rootpassword
- MYSQL_USER=test
- MYSQL_PASSWORD=testpass
- MYSQL_DATABASE=test_db
networks:
frontend:
external:
name: localnet
backend:
I want to be able to access the apache service by its domain name, set to dev.docker.local, whose IP is on the network 17.18.0.1/24.
The host has an IP on the network 192.168.1.0/24, with the domain name dev.server.local.
I have a dev PC on the 192.168.1.0/24 network, and it can access the service containers via the host's IP and a port exposed for the particular service.
UPDATE
The host can be reached at server.local from outside the network.
My network interface has the following entries:
dns-search server.local
dns-domain server.local
The docker container has the following:
hostname nginx
domainname server.local
Do I also need to edit a hosts file or a resolv.conf file?
It seems the host is running the Avahi service discovery daemon. Would this affect anything?
So can I:
1. Set an internal domain on the host and have docker containers on subdomains? How would outside devices access this via the domain?
2. Attach the docker container to the host's network, so that it gets an IP in 192.168.1.0/24 and can be pinged by devices on that network? Will the domain resolve to it?
3. Is there dynamic DNS software I can use to hook this up for me, so that it's not a manual process, i.e. it detects the server and routes incoming requests to it via the domain name?
You can do this by configuring an nginx proxy container, with the app containers bound to subdomains.
So, for example, if the host is accessible via the domain example.com and you want the php container to be accessible on php.example.com, you could use a setup like the following:
services:
  php:
    image: arm32v6/php:7.1.24-fpm-alpine3.8-lavalite
    environment:
      - VIRTUAL_HOST=php.example.com
  nginx-proxy:
    image: jwilder/nginx-proxy
    depends_on:
      - php
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Any request to the subdomain is first sent to the host; nginx is bound to ports 80/443 there, sees that the subdomain php is requested, and forwards the user to that container. For this to work, DNS for php.example.com (or a wildcard record for *.example.com) must of course resolve to the host.
I hope this helps; if you have any questions, please let me know.