Docker Container ignores port 80

Since I use a macvlan network configuration and give every container its own IP, I would like to access the services on port 80 instead of the custom port, but this doesn't work for me.
I created the following docker-compose file:
version: '3.3'

networks:
  dockervlan:
    external: true

volumes:
  data:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    volumes:
      - data:/app/data
    ports:
      - "80:3001" # <Host Port>:<Container Port>
    restart: unless-stopped
    networks:
      dockervlan:
        ipv4_address: 192.168.178.194
After that I can still access the service via http://192.168.178.194:3001, but not via http://192.168.178.194.
The same happens with Portainer:
version: '3'

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    ports:
      - 80:9000
      - 8000:8000
    security_opt:
      - no-new-privileges:true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - data:/data
    restart: unless-stopped
    networks:
      dockervlan:
        ipv4_address: 192.168.178.200

networks:
  dockervlan:
    name: dockervlan
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: "192.168.178.0/24"
          ip_range: "192.168.178.192/26"
          gateway: "192.168.178.1"

volumes:
  data:
I also tried 3001:80 and 9000:80, which of course didn't work either.
Where is my mistake?
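For reference, here is a minimal sketch of one direction to try, not a confirmed fix: with a macvlan address, clients reach the container's own IP directly, so the ports: host mapping is bypassed and the app itself has to listen on port 80. Uptime Kuma appears to support an UPTIME_KUMA_PORT environment variable for that; treat the variable name as an assumption and check the image documentation.

version: '3.3'

networks:
  dockervlan:
    external: true

volumes:
  data:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    environment:
      # Assumption: the image honours this variable and will listen on 80.
      - UPTIME_KUMA_PORT=80
    volumes:
      - data:/app/data
    # No ports: section -- with a macvlan IP the container is reached
    # directly on whatever port the app listens on, so host-port
    # publishing never comes into play.
    restart: unless-stopped
    networks:
      dockervlan:
        ipv4_address: 192.168.178.194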

Related

docker does not expose static ip for container

I am trying to bind an nginx container to IP 172.16.238.10, but for some reason Docker ignores the settings in docker-compose.
# my docker-compose file
version: "3.9"

services:
  nginx:
    build: nginx/
    ports:
      - 80:80/tcp
    volumes:
      - ./dokcer/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./dokcer/nginx/conf/hosts:/etc/hosts
      - ./docker/project:/var/www/project
    networks:
      app_net:
        ipv4_address: 172.16.238.10

  php-fpm:
    build: php-fpm/
    ports:
      - 9000:9000/tcp
    volumes:
      - ./dokcer/php-fpm/conf/www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./dokcer/php-fpm/conf/hosts:/etc/hosts
      - ./dokcer/project:/var/www/project
    networks:
      app_net:
        ipv4_address: 172.16.238.11

networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
After building and launching, I look at the docker_app_net network and see that the nginx container has IP 172.16.238.2, although I expected it to be 172.16.238.10.
What could be the problem? I will be grateful for any answer because I am confused.
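If it helps to narrow this down, here is a trimmed-down sketch of the same file (nginx only, with a stock image instead of build: and the volumes removed) that should pin the address. If the container still comes up as 172.16.238.2 with this, one thing worth ruling out is a leftover <project>_app_net network from an earlier run with different settings (check with docker network ls, remove with docker network rm). This is only a debugging sketch, not a confirmed fix:

version: "3.9"

services:
  nginx:
    # stock image just for the test; swap back to build: nginx/ afterwards
    image: nginx:alpine
    ports:
      - 80:80/tcp
    networks:
      app_net:
        ipv4_address: 172.16.238.10

networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"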

Isolate containers on the jwilder/nginx-proxy network

I'm using jwilder/nginx-proxy to host multiple (web)apps from a single server. This works great, except that all services can communicate with each other because they are all on the same network, which is required for the proxy to work.
Proxy docker-compose.yaml
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy:alpine
container_name: nginx-proxy
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
ports:
- "80:80"
- "443:443"
volumes:
- ./data/certs:/etc/nginx/certs:ro
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
restart: always
letsencrypt-proxy:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: letsencrypt-proxy
depends_on:
- nginx-proxy
volumes:
- ./data/nginx/vhost.d:/etc/nginx/vhost.d
- ./data/share/nginx/html:/usr/share/nginx/html
- ./data/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: always
networks:
default:
external:
name: nginx-proxy
App 1 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app1.example.com
LETSENCRYPT_HOST: app1.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
App 2 docker-compose.yaml
version: "3"
services:
app:
image: nginx:latest
depends_on:
- db
- cache
expose:
- 80
volumes:
- ./application:/var/www/html
restart: always
working_dir: /var/www/html
environment:
VIRTUAL_HOST: app2.example.com
LETSENCRYPT_HOST: app2.example.com
LETSENCRYPT_EMAIL: user#example.com
cache:
image: redis:alpine
restart: always
volumes:
- cachedata:/data
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: database_name
MYSQL_USER: database_user
MYSQL_PASSWORD: database_passwd
volumes:
- dbdata:/var/lib/mysql
networks:
default:
external:
name: nginx-proxy
volumes:
dbdata:
driver: local
cachedata:
driver: local
With this setup, both applications will use the db and cache instances of App 1. The only way to solve that is to give those services unique names like app_1_db and app_2_db. But then App 1 is still able to connect to app_2_db, which I would like to prevent.
Is there a way to isolate all services within their docker-compose.yaml file and still use the nginx proxy?
Docker version 18.09.0, build 4d60db4
docker-compose version 1.21.2, build a133471
You can connect only the app (nginx) container from each of your apps to the nginx-proxy network. The only edit needed should be in the app's docker-compose file:
version: '3'

services:
  app:
    networks:
      - default
      - nginx-proxy

networks:
  nginx-proxy:
    external: true
That way the app service is connected to the nginx-proxy and default networks at the same time. (If you omit the networks key, a service is always connected to the default network.)
Resolving service names to container IPs then works as expected, as long as no container can see (across all the networks it is connected to) two containers with the same service name.
If you want even more isolation, you can create a separate nginx-proxy network for every app.
So in your nginx-proxy docker-compose you will have:
version: "3"
services:
nginx-proxy:
networks:
- default
- nginx-proxy_app1
- nginx-proxy_app2
# letsencrypt-proxy service doesn't have to have networks key
networks:
nginx-proxy_app1:
external: true
nginx-proxy_app2:
external: true
and in your apps:
version: '3'

services:
  app:
    networks:
      - default
      - nginx-proxy_app1

networks:
  nginx-proxy_app1:
    external: true
and
version: '3'

services:
  app:
    networks:
      - default
      - nginx-proxy_app2

networks:
  nginx-proxy_app2:
    external: true
That way, every "proxy" network contains only one app container (unless you are using docker-compose scaling) plus the nginx-proxy container.
More reading:
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/overlay/#operations-for-standalone-containers-on-overlay-networks

Share Docker container through local network and access to it from an another host

I am trying to share a container through my local network, so that I can access it from another machine on the same network. I have followed this tutorial (section "With macvlan devices") and I succeeded in sharing a simple web container and accessing it from another host.
But the container that I want to share is a little more sophisticated, because it communicates with other containers on the host through an internal network on the host.
I tried to bind my existing container created in my docker-compose, but I can't access it. Can you help me, or tell me where I'm going wrong?
This is my docker-compose file:
version: "2"
services:
baseimage:
container_name: baseimage
image: base
build:
context: ./
dockerfile: Dockerfile.base
  web:
    container_name: web
    image: web
    env_file:
      - .env
    build:
      context: ./
      dockerfile: Dockerfile.web
    extra_hosts:
      - dev.api.exemple.com:127.0.0.1
      - dev.admin.exemple.com:127.0.0.1
      - dev.www.exemple.com:127.0.0.1
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./code:/ass
      - /var/run/docker.sock:/var/run/docker.sock
    tty: true
    dns:
      - 8.8.8.8
      - 8.8.4.4
    links:
      - mysql
      - redis
      - elasticsearch
      - baseimage
    networks:
      devbox:
        ipv4_address: 172.20.0.2
  cron:
    container_name: cron
    image: cron
    build:
      context: ./
      dockerfile: Dockerfile.cron
    volumes:
      - ./code:/ass
    tty: true
    dns:
      - 8.8.8.8
      - 8.8.4.4
    links:
      - web:dev.api.exemple.com
      - mysql
      - redis
      - elasticsearch
      - baseimage
    networks:
      devbox:
        ipv4_address: 172.20.0.3

  mysql:
    container_name: mysql
    image: mysql:5.6
    ports:
      - 3306:3306
    networks:
      devbox:
        ipv4_address: 172.20.0.4

  redis:
    container_name: redis
    image: redis:3.2.4
    ports:
      - 6379:6379
    networks:
      devbox:
        ipv4_address: 172.20.0.5

  elasticsearch:
    container_name: elastic
    image: elasticsearch:2.3.4
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      devbox:
        ipv4_address: 172.20.0.6

  chromedriver:
    container_name: chromedriver
    image: robcherry/docker-chromedriver:latest
    privileged: true
    ports:
      - 4444:4444
    environment:
      - CHROMEDRIVER_WHITELISTED_IPS='172.20.0.2'
      - CHROMEDRIVER_URL_BASE='wd/hub'
      - CHROMEDRIVER_EXTRA_ARGS='--ignore-certificate-errors'
    networks:
      devbox:
        ipv4_address: 172.20.0.7
    links:
      - web:dev.www.exemple.com

networks:
  devbox:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
Create an external network, then assign both the external network and the devbox network to web. Web would then be publicly accessible via the external network's public IP address and would communicate with the internal services over the devbox network.
Will post a working example asap.
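Until that example lands, here is a rough, untested sketch of the idea with placeholder names: it assumes a macvlan network called shared_lan has already been created outside of compose (hence external: true), attaches only web to it alongside the existing devbox network, and picks an unused address from the LAN for it:

version: "2"

services:
  web:
    container_name: web
    image: web
    ports:
      - 80:80
      - 443:443
    networks:
      devbox:
        ipv4_address: 172.20.0.2
      shared_lan:
        # placeholder -- use a free address in your LAN's subnet
        ipv4_address: 192.168.1.50

networks:
  devbox:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
  shared_lan:
    external: true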

Set a specific IP for a Docker container from docker-compose.yml

I have the following docker-compose.yml file:
web_server:
  build: web_server/
  ports:
    - "8000:8000"
  links:
    - mongo
  tty: true
  environment:
    SYMFONY__MONGO_ADDRESS: mongo
    SYMFONY__MONGO_PORT: 27017
  networks:
    app_net:
      ipv4_address: 172.16.238.10
      ipv6_address: 2001:3984:3989::10

networks:
  app_net:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
        - subnet: 2001:3984:3989::/64

mongo:
  image: mongo:3.0
  container_name: mongo
  command: mongod --smallfiles
  expose:
    - 27017
I want to have a specific IP for my web_server so I can pass it to other applications.
But when I run docker-compose up I receive the error:
ERROR: The Compose file '.\docker-compose.yml' is invalid because:
Unsupported config option for networks: 'app_net'
Unsupported config option for web_server: 'networks'
What is wrong?
I don't know why you have this error, but you can try to fix a couple of points:
- the web_server networks section must refer to the networks declared at the top level of the file
- simplify your network configuration and start with IPv4 only
Here is a working configuration:
version: '2'

services:
  webssl:
    image: nginx:1.11.4-alpine
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - /data/nginx/webroot:/usr/share/nginx/html:ro
    networks:
      - dmz

networks:
  dmz:
    ipam:
      driver: default
      config:
        - subnet: 172.77.0.1/24
          ip_range: 172.77.0.0/24
          gateway: 172.77.0.1

Provide static IP to docker containers via docker-compose

I'm trying to provide static IP addresses to containers. I understand that I have to create a custom network. I create it, and the bridge interface is up on the host machine (Ubuntu 16.x). The containers get IPs from this subnet, but not the static ones I provided.
Here is my docker-compose.yml:
version: '2'

services:
  mysql:
    container_name: mysql
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      - vpcbr

  apigw-tomcat:
    container_name: apigw-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      - vpcbr
    depends_on:
      - mysql

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
          aux_addresses:
            mysql: 10.5.0.5
            apigw-tomcat: 10.5.0.6
The containers get 10.5.0.2 and 10.5.0.3 instead of .5 and .6.
Note that I don't recommend a fixed IP for containers in Docker unless you're doing something that allows routing from outside to the inside of your container network (e.g. macvlan). DNS is already there for service discovery inside of the container network and supports container scaling. And outside the container network, you should use exposed ports on the host. With that disclaimer, here's the compose file you want:
version: '2'

services:
  mysql:
    container_name: mysql
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - "3306:3306"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5

  apigw-tomcat:
    container_name: apigw-tomcat
    build: tomcat/.
    ports:
      - "8080:8080"
      - "8009:8009"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
    depends_on:
      - mysql

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
I was facing some difficulties with environment variables that have custom names (not the container-name/port convention), namely KAPACITOR_BASE_URL and KAPACITOR_ALERTS_ENDPOINT. If I used the service name, as in
KAPACITOR_BASE_URL: http://kapacitor:9092
then http://kapacitor:9092 would not resolve to http://172.20.0.2:9092.
I resolved the static IP issue using a subnet configuration:
version: "3.3"
networks:
frontend:
ipam:
config:
- subnet: 172.20.0.0/24
services:
db:
image: postgres:9.4.4
networks:
frontend:
ipv4_address: 172.20.0.5
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:latest
networks:
frontend:
ipv4_address: 172.20.0.6
ports:
- "6379"
influxdb:
image: influxdb:latest
ports:
- "8086:8086"
- "8083:8083"
volumes:
- ../influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
- ../influxdb/inxdb:/var/lib/influxdb
networks:
frontend:
ipv4_address: 172.20.0.4
environment:
INFLUXDB_HTTP_AUTH_ENABLED: "false"
INFLUXDB_ADMIN_ENABLED: "true"
INFLUXDB_USERNAME: "db_username"
INFLUXDB_PASSWORD: "12345678"
INFLUXDB_DB: db_customers
kapacitor:
image: kapacitor:latest
ports:
- "9092:9092"
networks:
frontend:
ipv4_address: 172.20.0.2
depends_on:
- influxdb
volumes:
- ../kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf
- ../kapacitor/kapdb:/var/lib/kapacitor
environment:
KAPACITOR_INFLUXDB_0_URLS_0: http://influxdb:8086
web:
build: .
environment:
RAILS_ENV: $RAILS_ENV
command: bundle exec rails s -b 0.0.0.0
ports:
- "3000:3000"
networks:
frontend:
ipv4_address: 172.20.0.3
links:
- db
- kapacitor
depends_on:
- db
volumes:
- .:/var/app/current
environment:
DATABASE_URL: postgres://postgres#db
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: postgres
INFLUX_URL: http://influxdb:8086
INFLUX_USER: db_username
INFLUX_PWD: 12345678
KAPACITOR_BASE_URL: http://172.20.0.2:9092
KAPACITOR_ALERTS_ENDPOINT: http://172.20.0.3:3000
volumes:
postgres_data:
If you never see the static IP address being set, it may be because you are using "docker compose up". Try "docker-compose up" instead.
When I use "docker-compose up" (with the hyphen) I see the static IPs assigned.
networks:
  hfnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.55.0/24
          gateway: 192.168.55.1

services:
  web:
    image: 'mycompany/webserver:latest'
    hostname: www
    domainname: mycompany.com
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    networks:
      hfnet:
        ipv4_address: 192.168.55.10
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '../honeyfund:/var/www/html'
I wasted a lot of time figuring that one out. :(
I realized that the more convenient and meaningful way is to give the container a container name.
You can then use that name instead of an IP within the same Docker network.
This helped me because my containers had changing IPs, and this way I can reach another container through a static name that I can put in config files.
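As a small illustration of that (names made up): both services sit on the same compose network, and app reaches the database by its container name instead of by IP:

version: '3'

services:
  db:
    image: mysql:5.7
    container_name: app-db   # fixed, human-readable name
  app:
    image: nginx:latest
    environment:
      # Docker's embedded DNS resolves the container name on the shared
      # user-defined network, so no static IP is needed.
      DATABASE_HOST: app-db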
