I have the following docker-compose file.
version: '2'
services:
  mockup:
    build: mockup/
    ports:
      - 12320:12320
    volumes:
      - /var/lib/tt/:/var/lib/tt/
    networks:
      - test
networks:
  test:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.1.0/24
          gateway: 172.20.1.1
I want to deploy a few instances of the same application in different containers, each with a different IP address.
When I run docker-compose up --scale mockup=2 (or more), there is a port conflict, because all of the deployed apps must listen on the same port.
What should I change in my docker-compose file?
In order to scale without a port conflict, you need to let Docker bind a random host port for each replica. Declare the port as below; Docker will then pick a random host port for each container you start and map it to port 12320 inside the container:
ports:
  - 12320
Next, you should use some kind of service discovery so you stay aware of new containers as they come up or go down, plus a proxy, so you can talk to one well-known URL without worrying about which containers are running and which ports they got.
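For example, assuming the service keeps the name mockup, you could bring up several replicas and then ask Compose which host port each one received (the exact port numbers Docker picks will vary):

docker-compose up -d --scale mockup=3
docker-compose port --index=1 mockup 12320   # e.g. 0.0.0.0:32768
docker-compose port --index=2 mockup 12320   # e.g. 0.0.0.0:32769

Every replica still listens on 12320 inside its own container; only the host-side port differs.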
How can we run docker commands inside a container with docker-compose?
Put simply, I want to get the IP of another container on the same network. I am running three containers: va-server, db, and api-server. All of the containers are on the same Docker network.
I am providing the docker-compose file below:
version: "2.3"
services:
va-server:
container_name: va_server
image: nitinroxx/facesense:amd64_2022.11.28 #facesense:alpha
runtime: nvidia
restart: always
mem_limit: 4G
networks:
- perimeter-network
db:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
restart: always
volumes:
- ./facesense_db:/data/db
command: [--auth]
networks:
- perimeter-network
api-server:
container_name: api_server
image: nitinroxx/facesense:api_amd64_2022.11.28
ports:
- "80:80"
- "465:465"
restart: always
networks:
- perimeter-network
networks:
perimeter-network:
driver: bridge
ipam:
config:
- gateway: 10.16.239.1
subnet: 10.16.239.0/24
I have installed Docker inside the container, and it is giving me the permission error below:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
...inside [a] container [...] I want to get IP of some other network container....
Docker provides an internal DNS service that can resolve container names to their Docker-internal IP addresses. From one of the containers you show, you could look up a host name like db to get that container's IP address; but in practice these are perfectly normal DNS names, and all but the lowest-level network interfaces can use them directly.
This does require that all of the containers involved be on the same Docker network. Normally Compose sets this up automatically for you; in the file you show I might delete the networks: blocks and container_name: overrides in the name of simplicity. Also see Networking in Compose in the Docker documentation.
In short:
You can probably use the Compose service names va-server, db, and api-server as host names without specifically knowing their IP addresses.
This probably means you never need to know the container IP addresses at all (they're usually unusable from outside Docker).
If you do need an IP address from inside a container, a DNS lookup can find it.
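For example, from another container on the same network you could look up db by name (this assumes the image includes the usual glibc tools; the address shown is only illustrative):

docker-compose exec api-server getent hosts db
# 10.16.239.3     db

In practice, though, you would just point your MongoDB client at something like mongodb://db:27017/ and let Docker's DNS do the work.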
You can't usually run docker commands from inside containers, and you can't do so safely without making it possible for the container to take over the whole host. There are usually better patterns that don't tie you to the Docker stack specifically.
I have two containers running on the same host using Docker; however, one container uses the host network while the other uses a custom bridge network, as follows:
version: '3.8'
services:
  app1:
    container_name: app1
    hostname: app1
    image: app1/app1
    restart: always
    networks:
      local:
        ipv4_address: 10.0.0.8
    ports:
      - "9000:9000/tcp"
    volumes:
      - /host:/container
  app2:
    container_name: app2
    hostname: app2
    image: app2/app2
    restart: always
    network_mode: host
    volumes:
      - /host:/container
networks:
  local:
    ipam:
      driver: bridge
      config:
        - subnet: "10.0.0.0/24"
I have normal IP communication between the two containers; however, when I try to use the containers' hostnames to communicate, it fails. Is there a way to make this work with host networking?
No, you can't do this. You probably could turn off host networking though.
Host networking pretty much completely disables Docker's networking layer. In the same way that a process outside a container can't directly communicate with a container except via its published ports:, a container that uses host networking would have to talk to localhost and the other container's published port. If the host has multiple interfaces it's up to the process to figure out which one(s) to listen on, and you can't do things like remap ports.
You almost never need host networking in practice. It's appropriate in three cases: if a service listens on a truly large number of ports (thousands); if the service's port is unpredictable; or for a management tool that's consciously trying to escape the container. You do not need host networking to make outbound network calls, and it's not a good solution to work around an incorrect hard-coded host name.
For a typical application, I would remove network_mode: host. If app2 needs to be reached from outside the container, add ports: to it. You also do not need any of the manual networking configuration you show, since Compose creates a default network for you and Docker automatically assigns IP addresses on its own.
A functioning docker-compose.yml file that omits the unnecessary options and also does not use host networking could look like:
version: '3.8'
services:
  app1:
    image: app1/app1
    restart: always
    ports:               # optional if it does not need to be directly reached
      - "9000:9000/tcp"
    # no container_name:, hostname:, networks:, manual IP configuration
    # volumes: may not be necessary in routine use
  app2:
    image: app2/app2
    restart: always
    # add to make the container accessible
    ports:
      - "3000:3000"
    # configure communication with the first service
    environment:
      APP1_URL: 'http://app1:9000'
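With this setup you can sanity-check the name-based connection from the second container; for example (assuming the images include a shell and the standard C-library tools):

docker-compose exec app2 getent hosts app1          # resolves app1's Docker-internal address
docker-compose exec app2 sh -c 'echo "$APP1_URL"'   # http://app1:9000

The application code in app2 would then read APP1_URL from its environment rather than hard-coding a host name or IP address.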
I do not know how to achieve this. Right now all of the ports are exposed to the host machine, but I just want to expose one container's port (80), and not the other's (8080). Here is the docker-compose file:
---
version: "3.9"
services:
  app:
    image: sandbox/app
    container_name: app
    volumes:
      - ./src/app:/app/
    expose:
      - "8080"
    restart: unless-stopped
    networks:
      custom-net:
        ipv4_address: 10.0.0.7
  web_server:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    networks:
      custom-net:
        ipv4_address: 10.0.0.6
networks:
  custom-net:
    name: custom-net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.0.0.0/8
If I run nmap 10.0.0.6 from the local machine, it shows port 80 as open, which is the exposure I want. But when I run nmap 10.0.0.7, it also shows port 8080 as open. How can that be? Checking some Stack Overflow threads, ports is defined like this:
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
and expose:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Am I missing some networking concepts, or is my docker-compose file wrong?
You must be on a native-Linux host. In that case, if you happen to know the Docker-internal IP addresses, you can always connect to a container using them; you can't prevent this (without iptables magic), but it's also not usually harmful. This trick doesn't work in other environments (on macOS or Windows hosts, if Docker runs inside a Linux VM, or from a different host than the container), and it's much more portable to connect only to containers' published ports:.
You should be able to use a much simpler Compose file. Delete all of the networks: blocks and the expose: blocks. You also do not need container_name:, and you should not need to inject code using volumes:. Trimming out all of the unnecessary options leaves you with
version: '3.8'          # last version supported by standalone docker-compose tool
services:
  app:
    image: sandbox/app  # may want `build: .` _instead of_ this line
    restart: unless-stopped
  web_server:
    image: nginx:latest # needs some custom configuration?
    ports:
      - "80:80"
That should literally be the entire file.
From outside Docker but on the same machine, http://localhost:80 matches the first port number in the web_server container's ports:, so the connection is forwarded to the second port number, where the Nginx server is listening. The Nginx configuration should include a line like proxy_pass http://app:8080, which forwards the request on to the application container.
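A minimal Nginx reverse-proxy configuration for this could look like the sketch below; the location block is an assumption, and the upstream port comes from the expose: "8080" in your file, so adjust both to match your application:

server {
    listen 80;
    location / {
        proxy_pass http://app:8080;
    }
}

You would typically build this into your proxy image, or bind-mount it over /etc/nginx/conf.d/default.conf.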
Compared to your original file:
expose: is an artifact of first-generation Docker networking. In a Compose file it does absolutely nothing at all and it's always safe to delete it.
Connections between containers (where web_server uses app as a host name) connect directly to the specified port; they do not use or require expose: or ports: settings, and they ignore ports: if they're present.
Compose assigns container names on its own, and there are docker-compose CLI equivalents to almost all Docker commands that can figure out the right mapping. You don't need to manually specify container_name:.
Docker automatically assigns IP addresses to containers. These are usually an internal implementation detail; it's useful to know that containers do have their own IP addresses (and so you can have multiple containers that internally listen on the same port) but you never need to know these addresses, look them up, or manually specify them.
Compose automatically creates a network named default for you and attaches containers to it, so in most common cases you don't need networks: at all.
Networking in Compose in the Docker documentation describes how to make connections between containers (again, you do not need to know the container-private IP addresses). Container networking discusses these concepts separately from Compose.
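To see the default network in practice with the trimmed file above: if the project lives in a directory named sandbox (an assumption; the network name is derived from your project name), you would see something like

docker network ls
# NETWORK ID     NAME              DRIVER    SCOPE
# ...            sandbox_default   bridge    local

Both containers are attached to that network automatically, which is what lets web_server resolve the host name app without any networks: configuration.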
I'm fairly new to Docker and Docker Compose.
I have a simple scenario, based on three applications (app1, app2, app3), that I want to connect to my host's network. The purpose is to also have an internet connection inside the containers.
Here is my docker-compose file:
version: '3.9'
services:
  app1container:
    image: app1img
    build: ./app1
    networks:
      network_comp:
        ipv4_address: 192.168.1.1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080
  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    networks:
      network_comp:
        ipv4_address: 192.168.1.2
    ports:
      - 3100:3100
  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    networks:
      network_comp:
        ipv4_address: 192.168.1.3
    ports:
      - 9080:9080
networks:
  network_comp:
    driver: ""
    ipam:
      driver: ""
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.1.254
I have already read the docker-compose documentation, which says there is no bridge driver for Windows. Is there any solution to this issue?
You shouldn't usually need to do any special setup for this to work. When your Compose service has ports:, that makes a port available on the host's IP address. The essential rules for this are:
The service inside the container must listen on the special 0.0.0.0 "all interfaces" address (not 127.0.0.1 "this container only"), on some (usually fixed) port.
The container must be started with Compose ports: (or docker run -p). You choose the first port number; the second port number must match the port the service listens on inside the container.
The service can be reached via the host's IP address on the first port number (or, if you're using the older Docker Toolbox setup, on the docker-machine ip address).
http://host.example.com:12345   (from other hosts)
          |
          v
ports: ['12345:8080']           (in the `docker-compose.yml`)
          |
          v
./my_server -bind 0.0.0.0:8080  (the main container command)
You can remove all of the manual networks: configuration in this file. In particular, it's problematic if you try to specify the Docker network to have the same IP address range as the host network, since these are two separate networks. Compose automatically provides a network named default that should work for most practical applications.
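A trimmed version of your file, relying on the default network, could look like this (the ports are carried over from your file; whether you still need the extra_hosts: entry depends on your setup):

version: '3.9'
services:
  app1container:
    build: ./app1
    image: app1img
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080
  app2container:
    build: ./app2
    image: app2img
    depends_on:
      - app1container
    ports:
      - 3100:3100
  app3container:
    build: ./app3
    image: app3img
    depends_on:
      - app1container
    ports:
      - 9080:9080

The containers can reach each other using the service names app1container, app2container, and app3container, and outbound internet access works through Docker's NAT without any further configuration.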
I have a docker-compose file which looks roughly like this.
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22
  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23
  ...
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my PostgreSQL database and my application in subnetworks so that their ports are not exposed outside my computer/server.
From within the app container, I can't connect to 192.168.22.23. I installed net-tools to use ifconfig/netstat, and the containers don't seem to be able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to postgres and access the application)
Do you have any advice? The goal is to reach another container's ports to communicate with it, without giving up the static IPs (on app at least); here, specifically, to connect to PostgreSQL from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
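For example, if the gateway machine's private interface had the (hypothetical) address 10.20.30.40, you could publish the database only on that interface:

services:
  db:
    ports:
      - '10.20.30.40:6432:5432'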
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.
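Putting these pieces together, a version of your file with no manual network configuration might look like this sketch (the PGHOST/PGPORT variable names are only an example of how the application could receive its settings):

version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    environment:
      - PGHOST=db
      - PGPORT=5432
  db:
    image: postgres:9.5
    ports:
      - '127.0.0.1:6432:5432'

Inside the app container you connect to db:5432; from the host you connect to localhost:6432; and from other machines the database isn't reachable at all.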