I scale a container with:
ports:
  - "8086-8090:8085"
But what if I needed it only inside my bridge network?
In other words, does something like this exist?
expose:
  - "8086-8090:8085"
UPDATED:
I have a master container:
exposed to the host network
acts as a load balancer
I want to have N slaves of another container, each reachable on an assigned port inside the Docker network (not visible in the host network).
Connections between containers (over the Docker-internal bridge network) don't need ports: at all, and you can just remove that block. You only need ports: to accept connections from outside of Docker. If the process inside the container is listening on port 8085 then connections between containers will always use port 8085, regardless of what ports: mappings you have or if there is one at all.
expose: in a Compose file does almost nothing at all. You never need to include it, and it's always safe to delete it.
(This wasn't the case in first-generation Docker networking. However, Compose files v2 and v3 always provide what the Docker documentation otherwise calls a "user-defined bridge network", that doesn't use "exposed ports" in any way. I'm not totally clear why the archaic expose: and links: options were kept.)
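For the master/slave setup in the update, that means only the load balancer needs a ports: mapping. A minimal sketch (the service names, the images, and the published port 8080 are assumptions, not your actual values):
version: "3.8"
services:
  master:
    image: my-load-balancer   # assumed image; proxies to http://worker:8085
    ports:
      - "8080:8080"           # only the load balancer is published to the host
  worker:
    image: my-worker          # assumed image; listens on 8085 inside the container
    # no ports: or expose: needed for container-to-container traffic
Running docker-compose up -d --scale worker=5 would start five replicas, and the master reaches all of them through the single hostname worker on port 8085.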
No extra changes needed!
Because of internal Docker DNS, the scaled instances are 'hidden' behind the same service name and port:
version: "3.8"
services:
  web:
    image: "nginx:latest"
    ports:
      - "8080:80"   # nginx listens on port 80 inside the container
then
docker-compose up -d --scale web=3
calling localhost:8080 will proxy requests to all instances using Round Robin!
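To see that name-based resolution from inside the network, a throwaway client service alongside web could call it by name (a sketch; the client service, the alpine image, and wget are illustrative choices, not part of the original setup):
  client:
    image: alpine:latest
    # Docker's DNS resolves "web" to one of the scaled replicas on each lookup
    command: ["wget", "-qO-", "http://web"]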
I have two containers running on the same host using Docker; however, one container uses the host network while the other uses a custom bridge network, as follows:
version: '3.8'
services:
  app1:
    container_name: app1
    hostname: app1
    image: app1/app1
    restart: always
    networks:
      local:
        ipv4_address: 10.0.0.8
    ports:
      - "9000:9000/tcp"
    volumes:
      - /host:/container
  app2:
    container_name: app2
    hostname: app2
    image: app2/app2
    restart: always
    network_mode: host
    volumes:
      - /host:/container
networks:
  local:
    driver: bridge
    ipam:
      config:
        - subnet: "10.0.0.0/24"
I have normal IP communication between the two containers; however, when I want to use the containers' hostnames to communicate, it fails. Is there a way to make this work with host networking?
No, you can't do this. You probably could turn off host networking though.
Host networking pretty much completely disables Docker's networking layer. In the same way that a process outside a container can't directly communicate with a container except via its published ports:, a container that uses host networking would have to talk to localhost and the other container's published port. If the host has multiple interfaces it's up to the process to figure out which one(s) to listen on, and you can't do things like remap ports.
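For example, if app2 kept network_mode: host, its only route to app1 would be through app1's published port on the host, roughly like this (a sketch; the APP1_URL variable name is an assumption about how app2 is configured):
version: '3.8'
services:
  app1:
    image: app1/app1
    ports:
      - "9000:9000/tcp"
  app2:
    image: app2/app2
    network_mode: host
    environment:
      # must go via the host and app1's published port; the name "app1" won't resolve here
      APP1_URL: 'http://localhost:9000'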
You almost never need host networking in practice. It's appropriate in three cases: if a service listens on a truly large number of ports (thousands); if the service's port is unpredictable; or for a management tool that's consciously trying to escape the container. You do not need host networking to make outbound network calls, and it's not a good solution to work around an incorrect hard-coded host name.
For a typical application, I would remove network_mode: host. If app2 needs to be reached from outside the container, add ports: to it. You also do not need any of the manual networking configuration you show, since Compose creates a default network for you and Docker automatically assigns IP addresses on its own.
A functioning docker-compose.yml file that omits the unnecessary options and also does not use host networking could look like:
version: '3.8'
services:
  app1:
    image: app1/app1
    restart: always
    ports: # optional if it does not need to be directly reached
      - "9000:9000/tcp"
    # no container_name:, hostname:, networks:, manual IP configuration
    # volumes: may not be necessary in routine use
  app2:
    image: app2/app2
    restart: always
    # add to make the container accessible
    ports:
      - "3000:3000"
    # configure communication with the first service
    environment:
      APP1_URL: 'http://app1:9000'
I do not know how to achieve this: right now all the ports are exposed to the host machine, but I just want to expose one container port (80), not the other (8080). Here is the docker-compose file:
---
version: "3.9"
services:
  app:
    image: sandbox/app
    container_name: app
    volumes:
      - ./src/app:/app/
    expose:
      - "8080"
    restart: unless-stopped
    networks:
      custom-net:
        ipv4_address: 10.0.0.7
  web_server:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    networks:
      custom-net:
        ipv4_address: 10.0.0.6
networks:
  custom-net:
    name: custom-net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.0.0.0/8
If I run nmap 10.0.0.6 from the local machine, it shows port 80 as open. This container exposure is the desired one. But when I run nmap 10.0.0.7, it also shows port 8080 as open; how can that be? Checking some Stack Overflow threads, ports is defined like this:
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
and expose:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Am I missing some networking concepts, or is my docker-compose file wrong?
You must be on a native-Linux host. If you happen to know the Docker-internal IP addresses, and you're on a native-Linux host, then you can always connect to a container using those addresses; you can't prevent this (without iptables magic) but it's also not usually harmful. This trick doesn't work in other environments (on MacOS or Windows hosts, or if Docker is in a Linux VM, or from a different host from the container) and it's much more portable to connect only to containers' published ports:.
You should be able to use a much simpler Compose file. Delete all of the networks: blocks and the expose: blocks. You also do not need container_name:, and you should not need to inject code using volumes:. Trimming out all of the unnecessary options leaves you with
version: '3.8' # last version supported by standalone docker-compose tool
services:
  app:
    image: sandbox/app # may want `build: .` _instead of_ this line
    restart: unless-stopped
  web_server:
    image: nginx:latest # needs some custom configuration?
    ports:
      - "80:80"
That should literally be the entire file.
From outside Docker but on the same machine, http://localhost:80 matches the first (host) half of the web_server container's ports: mapping, so it forwards to the second (container) port, on which the Nginx server is listening. The Nginx configuration should include a line like proxy_pass http://app:8080, which will forward to the application container.
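If you're using the stock nginx image, one common way to supply that configuration is to bind-mount it over the image's default site config (a sketch; the local file name default.conf is an assumption, and that file would contain the proxy_pass http://app:8080 line):
  web_server:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      # assumed local file with the reverse-proxy configuration
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro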
Compared to your original file:
expose: is an artifact of first-generation Docker networking. In a Compose file it does absolutely nothing at all and it's always safe to delete it.
Connections between containers (where web_server uses app as a host name) connect directly to the specified port; they do not use or require expose: or ports: settings, and they ignore ports: if they're present.
Compose assigns container names on its own, and there are docker-compose CLI equivalents to almost all Docker commands that can figure out the right mapping. You don't need to manually specify container_name:.
Docker automatically assigns IP addresses to containers. These are usually an internal implementation detail; it's useful to know that containers do have their own IP addresses (and so you can have multiple containers that internally listen on the same port) but you never need to know these addresses, look them up, or manually specify them.
Compose automatically creates a network named default for you and attaches containers to it, so in most common cases you don't need networks: at all.
Networking in Compose in the Docker documentation describes how to make connections between containers (again, you do not need to know the container-private IP addresses). Container networking discusses these concepts separately from Compose.
I started mysqldb in a Docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought that Docker containers started on the bridge network won't be available outside that network. How is it that the mysql CLI is able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
The critical section in your config is below. You have added a ports key to your service. This is Compose's way to publish ports. The left part is the port you publish on the host system; the right part is the port the container actually listens on.
ports:
- "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the Compose stack. That's why these containers can find each other, with the service name and/or container name as the hostname.
You don't need to publish the port(s) as you did in order for the containers to communicate; I guess that's why you did it. You can, and probably should, remove any port mappings from internal services if possible. This adds extra security to your setup, because then it behaves as you describe: only containers in the same network can find each other.
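Following that advice, the database service from the file above could drop its ports: block entirely, and reco-tracker-docker would still reach it by service name (a sketch of just the relevant service):
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    # no ports: - only containers on the Compose network can reach it, on port 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql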
I have a setup in which service_main streams logs to the socket 127.0.0.1:6000.
A simplified docker-compose.yml looks like this:
version: "3"
networks:
some_network:
driver: bridge
ipam:
driver: default
config:
- subnet: 100.100.100.0/24
gateway: 100.100.100.1
services:
service_main:
image: someimage1
networks:
some_network:
ipv4_address: 100.100.100.2
service_listener:
image: someimage2
networks:
some_network:
ipv4_address: 100.100.100.21
entrypoint: some_app
command: listen 100.100.100.2:6000
My assumption was that it SHOULD work, since both containers belong to one network.
However, I got an error (from service_listener) that 100.100.100.2:6000 is not available
(which I interpret as the service trying to listen on some public socket instead of the network).
I tried different approaches, without deep understanding: exposing/publishing port 6000 on service_main, or setting the log socket to 100.100.100.21:6000 and having service_listener listen on 127.0.0.1:6000 (and publishing that port as well). But nothing works, and apparently I don't understand why.
In the same network, a similar approach works fine for powerdns and postgresql: I tell powerdns in its config that the DB host is on 100.100.100.x and it works.
It all depends on what you want to do.
If you want to access service_main from outside, such as from the host the containers are running on, then there are two ways to fix this:
Publish the port. This is done with the ports: option:
services:
  service_main:
    image: someimage1
    ports:
      - "6000:4000"
In this case, port 4000 is the port that someimage1 is listening on inside the Docker container.
Use a proxy server which talks to the IP address of the Docker container.
But then you need to make sure that the thing you have running inside the Docker container (someimage1) is indeed running on port 6000.
Proxyserver
The nice thing about the proxy server method is that you can run nginx inside another Docker container and put all the deployment and networking stuff in there. (Shameless self-promotion: I created an example of a proxy server in Docker.)
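A rough sketch of that approach in Compose (the proxy service name, the nginx image, and the mounted configuration file are assumptions; the config would proxy_pass to service_main on whatever port it actually listens on):
services:
  service_main:
    image: someimage1
    # not published; only reachable inside the Compose network
  proxy:
    image: nginx:latest
    ports:
      - "6000:80"   # outside callers hit the proxy, which forwards to service_main
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf:ro   # assumed config file name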
Non Routable Networks
And I would always use a non-routable (private) address range for internal networks, not 100.100.100.*.
I assumed that when I publish/map a port, I make it available not only to the Docker Compose network but also to external calls.
My problem was solved by the following steps:
In the configuration of service_main, I set it to stream logs to the socket 100.100.100.21:6000.
In service_listener, I told the app inside to listen on port 0.0.0.0:6000:
service_listener:
  image: someimage2
  networks:
    some_network:
      ipv4_address: 100.100.100.21
  entrypoint: some_app
  command: listen 0.0.0.0:6000
It helped.
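For what it's worth, the same thing should also work without the fixed IP addresses, since Compose's DNS resolves service names; a sketch:
services:
  service_main:
    image: someimage1
    # configured to stream its logs to service_listener:6000 instead of a fixed IP
  service_listener:
    image: someimage2
    entrypoint: some_app
    command: listen 0.0.0.0:6000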
I have one application running on http://home.local:8180 in container A, and another container B running on http://data.local:9010. Container B uses container A to hit the API. If I specify container A's hostname as http://host.docker.internal:8180 in container B, then it works. What would I have to do if I want to use the hostname as is (home.local:8180)?
Following is the docker-compose file:
home_app:
  hostname: "home.local"
  image: "home-app"
  ports:
    - "8180:8080"
  environment:
data_app:
  hostname: "data.local"
  image: "data-app"
  links:
    - "home_app"
  ports:
    - "9010:9010"
Just use "home.local:8080". 8180 is only on the host machine and forwards to 8080 on the container; based on your docker-compose file, 8080 is the port of your application in the home_app container, so within the docker-compose network other containers should be able to access it via the hostname (home.local) and the actual port (8080).
You need to configure your application to use the Compose service name home_app as a host name, and the port number that the process inside the container is using. Neither hostname: nor ports: has any effect on connections between containers. You don't need to (and can't) specify a custom DNS suffix. See Networking in Compose in the Docker documentation for additional details.
So I might specify:
version: '3.8'
services:
  home_app:
    image: "home-app"
    ports:
      - "8180:8080" # optional, only for access from outside Docker
  data_app:
    image: "data-app"
    ports:
      - "9010:9010"
    environment:
      HOME_APP_URL: 'http://home_app:8080'
You don't need hostname:, which only affects what a container thinks its own hostname is and has no effect on anything outside the container; and you don't need links:, which is an obsolete option from first-generation Docker networking.