Getting container IPs inside a Docker container

How can we run docker commands inside a container with docker-compose?
Simply put, I want to get the IP of some other container on the network.
I am running three containers: va-server, db, and api-server. All the containers are on the same Docker network.
Here I am providing docker-compose file below:
version: "2.3"
services:
va-server:
container_name: va_server
image: nitinroxx/facesense:amd64_2022.11.28 #facesense:alpha
runtime: nvidia
restart: always
mem_limit: 4G
networks:
- perimeter-network
db:
container_name: mongodb
image: mongo:latest
ports:
- "27017:27017"
restart: always
volumes:
- ./facesense_db:/data/db
command: [--auth]
networks:
- perimeter-network
api-server:
container_name: api_server
image: nitinroxx/facesense:api_amd64_2022.11.28
ports:
- "80:80"
- "465:465"
restart: always
networks:
- perimeter-network
networks:
perimeter-network:
driver: bridge
ipam:
config:
- gateway: 10.16.239.1
subnet: 10.16.239.0/24
I have installed Docker inside the container, which gives me the permission error below:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))

...inside [a] container [...] I want to get IP of some other network container....
Docker provides an internal DNS service that can resolve container names to their Docker-internal IP addresses. From one of the containers you show, you could look up a host name like db to get that container's IP address; but in practice this is an ordinary DNS name, and anything above the lowest-level networking code can simply use the name directly.
This does require that all of the containers involved be on the same Docker network. Normally Compose sets this up automatically for you; in the file you show I might delete the networks: blocks and container_name: overrides in the name of simplicity. Also see Networking in Compose in the Docker documentation.
In short:
You can probably use the Compose service names va-server, db, and api-server as host names without specifically knowing their IP addresses.
This probably means you never need to know the container IP addresses at all (they're usually unusable from outside Docker).
If you do need an IP address from inside a container, a DNS lookup can find it.
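For example, from a shell in one of these containers you can look up the db service name directly (a quick sketch; it assumes getent is available inside the image, and the address shown is only illustrative):
docker-compose exec api-server getent hosts db
# 10.16.239.3      db        <- example output; Docker assigns the actual address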
You can't usually run docker commands from inside containers, and you can't do it safely without making it possible for the container to take over the whole host. There are usually better patterns that don't tie you to the Docker stack specifically.

Related

automatic dns resolution does not work on containers running on host network

I have two containers running on the same host using docker, however one container uses the host network while the other uses a custom bridge network as follows:
version: '3.8'
services:
  app1:
    container_name: app1
    hostname: app1
    image: app1/app1
    restart: always
    networks:
      local:
        ipv4_address: 10.0.0.8
    ports:
      - "9000:9000/tcp"
    volumes:
      - /host:/container
  app2:
    container_name: app2
    hostname: app2
    image: app2/app2
    restart: always
    network_mode: host
    volumes:
      - /host:/container
networks:
  local:
    ipam:
      driver: bridge
      config:
        - subnet: "10.0.0.0/24"
I have normal IP communication between the two containers; however, when I want to use the hostname of the containers to communicate, it fails. Is there a way to make this feature work on host networks?
No, you can't do this. You probably could turn off host networking though.
Host networking pretty much completely disables Docker's networking layer. In the same way that a process outside a container can't directly communicate with a container except via its published ports:, a container that uses host networking would have to talk to localhost and the other container's published port. If the host has multiple interfaces it's up to the process to figure out which one(s) to listen on, and you can't do things like remap ports.
You almost never need host networking in practice. It's appropriate in three cases: if a service listens on a truly large number of ports (thousands); if the service's port is unpredictable; or for a management tool that's consciously trying to escape the container. You do not need host networking to make outbound network calls, and it's not a good solution to work around an incorrect hard-coded host name.
For a typical application, I would remove network_mode: host. If app2 needs to be reached from outside the container, add ports: to it. You also do not need any of the manual networking configuration you show, since Compose creates a default network for you and Docker automatically assigns IP addresses on its own.
A functioning docker-compose.yml file that omits the unnecessary options and also does not use host networking could look like:
version: '3.8'
services:
  app1:
    image: app1/app1
    restart: always
    ports:              # optional if it does not need to be directly reached
      - "9000:9000/tcp"
    # no container_name:, hostname:, networks:, manual IP configuration
    # volumes: may not be necessary in routine use
  app2:
    image: app2/app2
    restart: always
    # add to make the container accessible
    ports:
      - "3000:3000"
    # configure communication with the first service
    environment:
      APP1_URL: 'http://app1:9000'
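Inside app2, the application reads APP1_URL from its environment and uses it as the base URL for calls to app1. A quick way to check the wiring from the running container (a sketch; it assumes a wget binary exists in the app2 image and that app1 really listens on port 9000):
docker-compose exec app2 sh -c 'wget -qO- "$APP1_URL/"'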

Expose docker-compose windows containers to windows host network

I'm fairly new to Docker and Docker Compose.
I have a simple scenario, based on three applications (app1, app2, app3), that I want to connect to my host's network. The purpose is to also have an internet connection inside the containers.
Here is my docker-compose file:
version: '3.9'
services:
  app1container:
    image: app1img
    build: ./app1
    networks:
      network_comp:
        ipv4_address: 192.168.1.1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080
  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    networks:
      network_comp:
        ipv4_address: 192.168.1.2
    ports:
      - 3100:3100
  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    networks:
      network_comp:
        ipv4_address: 192.168.1.3
    ports:
      - 9080:9080
networks:
  network_comp:
    driver: ""
    ipam:
      driver: ""
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.1.254
I already read the docker-compose documentation, which says that there is no bridge driver for Windows OS. Is there any solution to this issue?
You shouldn't usually need to do any special setup for this to work. When your Compose service has ports:, that makes a port available on the host's IP address. The essential rules for this are:
The service inside the container must listen on the special 0.0.0.0 "all interfaces" address (not 127.0.0.1 "this container only"), on some (usually fixed) port.
The container must be started with Compose ports: (or docker run -p). You choose the first port number; the second port number must match the port the service listens on inside the container.
The service can be reached via the host's IP address on the first port number (or, if you're using the older Docker Toolbox setup, on the docker-machine ip address).
http://host.example.com:12345 (from other hosts)
|
v
ports: ['12345:8080'] (in the `docker-compose.yml`)
|
v
./my_server -bind 0.0.0.0:8080 (the main container command)
You can remove all of the manual networks: configuration in this file. In particular, it's problematic if you try to specify the Docker network to have the same IP address range as the host network, since these are two separate networks. Compose automatically provides a network named default that should work for most practical applications.
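For illustration, the first service could shrink to something like this (a sketch; the names come from your file, and it assumes the application inside listens on 0.0.0.0 on ports 80 and 8080):
version: '3.9'
services:
  app1container:
    image: app1img
    build: ./app1
    ports:
      - "80:80"         # host port 80 -> container port 80
      - "8080:8080"     # host port 8080 -> container port 8080
    # no networks:, ipv4_address:, or extra_hosts: needed; Compose
    # attaches every service to its default network automatically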

docker-compose: Connect container to "network=host" and to other services [duplicate]

I want to connect two Docker containers, defined in a Docker Compose file, to each other (app and db), and one of them (app) should also be connected to the host network.
The containers should be connected to a common user-defined network (appnet or default) to use the embedded DNS capabilities from docker networking.
app needs also to be directly connected to the host network to receive ethernet broadcasts (network layer 2) in the physical network of the docker host.
Using both directives network_mode: host and networks in compose together, results in the following error:
ERROR: 'network_mode' and 'networks' cannot be combined
Specifying the network name host in the service without defining it in networks (because it already exists), results in:
ERROR: Service "app" uses an undefined network "host"
Next try: define both networks explicitly and do not use the network_mode: host attribute at service level.
version: '3'
services:
  app:
    build: .
    image: app
    container_name: app
    environment:
      - MONGODB_HOST=db
    depends_on:
      - db
    networks:
      - appnet
      - hostnet
  db:
    image: 'mongo:latest'
    container_name: db
    networks:
      - appnet
networks:
  appnet: null
  hostnet:
    external:
      name: host
The foregoing compose file produces an error:
ERROR: for app network-scoped alias is supported only for containers in user defined networks
How to use the host network, and any other user-defined network (or the default) together in Docker-Compose?
TL;DR you can't. The host networking turns off the docker network namespace for that container. You can't have it both on and off at the same time.
Instead, connect to your database with a published port, or a unix socket that you can share as a volume. E.g. here's how to publish the port:
version: "3.3"
services:
app:
build: .
image: app
container_name: app
environment:
- MONGODB_HOST=127.0.0.1
db:
image: mongo:latest
container_name: db
ports:
- "127.0.0.1:27017:27017"
To use the host network you don't need to define it. Just use the ports: keyword to declare which port(s) of the service you want to publish on the host network.
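For example (a sketch; the service name, image, and ports are placeholders for your own):
services:
  web:
    image: nginx
    ports:
      - "8080:80"       # publishes container port 80 as port 8080 on the host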
Since Docker 18.03+ one can use host.docker.internal to access your host from within your containers. No need to add host network or mix it with the user defined networks.
Source: Docker Tip #65: Get Your Docker Host's IP Address from in a Container
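For example, to point the app at a MongoDB running directly on the host (a sketch; host.docker.internal resolves out of the box on Docker Desktop, while plain Linux engines need the extra_hosts: entry shown, available since Docker 20.10):
services:
  app:
    build: .
    environment:
      - MONGODB_HOST=host.docker.internal
    extra_hosts:
      - "host.docker.internal:host-gateway"   # only needed on Linux engines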

How to make a docker compose service to use multiple network

Everyone, I have a requirement to write a docker-compose.yml which needs to make one of the services use two networks: the default one for communication with the other services, and an external bridge network for automatic self-discovery via nginx-proxy.
My docker-compose.yml is like the below.
version: '2'
services:
  dns-management-frontend:
    image: ......
    depends_on:
      - dns-management-backend
    ports:
      - 80
    restart: always
    networks:
      - default
      - bridge
  dns-management-backend:
    image: ......
    depends_on:
      - db
      - redis
    restart: always
    networks:
      - default
  db:
    image: ......
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
    networks:
      - default
  redis:
    image: redis
    ports:
      - 6379
    restart: always
    networks:
      - default
networks:
  default:
  bridge:
    external:
      name: bridge
When I start it, it gives me a network-scoped alias is supported only for containers in user defined networks error. I have to remove the networks section from the services and, after startup, manually run docker network connect <id_of_frontend_container> bridge to make it work.
Any advice on how to configure multiple networks in docker-compose? I have also read https://docs.docker.com/compose/networking/, but it is too simple.
The Docker network named bridge is special; most notably, it doesn't provide DNS-based service discovery.
For your proxy service, you should docker network create some other network, named anything other than bridge; then either docker network connect the existing proxy container to it, or restart the proxy with --net the_new_network_name. In the docker-compose.yml file, change the external: {name: ...} to the new network name.
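A minimal sketch of that setup (the names proxy-net and nginx-proxy are assumptions; substitute whatever your proxy is actually called):
# create a user-defined network and attach the already-running proxy to it
docker network create proxy-net
docker network connect proxy-net nginx-proxy
Then reference it from docker-compose.yml instead of bridge:
networks:
  default:
  proxy-net:
    external:
      name: proxy-net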
Any advice on how to configure multiple network in docker-compose?
As you note Docker Compose (and for that matter Docker proper) doesn't support especially involved network topologies. At the half-dozen-containers scale where Compose works well, you don't really need an involved network topology. Use the default network that Docker Compose provides for you, and don't bother manually configuring networks: unless it's actually necessary (as the external proxy is in your question).
You cannot, for now, mix the default bridge with other networks in Compose.
The issue is still open...

docker postgresql access from other container

I have a docker-compose file which is globally like this.
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22
  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23
  ...
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my PostgreSQL and application in subnetworks to avoid the ports being exposed outside my computer/server.
From within the app container, I can't connect to 192.168.22.23; I installed net-tools to use ifconfig/netstat, and it doesn't seem the containers are able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to Postgres and access the application).
Do you have any piece of advice? The goal is to access the ports of another container to communicate with it, without removing the use of static IPs (on app at least); here, to connect to PostgreSQL from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.
