Docker: select ip of a container

I am deploying a Flask application with uWSGI and nginx. My problem is that when I run it in Docker, it ends up on host.docker.internal, and I need to specify the host. I would like to know if there is some way to create a Docker network and, within it, specify the host where I will see my Flask application, or some other way to run a container on a specific IP.
Expected behaviour: docker-compose up and then, when I go to the IP that I select in docker-compose.yml or in my network config, I see my Flask app.
Thanks.

You can do something like:
services:
  srv:
    image: image:latest
    command: start
    networks:
      mynet:
        ipv4_address: 192.168.42.11
networks:
  mynet:
    ipam:
      config:
        - subnet: 192.168.42.0/24
but this works only on Linux, and only from the host where Docker is running. You should consider whether port mapping or host network mode could be options for you.
Check this: https://docs.docker.com/compose/networking/
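If you only need to reach the Flask app at a predictable address, plain port mapping is usually enough and works on every platform: the app then answers on the host's own IP. A minimal sketch, assuming a uWSGI container listening on port 8000 (the image name and port are placeholders, not taken from the question):

services:
  flaskapp:
    image: my-flask-image    # placeholder image name
    ports:
      # host port 8000 -> container port 8000, where uWSGI is assumed to listen
      - "8000:8000"

After docker-compose up, the app is reachable at http://<host-ip>:8000 from any machine that can reach the host.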

Related

Why docker containers can't access each other by ip within one network?

I have a setup in which service_main streams logs to the socket 127.0.0.1:6000.
The simplified docker-compose.yml looks like this:
version: "3"
networks:
  some_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 100.100.100.0/24
          gateway: 100.100.100.1
services:
  service_main:
    image: someimage1
    networks:
      some_network:
        ipv4_address: 100.100.100.2
  service_listener:
    image: someimage2
    networks:
      some_network:
        ipv4_address: 100.100.100.21
    entrypoint: some_app
    command: listen 100.100.100.2:6000
My assumption was that it SHOULD work, since both containers belong to one network.
However, I got an error (from service_listener) that 100.100.100.2:6000 is not available (which I interpret as the service trying to listen on some public socket instead of the network).
I tried different things, without deep understanding: exposing/publishing port 6000 on service_main, or setting the log socket to 100.100.100.21:6000 and having service_listener listen on 127.0.0.1:6000 (and publishing that port as well). But nothing works, and apparently I don't understand why.
On the same network, a similar approach works fine for powerdns and postgresql: I tell powerdns in its config that the db host is on 100.100.100.x and it works.
It all depends on what you want to do.
If you want to access service_main from outside, such as from the host the containers are running on, then there are two ways to fix this:
1. Publish the port. This is done with the ports directive:
services:
  service_main:
    image: someimage1
    ports:
      - "6000:4000"
In this case, port 4000 is the port that someimage1 listens on inside the container.
2. Use a proxy server which talks to the IP address of the Docker container.
But then you need to make sure that the thing you have running inside the container (someimage1) is indeed listening on port 6000.
Proxy server
The nice thing about the proxy-server method is that you can run nginx inside another Docker container and put all of the deployment and networking logic in there. (Shameless self-promotion for an example I created of a proxy server in Docker.)
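For illustration, a minimal compose sketch of that proxy idea (service names reuse the question's; the nginx config file is hypothetical): only the proxy publishes a host port, and it reaches service_main over the compose network by service name, so no static container IPs are needed.

services:
  service_main:
    image: someimage1        # assumed to listen on port 6000 inside the container
  proxy:
    image: nginx:alpine
    ports:
      - "6000:80"            # only the proxy is exposed to the host
    volumes:
      # hypothetical config with a location block doing: proxy_pass http://service_main:6000;
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro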
Non-routable networks
And I would always use a private, non-routable address range for internal networks (for example from 10.0.0.0/8 or 192.168.0.0/16), not 100.100.100.*.
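For example, the network definition could use a subnet from the 172.16.0.0/12 private range instead (the exact value is arbitrary):

networks:
  some_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/24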
I assume that when I publish/map a port, I make it available not only to the docker-compose network but also to external calls.
My problem was solved by the following steps:
In the configuration of service_main I set it to stream logs to the socket 100.100.100.21:6000.
In service_listener I told the app inside to listen on 0.0.0.0:6000:
service_listener:
  image: someimage2
  networks:
    some_network:
      ipv4_address: 100.100.100.21
  entrypoint: some_app
  command: listen 0.0.0.0:6000
It helped: inside a container, 127.0.0.1 is that container's own loopback interface, so a listener bound to it is unreachable from other containers, whereas 0.0.0.0 accepts connections on all of the container's interfaces, including the one attached to some_network.
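To make the difference concrete, here is a small experiment you could run (the network name, container names, and image are illustrative, not from the question):

# a user-defined network, analogous to some_network in the compose file
docker network create some_network
# listener bound to the container's own loopback: unreachable from peers
docker run -d --name lo_srv --network some_network python:3-alpine \
    python3 -m http.server 6000 --bind 127.0.0.1
# listener bound to all interfaces: reachable from peers on the network
docker run -d --name any_srv --network some_network python:3-alpine \
    python3 -m http.server 6000 --bind 0.0.0.0
# from a third container, only any_srv answers; lo_srv refuses the connection
docker run --rm --network some_network python:3-alpine \
    python3 -c "import urllib.request; print(urllib.request.urlopen('http://any_srv:6000').getcode())"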

docker postgresql access from other container

I have a docker-compose file which, roughly, looks like this:
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22
  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23
  ...
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my postgresql and application in subnetworks to avoid exposing the ports outside my computer/server.
From within the app container, I can't connect to 192.168.22.23. I installed net-tools to use ifconfig/netstat, and it doesn't seem the containers are able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to postgres and access the application).
Do you have any advice? The goal is to access the ports of another container in order to communicate with it, without removing the use of static IPs (on app at least); here, to connect to postgresql from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.
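Putting the pieces together, a minimal sketch of the question's compose file with the manual networks: block removed (image names carried over; the app is assumed to honour PGHOST/PGPORT):

version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    environment:
      - PGHOST=db
      - PGPORT=5432
  db:
    image: postgres:9.5
    ports:
      # bound to loopback, so postgres is reachable from this host only
      - '127.0.0.1:6432:5432'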

Scale containers for different IP addresses using docker compose

I have the following docker-compose file:
version: '2'
services:
  mockup:
    build: mockup/
    ports:
      - 12320:12320
    volumes:
      - /var/lib/tt/:/var/lib/tt/
    networks:
      - test
networks:
  test:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.1.0/24
          gateway: 172.20.1.1
I want to deploy a few instances of the same application in different containers with different IP addresses.
When I run docker-compose up --scale mockup=2 or more, there is a conflict on the port, since all deployed instances must use the same host port.
What should I change in my docker-compose file?
In order to scale without a port conflict, you need to let Docker bind to a random host port. Specify only the container port, as below; Docker will then pick a random host port for each container you start and map it to port 12320 inside the container:
ports:
  - 12320
Next, you should use some kind of service discovery to stay aware of new containers as they come up or go down, and a proxy so that you can talk to a single URL without worrying about which container is up and on which port.
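With that change, a quick way to see which host port each replica received (service name mockup as in the compose file above):

# start three replicas; each gets a random host port mapped to container port 12320
docker-compose up -d --scale mockup=3
# look up the host port of a particular replica, e.g. the second one
docker-compose port --index=2 mockup 12320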

Service access another service on 127.0.0.1?

I'd like my web Docker container to access Redis at 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
  web:
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    links:
      - redis:127.0.0.1
  redis:
    image: redis:alpine
    ports:
      - 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There are technically possible workarounds. One is to run all containers directly on the host network, with:

network_mode: "host"

However, that removes the Docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they share the same loopback interface) with:

docker run --net container:$container_id ...

but I'm not sure there is a syntax to do this in docker-compose, and it's not available in swarm mode, since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
What you should do instead is make the location of the Redis database a configuration parameter of your webapp container. Pass the location in as an environment variable, a config file, or a command-line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose file to look like:
version: "3"
services:
  web:
    # you should include an image name
    image: your_webapp_image_name
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    environment:
      - REDIS_URL=redis:6379
    # no need to link, it's deprecated; use DNS and the network docker creates
    #links:
    #  - redis:127.0.0.1
  redis:
    image: redis:alpine
    # no need to publish the port if you don't need external access
    #ports:
    #  - 6379
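If the web app only reads a config file, a sketch of that entrypoint-script idea could look like this (the path /app/config.ini, the redis_url key, and the script name are hypothetical; REDIS_URL comes from the compose file above):

#!/bin/sh
# docker-entrypoint.sh - rewrite the app config from REDIS_URL, then start the app
set -e
# hypothetical config file and key, shown only to illustrate the pattern
sed -i "s|^redis_url=.*|redis_url=${REDIS_URL}|" /app/config.ini
# exec the container's command so the app becomes PID 1 and receives signals
exec "$@"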

docker-compose networking - hostname not resolving

I have the following docker-compose file. I am trying to access the service running in the container from the host, but the hostname never resolves.
version: '2'
networks:
  mynet:
    driver: bridge
services:
  grpcserver:
    image: test/image
    volumes:
      - ./:/var/local/git
    ports:
      - 50051:50051
    stdin_open: true
    tty: true
    hostname: grpcserver
    networks:
      - mynet
    entrypoint: bash ../var/local/git/service/start.sh
When I exec into the container, I can successfully telnet grpcserver 50051 to the running service using the hostname. But from the host, I cannot.
Version
docker-compose version 1.16.1, build 6d1ac21
Docker containers are not resolved by their name on the host; container names can only be resolved from inside other containers. Which name to use depends on whether you are trying to connect from another service in the same compose file/network or from a different one.
If you need your containers to be discoverable from the host, you need to use a tool like dnsmasq. See the answer linked below for more details on such a setup:
Access to container by its hostname from host machine
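Note that the compose file above already publishes port 50051, so from the host you don't need name resolution at all; the published port is reachable via localhost:

telnet localhost 50051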
