Failed to add interface to sandbox - docker

I'm trying to run two Docker containers attached to a single Docker network using Docker Compose.
I'm running into the following error when I run the containers:
Error response from daemon: failed to add interface veth5b3bcc5 to sandbox:
error setting interface "veth5b3bcc5" IP to 172.19.0.2/16:
cannot program address 172.19.0.2/16 in sandbox interface because it conflicts with existing route {Ifindex: 10 Dst: 172.19.0.0/16 Src: 172.19.0.1 Gw: <nil> Flags: [] Table: 254}
My docker-compose.yml looks like this:
version: '3'

volumes:
  dsn-redis-data:
    driver: local
  dsn-redis-conf:
    driver: local

networks:
  dsn-net:
    driver: bridge

services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    network_mode: host
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always

  dsn-redis:
    image: redis:latest
    expose:
      - 5432
    volumes:
      - dsn-redis-data:/var/lib/redis
      - dsn-redis-conf:/usr/local/etc/redis/redis.conf
    networks:
      - dsn-net
    restart: always
Thanks!

The network_mode: host setting generally disables Docker networking and can interfere with other options. In your case it looks like it is trying to apply the networks: configuration to the host system's network layer, which is exactly the conflict the error message reports for the 172.19.0.0/16 route.
network_mode: host is almost never necessary, and deleting it may resolve this issue.
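A minimal sketch of the corrected service, assuming nothing else in the file needs to change; the only edit is dropping network_mode: host so the container gets its own network namespace and joins dsn-net normally:

services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always

On the shared dsn-net network the notifier can then reach Redis at the hostname dsn-redis. (As an aside, Redis listens on 6379 by default; the expose: 5432 entry on the dsn-redis service looks like a leftover from a PostgreSQL example.)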

Related

Why in docker-compose, after recreating a container, do I get "Docker cannot link to a non running container"?

I have two containers:
docker-compose.yml
version: '3.8'

services:
  db:
    image: postgres:14.1
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
    network_mode: bridge

  web:
    container_name: web
    build: .
    ........
    network_mode: bridge
    external_links:
      - postgres
    depends_on:
      - db

volumes:
  postgres_data:
    name: postgres_data
After docker-compose up, when I recreate only one container ("db"), everything works, but I cannot connect to the "web" container; I get the error: "Failure: Cannot link to a non running container: /postgres AS /web/postgres".
In the "web" container I address the database as host=postgres.
What am I doing wrong?
The external_links: setting is obsolete and you don't need it; you can remove it with no adverse consequences. It is also the likely culprit here: links are bound to a specific container, so when you recreate db, the link in web points at a container that no longer exists.
network_mode: bridge and container_name: are also unnecessary, though they shouldn't specifically cause problems; still, I'd delete them. What you show can be reduced to:
version: '3.8'

services:
  db:
    image: postgres:14.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......

  web:
    build: .
    ........
    depends_on:
      - db

volumes:
  postgres_data: # empty
Since Compose creates a network named default for you and attaches containers to it, your application container can still reach the database container using the hostname db. Networking in Compose in the Docker documentation describes this further.
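For illustration, a hedged sketch of how the web service might address the database by service name; the DATABASE_URL variable name and credentials are hypothetical, not taken from the question:

services:
  web:
    build: .
    environment:
      # "db" resolves to the database container on Compose's default network
      DATABASE_URL: postgres://postgres:secret@db:5432/postgres
    depends_on:
      - db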

Docker communication inside docker compose and with database which is outside docker

I'm a little bit confused about Docker and network communication. I tried many things but it didn't work :-(.
I have the following docker-compose file:
version: '3'

services:
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    restart: unless-stopped
    tty: true
    ports:
      - 80:80
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - app
    networks:
      - frontend
      - backend

  app:
    restart: unless-stopped
    tty: true
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    expose:
      - "9090"
    ports:
      - 9090:9090
    networks:
      - backend

networks:
  frontend:
  backend:
And I would like to communicate:

From nginx to app // this probably works
From app to PostgreSQL, which is installed on the server (not in a Docker container)

I cannot get the second connection to work; I tried many things but something is wrong :-(
You can choose either of these two options:

Make your PostgreSQL listen on all your network interfaces (or only on the Docker bridge, for a more secure but more complex setup). To achieve that, make sure your config looks like this:

# grep listen /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'

The container can then reach PostgreSQL at the host's address on the Docker bridge (typically 172.17.0.1 on a default installation).

Use host network mode in your docker compose, which runs the container in your host's network namespace instead of creating a new network:

network_mode: "host"

How to use same ports in separate containers?

I have two Docker containers, each running roscore, which uses port 11311. Each of the containers has a separate IP address and uses different namespaces when publishing and subscribing. Shouldn't I be able to treat each container as a separate machine? What I want to do is rostopic pub from the host to one of the containers, based on namespace.
When I start the containers, I get the following:
$ docker-compose up
Creating mach1 ... error
Creating mach2 ... done

ERROR: for mach1  Cannot start service mach1: driver failed programming external connectivity on endpoint mach1 (9f755a1bd3f1dad40cce6963105a5d7224127dca3e0bb72cab7aa376623c708c): Bind for 0.0.0.0:11311 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
The YAML for docker-compose is:
version: '3'

services:
  mach1:
    build:
      context: .
      dockerfile: ./mach1/Dockerfile
    environment:
      - "ROS_IP=10.10.0.20"
      - "ROS_MASTER_URI=http://10.10.0.20:11311"
    image: my-image:v1
    ports:
      - "11311:11311"
    networks:
      my_net:
        ipv4_address: 10.10.0.20

  mach2:
    build:
      context: .
      dockerfile: ./mach2/Dockerfile
    environment:
      - "ROS_IP=10.10.0.21"
      - "ROS_MASTER_URI=http://10.10.0.21:11311"
    image: my-image:v1
    ports:
      - "11311:11311"
    networks:
      my_net:
        ipv4_address: 10.10.0.21

networks:
  my_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.10.0.0/24
        #- gateway: 10.10.0.1
The issue is that you are attempting to map both containers' port 11311 to port 11311 on the host, and a given host port can only be bound by one container at a time:

ports:
  - "11311:11311"

Instead, try mapping the containers to different host ports:

ports:
  - "11311:11311"

and

ports:
  - "11312:11311"

Docker-compose bridge network & host remote port forwarding at the same container

I'm trying to make a service that can have a remote database port forwarded into its container and, at the same time, be reachable by an alias hostname from the other containers that work with it.
I think that making all containers communicate over the host network is bad practice, so I am trying to set up this configuration instead.
When I try to give the php-fpm service a network with driver: host, Docker says:
only one instance of "host" network is allowed
When I try to attach the php-fpm service with this:
networks:
  - host
Docker says that it can't find a network with this name.
When I try to define the network in docker-compose by the ID of the built-in host network, the container just won't start.
This is my docker-compose:
version: '3.2'

networks:
  backend-network:
    driver: bridge
  frontend-network:
    driver: bridge

volumes:
  redis-data:
  home-dir:

services:
  &app-service app: &app-service-template
    build:
      context: ./docker/app
      dockerfile: Dockerfile
    volumes:
      - ./src:/app:rw
      - home-dir:/home/user
    hostname: *app-service
    environment:
      FPM_PORT: &php-fpm-port 9001
      FPM_USER: "${USER_ID:-1000}"
      FPM_GROUP: "${GROUP_ID:-1000}"
      APP_ENV: local
      HOME: /home/user
    command: keep-alive.sh
    networks:
      - backend-network

  &php-fpm-service php-fpm:
    <<: *app-service-template
    user: 'root:root'
    restart: always
    hostname: *php-fpm-service
    ports: [*php-fpm-port]
    environment:
      FPM_PORT: *php-fpm-port
      FPM_USER: "${USER_ID:-1000}"
      FPM_GROUP: "${GROUP_ID:-1000}"
      APP_ENV: local
      HOME: /home/user
    entrypoint: /fpm-entrypoint.sh
    command: php-fpm --nodaemonize -R -d "opcache.enable=0" -d "display_startup_errors=On" -d "display_errors=On" -d "error_reporting=E_ALL"
    networks:
      - backend-network
      - frontend-network

  nginx:
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    restart: always
    working_dir: /usr/share/nginx/html
    environment:
      FPM_HOST: *php-fpm-service
      FPM_PORT: *php-fpm-port
      ROOT_DIR: '/app/public' # App path must match the php-fpm container path
    volumes:
      - ./src:/app:ro
    ports: ['9999:80']
    depends_on:
      - *php-fpm-service
    networks:
      - frontend-network
Network scheme (the question is about the green line in the diagram, not reproduced here).
The host runs Debian 7 (updates prohibited) and the containers run the latest Alpine.

Setting up IPFS Cluster on docker environment

I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'

services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster

  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with this problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (ipfs/go-ipfs) inside it; we need to run that separately.
So in order to run a multi-node (two-node in this case) IPFS cluster in a Docker environment, we need to run two IPFS peer containers and two IPFS cluster containers, one corresponding to each peer.
Your docker-compose file will then look as follows:
version: '3'

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16

services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5

  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7

  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6

  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a two-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to pass IPFS_API to each ipfs-cluster container as an environment variable so that the cluster node knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
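As a usage note, any 64-hex-character value works as the secret; a common shell sketch for generating a fresh one (this exact command is not from the answer above):

export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')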
According to the article you posted:

The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. (This is usually achieved by passing -v <local-folder>:/data/ipfs-cluster to docker run.)
If in fact you need to connect to another service within the docker-compose project, you can simply refer to it by its service name: hostname entries are created in all the containers in the docker-compose project, so services can talk to each other by name instead of by IP.
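For example, a hedged sketch of pointing a cluster container at its peer by service name instead of a fixed address, assuming the /dns4/<name>/tcp/<port> multiaddress form (which resolves a hostname) is accepted here:

ipfs-cluster0:
  environment:
    # "ipfs0" resolves via Compose's built-in DNS, so no static ipv4_address is needed
    IPFS_API: /dns4/ipfs0/tcp/5001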
Additionally:

Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port mapping): https://docs.docker.com/compose/compose-file/#network_mode
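A minimal compose-level sketch of that mode, hypothetically applied to one of the cluster services above; its ports: entries would have to be removed, and the example IPFS_API address only holds if the IPFS daemon is also reachable on the host's loopback:

services:
  ipfs-cluster0:
    network_mode: "host"   # share the host's network namespace
    environment:
      IPFS_API: /ip4/127.0.0.1/tcp/5001   # assumes the go-ipfs daemon is on the host network too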
