I have two Docker containers, each running roscore, which uses port 11311. Each container has a separate IP address and uses a different namespace when publishing and subscribing. Shouldn't I be able to treat each container as a separate machine? What I want to do is rostopic pub from the host to one of the containers, based on namespace.
When I start the containers, I get the following:
$ docker-compose up
Creating mach1 ... error
Creating mach2 ... done

ERROR: for mach1  Cannot start service mach1: driver failed programming external connectivity on endpoint mach1 (9f755a1bd3f1dad40cce6963105a5d7224127dca3e0bb72cab7aa376623c708c): Bind for 0.0.0.0:11311 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
The YAML for docker-compose is:
version: '3'
services:
  mach1:
    build:
      context: .
      dockerfile: ./mach1/Dockerfile
    environment:
      - "ROS_IP=10.10.0.20"
      - "ROS_MASTER_URI=http://10.10.0.20:11311"
    image: my-image:v1
    ports:
      - "11311:11311"
    networks:
      my_net:
        ipv4_address: 10.10.0.20
  mach2:
    build:
      context: .
      dockerfile: ./mach2/Dockerfile
    environment:
      - "ROS_IP=10.10.0.21"
      - "ROS_MASTER_URI=http://10.10.0.21:11311"
    image: my-image:v1
    ports:
      - "11311:11311"
    networks:
      my_net:
        ipv4_address: 10.10.0.21
networks:
  my_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.10.0.0/24
        #- gateway: 10.10.0.1
The issue is that you are attempting to map both containers' port 11311 to port 11311 on the host:
ports:
  - "11311:11311"
Instead, try mapping to different host ports:
ports:
  - "11311:11311"
and
ports:
  - "11312:11311"
I'm trying to run two Docker containers attached to a single Docker network using Docker Compose.
I'm running into the following error when I run the containers:
Error response from daemon: failed to add interface veth5b3bcc5 to sandbox:
error setting interface "veth5b3bcc5" IP to 172.19.0.2/16:
cannot program address 172.19.0.2/16 in sandbox
interface because it conflicts with existing
route {Ifindex: 10 Dst: 172.19.0.0/16 Src: 172.19.0.1 Gw: <nil> Flags: [] Table: 254}
My docker-compose.yml looks like this:
version: '3'
volumes:
  dsn-redis-data:
    driver: local
  dsn-redis-conf:
    driver: local
networks:
  dsn-net:
    driver: bridge
services:
  duty-students-notifier:
    image: duty-students-notifier:latest
    network_mode: host
    container_name: duty-students-notifier
    build:
      context: ../
      dockerfile: ./docker/Dockerfile
    env_file: ../.env
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - dsn-net
    restart: always
  dsn-redis:
    image: redis:latest
    expose:
      - 5432
    volumes:
      - dsn-redis-data:/var/lib/redis
      - dsn-redis-conf:/usr/local/etc/redis/redis.conf
    networks:
      - dsn-net
    restart: always
Thanks!
The network_mode: host setting generally disables Docker networking, and can interfere with other options. In your case it looks like it might be trying to apply the networks: configuration to the host system network layer.
network_mode: host is almost never necessary, and deleting it may resolve this issue.
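For reference, a sketch of the notifier service with network_mode: host removed, so it joins dsn-net the same way the Redis service does (everything else is unchanged from your file):

duty-students-notifier:
  image: duty-students-notifier:latest
  container_name: duty-students-notifier
  build:
    context: ../
    dockerfile: ./docker/Dockerfile
  env_file: ../.env
  volumes:
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  networks:
    - dsn-net
  restart: always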
I'm trying to define my networks in separate docker-compose.yml file (docker-compose.networks.yml).
Here it is:
version: '3.8'
networks:
  pypinfo-rabbitmq:
    name: pypinfo_rabbitmq
    driver: bridge
When I try to apply this configuration it shows the following warning:
WARNING: Some networks were defined but are not used by any service: pypinfo-rabbitmq
The main configuration file is:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: pypinfo_rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER
      - RABBITMQ_DEFAULT_PASS
      - RABBITMQ_DEFAULT_VHOST
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq/
      - rabbitmq_log:/var/log/rabbitmq/
    networks:
      - pypinfo-rabbitmq
volumes:
  rabbitmq_data:
    driver: local
  rabbitmq_log:
    driver: local
networks:
  pypinfo-rabbitmq:
    external:
      name: pypinfo_rabbitmq
And when I apply my main configuration file for my services it says:
ERROR: Network pypinfo_rabbitmq declared as external, but could not be found. Please create the network manually using `docker network create pypinfo_rabbitmq` and try again.
The question: Why are my networks defined in docker-compose.networks.yml not created? What should I do to force docker-compose to create them?
I found a solution. I can just apply both files at the same time:
docker-compose -f docker-compose.rabbitmq.yml -f docker-compose.networks.yml up -d
It will create the network and the container that lists this network in its networks section:
Creating network "pypinfo_rabbitmq" with driver "bridge"
Recreating pypinfo_rabbitmq ... done
Try creating the network before running docker-compose up. Note that the name must match the external network's name, pypinfo_rabbitmq, with an underscore:
docker network create pypinfo_rabbitmq
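After creating it, you should be able to verify that the network exists and then bring the services up; a short sketch, reusing the file name from the workaround above:

$ docker network ls --filter name=pypinfo_rabbitmq
$ docker-compose -f docker-compose.rabbitmq.yml up -d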
I have started learning about Docker and containers, and have been given an assignment to "Host a docker container on the external network (the one that the host is connected to) with its own IP address that is valid for said network".
As far as my understanding goes, Docker allows forwarding container ports onto the host without exposing the container to the outside network. Is there any way to expose the whole container, with all its ports, and give it its own IP on the external network?
Here is an excerpt from a test docker-compose.yaml file:
env20:
  build: ./env20
  image: env20
  container_name: env20
  hostname: env20
  ports:
    - "22:22/tcp"
    - "80:80/tcp"
  depends_on:
    - mysql
  networks:
    gnet:
      ipv4_address: 10.10.11.30
  restart: unless-stopped

#############################################################
# Network setup
#############################################################
networks:
  gnet:
    name: gnet
    driver: macvlan
    ipam:
      driver: default
      config:
        - subnet: 10.10.11.0/24
          gateway: 10.10.11.1
Any help would be appreciated!
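One assumption worth stating about the macvlan approach shown above: Docker's macvlan driver normally has to be bound to the host's physical interface through the parent driver option, otherwise the containers are not actually bridged onto the external network. A sketch, where the interface name eth0 is a placeholder for whatever the host really uses:

networks:
  gnet:
    name: gnet
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      driver: default
      config:
        - subnet: 10.10.11.0/24
          gateway: 10.10.11.1

With that in place the container appears on 10.10.11.30 with its own MAC address; one known macvlan caveat is that the host itself typically cannot reach the container's address directly.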
I have two machines, machine-A and machine-B, on different networks. I create a Docker container on machine-A using docker-compose.yml and run a litecoind process within it on port 12345. I have forwarded container port 12345 to port 80 on host machine-A.
version: '3'
services:
  node1:
    build: .
    cap_add:
      - ALL
    command: litecoind -regtest -server -rpcuser=rpc -rpcpassword=x -rpcport=10340 --datadir=/root/litecoind-simnet/ -port=12345
    networks:
      vpcbr:
        ipv4_address: 10.9.0.11
    ports:
      - 80:12345
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.9.0.0/16
Now on machine-B, I can directly connect to the above process with litecoind's -addnode option and can see the blockchains syncing.
The problem arises when I create a container on machine-B and try to connect to the same process with -addnode, using the docker-compose.yml file below on machine-B. In this case, the litecoind process remains invisible and the blockchains do not sync.
version: '3'
services:
  node1:
    build: .
    cap_add:
      - ALL
    command: litecoind -regtest -addnode=<x.x.x.x:80> -rpcuser=rpc -rpcpassword=x -rpcport=10340 --datadir=/root/litecoind-simnet/ -port=12345
    networks:
      vpcbr:
        ipv4_address: 10.8.0.11
    ports:
      - 90:12345
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.8.0.0/16
I want the above two separate containers on two separate remote machines to communicate with each other. What am I missing? Help please. Thanks.
The possible solutions are:
Use a single docker-compose file to deploy both containers on the same node.
If your requirement is to deploy the containers on two different nodes, then you need to create a swarm cluster if you are using Compose (see the sketch below).
If you want to use two different compose files on the same node, that is answered here.
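A minimal sketch of the swarm route, assuming machine-A is reachable from machine-B; the addresses and the join token are placeholders:

$ # on machine-A
$ docker swarm init --advertise-addr <machine-A-ip>
$ docker network create --driver overlay --attachable litecoin-net

$ # on machine-B, using the join token printed by swarm init
$ docker swarm join --token <token> <machine-A-ip>:2377

Containers started on either node and attached to litecoin-net (for example with docker run --network litecoin-net, or a compose file declaring the network as external) can then reach each other by name, so -addnode can point at the other container directly.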
I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like :
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers, I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, version 0.4.17, does not run an IPFS peer (ipfs/go-ipfs) inside it; we need to run that separately.
So, to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run two IPFS peer containers and two IPFS cluster containers, one cluster container corresponding to each peer.
So your docker-compose file will look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide IPFS_API to each ipfs-cluster container as an environment variable so that the cluster node knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
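If you need your own secret rather than the example value above, a common way to generate a 32-byte hex CLUSTER_SECRET in a shell (a sketch; any source of 32 random bytes rendered as hex works):

$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'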
According to the article you posted:

The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. (This is usually achieved by passing -v <local-folder>:/data/ipfs-cluster to docker run.)
If in fact you need to connect to another service within the docker-compose project, you can simply refer to it by its service name, since hostname entries are created in all of the containers in the project; services can talk to each other by name instead of by IP.
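For example, the cluster containers above could address their peers by service name instead of a fixed IP; this sketch assumes ipfs-cluster accepts DNS multiaddrs, which I believe it does:

environment:
  IPFS_API: /dns4/ipfs0/tcp/5001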
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API
or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port-mapping) https://docs.docker.com/compose/compose-file/#network_mode