What are Docker networks needed for?

Please explain why a Docker network is needed. I have read some documentation, but from a practical standpoint I don't understand why I need to manage networks at all. Below is a docker-compose file in which everything works fine both with and without the commented-out network lines. What practical benefit would I gain by uncommenting them? Right now my containers interact perfectly, and ORM migrations reach the database, so why do I need networks?
version: '3.4'
services:
  main:
    container_name: main
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - ${PORT}:${PORT}
    command: npm run start:dev
    env_file:
      - .env
    # networks:
    #   - webnet
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    image: postgres:12
    # networks:
    #   - webnet
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 1m30s
      timeout: 10s
      retries: 3
# networks:
#   webnet:
volumes:
  pgdata:

If no networks are defined, Docker Compose creates a single default network with a generated name (the project name plus _default) and attaches every service to it. Alternatively, you can specify the networks and their names explicitly in the compose file.
You can read more at Networking in Compose.
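For instance, here is a minimal sketch (the network name webnet is illustrative) that declares the network explicitly instead of relying on the generated <project>_default:

version: '3.5'
services:
  main:
    # ... build, ports, etc. as above ...
    networks:
      - webnet
  postgres:
    # ... image, environment, etc. as above ...
    networks:
      - webnet
networks:
  webnet:
    name: webnet  # pins the exact name; needs file format 3.5+, otherwise Compose names it <project>_webnet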

Docker networking in general is explained in the Networking overview, and there are hands-on tutorials:
Macvlan network tutorial,
Overlay networking tutorial,
Host networking tutorial,
Bridge network tutorial.

Without any explicit network configuration, all of a project's containers end up on one network and get IPs from a single range (for example 172.17.0.0/16); on such a shared network they can also reach each other by container name via Docker's internal DNS.
A simple practical use of Docker networking: when you want groups of containers isolated from each other in specific network ranges, you must define Docker networks.
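As a sketch of that isolation (service names and images are illustrative): web and db share no network, so web can reach api but can never open a connection to db directly.

version: '3.4'
services:
  web:
    image: nginx:alpine
    networks:
      - frontend
  api:
    image: node:18-alpine
    networks:
      - frontend
      - backend
  db:
    image: postgres:12
    networks:
      - backend
networks:
  frontend:
  backend: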

How to connect to RabbitMQ from a program that is running in another container? [duplicate]

I have a project in which RabbitMQ is launched:
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - rabbitmq_go_net
networks:
  rabbitmq_go_net:
    driver: bridge
Then I made another project that connects to this queue.
Without Docker, I just used localhost as the host and port 5672.
Then I wanted to run the second project in Docker as well, together with a database:
version: '3'
services:
  postgres:
    image: postgres:12
    restart: always
    networks:
      - rabbitmq_go_net
    ports:
      - '5432:5432'
    volumes:
      - ./db_data:/var/lib/postgresql/data
      - ./app/internal/config/dbconfig/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
    env_file:
      - ./app/internal/config/dbconfig/.env
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "devdb", "-U", "postgres" ]
      timeout: 45s
      interval: 10s
      retries: 10
  app:
    build: app
    networks:
      - rabbitmq_go_net
    depends_on:
      postgres:
        condition: service_healthy
volumes:
  db_data:
networks:
  rabbitmq_go_net:
    driver: bridge
And now I can't connect to RabbitMQ. I tried both a different network and one with the same name, but every time I get the same error:
FATA[2022-07-31T13:24:09Z]ipstack/internal.(*app).startConsume() app.go:43 failed to connect to RabbitMQ due dial tcp 127.0.0.1:5672: connect: connection refused
Connect:
addr := fmt.Sprintf("amqp://%s:%s#%s:%s/", cfg.Username, cfg.Password, cfg.Host, cfg.Port)
where the host is the container name rabbitmq.
Is it possible to do this, or do the programs need to be put into containers in some other way? I would be glad for any help.
I think the issue is that your Docker networks aren't actually the same, despite using the same name in the two compose files: Docker prefixes networks declared inside a compose file with the project name to avoid collisions.
Everything you're looking for can be found in this SO response and the corresponding thread. It shows how to use the external flag on a network so that the second compose file doesn't create a new network, and it also explains how Docker uses the project name as a prefix, so you can predict what the generated network name will be.
Alternatively, you can create the network in advance with docker network create and use the external flag in both compose files, so you don't need to worry about Docker's naming conventions.
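A minimal sketch of that second approach, assuming you keep the name rabbitmq_go_net:

# Run once on the host:
#   docker network create rabbitmq_go_net
# Then, in BOTH compose files, declare the network as external, so that
# neither project creates its own project-prefixed copy of it:
networks:
  rabbitmq_go_net:
    external: true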

How to get redis address from docker compose?

I'm trying to pass the Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research, and none of the existing answers worked for me.
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'
networks:
  proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
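That would look something like this sketch (only the relevant keys shown):

  api:
    networks:
      - default  # shared with redis
      - proxy    # shared with the reverse proxy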
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would allow calls from Redis to your API server, not the other way around. hostname: has no effect outside the container itself and is usually unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'

What is the impact on not using volumes in my docker-compose?

I am new to Docker, and what a wonderful tool it is! Following the Django tutorial, the docs provide a basic docker-compose.yml similar to the one I've created:
version: '3'
services:
  web:
    build: .
    container_name: web
    command: python manage.py migrate
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./src:/src
    ports:
      - "8000:8000"
    depends_on:
      - postgres
  postgres:
    image: postgres:latest
    container_name: postgres
    environment:
      POSTGRES_USER: my_user
      POSTGRES_PASSWORD: my_secret_pass!
      POSTGRES_DB: my_db
    ports:
      - "5432:5432"
However, in every single docker-compose file that I see around, the following is added:
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
What are those volumes used for? Does it mean that if I now restart my postgres container all my data is deleted, whereas with the volumes it would not be?
Is my docker-compose.yml ready for production?
What are those volumes used for?
Volumes persist data from your container to your Docker host.
This:
volumes:
  - ./postgres-data:/var/lib/postgresql/data
means that /var/lib/postgresql/data inside your container will be persisted to ./postgres-data on your Docker host.
What Dan Lowe commented is correct: if you do docker-compose down without volumes, all the data inside your containers will be lost; but if you have volumes, the directories and files you specified will be kept on your Docker host.
For named volumes, you can see this data on your Docker host in /var/lib/docker/volumes/<your_volume_name>/_data, even after the container no longer exists.
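If you prefer Docker-managed storage over a bind-mounted host directory, a named volume is the usual alternative; here is a sketch (the name postgres-data is illustrative):

services:
  postgres:
    image: postgres:latest
    volumes:
      - postgres-data:/var/lib/postgresql/data  # named volume instead of ./postgres-data
volumes:
  postgres-data:

A named volume survives docker-compose down and is only deleted when you explicitly pass the -v/--volumes flag to that command.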

Setting up IPFS Cluster on docker environment

I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like this:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers, I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with this problem. Is there any proper documentation on how to set up an IPFS cluster with Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image (version 0.4.17) doesn't run an IPFS peer (i.e. ipfs/go-ipfs) inside it; that has to run separately.
So, to run a multi-node (2-node, in this case) IPFS cluster in Docker, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one for each peer.
So your docker-compose file will look as follows :
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch is that we need to provide IPFS_API to each ipfs-cluster container as an environment variable so that it knows which peer it belongs to, and both ipfs-cluster containers need the same CLUSTER_SECRET.
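As an aside, CLUSTER_SECRET is a 32-byte hex string, so rather than hard-coding the value above you could generate one and pass it in via variable substitution (a sketch; the openssl invocation is just one way to produce 64 hex characters):

# In the shell, once:  export CLUSTER_SECRET=$(openssl rand -hex 32)
# Then in both ipfs-cluster services:
    environment:
      CLUSTER_SECRET: ${CLUSTER_SECRET}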
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v :/data/ipfs-cluster to docker run.
If you need to connect to another service within the same docker-compose project, you can simply refer to it by its service name: hostname entries are created in all the containers of the project, so services can talk to each other by name instead of IP.
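Building on that, a variant sketch (my assumption, not part of the original answer) would drop the fixed IPAM addresses and point each cluster container at its peer with a DNS multiaddr:

  ipfs-cluster0:
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      IPFS_API: /dns4/ipfs0/tcp/5001  # resolves the ipfs0 service by name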
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (which is incompatible with port mappings): https://docs.docker.com/compose/compose-file/#network_mode
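In compose syntax that looks like this sketch (note that ports: mappings are not allowed in this mode, since the container shares the host's network namespace directly):

services:
  ipfs:
    image: ipfs/go-ipfs
    network_mode: "host"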

docker-compose: difference between networks and links

I'm learning Docker, and these two terms confuse me. For example, here is a docker-compose file that defines two services, redis and web-app:
services:
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - lognet
  app:
    container_name: web-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ".:/webapp"
    links:
      - redis
    networks:
      - lognet
networks:
  lognet:
    driver: bridge
This docker-compose file defines a bridge network named lognet, and all services connect to it. As I understand it, this already makes the services visible to each other. So why does the app service still need links: to the redis service?
Thanks
Links have been replaced by networks. Docker describes them as a legacy feature that you should avoid using. You can safely remove the link and the two containers will be able to refer to each other by their service name (or container_name).
With compose, links do have a side effect of creating an implied dependency. You should replace this with a more explicit depends_on section, so that the app doesn't attempt to run without redis, or before redis starts.
As an aside, I'm not a fan of hard coding container_name unless you are certain that this is the only container that will exist with that name on the host and you need to refer to it from the docker cli by name. Without the container name, docker-compose will give it a less intuitive name, but it will also give it an alias of redis on the network, which is exactly what you need for container to container networking. So the end result with these suggestions is:
version: '2'
# do not forget the version line, this file syntax is invalid without it
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - lognet
  app:
    container_name: web-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ".:/webapp"
    depends_on:
      - redis
    networks:
      - lognet
networks:
  lognet:
    driver: bridge
