Mapping port in docker / docker-compose

I have a similar issue, but the suggested answer does not seem to work for me. My problem is that I want to split an app across multiple containers. One of the components wants to connect to a local Redis (which I have now moved into its own container). This:
version: '3'
services:
  server-test:
    image: redis:alpine
    command: redis-cli -h redis-db ping
  redis-db:
    image: redis
    expose:
      - "6379"
works fine, but I want something along these lines:
version: '3'
services:
  server-test:
    image: redis:alpine
    command: redis-cli ping
  redis-db:
    image: redis
    expose:
      - "6379"
I have tried to add the lines:
ports:
  - "6379:redis-db:6379"
but I end up with this error message:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.server-test.ports contains an invalid type, it should be a number, or an object
Any idea?
And by the way, it seems that expose is not even needed in my first case.
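For reference, the short syntax for ports only publishes [HOST_IP:]HOST_PORT:CONTAINER_PORT pairs onto the Docker host; a service name is not valid in any of those positions, which is why "6379:redis-db:6379" is rejected. A minimal sketch of a valid entry:
services:
  redis-db:
    image: redis
    ports:
      - "6379:6379"   # HOST_PORT:CONTAINER_PORT, published on the Docker host
Within the Compose network, containers already reach each other by service name and container port, which is what the working redis-cli -h redis-db ping variant relies on, and also why expose is not strictly needed.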

Related

Why, after recreating a container with docker-compose, do I get "Docker cannot link to a non running container"?

I have two containers:
docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:14.1
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
    network_mode: bridge
  web:
    container_name: web
    build: .
    ........
    network_mode: bridge
    external_links:
      - postgres
    depends_on:
      - db
volumes:
  postgres_data:
    name: postgres_data
After docker-compose up, when I recreate only the "db" container, it works, but I can not connect to the "web" container; I get the error: "Failure
Cannot link to a non running container: /postgres AS /web/postgres".
In the "web" container I refer to the db as host=postgres.
What am I doing wrong?
The external_links: setting is obsolete and you don't need it. You can just remove it with no adverse consequences.
network_mode: bridge and container_name: are also unnecessary, though they shouldn't specifically cause problems; still, I'd delete them. What you show can be reduced to
version: '3.8'
services:
  db:
    image: postgres:14.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
  web:
    build: .
    ........
    depends_on:
      - db
volumes:
  postgres_data: # empty
Since Compose creates a network named default for you and attaches containers to it, your application container can still reach the database container using the hostname db. Networking in Compose in the Docker documentation describes this further.
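For illustration, a minimal sketch of how the web service could address the database by service name (DATABASE_HOST is a hypothetical variable name your application might read):
services:
  web:
    build: .
    environment:
      - DATABASE_HOST=db   # the service name "db" resolves on Compose's default network
    depends_on:
      - db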

Why is services not a list in docker-compose files

I am learning and testing Docker with YAML, and I have created a simple docker-compose.yml file:
version: "3"
services:
redis:
image: redis
click-counter:
image: kodekloud/click-counter
ports:
- 8080:5000
links:
- redis:redis
My question: are the services under the services option (redis and click-counter) a list? If so, prefixing each with a dash should work, but it does not:
version: "3"
services:
- redis:
image: redis
- click-counter:
image: kodekloud/click-counter
ports:
- 8080:5000
links:
- redis:redis
This throws an error in the terminal:
ERROR: In file './docker-compose.yml', service must be a mapping, not an array.
Can anyone assist with this?
services is a mapping in the Docker Compose YAML definition; a mapping enforces that there are no duplicate service names.
When you add - in front of the name of each service, services becomes a list in YAML terms, and that is not what Docker expects.
The first docker-compose file you have has the correct format for defining services.
version: "3"
services:
redis:
...
click-counter:
...
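To illustrate the underlying YAML distinction (a generic sketch, not a complete Compose file):
# a mapping: plain keys, no dashes; duplicate keys are not allowed
services:
  redis:
    image: redis

# a sequence: each "- " item is an array element, so services parses as a list
services:
  - redis:
      image: redis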

Deploy containers from different docker-compose.yml

Currently I have a RabbitMQ message broker and multiple Celery workers that need to be containerized. My problem is: how can I fire up containers using different docker-compose.yml files? My goal is to start RabbitMQ once and for all, and never touch it again.
Currently I have a docker-compose.yml for RabbitMQ:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
    expose:
      - "5672"
And another docker-compose.yml for celery workers:
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    links:
      - rabbit
However, when I do docker-compose up for the Celery workers, I keep getting the following error:
[ERROR/MainProcess] consumer: Cannot connect to
amqp://admin:**@rabbit:5672//: failed to resolve broker hostname.
Can anyone take a look and see if there is anything wrong with my code? Thanks.
The domain name rabbit in your second docker-compose.yml file does not resolve because there is no service with that name in that file.
As stated in the comments, one solution is to put both the rabbit service and the worker service in the same docker-compose.yml file. In such a setup, all containers started for those services would join the same Docker network, and those service names could be resolved to the IP addresses of their containers.
Since having a single docker-compose.yml file is not convenient in your case, you have to find another way to have the containers originating from different docker-compose.yml files join the same Docker network.
To do so, you need to create a dedicated docker network for that purpose:
docker network create rabbitNetwork
Then, in each docker-compose.yml file, you need to refer to this network in the service definitions (the rabbit file first, then the worker file):
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    # ports:
    #   - "5672:5672" # there is no need to publish ports on the docker host anymore
    expose:
      - "5672"
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
You can use any file as the service definition.
docker-compose.yml is the default file name, but any other name can be passed using the -f argument.
docker-compose -f rabbit-compose.yml COMMAND
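Putting it together, and assuming the two files are saved as rabbit-compose.yml and worker-compose.yml (illustrative names), the startup sequence would look something like:
docker network create rabbitNetwork
docker-compose -f rabbit-compose.yml up -d
docker-compose -f worker-compose.yml up -d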

Docker Compose: Volumes not working on Windows Nano

I've got two Windows Nano Docker containers ... one with a service on it, the second with my Automated Acceptance Tests (AAT).
I'm trying to add a volume to the AAT container so I can copy off the test output.
I've seen elsewhere that I'm supposed to use ...
COMPOSE_CONVERT_WINDOWS_PATHS=1
... but I can't seem to get anywhere :S
version: '3.3'
services:
  fancyservice:
    restart: always
    image: fancyservice
  aat-runner:
    environment:
      - FancyServiceUrl=http://fancyservice/
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    volumes:
      - .:/output:rw
    restart: always
    image: aat-runner
I get:
ERROR: for aat_aat-runner_1 Cannot create container for service aat-runner: invalid volume spec "/output"
ERROR: for aat-runner Cannot create container for service aat-runner: invalid volume spec "/output": invalid volume specification: '\output'
ERROR: Encountered errors while bringing up the project.
You have to declare the volume at the top level, at the same level as services:, as well as on the individual container:
version: '3.3'
services:
  fancyservice:
    restart: always
    image: fancyservice
  aat-runner:
    environment:
      - FancyServiceUrl=http://fancyservice/
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    restart: always
    image: aat-runner
    volumes:
      - aat-output:c:\aat-output\
volumes:
  aat-output:
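With the named volume in place, one way to copy the test output off afterwards (using the container name aat_aat-runner_1 from the error messages above) could be:
docker cp aat_aat-runner_1:c:\aat-output .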

How to use IP addresses instead of container names in docker compose networking

I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres, and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research, it turns out this exception is a known issue, and the workaround is using the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
OK, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; the output includes the IP address you can use from within the other running containers.
The reason I was confused was that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started them. Luckily, after 5 tries I guessed correctly.
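For reference, docker inspect also accepts a Go template via -f, which extracts the address directly (a sketch; substitute your own container ID):
docker ps   # note the container ID of the redis container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}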
