Communication between containers in docker (docker-compose)

I am currently setting up a Buildkite build pipeline which runs an API in one Docker container and an application in another Docker container alongside it, while running Cypress tests (which also run within the second container).
I use the following docker compose file:
version: '3'
services:
  testing-image:
    build:
      context: ../
      dockerfile: ./deploy/Dockerfile-cypress
    image: cypress-testing
  cypress:
    image: cypress-testing
    ipc: host
    depends_on:
      - api
  db:
    image: postgres:latest
    ports:
      - "54320:5432"
  redis:
    image: redis:latest
    ports:
      - "63790:6379"
  api:
    build:
      context: ../api/
      dockerfile: ./Dockerfile
    image: api
    command: /env/development/command.sh
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
The application runs in the cypress container when started by Buildkite. It then starts the Cypress tests, and some of them pass. However, any test that requires communication with the API fails, because the cypress container is unable to see localhost within the API container. I am able to enter the API container in a terminal and have verified that it is working perfectly internally using cURL.
I have tried various URLs within the cypress container to try to reach the API, which is available on port 8080 within the API container, including api://api:8080 and http://api:8080, but none of them have been able to reach the API.
Does anybody know what could be going on here?
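One quick way to narrow this down (a sketch; it assumes curl is available in the cypress image and that the cypress container is still running) is to exec into the cypress container and request the API by its Compose service name:
# run from the directory containing the compose file
docker-compose exec cypress sh -c 'curl -v http://api:8080/'
# if the name resolves but the connection is refused, the API may not be listening on 0.0.0.0 inside its container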

Related

Does the celery docker container have to copy the same files from the django container build?

Here is the raw question:
I was wondering if I can run the command from the celery container using another container's data (like the django container's), since these containers are on the same container network on the server, or whether I have to duplicate the data from the django project into every container, or is there another way?
Here is the question with an explanation:
I am new to this, and I tried to make a docker-compose file for a Django REST project with Celery, RabbitMQ and PostgreSQL.
I followed a bunch of tutorials and managed to make it work: the celery container uses a shared volume from Django to start the worker (the celery beat container too). See below for the related code from docker-compose (edited following one of the suggestions: I put the code instead of a picture):
version: '3.8'
services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    container_name: 'prod-djbackend'
    image: youdeal_djangopart-prod:0.63
    restart: unless-stopped
    expose:
      - '8000'
    env_file: .env
    volumes:
      - static-data:/static
      - media-data:/youdeal_djangopart/media
      - api_vol:/youdeal_djangopart
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - db
      - rabbitmq
  celeryworker:
    container_name: celeryworker
    image: celeryworker:0.51
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    volumes:
      - ./:/api_vol/
    links:
      - db
      - rabbitmq
      - api
    depends_on:
      - rabbitmq
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
volumes:
  api_vol:
Now here is the problem: when I want to deploy the project, the server I am using (https://www.arvancloud.com/en) doesn't really allow shared volumes between containers and doesn't answer well in this regard. I was wondering if I can run the command from the celery container using another container's data (like the django container's), since these containers are on the same container network, or whether I have to duplicate the data from the django project into every container, or is there another way?
I found this and some other related topics but I couldn't make it work.
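A common workaround (just a sketch, not the asker's actual setup) is to drop the shared volume and bake the Django project code into the worker image at build time, so every container carries its own copy of the code:
# assuming Dockerfile.celery.prod COPYs the Django project into the image,
# rebuild the worker image from the same source tree that the api image uses:
docker build -f ./youdeal_djangopart/Dockerfile.celery.prod -t celeryworker:0.51 ./youdeal_djangopart
# with the code baked into the image, the "- ./:/api_vol/" volume on celeryworker can be removed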

How to network between multiple containers of the same image in docker-compose?

I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term as the container id changes every time I docker-compose up and down. Is there a standard way to do networking when you start multiple of the same container from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
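To sanity-check the routing between the two replicas (a sketch; it assumes a shell and netcat are available in the rocket image), you can probe the first replica from the second by service name:
# from inside rocket2, check that rocket1 is reachable on its normal port
docker-compose exec rocket2 sh -c 'nc -zv rocket1 30300'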
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300

docker compose pushing to docker hub

I have a docker-compose file which has 3 services. The YAML file works, but how do I push this into a registry as a single image and retrieve it in AWS Fargate so that it spins up the containers?
What are my options to spin up multiple containers, given that the images are pushed into separate repositories?
Below is my docker-compose.yaml file:
version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
  proxy1:
    build:
      context: ./proxy
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - dataapidocker
    ports:
      - "9999:80"
The method I tried was creating two applications:
First application: a Node.js Express API server running at port 3001, with axios attached.
Second application: a Node.js Express API server running at port 3010, with a /data path (can be anything) that returns some data, with cross-origin access allowed.
From the first application, I queried localhost:3010/data using axios.get and printed the result.
Now create a separate Dockerfile image for each. When you run them they might not work, as they are querying localhost.
Create a task definition in AWS Fargate and launch the task. Access the public IP of the first container and you will be able to receive data from the second container just by querying localhost, as Fargate has them on the same network.
If you want the code I can share it.
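For illustration: containers in the same Fargate task share the task's network namespace, so the first application can reach the second over localhost (a sketch, assuming the second app listens on port 3010 as described above):
# from inside the first container in the task, the second app is reachable as localhost
curl http://localhost:3010/data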

Docker networking a spawned container running inside a container

I have a docker-compose file below. There is an api service which I'm trying to reach via the network-test service, which basically calls docker run curl-test through docker.sock. The curl-test image/service basically calls curl against the api endpoint via curl http://api:3000. When I run docker compose, the curl-test service works as expected; however, the network-test service fails with could not resolve host api. My question is: how do I pass a reference to the api container into the container spawned from within network-test?
version: '3'
services:
  api:
    build:
      context: api
      dockerfile: Dockerfile
    ports:
      - 3000:3000
  curl-test:
    image: curl-test
    build:
      context: curl-test
      dockerfile: Dockerfile
    depends_on:
      - api
    links:
      - api
    tty: true
  network-test:
    build:
      context: network-test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - api
    links:
      - api
The composed services are on their own private network, and the names you have given these services (api, curl-test, and network-test) are only resolved by the DNS on this network. You have spawned a container from within this network, but the new container is not attached to it. Since it is not part of this network, it cannot resolve the name 'api'. You can attach the container to the network to fix this problem, but only if your composed network is 'attachable', which is the default.
You can read more here: https://docs.docker.com/compose/networking/
Further, it is not necessary to run the curl-test service -- it'll probably exit immediately. You're creating this at runtime, so it shouldn't be listed here.
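A minimal sketch of how network-test could spawn the container so that the service name resolves: pass the Compose network to docker run (the network is usually named <project>_default after the compose project/directory name, which is an assumption here):
# attach the spawned container to the compose network so that "api" resolves
docker run --rm --network myproject_default curl-test curl http://api:3000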

Prevent publishing ports defined in compose file

I have a docker-compose file that defines a service that will run my application and a service that the application depends on to run:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development this compose file exposes 8080 so that I can access the running code from a browser.
In Jenkins, however, I can't publish that port, as two jobs running simultaneously would conflict when trying to bind to the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In Jenkins I run tests using docker-compose run frontend yarn test, which won't map ports and so isn't a problem.
The issue presents itself when I try to run end-to-end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It doesn't make sense to prevent something that you are asking it to do: docker-compose will start things exactly as the docker-compose.yml file indicates.
I propose duplicating the frontend service using extends:
version: "2"
services:
frontend-base:
build:
context: .
volumes:
- "../.:/opt/app"
image: node
command: ['yarn', 'start']
frontend:
extends: frontend-base
links:
- redis
ports:
- "8080:8080"
frontend-test:
extends: frontend-base
links:
- redis
command: ['yarn', 'test']
redis:
image: redis
expose:
- "6379"
Then use it like this:
docker-compose run frontend # in dev environment
docker-compose run frontend-test # in jenkins
Note that extends: is not available in version: "3", but they will bring it back again in the future.
If the goal is just to avoid binding a fixed port on the host, list only the container port in the ports section; Docker will then publish it on a random free host port instead of a fixed one.
Instead of this:
ports:
  - "8080:8080"
use this:
ports:
  - "8080"
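Since Docker then picks a random free host port for each run, parallel Jenkins jobs no longer collide. If a test runner outside the Compose network needs to know which host port was chosen, docker-compose port reports the mapping (the port shown below is only an example):
docker-compose port frontend 8080
# prints something like 0.0.0.0:32771 -- the ephemeral host port varies per run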
