I have a docker-compose file below. There is an api service which I'm trying to reach from the network-test service, which basically runs docker run curl-test through docker.sock. The curl-test image/service simply calls curl against the api endpoint via curl http://api:3000. When I run docker compose, the curl-test service works as expected, but the network-test service fails with "could not resolve host api". My question is: how do I pass a reference to the api container through to the container spawned from within network-test?
version: '3'
services:
  api:
    build:
      context: api
      dockerfile: Dockerfile
    ports:
      - 3000:3000
  curl-test:
    image: curl-test
    build:
      context: curl-test
      dockerfile: Dockerfile
    depends_on:
      - api
    links:
      - api
    tty: true
  network-test:
    build:
      context: network-test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - api
    links:
      - api
The composed services are on their own private network, and the names you have given these services (api, curl-test, and network-test) are only resolved by the DNS on that network. You have spawned a container from within this network, but the new container itself is not attached to it. Because the new container is not part of this network, it cannot resolve the name 'api'. You can attach the container to the network to fix this problem, but only if your composed network is 'attachable', which is the default.
You can read more here: https://docs.docker.com/compose/networking/
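For example, from inside network-test you could pass the compose network to the spawned container (a minimal sketch; compose normally names the network <project>_default after your project directory, so that name is an assumption here and docker network ls will show the real one):

docker run --rm --network <project>_default curl-test

Alternatively, docker network connect <project>_default <container> attaches an already running container to the same network.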
Further, it is not necessary to define the curl-test service in the compose file -- it will probably exit immediately anyway. You are creating that container at runtime, so it shouldn't be listed here.
Related
Here is the raw question:
I was wondering if I can run the command from the celery container using another container's data (such as the django container's), since these containers are in the same network of containers on the server, or do I have to duplicate the data from the django project into every container, or is there another way?
Here is the question with the full explanation:
I am new to this and I tried to make a Docker Compose file for a Django REST project with Celery, RabbitMQ, and PostgreSQL.
I followed a bunch of tutorials and managed to make it work: the celery container uses a shared volume from the Django service to start the worker (the celery beat container works the same way). See the related code from the docker-compose file below (edited following one of the suggestions: I put in the code instead of a picture):
version: '3.8'
services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    container_name: 'prod-djbackend'
    image: youdeal_djangopart-prod:0.63
    restart: unless-stopped
    expose:
      - '8000'
    env_file: .env
    volumes:
      - static-data:/static
      - media-data:/youdeal_djangopart/media
      - api_vol:/youdeal_djangopart
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - db
      - rabbitmq
  celeryworker:
    container_name: celeryworker
    image: celeryworker:0.51
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    volumes:
      - ./:/api_vol/
    links:
      - db
      - rabbitmq
      - api
    depends_on:
      - rabbitmq
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
volumes:
  api_vol:
Now here is the problem: when I want to deploy the project, the hosting provider I am using (https://www.arvancloud.com/en) doesn't really allow shared volumes between containers and isn't very responsive about it. So I was wondering if I can run the command from the celery container using another container's data (such as the django container's), since these containers are in the same network of containers, or do I have to duplicate the data from the django project into every container, or is there another way?
I found this and some other related topics, but I couldn't make it work.
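If shared volumes are off the table, one route (the "duplicate the data into every container" option mentioned above) is to bake the Django code into the celery image at build time and drop the volume entirely. A rough sketch based on the compose file above; what Dockerfile.celery.prod actually copies is an assumption:

  celeryworker:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod   # should COPY the project into the image,
                                           # e.g. COPY . /youdeal_djangopart
    env_file: .env
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - rabbitmq
    # no volumes: the worker carries its own copy of the code

Each image then ships its own copy of the project, so nothing needs to be shared between containers at runtime.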
I am currently setting up a Buildkite build pipeline which runs an API in one Docker container and an application in another Docker container alongside it, while running Cypress tests (which also run within the second container).
I use the following docker-compose file:
version: '3'
services:
  testing-image:
    build:
      context: ../
      dockerfile: ./deploy/Dockerfile-cypress
    image: cypress-testing
  cypress:
    image: cypress-testing
    ipc: host
    depends_on:
      - api
  db:
    image: postgres:latest
    ports:
      - "54320:5432"
  redis:
    image: redis:latest
    ports:
      - "63790:6379"
  api:
    build:
      context: ../api/
      dockerfile: ./Dockerfile
    image: api
    command: /env/development/command.sh
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
The application runs in the cypress container when started by Buildkite. It then starts the Cypress tests, and some of them pass. However, any test that requires communication with the API fails, because the cypress container is unable to reach localhost within the API container. I am able to enter the API container using a terminal and have verified that it is working perfectly internally using cURL.
I have tried various URLs within the cypress container to try to reach the API, which is available on port 8080 within the API container, including api://api:8080 and http://api:8080, but none of them have been able to see the API.
Does anybody know what could be going on here?
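One way to narrow this down (a sketch; it assumes the cypress image has a shell and curl available, which may not be the case) is to exec into the running cypress container and hit the service name directly:

docker-compose exec cypress curl -v http://api:8080
# "could not resolve host" points at a DNS/network problem, while "connection refused"
# usually means the API process is listening on 127.0.0.1 inside its container instead of 0.0.0.0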
I have a docker-compose file which has 3 services. The YAML file works, but how do I push this into a registry as a single image and retrieve it in AWS Fargate so that it spins up the containers?
What are my options to spin up multiple containers, given that the images are pushed into separate repositories?
Below is my docker-compose.yaml file:
version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
  proxy1:
    build:
      context: ./proxy
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - dataapidocker
    ports:
      - "9999:80"
The method I tried was creating two applications.
First application: a Node.js Express API server running on port 3001, with axios also attached.
Second application: a Node.js Express API server running on port 3010, with a /data path (can be anything) that returns some data, with cross-origin access allowed.
From the first application, using axios.get I queried localhost:3010/data and printed the result.
Now create a separate Dockerfile image for each. When you run them, they might not work, as they are querying localhost.
Create a task definition in AWS Fargate and launch the task. Access the public IP of the first container and you will be able to receive the data from the second container just by querying localhost, as Fargate puts the containers of a task on the same network.
If you want the code, I can share it.
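For the registry side of the question, a hedged sketch of pushing each service's image to its own ECR repository (the account ID, region, and repository names below are placeholders, not taken from the question):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker-compose build
docker tag dataapidocker:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/dataapidocker:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/dataapidocker:latest
# repeat tag/push for proxy1; the db can reference the public mcr.microsoft.com/mssql/server image directly

The Fargate task definition then points each container at its own repository; there is no single combined image.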
I have a Flask service running through docker-compose on port 5000. Similarly, I have a separate Go service running through another docker-compose on port 8000. The Go service needs to call a Flask API running on port 5000. I am having trouble getting the Go service to call the Flask service. I have tried adding a docker network but failed. What are the pros and cons of running both services through different docker-compose files compared to a single docker-compose file? (I have not been able to successfully run them in a single docker-compose, by the way.) docker ps shows both containers running.
Flask Docker compose
version: '3' # version of compose format
services:
  bidders:
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/usr/src/bidders # mount point
    ports:
      - 5000:5000 # host:container
Go Docker Compose
version: '3'
services:
  auctions:
    container_name: auctions
    build: .
    command: go run main.go
    volumes:
      - .:/go/src/auctions
    working_dir: /go/src/auctions
    ports:
      - "8000:8000"
Third network docker-compose.yml
#docker-compose.yml
version: '3'
networks:
  - second_network
networks:
  second_network:
    driver: bridge
With a single docker-compose.yml it will be easier, since both services end up on the same network automatically. So what was the issue you ran into while doing this? Also make sure that your Flask and Go applications both bind to 0.0.0.0 in the code itself, and not 127.0.0.1, so they can be reached from outside their containers.
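A rough sketch of what a merged file could look like (the service names come from your files; the directory layout is an assumption):

version: '3'
services:
  bidders:
    build:
      context: ./bidders        # assumes each service lives in its own folder
      dockerfile: Dockerfile
    ports:
      - 5000:5000
  auctions:
    build: ./auctions
    command: go run main.go
    working_dir: /go/src/auctions
    ports:
      - "8000:8000"
# both services join the default compose network, so the Go code can call the
# Flask API at http://bidders:5000 instead of localhost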
With two docker-compose.yml files you have two options:
Create a network through one of the files and have the containers defined in the other file join it as an external network.
Create a network using docker network create and declare it as an external network in both files (see the sketch below).
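A sketch of the second option; the network name shared_net is a placeholder:

docker network create shared_net

Then, in the Flask docker-compose.yml (and likewise in the Go one):

version: '3'
services:
  bidders:
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 5000:5000
    networks:
      - shared_net
networks:
  shared_net:
    external: true

Once both stacks join shared_net, the Go service can reach the Flask API at http://bidders:5000, using the service name as the hostname.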
There is a similar question whose answer you can check here, with an example included.
You can check Networking in Compose for more information.
I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows. I get ECONNREFUSED, though:
version: "3"
services:
web:
build: .
ports:
- 8080:8080
command: ["test"]
links:
- redis:127.0.0.1
redis:
image: redis:alpine
ports:
- 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
It is technically possible to do this by running all containers directly on the host network, with:
network_mode: "host"
However, that removes the Docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
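For example, to inspect the redis container's network namespace with netshoot (the container name here is a placeholder, and it assumes the redis container is already running):

docker run --rm -it --net container:myproject_redis_1 nicolaka/netshoot netstat -tln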
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
web:
# you should include an image name
image: your_webapp_image_name
build: .
ports:
- 8080:8080
command: ["test"]
environment:
- REDIS_URL=redis:6379
# no need to link, it's deprecated, use dns and the network docker creates
#links:
# - redis:127.0.0.1
redis:
image: redis:alpine
# no need to publish the port if you don't need external access
#ports:
# - 6379
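If the web app can only read the redis location from a config file, a hedged sketch of the entrypoint-script approach mentioned above (the file names and placeholder token are assumptions):

#!/bin/sh
# docker-entrypoint.sh: render the redis location into the app config, then start the app
set -e
: "${REDIS_URL:=redis:6379}"
sed -i "s|REDIS_URL_PLACEHOLDER|${REDIS_URL}|" /app/config.json
exec "$@"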