Docker Compose: pushing to Docker Hub

I have a docker-compose file with 3 services. The YAML file works, but how do I push this to a registry as a single image and retrieve it in AWS Fargate so that it spins up the containers?
What are my options for spinning up multiple containers, given that the images are pushed to separate repositories?
Below is my docker-compose.yaml file:
version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
  proxy1:
    build:
      context: ./proxy
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - dataapidocker
    ports:
      - "9999:80"

The method I tried was to create two applications:
First application: a Node.js Express API server running on port 3001, with axios attached.
Second application: a Node.js Express API server running on port 3010, with a /data path (can be anything) that returns some data, with cross-origin access allowed.
From the first application, I used axios.get to query localhost:3010/data and printed the result.
Now create a separate Dockerfile and image for each. When you run them on their own they might not work, as they are querying localhost.
Create a task definition in AWS Fargate and launch the task. Access the public IP of the first container and you will receive the data from the second container just by querying localhost, since Fargate puts the containers of a task on the same network.
If you want the code, I can share it.
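To make this concrete, here is a rough sketch of the flow in shell form (the image names and directories are hypothetical placeholders, not taken from the question):

# each compose service is pushed as its own image -- there is no single combined image
docker build -t myuser/first-api ./first-api
docker build -t myuser/second-api ./second-api
docker push myuser/first-api
docker push myuser/second-api

# add both images as containers of the same task in the Fargate task definition;
# containers in one Fargate task share the task's network namespace, so from
# inside the first container the second one is reachable via localhost:
curl http://localhost:3010/data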

Related

Does a celery docker container have to copy the same files from the django container build?

Here is the raw question:
I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers on the server, or do I have to duplicate the data from the django project to every container, or is there another way?
Here is the question with explanation:
I am new to this and I tried to make a Docker Compose file for a Django REST project with Celery, RabbitMQ, and PostgreSQL.
I followed a bunch of tutorials and managed to make it work: the celery container uses a shared volume from Django to start the worker (as does the celery beat container). See the related code from the docker-compose file (edited: following one of the suggestions, I put the code in instead of a picture):
version: '3.8'
services:
  api:
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.prod
    container_name: 'prod-djbackend'
    image: youdeal_djangopart-prod:0.63
    restart: unless-stopped
    expose:
      - '8000'
    env_file: .env
    volumes:
      - static-data:/static
      - media-data:/youdeal_djangopart/media
      - api_vol:/youdeal_djangopart
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
    depends_on:
      - db
      - rabbitmq
  celeryworker:
    container_name: celeryworker
    image: celeryworker:0.51
    build:
      context: ./youdeal_djangopart
      dockerfile: Dockerfile.celery.prod
    env_file: .env
    volumes:
      - ./:/api_vol/
    links:
      - db
      - rabbitmq
      - api
    depends_on:
      - rabbitmq
    environment:
      - "CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//"
volumes:
  api_vol:
Now here is the problem: when I want to deploy the project, the server I am using (https://www.arvancloud.com/en) doesn't really allow shared volumes between containers, and their support doesn't answer well in this regard. I was wondering if I can run the command from the celery container using another container's data (like the django container), since these containers are in the same network of containers, or do I have to duplicate the data from the django project to every container, or is there another way?
I found this and some other related topics, but I couldn't make them work.
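For what it's worth, a common way around the missing shared volume is to build the same project code into the Celery image as well, so each container carries its own copy. A minimal sketch of what Dockerfile.celery.prod could look like under that approach (the base image, the requirements file, and the Celery app name "youdeal" are assumptions, not taken from the question):

# hypothetical Dockerfile.celery.prod: bake the Django project into the worker
# image instead of mounting it from the api container at runtime
FROM python:3.10-slim
WORKDIR /youdeal_djangopart
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# "youdeal" is an assumed Celery app name -- replace with the real one
CMD ["celery", "-A", "youdeal", "worker", "--loglevel=info"]

With the code built into both images, the celeryworker service no longer needs the ./:/api_vol/ volume; the containers only need to share the broker, which they already reach over the compose network.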

Communication between containers in docker (docker-compose)

I am currently setting up a Buildkite build pipeline which runs an API in one docker container and an application in another docker container alongside it, while running Cypress tests (which also run within the second container).
I use the following docker-compose file:
version: '3'
services:
  testing-image:
    build:
      context: ../
      dockerfile: ./deploy/Dockerfile-cypress
    image: cypress-testing
  cypress:
    image: cypress-testing
    ipc: host
    depends_on:
      - api
  db:
    image: postgres:latest
    ports:
      - "54320:5432"
  redis:
    image: redis:latest
    ports:
      - "63790:6379"
  api:
    build:
      context: ../api/
      dockerfile: ./Dockerfile
    image: api
    command: /env/development/command.sh
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
The application runs in the cypress container when started by Buildkite. It then starts the Cypress tests, and some of them pass. However, any test that requires communication with the API fails, because the cypress container is unable to see localhost within the API container. I have entered the API container with a terminal and verified that it works perfectly internally using cURL.
I have tried various URLs within the cypress container to reach the API, which is available on port 8080 within the API container, including api://api:8080 and http://api:8080, but none of them have been able to see the API.
Does anybody know what could be going on here?
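One thing worth checking here (an educated guess, since the API server code is not shown): http://api:8080 can only work if the API process binds to 0.0.0.0 inside its container; a server listening on 127.0.0.1 is unreachable from other containers even via the service name. A quick probe from the cypress container, assuming curl is available in the image:

# does the service name resolve, and is the port reachable from the cypress container?
docker-compose run --rm cypress sh -c "curl -v http://api:8080"
# if this fails while cURL succeeds inside the api container itself, the API is
# most likely bound to 127.0.0.1 and needs to listen on 0.0.0.0 instead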

Two docker containers cannot communicate

I have two docker containers. One container is a database and the other is a web application.
The web application calls the database through this link: http://localhost:7200. However, the web application container cannot reach the database container.
I tried this docker-compose.yml, but it does not work:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    links:
      - graph-db
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
    networks:
      - backend
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build: ./docker/graph-db
    hostname: graph-db
    ports:
      - "7200:7200"
networks:
  backend:
    driver: "bridge"
So I have two containers:
web application: http://localhost:8080/reasoner, and this container calls a database at http://localhost:7200 which resides in a different container.
However, the database container is not reachable by the web container.
SOLUTION
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build: ./docker/graph-db
    ports:
      - "7200:7200"
and replace http://localhost:7200 in web app code with http://graph-db:7200
Do not use localhost to communicate between containers. Networking is one of the namespaces in docker, so localhost inside a container connects only to that container, not to your external host and not to another container. In this case, use the service name, graph-db, instead of localhost in your app to connect to the db.
Your db host is graph-db, and that is the name you should use in the database configuration in your app, e.g. http://graph-db:7200.
From the docker network documentation (bridge networks, the default network driver in Docker):
Imagine an application with a web front-end and a database back-end. If you call your containers web and db, the web container can connect to the db container at db, no matter which Docker host the application stack is running on.

How to configure Dockerfile and docker-compose to deploy two containers to docker hub?

I'm trying to migrate working docker config files (Dockerfile and docker-compose.yml) so that they deploy my working local docker configuration to Docker Hub.
I have tried multiple config file settings.
I have the following Dockerfile and, below it, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or talk to each other via "db" and the database "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify it so I get the same behavior on Docker Hub. Having working local containers is necessary for development, but others need to use these containers from Docker Hub, so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect to see these containers on Docker Hub, but cannot see how these files need to be modified to get that. Thanks.
Add an image attribute.
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    image: docker-hub-username/app
Replace "docker-hub-username" with your username. Then run docker-compose push app

Docker web app can't communicate with API app

I have 2 .NET Core apps running in docker (one is a web API, the other is a web app consuming the web API):
I can't seem to communicate with the API via the web app, but I can access the API by going directly to it in my browser at http://localhost:44389.
I have an environment variable in my web app with that same address, but the app can't reach it.
If I point it at the deployed version of my API on Azure, it's able to communicate with that address. It seems like the problem is the containers talking to each other.
I read that creating a bridge should fix that problem, but it doesn't seem to. What am I doing wrong?
Here is my docker compose file:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://localhost:44389
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
docker-compose automatically creates a network between your containers. As your containers are on the same network, you can connect between them using aliases: docker-compose creates an alias for each service name pointing to the container's IP. So in your case the docker-compose file should look like:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://rc.api
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
Since rc.api opens port 80 in its container, rc.web can access that port via http://rc.api:80, or just http://rc.api (the port can be omitted, since HTTP defaults to 80).
You need to call http://rc.api because you have two containers, and the API container's localhost is different from the web app container's localhost.
The convention is that each service can be resolved by its name as specified in the docker-compose.yml.
Thus you can call the API on its internal port 80 instead of exposing it on a particular port.
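As a quick sanity check (assuming curl is available in the rc.web image), the web container should be able to reach the API by its service name:

docker-compose exec rc.web curl http://rc.api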
