Two docker containers cannot communicate

I have two docker containers. One container is a database and the other is a web application.
The web application calls the database at http://localhost:7200. However, the web application container cannot reach the database container.
I tried this docker-compose.yml, but it does not work:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    links:
      - graph-db
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
    networks:
      - backend
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build:
      ./docker/graph-db
    hostname: graph-db
    ports:
      - "7200:7200"
networks:
  backend:
    driver: "bridge"
So I have two containers:
web application: http://localhost:8080/reasoner; this container calls a database at http://localhost:7200, which resides in a different container.
However, the database container is not reachable from the web container.
SOLUTION
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - graph-db
    ports:
      - "8080:8080"
    environment:
      - WAIT_HOSTS=graph-db:7200
  graph-db:
    # will build ./docker/graph-db/Dockerfile
    build:
      ./docker/graph-db
    ports:
      - "7200:7200"
and replace http://localhost:7200 in the web app code with http://graph-db:7200.

Do not use localhost to communicate between containers. Networking is one of the namespaces in Docker, so localhost inside a container connects only to that container, not to your external host and not to another container. In this case, use the service name, graph-db, instead of localhost in your app to connect to the db.

Your db host is graph-db, and that is the name you should use in the database configuration in your app, e.g. http://graph-db:7200.
From docker network documentation (bridge networks - the default network driver in Docker):
Imagine an application with a web front-end and a database back-end.
If you call your containers web and db, the web container can connect
to the db container at db, no matter which Docker host the application
stack is running on.
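One way to avoid hard-coding the hostname in the application at all is to pass the database URL into the web container through an environment variable, alongside the existing WAIT_HOSTS entry. A minimal sketch, assuming a GRAPH_DB_URL variable that the web app would have to be changed to read (the name is only an illustration):

services:
  web:
    environment:
      - WAIT_HOSTS=graph-db:7200
      # hypothetical variable; the web app code must read it instead of a hard-coded URL
      - GRAPH_DB_URL=http://graph-db:7200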

Related

Docker Containers' Network Access Configuration

I'm struggling to configure my docker-compose file to achieve the structure below. The web container needs to be accessible from virtual PCs and physical devices (local and external), but the Keycloak container needs to be accessible only by the web container. How can I achieve this?
Desired Network Structure
The web container starts a Flask app exposed on port 5000.
My docker-compose file is currently:
version: '2'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - .:/app
    depends_on:
      - keycloak
  keycloak:
    container_name: keycloak
    image: jboss/keycloak:13.0.1
    ports:
      - '8080:8080'
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
If a container doesn't have ports:, it (mostly*) isn't accessible from outside of Docker. If your goal is to have the container only be accessible from other containers, you can just delete ports:.
In comments you ask about the container being reachable from other containers. So long as both containers are on the same Docker network (or the same Compose-provided default network) they can communicate using the other container's Compose service name and the port the process inside the container is listening on. ports: aren't required, and they're ignored if they're present.
So in your setup, it should be enough to remove the ports: from the keycloak container.
version: '2.4'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    depends_on:
      - keycloak
    # can call keycloak:8080
  keycloak:
    image: jboss/keycloak:13.0.1
    environment: { ... }
    # no ports:, container_name: is also unnecessary
(*) On a native-Linux host, the container's Docker-internal IP address will be reachable from the same host, but not other hosts, if you have some way of finding it (including port-scanning 172.16.0.0/20). If someone can run docker commands then they can also easily attach other containers to the same network and gain access to the container, but if they can run docker commands then they can also pretty straightforwardly root the entire host.

docker compose pushing to docker hub

I have a docker-compose file with 3 services. The YAML file works, but how do I push this to a registry as a single image and retrieve it in AWS Fargate so it spins up the containers?
What are my options for spinning up multiple containers when the images are pushed to separate repositories?
Below is my docker-compose.yaml file:
version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
  proxy1:
    build:
      context: ./proxy
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - dataapidocker
    ports:
      - "9999:80"
The method I tried was creating two applications:
First application: a Node.js Express API server running on port 3001, with axios attached.
Second application: a Node.js Express API server running on port 3010, with a /data path (can be anything) that returns some data, with cross-origin access allowed.
From the first application, I used axios.get to query localhost:3010/data and printed the result.
Now create a separate Docker image for each. When you run them locally they might not work, since they are querying localhost.
Create a task definition in AWS Fargate and launch the task. Access the public IP of the first container, and you will be able to receive the data from the second container just by querying localhost, because Fargate puts the containers of a task on the same network.
If you want the code, I can share it.
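On the pushing side of the question: a compose file cannot be pushed as a single image, because each service is its own image. One way to handle separate repositories is to give every service that has a build: section an image: name pointing at its own repository; docker-compose build followed by docker-compose push will then push each of them individually (assuming you are already logged in to the registry). A rough sketch with placeholder ECR repository names:

version: '3.4'
services:
  dataapidocker:
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    # placeholder repository name
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/dataapidocker:latest
  proxy1:
    build:
      context: ./proxy
      dockerfile: Dockerfile
    # placeholder repository name
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/proxy1:latest
  # the db service uses a stock image and does not need to be pushed

Each pushed repository then maps to one container definition in the Fargate task definition, which is how you end up with multiple containers in one task.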

Docker-Compose, How To Connect Java Application With Custom Docker Network On Redis Container

I have a Java application that connects to an external database through a custom Docker network, and I want to connect a Redis container.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
Nothing works with my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I add the network in docker-compose instead of declaring it as external?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container onto the same network:
  app-redis:
    image: redis:5.0.9-alpine
    networks:
      - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
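As for the PS in the question: if you do want Compose to create a network with that specific subnet, instead of pointing at an externally created one, you can declare it in the compose file itself. A sketch (exact support for ipam options varies a little between Compose file format versions):

networks:
  front:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24

Services that should use it would then list front under their networks: key, as in the original file.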

Docker web app can't communicate with API app

I have 2 .net core apps running in docker (one is a web api, the other is a web app consuming the web api):
I can't seem to communicate with the api via the web app, but I can access the api by going directly to it in my browser at http://localhost:44389
I have an environment variable in my web app that holds that same address, but the app can't reach it.
If I point it at the deployed version of my API on Azure, it is able to communicate with that address, so the problem seems to be the containers talking to each other.
I read that creating a bridge should fix that problem but it doesn't seem to. What am I doing wrong?
Here is my docker compose file:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://localhost:44389
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
docker-compose automatically creates a network between your containers. Since your containers are on the same network, you can connect between them using aliases: docker-compose creates an alias mapping each service name to its container's IP. So in your case the docker-compose file should look like this:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://rc.api
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
Since rc.api listens on port 80 inside its container, rc.web can reach it at http://rc.api:80, or simply http://rc.api (the port can be omitted because 80 is the default HTTP port).
You need to call http://rc.api because you have two containers, and the API container's localhost is different from the web app container's localhost.
The convention is that each service can be resolved by the name specified in the docker-compose.yml.
Thus you can call the API on its internal port 80 instead of going through the port it exposes on the host.

docker - multiple databases on local

I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that in each app I am spinning up a separate db instance, when in reality I just want one instance that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way docker-compose can come up for both, and you can access both apps at the same time. But remember that the two apps will be placed on separate bridge networks.
How docker-compose up works:
Suppose your app is in a directory called myapp with a docker-compose.yml.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
Your current configuration creates two volumes, two database containers and two apps. If you can make the apps run on the same network and run the database as a single container, it will work.
I don't think you really want two database containers and two volumes.
Approach 1:
docker-compose.yml as a single compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further and still run them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app by adjusting its docker-compose file accordingly.
There are many other ways to arrange the docker-compose files; see "Configure the default network" and "Use a pre-existing network" in the Compose networking documentation.
You're exposing the same port (1433) to the host machine twice (this is what ports: does). That is not possible, as it would bind the same port on your host twice (that's what the error message says).
I think the most common way in these cases is to link your dbs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port. This is also not possible, for the same reason. I would suggest that you change one of them to "3001:3000", so you can access that application on port 3001.
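For example, the second app's web service could publish its container port 3000 on host port 3001 (host port on the left, container port on the right):

  web:
    ports:
      - "3001:3000" # reachable from the host at localhost:3001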

Resources