Docker-compose internal communication using endpoints - docker

I'm working on getting two different services defined in a single docker-compose.yml to communicate with each other within Docker Compose.
The two services are regular NodeJS servers (app1 & app2). app1 receives POST requests from an external source, and should then send a request with information based on the initial POST request to the other NodeJS server, app2.
The challenge I'm facing is how to make the two NodeJS containers communicate with each other without hardcoding a specific container name. The only way I can currently get the two containers to communicate is to hardcode a URL like http://myproject_app1_1, which directs the POST request from app1 to app2 correctly, but due to the way Docker increments container names, this doesn't scale well, nor does it cope with containers crashing and being replaced.
Instead I'd prefer to send the POST request to something along the lines of http://app2, or some similar way of aliasing a group of containers, so that no matter how many instances of the app2 container exist, Docker will pass the request to one of the running app2 containers.
Here's a sample of my docker-compose.yml file:
version: '2'
services:
  app1:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app1
    command: npm start
  app2:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app2
    command: npm start
  # databases [...]
Thanks in advance.

OK, this is really two questions.
First: how to avoid hardcoding container names.
You can use environment variables, like this:
Node.js file:
// read app2's address from the environment instead of hardcoding it
const http = require('http');
const app2Address = process.env.APP2_ADDRESS;
const response = http.request(app2Address);
Docker Compose file:
app1:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app1
  command: npm start
  environment:
    - APP2_ADDRESS=${app2_address}
app2:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app2
  command: npm start
  environment:
    - HOSTNAME=${app2_address}
and a .env file like:
app2_address=myapp2.com
You can also put a wildcard placeholder in your application config file and substitute the real hostname when the container starts.
For that you need to create an entrypoint.sh and use sed, like:
sed -i "s/APP2_HOSTNAME_WILDCARD/${app2_address}/g" /app1/config.js
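A minimal entrypoint.sh sketch of that substitution (the APP2_HOSTNAME_WILDCARD placeholder and the /app1/config.js path are just the example values from above; APP2_ADDRESS is the variable passed in via the compose file's environment section):
#!/bin/sh
# entrypoint.sh: replace the placeholder with the real hostname at container start
set -e
sed -i "s/APP2_HOSTNAME_WILDCARD/${APP2_ADDRESS}/g" /app1/config.js
# then hand control over to the container's main command (e.g. npm start)
exec "$@"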
Second: how to make transparent load balancing.
You need to use an HTTP load balancer, such as:
haproxy
nginx as a load balancer
There is a hello-world tutorial on how to set up load balancing with Docker.
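For instance, a minimal nginx load-balancer config might look like the sketch below. The upstream hostnames and the port 3000 are assumptions; substitute the names and port your app2 instances actually use:
upstream app2_backend {
    # one entry per app2 instance; nginx round-robins between them by default
    server app2_1:3000;
    server app2_2:3000;
}
server {
    listen 80;
    location / {
        proxy_pass http://app2_backend;
    }
}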

When you run two containers from one compose file, Docker automatically sets up an internal DNS that lets containers reference each other by the service names defined in the compose file (assuming they are on the same network). So referencing http://app2 from the first service should just work.
See this example proxying requests from proxy to the backend whoamiapp by just using the service name.
default.conf
server {
    listen 80;
    location / {
        proxy_pass http://whoamiapp;
    }
}
docker-compose.yml
version: "2"
services:
  proxy:
    image: nginx
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "80:80"
  whoamiapp:
    image: emilevauge/whoami
Run it using docker-compose up -d and try running curl <dockerhost>.
This sample uses the default network with docker-compose file version 2. You can read more about how networking with docker-compose works here: https://docs.docker.com/compose/networking/
Perhaps your configuration of the container_name property somehow interferes with this behaviour? You should not need to define it yourself.
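Applied to the original question, app1 can then reach app2 by service name alone. Here is a sketch in Node.js; the port 3000 and the /notify path are hypothetical, so substitute whatever app2 actually listens on:
const http = require('http');

// 'app2' resolves via Docker's internal DNS to one of the app2 containers
const req = http.request(
  { hostname: 'app2', port: 3000, path: '/notify', method: 'POST' },
  (res) => console.log('app2 responded with status', res.statusCode)
);
req.end(JSON.stringify({ forwarded: true }));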

Related

Is it possible for a webservice to reach an apiservice via container_name on Docker? E.g. http://container_name:port

I'm new to Docker and I composed a web service, together with the other services it uses, in a docker-compose file. I wonder if it's possible for the web service to access the other services (e.g. the API service) via container_name,
like http://container_name:8080. The container_name is specified in the docker-compose file, and the web service can currently reach the other services via http://localhost:port. I want to replace localhost with container_name; can Docker do this mapping via some configuration? I tried depends_on and links and neither of them worked.
Part of my docker-compose.yml:
version: "3.7"
services:
  mywebservice:
    container_name: mywebservice
    ports:
      - "8080:80"
    depends_on:
      - myapiservice
  myapiservice:
    container_name: myapiservice
    ports:
      - "8081:80"
You can resolve your container name to the container IP via the hosts file:
192.168.10.10 mywebservice
You can keep this file in your application source and have Docker copy it to /etc.
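Alternatively, Compose can write such an entry into the container's /etc/hosts for you via extra_hosts. A sketch (the IP is the example address from above and must match the actual container IP):
mywebservice:
  extra_hosts:
    - "myapiservice:192.168.10.10"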

(Vue) Axios API call doesn't work when using the Docker networking hostname

I have created a containerised web app.
The front end is running VueJS in one container on port 8080, and the back end is running Flask in a separate container on port 5000.
I have set up a Docker Compose file to spin up both containers:
version: '3'
services:
  front-end:
    build:
      context: ./
      dockerfile: front.Dockerfile
    volumes:
      - DataVolume:/app/confgen-plus/src/assets/Downloads
    ports:
      - 8080:8080
  back-end:
    build:
      context: ./
      dockerfile: back.Dockerfile
    volumes:
      - DataVolume:/app/confgen-plus/src/assets/Downloads
volumes:
  DataVolume:
    driver: local
Once I have spun up the containers, I am able to successfully access the front-end UI by going to: http://localhost:8080
On the home page I have a button which when pressed runs the following code:
fetchData: function(){
  this.$axios.get('http://back-end:5000/templates', {
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'POST, GET, PUT, OPTIONS, DELETE',
      'Access-Control-Allow-Headers': 'Access-Control-Allow-Methods, Access-Control-Allow-Origin, Origin, Accept, Content-Type',
    }
  }).then((response) => {
    this.templates = response.data;
  });
}
The API call is addressed to back-end:5000/templates. 'back-end' is the service name given to the back-end container in the Docker Compose file, which should also be the hostname automatically assigned to the back-end container on the Docker virtual network.
The code above takes the JSON data found at /templates on the back-end and assigns it to the data variable called 'templates'. I then use two-way binding to display the contents of the JSON file in the front-end UI simply using:
{{ templates }}
However, with the setup shown above this fails and I am not sure why. I have tested a few things to narrow down the problem:
First, I exposed the backend on port 5000 and mapped it to port 5000 on the host machine. I then amended the code above to address the API call to http://localhost:5000/templates, which let me test whether the error was somewhere in the front-end code. This worked successfully, showing there was no error there.
Next, I used docker exec to enter the front-end container and tested whether the virtual Docker network was correctly set up, using the following command: curl http://back-end:5000/test
The /test endpoint just returns a string confirming that the back-end has loaded correctly. I got a successful response. I also ran curl http://back-end:5000/templates to see if I would get back the JSON data that was meant to be called by the front end. Once again the test was successful. So this proves that the virtual network is correct too, and the front-end container can access the back-end container via the hostname back-end:5000/templates.
The error in the browser console (screenshot omitted) shows that the call is going to the correct address, but it fails. Any help would be greatly appreciated.
I think, since you are using docker-compose file version ~3, you must add network aliases.
See the docker-compose reference.
I think it will work like this:
services:
  front-end:
    build:
      context: ./
      dockerfile: front.Dockerfile
    volumes:
      - DataVolume:/app/confgen-plus/src/assets/Downloads
    ports:
      - 8080:8080
    networks:
      default:
        aliases:
          - front-end
  back-end:
    build:
      context: ./
      dockerfile: back.Dockerfile
    volumes:
      - DataVolume:/app/confgen-plus/src/assets/Downloads
    networks:
      default:
        aliases:
          - back-end
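If curl is available in the front-end image, you can then verify that the alias resolves with something like this (a sketch using the service names above):
docker-compose exec front-end curl http://back-end:5000/test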

How to get two Docker containers, one running a Flask service and one a Go service, to talk to each other?

I have a Flask service running through docker-compose on port 5000. Similarly, I have a Go service running through another docker-compose on port 8000. The Go service needs to call a Flask API running on port 5000, and I am having trouble getting it to do so. I have tried adding a Docker network but failed. What are the pros and cons of running both services through different docker-compose files compared to a single one? (I have not been able to successfully run them in a single docker-compose either, btw.) docker ps shows both containers running.
Flask Docker compose
version: '3' # version of compose format
services:
  bidders:
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/usr/src/bidders # mount point
    ports:
      - 5000:5000 # host:container
Go Docker Compose
version: '3'
services:
  auctions:
    container_name: auctions
    build: .
    command: go run main.go
    volumes:
      - .:/go/src/auctions
    working_dir: /go/src/auctions
    ports:
      - "8000:8000"
Third network docker-compose.yml
#docker-compose.yml
version: '3'
networks:
  - second_network
networks:
  second_network:
    driver: bridge
With a single docker-compose.yml it is easier to put both services on the same network. What issue did you run into when you tried this? Also make sure that both your Flask and Go applications bind to 0.0.0.0 in the code itself, not 127.0.0.1, so they can be reached from outside their containers.
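For the Flask side, a minimal sketch of that binding using Flask's CLI (assuming the app is started with flask run):
flask run --host=0.0.0.0 --port=5000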
With two docker-compose.yml files you have two options:
Create a network through one of the files, and have the container in the other file join it as an external network.
Create a network using docker network create, and define it as an external network in both files.
There is a similar question whose answer you can check here, example included.
You can check Networking in Compose for more information.
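A minimal sketch of the second option, using the service names from the compose files above (the network name shared_net is hypothetical):
# on the host, create the network once
docker network create shared_net
# Flask docker-compose.yml (excerpt)
services:
  bidders:
    networks:
      - shared_net
networks:
  shared_net:
    external: true
# Go docker-compose.yml (excerpt)
services:
  auctions:
    networks:
      - shared_net
networks:
  shared_net:
    external: true
The Go service can then call the Flask API at http://bidders:5000.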

Why does my Nginx proxy manage to find my node webserver, when my docker-compose doesn't expose any webserver port on the network?

My node webserver uses express and listens on port 5500.
My docker-compose file doesn't expose any port of my node webserver (named webserver), as follows:
version: "3"
services:
  webserver:
    build: ./server
  form:
    build: ./ui
    ports:
      - "6800:80"
    networks:
      - backend # I keep the backend network just for legacy reasons; in fact webserver isn't in this network
    command: [nginx-debug, '-g', 'daemon off;']
networks:
  backend:
My Nginx reverse proxy is as follows:
location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: the request should fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is right, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the Nginx / Docker flow here. Would someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system, so containers can reach each other by name or network alias; Compose registers each service under its name from the YAML file, so within this set of containers, webserver and form would both be resolvable hostnames.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
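You can check that name resolution yourself from inside one of the containers, e.g. (a sketch; getent is present in common Debian-based images such as nginx):
docker-compose exec form getent hosts webserver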

How to reach services with the URL in Docker-Compose

I'm trying to set up an application environment with two different docker-compose.yml files. The first one creates its services in the default network elastic-apm-stack_default. To reach the services of both docker-compose files, I used the external keyword within the second docker-compose file. Both files look like this:
# elastic-apm-stack/docker-compose.yml
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.2.4
    build: ./apm_server
    ports:
      - 8200:8200
    depends_on:
      - elasticsearch
      - kibana
  ...
# sockshop/docker-compose.yml
services:
  front-end:
    ...
    networks:
      - elastic-apm-stack_default
networks:
  elastic-apm-stack_default:
    external: true
Now the front-end service in the second file needs to send data to the apm-server service in the first file. I therefore used the URL http://apm-server:8200 in the source code of the front-end service, but I always get a connection refused error. If I define all services in a single docker-compose file it works, but I want to keep the docker-compose files separate.
Could anyone help me? :)
By default, Docker's bridge network uses the gateway address 172.17.0.1.
So, since apm-server publishes port 8200 to the host, you may use the URL
http://172.17.0.1:8200
to get access to your apm-server container.
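If you want the external-network approach from the question to work instead, it is worth verifying that both containers actually joined the same network (using the network name from the question):
docker network inspect elastic-apm-stack_default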
