Connecting dockerized apps' networks to make API calls - docker

I have a bit of a problem connecting the dots.
I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.
Projects:
Project1 = using project1_appnet (bridge driver)
Project2 = using project2_appnet (bridge driver)
Project3 = using project3_appnet (bridge driver)
On my local machine, I have these 3 projects in 3 separate folders. Each project has its own app, db and cache services.
This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, only with different image names and volume paths.)
version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Question:
How can I make them talk to one another via API calls? (Both on my local machine for development and in the prod environment.)
In production the setup will be a bit different: they will be on different machines, but still in the same VPC, or possibly even communicating over the public network. What is the setting for that?
Note:
I have been looking at links, but apparently it is deprecated for v3 / not really recommended.
I tried curl from the project1 container to the project2 container:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused

If your final setup will have each service running on a physically different system, there isn't really any choice: one system can't directly access the Docker network on another system, so the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will be different in different environments, I'd suggest making that value a configured environment variable.
environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
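Since the value differs per environment, a minimal sketch of wiring it up via Compose variable substitution (the SERVICE_2_URL variable name is just an example; the value would come from a per-environment .env file or the shell):

services:
  app:
    environment:
      # value comes from .env / the shell; falls back to a default if unset
      SERVICE_2_URL: '${SERVICE_2_URL:-http://service-2-host.example.com/}'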
Once you've settled on that, you can mostly use the same setup for a single-host deployment. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:
environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'
(If you use Linux on the desktop you will have to know some IP address for the host; not localhost because that means "this container", and not the docker0 interface address because that will be on a specific network, but something like the host's eth0 address.)
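On a Linux desktop, one hedged workaround (it requires Docker 20.10 or newer) is to map host.docker.internal to the host yourself with the special host-gateway value, so the same URL works there as well; a minimal sketch:

services:
  app:
    extra_hosts:
      # host-gateway is resolved by Docker to the host's IP on the bridge (Docker 20.10+)
      - "host.docker.internal:host-gateway"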
Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.
version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
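For reference, a sketch of that default-network variant (assuming the other project lives in a directory named app2, so Compose names its default network app2_default):

networks:
  app2_default:
    external: true
services:
  app:
    networks:
      - default
      - app2_default
    environment:
      SERVICE_2_URL: 'http://app2_app_1/'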
You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.

If you run this docker compose locally, then since app and db are on the same network (appnet), app should be able to talk to db using the service name and container port, i.e. db:3306 (not localhost:${DB_PORT}, since localhost inside a container refers to the container itself).
In production, if app and db are on different machines, app would probably need to talk to the database using an IP address or domain name.

Considering that you are using different machines for the different Docker deployments, you could put them behind a regular webserver (Apache2, Nginx) and then route the traffic from the specific domain to $APP_PORT using a simple vhost, as sketched below. I prefer doing that over directly exposing the container to the network. This way you would also be able to host multiple applications on the same machine (if you like to). So I suggest you should not try to connect Docker networks, but "regular" ones.
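For illustration, a minimal Nginx vhost sketch (the domain and upstream port are placeholders; it assumes the app container publishes $APP_PORT as 8050 on that machine):

server {
    listen 80;
    server_name app2.example.com;   # placeholder domain pointed at this machine

    location / {
        # forward to the container's published port ($APP_PORT) on this host
        proxy_pass http://127.0.0.1:8050;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}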

I was playing around with inspect and cURL. I think I found the solution.
Locally:
On my local machine, I inspected the container and viewed NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1
Then I got the exposed port, which is 8050
Then I did a curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container. OR docker exec -it project1_app_1 curl 172.25.0.1:8050/login
Vice versa, I did curl 172.25.0.1:80 for app2 -> app1, OR docker exec -it project2_app_1 curl 172.25.0.1:80
The only issue is that the Gateway value changes when we restart via docker-compose up -d
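Since that value changes, a small sketch (using the container names above) for looking the gateway up at run time instead of hard-coding it:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' project2_app_1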
Production, likewise:
I am not that pro with networking and stuff. My estimate for production would be:
Do curl app2-domain.com, which is pointed at the app by the webserver, as they are on their own machines (even behind a load balancer).

Related

How can I enforce all containers work together with domain localhost

I have 8 frontend apps and 12 backend servers. The frontends are Vue.js or AngularJS; the backends are ASP.NET Core 3.1, plus SQL Server, Redis and other services.
All services have similar Docker container configs, except for logging, ports and so on. They all run in the same named network, mynetwork:
abcservice:
  image: ${DOCKER_REGISTRY-}abcservice
  container_name: abcServer
  hostname: abcservice
  build:
    context: .
    dockerfile: abcService/Dockerfile
  networks:
    - mynetwork
but I have to use http://host.docker.internal:{portnumber} so that all containers can work together. How can I force all apps to work together on http://localhost:{portnumber}?
Let's say I have a simple ASP.NET Core app: if I start it WITHOUT Docker, it can access SQL Server (running in Docker) and Redis (running in Docker) at http://localhost:port, but once I start it with Docker, I have to access the app via http://host.docker.internal:port, otherwise it cannot reach SQL and Redis. Because inside a container localhost means the container itself, I need some config to let a container reach the other containers via localhost and the specified ports.
Appreciate any help.
Option 1: Environment variables
You can use ports for all services and use environment variables in a .env file to switch between hostnames. The .env file works out of the box with Docker Compose, see the docs.
Using ports:
ports:
  - 6379:6379
Sample .env file:
REDIS_HOST=redis
REDIS_PORT=6379
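A minimal sketch of how a service could consume those variables (the service names are just examples): inside Docker you set REDIS_HOST=redis in .env, and when running the app directly on the host you set REDIS_HOST=localhost instead.

services:
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  abcservice:
    build: .
    environment:
      # values are substituted from the .env file / the shell environment
      REDIS_HOST: ${REDIS_HOST}
      REDIS_PORT: ${REDIS_PORT}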
Option 2: using network_mode host
Another option is to apply host network settings by using network_mode on each service. That makes the service use the host's network stack instead of running in an isolated network.
network_mode: host

Docker-compose: Docker containers can't connect using service names

I have 3 containers. One is a lighttpd server serving static content (front). The other 2 are Flask servers handling the backend (back and model).
This is my docker-compose.yml
version: "3"
services:
  front:
    image: ecd3:latest
    ports:
      - 4200:80
    tty: true
    links:
      - "back"
    depends_on:
      - back
    networks:
      - mynet
  back:
    image: esd3:latest
    ports:
      - 5000:5000
    links:
      - "model"
    depends_on:
      - model
    networks:
      - mynet
  model:
    image: mok:latest
    ports:
      - 5001:5001
    networks:
      - mynet
networks:
  mynet:
I'm trying to send an HTTP request to my Flask server (back) from my frontend (front). I have bound the Flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint).
Trying to curl the Flask server from inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the Flask server from inside the frontend container works, so the containers can reach each other on the network.
Why can't I connect to my flask server from my frontend?
We discovered several things in the comments. Firstly, you had a proxy problem that prevented one container from using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, and so front and back are not available here. Even if they were (e.g. if manual entries were made in the hosts file), those IPs would not work, since there is no direct networking route from the host to the containers.
Instead, those containers are exposed by a Docker device that proxies connections from a custom localhost port down to those containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may have some CORS issues with that though, since browsers will see these as different servers.
Moreover, if you go live, you may have some problems with mobile networks and corporate firewalls - you will probably want your frontend app to sit on port 443, but since it is a separate server, you will either need a different IP address for your API, so it can also go on 443, or you will need to use another port. A clean solution for this is to put a frontend proxy in front of both containers, and then just expose the proxy in Docker. This will send HTTP requests from the outside to the correct container, depending on a filtering criteria set by you. I recommend Traefik for this, but there are undoubtedly several other approaches.
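By way of illustration, a rough Traefik sketch (Traefik v2 syntax; the routing rules and ports are assumptions based on your compose file, not a drop-in config): the proxy is the only published service, / goes to front and /api goes to back.

services:
  proxy:
    image: traefik:v2.9
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Traefik discovers the other containers through the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.front.rule=PathPrefix(`/`)
      - traefik.http.services.front.loadbalancer.server.port=80
  back:
    image: esd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.back.rule=PathPrefix(`/api`)
      - traefik.http.services.back.loadbalancer.server.port=5000

With something like that in place, the frontend can call its API on the same origin (e.g. /api/...), which also sidesteps the CORS issue mentioned above.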

Failing consul health check on local machine

I have Consul running via docker using docker-compose
version: '3'
services:
  consul:
    image: unifio/consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./config:/config
      - ./.data/consul:/data
    command: agent -server -data-dir=/data -ui -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
  mongo:
    image: mongo
    ports:
      - "27017:27017"
      - "27018:27018"
    volumes:
      - ./.data/mongodb:/data/db
    command: mongod --bind_ip_all
and I have a Node.js service running on port 6001 which exposes a /health endpoint for health checks.
I am able to register the service via this consul package.
However, visiting the Consul UI I can see that the service has a status of failing because the health check is not working.
The UI shows this message:
Get http://127.0.0.1:6001/health: dial tcp 127.0.0.1:6001: getsockopt: connection refused
I am not sure exactly why it is not working, but I sense that I may have misconfigured Consul.
Any help would be great.
Consul is running in your docker container. When you use 127.0.0.1 in this container, it refers to itself, not to your host.
You need to use a host IP that is known to your container (and of course make sure your service is reachable and listening on this particular IP).
In most cases, you should be able to contact your host from a container through the default docker0 bridge ip that you can get with ip addr show dev docker0 from your host as outlined in this other answer.
The best solution IMO is to discover the gateway that your container is using, which will point to the particular bridge IP on your host (i.e. the bridge created for your docker-compose project when starting it). There are several methods you can use to discover this IP from inside the container, depending on the installed tooling and your Linux flavor.
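For example, a common sketch (assuming iproute2 and awk are available inside the container) is to read the default route, which points at the bridge IP of the compose network on the host:

ip route | awk '/default/ { print $3 }'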
While Zeitounator's answer is perfectly fine and answers your direct question, the "indirect" solution to your problem would be to manage the nodejs service through docker-compose.
IMHO it's a good idea to manage all services involved using the same tool, as then their lifecycles are aligned and it's also easy to configure them to talk to each other (at least that's the case for docker-compose).
Moreover, letting containers access services on the host is risky in terms of security. In production environments you usually want to shield host services from containers, as otherwise the containers lose their "containment" role.
So, in your case you would need to add the nodejs service to docker-compose.yml:
services:
  (...)
  nodejs-service:
    image: nodejs-service-image
    ports:
      - "6001:6001" # only required if you need to expose the port on the host
    command: nodejs service.js
And then your Consul service would be able to access nodejs-service through http://nodejs-service:6001/health.

How to get docker oracle container ip into the java application using docker-compose?

Below is the docker-compose config I am using:
integration_test:
  image: service:1.0.0
  volumes:
    - .:/service
  links:
    - oracle_container
  # used volumes_from as a workaround to wait until the oracle container starts
  volumes_from:
    - oracle_container
  container_name: integration_test
  tty: true
  environment:
    USER: go
  command: ["mvn clean install -DskipTests"]
oracle_container:
  image: image_name:1.0.0
  container_name: oracle_container
  ports:
    - "49161:1521"
I want to make the two containers talk: application --> oracle.
Both containers are running on the same machine, and I used the below JDBC string to connect to Oracle from the application:
jdbc:oracle:thin:@localhost:49161/xe
But I am not able to connect to Oracle, and it throws a SQLRecoverableException.
As per my understanding this comes under Docker networking, and I have used links to connect the two containers, but the issue is with the connection string, and more specifically with the IP of the oracle container.
Can someone help with this issue?
You need to use
jdbc:oracle:thin:@oracle_container:1521/xe
In docker-compose, each container can reach the others by their service name or container name. You should not use the host port; use the container port only.

How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?

In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup is getting its own IP range, so that in theory I could call my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces)
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It'd be great to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose knows both which network it created and its IP, so this shouldn't be a problem; however, I couldn't find an easy way to do it. Is there a way, or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, respectively published as "8080:8080" and "3306:3306". Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is exposed on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on the 172.18.0.0/16 network. In this case I can reach my application server on any of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I start the second application now, it creates a second network, app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there had been a possibility to restrict each application to its network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change the port mappings to 8081 and 3307 respectively to run both applications at the same time.
In your service configuration, in docker-compose.yml:
ports:
  - "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above runs nginx on the loopback interface. You can use a similar port mapping inside of a docker-compose.yml file. e.g.:
ports:
  - "127.0.0.1:80:80"
docker-compose doesn't have any special abilities to infer which network interface to use based on the docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to be for a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty implementing your goal, most would either map different ports on the host to different containers, so 13307:3307 for container a, 23307:3307 for container b, 33307:3307 for container c, or whatever number scheme makes sense for you. And when dealing with HTTP traffic, then using a reverse proxy like traefik often makes the most sense.
It can be achieved by configuring the network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to specify a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which address each project binds to in order to connect to it - this is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml:
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note that if you have another service binding to *:80, for instance Apache running on the host, it will also bind on the docker-compose networks' interfaces and you will not be able to use this port.
To run the above two services:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
