Docker container to container connect: connection refused

When everything runs standalone outside of Docker, core can do a GET from cerner with no problem. However, when everything is dockerized as below, I get:
Get http://cerner:8602/api/v1/patient/search: dial TCP 192.168.240.4:8602: connect: connection refused. The .4 is the IP of the cerner container and .2 is the IP of the core container.
cerner is the name of the container being called from core. If I change the name to the IP address of the host server and use the published ports, it also works fine. It just does not allow container-to-container calls using the container's DNS name or IP. I have tried with and without the private network and get the same thing.
The containers are all scratch Go images.
version: '3.7'
services:
  caConnector:
    image: vertisoft/ca_connector:latest
    ports:
      - "8601:7001"
    env_file:
      - .env.ca_connector
    networks:
      - core-net
  fhir:
    image: vertisoft/fhir_connector:latest
    container_name: cerner
    ports:
      - "8602:7002"
    env_file:
      - .env.fhir_connector
    networks:
      - core-net
  core:
    image: vertisoft/core:latest
    ports:
      - "8600:7000"
    env_file:
      - .env.core
    networks:
      - core-net
networks:
  core-net:
    driver: bridge

You should call the other container's service using the container port, not the host port, for service-to-service communication. In your case that means ports 7000 to 7002, depending on which container you are connecting to by name.
Get http://cerner:8602/api/v1/patient/search: dial TCP 192.168.240.4:8602: connect: connection refused.
As the error shows, the client attempts the connection using the published host port.
For example
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web's configuration. It joins the network myapp_default under the name web.
A container is created using db's configuration. It joins the network myapp_default under the name db.
(In v2.1+, overlay networks are always attachable.)
Each container can now look up the hostname web or db and get back the appropriate container's IP address. For example, web's application code could connect to the URL postgres://db:5432 and start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example, for db, the HOST_PORT is 8001 and the CONTAINER_PORT is 5432 (the Postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is also accessible outside the swarm.
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
Source: the Compose networking documentation.
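Applying this to the original question: core should address the cerner service by its Compose DNS name and its container port 7002 (from the "8602:7002" mapping), not the published port 8602. Below is a minimal Go sketch of that call; it assumes the fhir container really listens on 7002 and reuses the /api/v1/patient/search path from the error message.

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 10 * time.Second}

    // "cerner" is the container_name of the fhir service; Docker's embedded DNS on
    // core-net resolves it. Use the CONTAINER_PORT 7002, not the HOST_PORT 8602.
    resp, err := client.Get("http://cerner:7002/api/v1/patient/search")
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body))
}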

Related

docker-compose: Connect container to "network=host" and to other services [duplicate]

I want to connect two Docker containers defined in a Docker Compose file (app and db) to each other, and one of them (app) should also be connected to the host network.
The containers should be connected to a common user-defined network (appnet or the default) to use the embedded DNS capabilities of Docker networking.
app also needs to be connected directly to the host network so it can receive Ethernet broadcasts (layer 2) on the physical network of the Docker host.
Using both directives network_mode: host and networks in compose together, results in the following error:
ERROR: 'network_mode' and 'networks' cannot be combined
Specifying the network name host in the service without defining it in networks (because it already exists), results in:
ERROR: Service "app" uses an undefined network "host"
Next try: define both networks explicitly and do not use the network_mode: host attribute at service level.
version: '3'
services:
  app:
    build: .
    image: app
    container_name: app
    environment:
      - MONGODB_HOST=db
    depends_on:
      - db
    networks:
      - appnet
      - hostnet
  db:
    image: 'mongo:latest'
    container_name: db
    networks:
      - appnet
networks:
  appnet: null
  hostnet:
    external:
      name: host
The foregoing compose file produces an error:
ERROR: for app network-scoped alias is supported only for containers in user defined networks
How to use the host network, and any other user-defined network (or the default) together in Docker-Compose?
TL;DR you can't. The host networking turns off the docker network namespace for that container. You can't have it both on and off at the same time.
Instead, connect to your database with a published port, or a unix socket that you can share as a volume. E.g. here's how to publish the port:
version: "3.3"
services:
  app:
    build: .
    image: app
    container_name: app
    network_mode: host   # assumed fix: app stays on the host network, so 127.0.0.1 below is the host itself
    environment:
      - MONGODB_HOST=127.0.0.1
  db:
    image: mongo:latest
    container_name: db
    ports:
      - "127.0.0.1:27017:27017"   # publish mongo only on the host's loopback interface
To use the host network you don't need to define it. Just use the ports keyword to declare which port(s) of the service you want to publish on the host network.
Since Docker 18.03+ you can use host.docker.internal to reach your host from within your containers. There is no need to add the host network or mix it with user-defined networks.
Source: Docker Tip #65: Get Your Docker Host's IP Address from in a Container
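As a small illustration, here is a hedged Go sketch of a container calling something that runs on the host through host.docker.internal (the name is available out of the box on Docker Desktop; on Linux you typically have to add it yourself, e.g. via an extra_hosts host-gateway entry). The port 8080 is only a placeholder for whatever the host service listens on.

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // host.docker.internal resolves to the Docker host from inside the container,
    // so anything listening on (or published to) the host is reachable through it.
    client := &http.Client{Timeout: 5 * time.Second}

    resp, err := client.Get("http://host.docker.internal:8080/") // placeholder port
    if err != nil {
        fmt.Println("could not reach the host:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("host answered with", resp.Status)
}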

Connecting to a dockerized REST JaxRS end point from within another container locally

I am attempting to connect to a REST endpoint of a JAX-RS Liferay portlet.
If I try to connect through Postman using http://localhost:8078/engine-rest/process-definition
it works, 200 OK.
I am attempting to connect to the same endpoint from within another Docker container that is part of the same Docker network. I have tried localhost and I receive the error:
java.net.ConnectException: Connection refused (Connection refused)
I have also tried http://wasp-engine:8078 (wasp-engine is the Docker name of the container) and still receive the same error.
Here are the two containers in my compose file:
wasp-engine:
  image: in/digicor-engine:test
  container_name: wasp-engine
  ports:
    - "8078:8080"
  depends_on:
    mysql:
      condition: service_healthy
wasp:
  image: in/wasp:local2
  container_name: Wasp
  volumes:
    - liferay-document-library:/opt/liferay/data
  environment:
    - camundaEndPoint=http://wasp-engine:8078
  ports:
    - "8079:8080"
  depends_on:
    mysql:
      condition: service_healthy
They are both connecting to the mysql fine which is part of the same docker network and referenced via:
jdbc.default.url=jdbc:mysql://mysql/liferay_test
tl;dr
Use http://wasp-engine:8080
The why
In your docker-compose, the ports: - "8078:8080" entry on wasp-engine publishes port 8080 of the container on port 8078 of your host. That is what allows Postman to connect to the container over localhost. However, once inside a Docker container, localhost refers to the container itself, and that port forwarding no longer applies.
With docker-compose you can use the name of a container to target it. You mentioned you tried the URI http://wasp-engine:8078. When you address the container this way, its own port is used, not the port published to the host machine, so the request should target port 8080.
Putting it all together, the final URI should be http://wasp-engine:8080.
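To make the localhost pitfall concrete, here is a small connectivity sketch; it is written in Go only because the rule is language-agnostic, and presumably the same applies to the Java client behind the camundaEndPoint setting. From inside the wasp container, the first URL fails while the second succeeds.

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 5 * time.Second}

    urls := []string{
        "http://localhost:8078/engine-rest/process-definition",   // fails: localhost is this container, and 8078 is a host port
        "http://wasp-engine:8080/engine-rest/process-definition", // works: container name + container port
    }
    for _, u := range urls {
        resp, err := client.Get(u)
        if err != nil {
            fmt.Println(u, "->", err)
            continue
        }
        resp.Body.Close()
        fmt.Println(u, "->", resp.Status)
    }
}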

Docker Grafana with two InfluxDBs: Connection refused

I created a new Docker stack that needs several InfluxDB instances, which I can't connect to my Grafana container at the moment. Here is part of my docker-compose.yml:
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    networks:
      - monitoring
    volumes:
      - grafana-volume:/var/lib/grafana
  influxdb:
    image: influxdb
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    networks:
      - monitoring
    volumes:
      - influxdb-volume:/var/lib/influxdb
  influxdb-2:
    image: influxdb
    container_name: influxdb-2
    restart: always
    ports:
      - 12380:12380
    networks:
      - monitoring
    volumes:
      - influxdb-volume-2:/var/lib/influxdb
When I try to create a new InfluxDB data source in Grafana pointing at influxdb-2, I get a Network Error: Bad Gateway (502), and the log file shows:
2782ca98a4d7_grafana | 2019/10/05 13:18:50 http: proxy error: dial tcp 172.20.0.4:12380: connect: connection refused
Any ideas?
Thanks
#hmm provides the answer.
When you create services within Docker Compose, you:
- are able to access containers by service name; Grafana will reference influxdb-2 by that name.
- are not able to change the port a container exposes. Per #hmm, influxdb-2 must still be referenced on port 8086, because that is the port the container exposes; you can't change it unless you change the image.
- may (but don't need to) publish the containers' ports to the host (using ports: [HOST-PORT]:[CONTAINER-PORT]).
The long and the short of it is that the InfluxDB service in influxdb-2 should be referenced as influxdb-2:8086. If you want to expose this service to the host as well, you could use ports: - 12380:8086. You may change 12380 to anything available on your host, but you cannot change the container port (8086).
The main reason you would include ports: on influxdb-2 at all is for debugging from the host. The grafana service does not require it; it will reach the influxdb-2 service over the network provisioned by Docker Compose on port 8086.
You do want to expose the grafana service on the host because, otherwise, it would be inaccessible to you (from the host). It's akin to public|private. grafana is host public but the influxdb* services may be host private because they are generally only needed by the grafana service.
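As an aside, here is a minimal Go sketch (an illustration only, run from any container attached to the monitoring network) that probes both InfluxDB instances on their shared container port via InfluxDB's /ping health endpoint; only the service name differs, exactly as in Grafana's data source URL http://influxdb-2:8086.

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 3 * time.Second}

    // Both services run the same image and therefore listen on the same container
    // port (8086); the Compose service name is what tells them apart.
    for _, base := range []string{"http://influxdb:8086", "http://influxdb-2:8086"} {
        resp, err := client.Get(base + "/ping")
        if err != nil {
            fmt.Println(base, "unreachable:", err)
            continue
        }
        resp.Body.Close()
        fmt.Println(base, "->", resp.Status)
    }
}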
HTH!

docker postgresql access from other container

I have a docker-compose file which is globally like this.
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22
  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23
  ...
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my PostgreSQL and application in subnetworks to avoid exposing the ports outside my computer/server.
From within the app container, I can't connect to 192.168.22.23. I installed net-tools to use ifconfig/netstat, and the containers don't seem to be able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to Postgres and access the application).
Do you have any advice? The goal is to reach another container's ports without giving up the static IPs (on app at least); here, to connect to PostgreSQL from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
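For the second variant, here is a minimal Go sketch of what the app container could do with those variables; the github.com/lib/pq driver and the user/database names are assumptions for the example, not part of the original setup.

package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
    // Inside the Compose network: PGHOST=db (service name) and PGPORT=5432 (container port).
    dsn := fmt.Sprintf("host=%s port=%s user=postgres dbname=postgres sslmode=disable",
        os.Getenv("PGHOST"), os.Getenv("PGPORT"))

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        fmt.Println("bad connection string:", err)
        return
    }
    defer db.Close()

    if err := db.Ping(); err != nil {
        fmt.Println("cannot reach Postgres:", err)
        return
    }
    fmt.Println("connected to Postgres at", os.Getenv("PGHOST"))
}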
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.

Docker - Connect my docker image to another computer outside docker

I am new to Docker.
I have an image containing the Yii framework; both the front end and the back end use Yii.
Here is my docker-compose.yml file:
version: '2'
services:
  frontend:
    build: ./dockerfile-frontend
    container_name: erp2_frontend
    links:
      - backend
    environment:
      ENABLE_ENV_FILE: 1
      ENABLE_LOCALCONF: 1
      API_TOKEN: "4022dfde02359429d905066e557245c760f68f5c"
    ports:
      - "8080:80"
  backend:
    build: ./dockerfile-backend
    container_name: erp2_backend
    environment:
      ENABLE_ENV_FILE: 1
Now I want to connect my backend container to an MSSQL server that is outside the Docker network. The machine running the MSSQL server is on the local network of my Docker host, which runs Ubuntu Linux. How can I connect the backend to the MSSQL server? Is that possible?
Thanks for any reply.
I don't see a network configuration in your docker-compose file, which means the default bridge network will be used.
You can simply specify the external MSSQL IP and port, and your container will be able to communicate with MSSQL. However, you can't initiate a connection from the outside, as you have not exposed or mapped any port on the backend service.
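As a quick, language-agnostic check that the backend container can actually reach that machine, here is a small Go sketch that just opens a TCP connection; the IP 192.168.1.50 and the default MSSQL port 1433 are placeholders for your real values.

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Placeholder address: replace with the real IP of the MSSQL server on your LAN.
    addr := "192.168.1.50:1433"

    conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    if err != nil {
        fmt.Println("cannot reach the MSSQL host:", err)
        return
    }
    conn.Close()
    fmt.Println("TCP connection to", addr, "succeeded")
}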

Resources