Docker container has no access to localhost webhook

I'm working on our webhook locally before building a Docker container for it, and I want my (Linux) container to communicate with it using host.docker.internal.
It had been working before, but lately, for some reason, I'm getting this error from our graphql-engine, Hasura:
{
  "timestamp": "2019-11-05T18:45:32.860+0000",
  "level": "error",
  "type": "webhook-log",
  "detail": {
    "response": null,
    "url": "http://host.docker.internal:3000/simple/webhook",
    "method": "GET",
    "http_error": {
      "type": "http_exception",
      "message": "ConnectionFailure Network.Socket.getAddrInfo (called with preferred socket type/protocol: AddrInfo {addrFlags = [AI_ADDRCONFIG], addrFamily = AF_UNSPEC, addrSocketType = Stream, addrProtocol = 0, addrAddress = <assumed to be undefined>, addrCanonName = <assumed to be undefined>}, host name: Just \"host.docker.internal\", service name: Just \"3000\"): does not exist (Temporary failure in name resolution)"
    },
    "status_code": null
  }
}
Here's my docker compose:
version: '3.6'
services:
  postgres:
    image: postgres:11.2
    restart: always
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=...
      - POSTGRES_PASSWORD=...
  graphql-engine:
    image: hasura/graphql-engine:latest
    depends_on:
      - postgres
    restart: always
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://...:...@postgres:5432/postgres
      - HASURA_GRAPHQL_ACCESS_KEY=...
      - HASURA_GRAPHQL_AUTH_HOOK=http://host.docker.internal:3000/simple/webhook
    command:
      - graphql-engine
      - serve
      - --enable-console
    ports:
      - 8080:8080
volumes:
  postgres:
  data:
The local project is definitely working and listening on port 3000. Nonetheless, it isn't receiving any requests [as it should] from the graphql-engine container. Could it be related to our proxy?

This seemed to be an issue with Docker Desktop.
Uninstalling the whole Docker environment and rebuilding it fixed the problem.
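For anyone hitting the same name-resolution error on a plain Linux Docker Engine (where host.docker.internal is not defined by default), an alternative worth trying is mapping the name explicitly. A minimal sketch, assuming Docker Engine 20.10 or newer, which understands the special host-gateway value:

  graphql-engine:
    image: hasura/graphql-engine:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"   # resolves to the host's gateway IP on Linux
    environment:
      - HASURA_GRAPHQL_AUTH_HOOK=http://host.docker.internal:3000/simple/webhook

With this entry the container can resolve host.docker.internal and reach a webhook listening on port 3000 of the host.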

Related

Connecting to Docker from external network: modifying YML file

I am trying to set up a Learning Locker server within Docker (on Windows 10, Docker using WSL for emulation) using the repo from michzimney. This service is composed of several Docker containers (Mongo, Redis, NGINX, etc.) networked together. Using the provided docker-compose.yml file I have been able to set up the service and access it from localhost, but I cannot access the server from any other machine on my home network.
This is a specific case, but some guidance will be valuable, as I am very new to Docker and will need to build many such environments in the future, for now on Windows but later in Docker on Synology, where the services can be accessed from the network and the internet.
My research has led me to user-defined bridging using docker -p [hostip]:80:80, but this didn't work for me. I have also turned off the Windows firewall, since that seems to cause a host of issues for some, but still no effect. I tried to bridge my virtual switch manager for WSL using the Windows 10 Hyper-V manager, but that didn't work, and I tried bridging the WSL connector to the LAN using basic Windows 10 networking, but that didn't work either and I had to reset my network.
So the first question is: is this a Windows networking issue or a Docker configuration issue?
The second question, assuming it's a Docker configuration issue, is: how can I modify the following YML file to make the service accessible to the outside network:
version: '2'
services:
  mongo:
    image: mongo:3.4
    restart: unless-stopped
    volumes:
      - "${DATA_LOCATION}/mongo:/data/db"
  redis:
    image: redis:4-alpine
    restart: unless-stopped
  xapi:
    image: learninglocker/xapi-service:2.1.10
    restart: unless-stopped
    environment:
      - MONGO_URL=mongodb://mongo:27017/learninglocker_v2
      - MONGO_DB=learninglocker_v2
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/xapi-storage:/usr/src/app/storage"
  api:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node api/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  ui:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "./entrypoint-ui.sh"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
      - api
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
      - "${DATA_LOCATION}/ui-logs:/opt/learninglocker/logs"
  worker:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node worker/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  nginx:
    image: michzimny/learninglocker2-nginx:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
    restart: unless-stopped
    depends_on:
      - ui
      - xapi
    ports:
      - "443:443"
      - "80:80"
So far I have attempted to change the ports option to the following:
ports:
  - "192.168.1.102:443:443"
  - "192.168.1.102:80:80"
But then the container wasn't even accessible from the host machine anymore. I also tried adding network-mode=host under the nginx service but the build failed saying it was not compatible with port mapping. Do I need to set network-mode=host for every service or is the problem something else entirely?
Any help is appreciated.
By the looks of your docker-compose.yml, you are exposing ports 80 and 443 to your host (the Windows machine). So if your Windows IP is 192.168.1.102, you should be able to reach http://192.168.1.102 and https://192.168.1.102 on your LAN, provided nothing is blocking them (firewall etc.).
You can confirm that the host is indeed listening on those ports by running netstat -a and checking for LISTENING entries on ports 80 and 443.
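For reference, the mapping in the original file already publishes on all host interfaces; pinning it to a single LAN IP (as attempted above) only restricts where Docker listens. The following sketch is equivalent to the original nginx ports section:

    ports:
      - "0.0.0.0:443:443"   # same as "443:443": listen on every host interface
      - "0.0.0.0:80:80"

If this is in place and netstat shows the ports listening, the remaining suspects are the Windows firewall and the WSL networking layer rather than the compose file.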

Want to run elasticsearch with Laravel app in docker, but it doesn't work

I have a Laravel app that lives in Docker, and I want to integrate Elasticsearch into my app.
This is how my docker-compose.yaml looks:
version: '3'
services:
  laravel:
    build: ./docker/build
    container_name: laravel
    restart: unless-stopped
    privileged: true
    ports:
      - 8084:80
      - "22:22"
    volumes:
      - ./docker/settings:/settings
      - ../2agsapp:/var/www/html
      # - vendor:/var/www/html/vendor
      - ./docker/temp:/backup
      - composer_cache:/root/.composer/cache
    environment:
      - ENABLE_XDEBUG=true
    links:
      - mysql
  mysql:
    image: mariadb:10.2
    container_name: mysql
    volumes:
      - ./docker/db_config:/etc/mysql/conf.d
      - ./db:/var/lib/mysql
    ports:
      - "8989:3306"
    environment:
      - MYSQL_USER=dev
      - MYSQL_PASSWORD=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    command: --innodb_use_native_aio=0
  phpmyadmin:
    container_name: pma_laravel
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_USER=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=dev
      - MYSQL_DATABASE=laravel
      - PMA_HOST=mysql
    ports:
      - 8083:80
    links:
      - mysql
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
volumes:
  storage:
  composer_cache:
I run docker-compose up -d and then get a really strange issue.
If I execute curl localhost:9200 inside the laravel container, it returns Failed to connect to localhost port 9200: Connection refused.
But if I run curl localhost:9200 outside of Docker, it returns the expected response.
Maybe I don't understand how it works; I hope someone can help me.
When you want to access another container from inside a container, you should use its name on the Docker network (the compose service name), not localhost.
If you are inside laravel and want to access Elasticsearch, you should run:
curl es:9200
Since you mapped port 9200 to the host (the ports section in docker-compose), that port is available from your local machine as well; that's why curling 9200 from the local machine works.
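If the Laravel code reads its Elasticsearch endpoint from the environment, pointing that value at the service name is usually all that is needed. A minimal sketch for the laravel service, where ELASTICSEARCH_HOST and ELASTICSEARCH_PORT are hypothetical variable names (use whatever keys your app or package actually reads):

  laravel:
    build: ./docker/build
    environment:
      - ENABLE_XDEBUG=true
      - ELASTICSEARCH_HOST=es     # hypothetical: the compose service name of Elasticsearch
      - ELASTICSEARCH_PORT=9200
    depends_on:
      - es                        # start Elasticsearch before the app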

Problem with docker containers communication with port redirection

So my setup contains 3 apps: 2 servers and 1 client, all in Docker containers.
I have no problem communicating with my server containers "manually" (from my uncontainerized client).
But once my client is containerized, I can't communicate with the servers via port redirection.
I get an Error: connect ECONNREFUSED
Here is my docker-compose:
client:
  build: ./api-img2txt
  expose:
    - "4000"
  networks:
    - portfolio-network
  container_name: api-img2txt
#####################################################
###################Models############################
#####################################################
tfserv-1:
  image: tensorflow/serving
  environment:
    - MODEL_NAME=sentimentAnalysis
  ports:
    - "8400:8500"
    - "8401:8501"
  networks:
    - portfolio-network
  container_name: tfserv-1
  volumes:
    - /home/flo/portfolio/api-sentimentAnalysis/sentimentanalysis/model:/models/sentimentAnalysis
tfserv-2:
  image: tensorflow/serving
  environment:
    - MODEL_NAME=lineCounting
  ports:
    - "8500:8500"
    - "8501:8501"
  networks:
    - portfolio-network
  container_name: tfserv-2
  volumes:
    - /home/flo/portfolio/api-img2txt/models/lineCounting:/models/lineCounting
networks:
  portfolio-network:
This is what my client does:
import requests
URL = "http://localhost:8501/v1/models/lineCounting:predict"
json_request = '{ "instances" : [0] }'
r = requests.post(url=URL, data=json_request)
print('req1',r.json())
print('**********')
URL = "http://localhost:8401/v1/models/sentimentAnalysis:predict"
json_request = '{ "instances" : [0] }'
r = requests.post(url=URL, data=json_request)
print('req2',r.json())
I can't change the final port; it must be 8501. How can I make my client communicate with the server on 8401? Thank you in advance for your help.
First of all, you say port redirection, which is more like port mapping in docker-compose.
Secondly, an attempt to help you:
Assuming no magic in portfolio-network, and since your client is in the same network as both of your servers, you should communicate with them through their names, not localhost, i.e.:
URL = "http://tfserv-2:8501/v1/models/lineCounting:predict"
and
URL = "http://tfserv-1:8401/v1/models/sentimentAnalysis:predict"
Thus you don't even need to map tfserv-1 to different host ports, because you are not trying to connect to it from your host PC; docker-compose does name resolution inside the compose network for you.
I.e. the host ports can stay the same as in the container:
tfserv-1:
  image: tensorflow/serving
  environment:
    - MODEL_NAME=sentimentAnalysis
  ports:
    - "8500:8500"
    - "8501:8501"
  networks:
    - portfolio-network
  container_name: tfserv-1
  volumes:
    - /home/flo/portfolio/api-sentimentAnalysis/sentimentanalysis/model:/models/sentimentAnalysis
and then just do
URL = "http://tfserv-1:8501/v1/models/sentimentAnalysis:predict"
Meanwhile, from your host computer you should be able to reach http://localhost:8401/v1/models/sentimentAnalysis:predict with the configuration you've provided in the question.
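Putting this together, a minimal sketch of the containerized client, assuming it stays on portfolio-network as in the compose file above (the URLs use the service names and the in-container REST port 8501):

import requests

# Inside the compose network, address the servers by service name and container port.
json_request = '{ "instances" : [0] }'

URL = "http://tfserv-2:8501/v1/models/lineCounting:predict"
r = requests.post(url=URL, data=json_request)
print('req1', r.json())

print('**********')

URL = "http://tfserv-1:8501/v1/models/sentimentAnalysis:predict"
r = requests.post(url=URL, data=json_request)
print('req2', r.json())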

Access docker ports from a container inside another container at localhost

I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch; it's accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that the 9200 port is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something else for your host system, for elasticsearch and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports.
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the Elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be accessed with their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
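A quick way to check this from the host, assuming curl is available inside the serverapplication image:

docker-compose exec serverapplication curl http://elasticsearch:9200

If that returns the default Elasticsearch JSON response, the application only needs its connection URL changed from localhost to elasticsearch.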

Setting up IPFS Cluster on docker environment

I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation on how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (ipfs/go-ipfs) inside it. We need to run it separately.
So in order to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
Your docker-compose file will then look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide IPFS_API to ipfs-cluster as an environment variable so that each ipfs-cluster container knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
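As an aside, any 32-byte hex string works as CLUSTER_SECRET; one way to generate a fresh one on Linux is, for example:

od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'

This reads 32 random bytes and prints them as a 64-character hex string suitable for the environment variable.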
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v <host-path>:/data/ipfs-cluster to docker run.
If you in fact need to connect to another service within the docker-compose setup, you can simply refer to it by its service name, since hostname entries are created in all the containers in the compose project, so services can talk to each other by name instead of IP.
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port-mapping) https://docs.docker.com/compose/compose-file/#network_mode
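For illustration, a minimal sketch of what host networking would look like on one cluster service, assuming a Linux host and that the corresponding go-ipfs daemon is also reachable on the host at port 5001 (the ports: section must be removed, since it is incompatible with host networking):

  ipfs-cluster0:
    image: ipfs/ipfs-cluster
    network_mode: "host"                  # share the host's network stack; no ports: mappings allowed
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93   # same value on every cluster peer
      IPFS_API: /ip4/127.0.0.1/tcp/5001   # with host networking, the local go-ipfs API is on localhost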
