Docker container cannot connect to host machine

I tried to deploy some containers to a new CentOS 7 VM (latest Docker version) with docker-compose:
- ASP.NET application
- Mongo
- Nginx reverse proxy
- Let's Encrypt sidecar for Nginx
The connection between the containers works fine, but my ASP.NET application has to make a request to itself using the public domain name. It fails, and when I make a curl request inside the container it also fails with "No route to host". Note: I am not using localhost or anything like that.
I found another post: Docker container cannot connect to host machine: No route to host
In that case it seemed to be a firewall problem, so I also tried adding firewall rules, but it did not help.

Assuming container X is where your ASP.NET application runs and it needs to contact your app using the domain (say domain.dev in development or domain.tld in production), you can pass host entries to the container on start (or the docker-compose equivalent) to bind that domain either to localhost, e.g. --add-host domain.tld:127.0.0.1, or to the actual (private) IP of the container.
The latter requires you to use Docker networks and to give every container a static IP in the Docker network, so that you know the ASP.NET container's IP before starting it. You can of course do some docker-socket mount + docker inspect magic inside the container, but that is far too overblown for development purposes.
References:
- docker host entries: https://docs.docker.com/engine/reference/run/#managing-etchosts
- docker networks: https://docs.docker.com/engine/reference/run/#network-settings
And in docker compose:
- docker host entries: https://docs.docker.com/compose/compose-file/#extra_hosts
- networks: https://docs.docker.com/compose/compose-file/#networks
Hint: I would strongly encourage you to use docker-compose.
Alternative ways:
You can also use the service name as the domain inside your containers, since it automatically resolves to the container IP. So assuming your ASP.NET container is named "app" in the docker-compose file (the service), you can reach it as app from within other containers.
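To make the static-IP approach concrete, here is a minimal docker-compose sketch, assuming a service called app and the placeholder domain domain.tld (the image name, subnet, and IP are illustrative, not taken from the question):

version: "3.7"
services:
  app:
    image: my-aspnet-app            # placeholder image name
    networks:
      appnet:
        ipv4_address: 172.28.0.10   # static IP, known before the container starts
    extra_hosts:
      # resolve the public domain to the container's own IP
      - "domain.tld:172.28.0.10"
networks:
  appnet:
    ipam:
      config:
        - subnet: 172.28.0.0/16

With this, a request to http://domain.tld from inside the app container never leaves the Docker network.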

Related

Expose docker container through cloudflared

I have a NAS where I am running various web apps in docker containers through docker-compose. I want some of these web apps to be accessible through the internet, not only when I am connected to my home network.
The problem I'm currently facing is that while cloudflare is able to expose the default web apps (default NAS management 192.168.1.135:80 can be mapped to subdomain.domain.com, for instance), it is unable to expose any docker container I try to run (192.168.1.135:4444 cannot be mapped to subdomain2.domain.com), and I receive a 502 bad gateway error with every app I have tried so far.
The configuration shouldn't be the issue, and it's definitely not the NoTLSVerify flag because the apps run on HTTP and I have configured it that way, so I am out of options to know what is going on and how to solve it.
Looks like the apps you're running on your NAS are proxied through the Docker runtime. Consequently, the IP:port you need to add to the Cloudflare tunnel config is the one that is reachable from the host (not the IP of the host itself).
If the host is 192.168.1.135, you need to find the IP (internal to the Docker network) of the app that you want to access from the outside, typically in the 172.17.0.0/16 range.
Example: if the containers running the apps you want to access are on 172.17.0.2:4444 for app1 and 172.17.0.3:5555 for app2, the Cloudflare config would look like this:

tunnel: the_ID_of_the_tunnel
credentials-file: /root/.cloudflared/the_ID_of_the_tunnel.json
ingress:
  - hostname: yourapp1.example.com
    service: http://172.17.0.2:4444
  - hostname: yourapp2.example.com
    service: http://172.17.0.3:5555
  - service: http_status:404
See more details and a video here: How to redirect subdomain to port (docker)
Turns out the problem is due to how Docker works with networks, not with how Cloudflare accesses them. I first had to create a network that connected both containers, since adding cloudflared to my docker-compose file didn't work for some reason.
1. Create a Docker network: docker network create tunnel
2. Run cloudflared without specifying the network: docker run -d --name cloudflare cloudflare/cloudflared:latest tunnel --no-autoupdate run --token
3. Add the cloudflared container to the network: docker network connect tunnel cloudflare
4. Run the container (note the container should have, as you specified, the network name identical to the one you created earlier, but cloudflared should not be in your docker-compose file): docker-compose up
In the Cloudflare tunnel config, you will have to specify the Docker-internal address of your container (as @lu4t suggested). You can identify the address with: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container
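If the tunnel still returns 502 after these steps, a quick way to check that both containers actually joined the network is docker network inspect (a standard Docker CLI command, not part of the original answer):

docker network inspect tunnel --format '{{range .Containers}}{{.Name}} {{end}}'

Both the cloudflared container and your app container should appear in the output.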

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using docker for building both UI and some backend microservices, and using Spring Zuul as the Proxy to pass Restful API calls from UI to the downstream microservices. My UI project needs to specify an IP address in the JS file before the build, and the Zuul project also needs to specify the IP addresses for the downstream microservices. So that after starting the containers, I can access my application using my docker machine IP http://192.168.10.1/myapp and the restful API calls in the browser network tab will be http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my docker machine IP and build them without issues. However for my colleagues located in other countries, their docker machine IP will be different. How can I make docker use a specific IP, for example, 192.168.10.50, which I can set in the UI project and Zuul Proxy project, so that the docker IP will be the same for everyone, regardless of what their actual docker machine IP is?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI, however the restful API calls failed.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First off, to clarify a couple of things:
If you are doing docker run ....., you are just starting a container on the Docker engine installed on the host machine. There is no way Docker can change the IP of your host machine, so if your other services are running somewhere else, they will have to know something about the Docker host machine: its IP or a DNS name.
Basically, a published container port is reachable on 127.0.0.1 if you are trying it from the Docker host machine itself, or on the host machine's IP from outside of it. So Docker doesn't need its own IP to start.
The other case is if you are doing docker-compose up/start, which means all services are in that docker-compose file. In this case docker-compose creates a Docker network for all the containers in it, and there you definitely can use fixed IPs for containers, though most often you don't need to, because Docker takes care of name resolution within that network (see the sketch below).
If you are doing it the k8s way, then that is a third way (the production way), and it is another story.
If it is none of the above, then please provide more info on how you are doing things.
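Regarding the docker-compose case, here is a minimal sketch of relying on name resolution instead of machine-specific IPs (service and image names are placeholders, not from the question):

version: "3"
services:
  zuul:
    image: my-zuul-proxy            # placeholder image
    ports:
      - "80:8080"                   # host port 80 -> Zuul's container port 8080
  product-service:
    image: my-product-service      # placeholder image; no host port needed

Inside the zuul container, the downstream service is always reachable as http://product-service:8080, regardless of each colleague's Docker machine IP.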
EDIT:
If you are using docker-compose and need to expose any of your containers to the host machine, you can do it through port mapping:

web:
  image: some image here
  ports:
    - 8181:8080

The left side is the host machine port, the right side is the container port. Then in a browser on the host you can make requests to localhost:8181.
Here are the docs: https://docs.docker.com/compose/compose-file/#ports

How to use confluent/cp-kafka image in docker compose with advertising on localhost and my network container name kafka?

How to use confluent/cp-kafka image in docker compose with exposing on localhost and my network container name kafka?
Do not link this as duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect both on localhost and on the Docker network.
In the configure script on confluent/cp-kafka they do this annoying task:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
  export KAFKA_LISTENERS
  KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always sets whatever I give KAFKA_ADVERTISED_LISTENERS to 0.0.0.0! Using the docker network, doing
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092 and some docker ip PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the docker network)
Currently I can get only one or the other to work. So using localhost, it only works on the host system, no docker containers can access it. Using kafka, it only works in the docker network, no host applications can access it. I want it to work with both. I am using docker compose so that I can have zookeeper, kafka, redis, and my application start up. I have other applications that will startup without docker.
Update
So when I set PLAINTEXT://localhost:9092 I can access kafka running docker, outside of docker.
When I set PLAINTEXT://kafka:9092 I cannot access kafka running docker, outside of docker.
This is expected, however doing this: PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093 I would expect to access kafka running docker, both inside and outside docker. The confluent/cp-kafka image is wiping out localhost and kafka. Setting them both to 0.0.0.0, then throwing an error that I set 2 different ports to the same ip...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't?) use the same listener "protocol" in different networks.
Use advertised.listeners; there is no need to edit listeners:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is what gets advertised for connections from outside the container, you need to add a port mapping for 9093 (which should be self-explanatory); then you connect to localhost:9093 and it should work.
Then, if you also had PLAINTEXT://kafka:9092, that will only work within the Docker Compose network, not from outside it, because the kafka name only resolves via Docker's internal DNS. You should be able to run other applications as part of that Docker network with the --network flag, or link containers using Docker Compose (see the sketch below).
Keep in mind that if you're running on Mac, the recommended way (as per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using the --net=host flag of Docker. However, using the blog above, it all works fine on a Mac outside a VM.
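For concreteness, here is a minimal docker-compose sketch of the dual-listener setup described above; the service names (kafka, zookeeper), the 29092 host port, and the image tags are assumptions rather than anything from the answer:

version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"   # only the host-facing listener needs a port mapping
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Containers on the compose network connect with kafka:9092; applications on the host connect with localhost:29092.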

I am migrating from local to remote Docker, can I discover the daemon's public IP?

I am using Docker Compose to deploy my applications. In my docker-compose.yml I have a container my-frontend which must know the public IP of the backend my-backend. The my-frontend image is a NodeJS application which runs in the client's browser.
Before I did this:
my-backend:
  image: my-backend:latest
  ports:
    - 81:80

my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND=http://localhost:81
This works fine when I deploy to a local Docker daemon and when the client runs locally.
I am now migrating to a remote Docker daemon. In this situation, the client does not run on the same host as the Docker daemon any more. Hence, I need to alter the environment variable BACKEND in my-frontend:
environment:
  - BACKEND=http://<ip-of-daemon>:81
When I hardcode <ip-of-daemon> to the actual IP of the Docker daemon, everything works fine. But I am wondering if there is a way to fill this in dynamically, so I can use the same docker-compose.yml for any remote Docker daemon.
With Docker Compose, your Docker containers will all appear on the same machine. Perhaps you are using tools like Swarm or Kubernetes in order to distribute your containers on different hosts, which would mean that your backend and frontend containers would indeed be accessible via different public IP addresses.
The usual way of dealing with this is to use a frontend proxy like Traefik on a single entry point. This means that from the browser's perspective, the IP address for your frontend and backend is the same. Internally, the proxy will use filtering rules to direct traffic to the correct LAN name. The usual approach is to use a URL path prefix like /backend/.
You correctly mentioned in the comments that, assuming your frontend container is accessible on a static public IP, you could just internally proxy from there to your backend, using NginX. That should work just fine.
Either of these approaches will allow a single IP to appear to "share" ports - this resolves the problem of wanting to listen on the same IP on 80/443 in more than one container. You need to try to avoid non-standard ports for backend calls, since some networks can block them (e.g. mobile networks, corporate firewalled environments).
I am not sure what an alternative would be to those approaches. You can certainly obtain a machine's public IP if you can run code on the host, but if your container orchestration is sending containers to machines, the only code that will run is inside each container, and I don't believe public IP information is exposed there.
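To illustrate the single-entry-point idea from above, here is a minimal sketch using Traefik v2 labels in docker-compose; the image tag, router names, and PathPrefix rules are assumptions, not details from the answer:

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"                     # single public entry point
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  my-backend:
    image: my-backend:latest
    labels:
      - traefik.enable=true
      # route /backend/... to this container over the internal network
      - traefik.http.routers.backend.rule=PathPrefix(`/backend`)
  my-frontend:
    image: my-frontend:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.frontend.rule=PathPrefix(`/`)

The browser then talks to a single IP and port, and BACKEND can simply be the path /backend on the same origin.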
Update based on your use-case
I had initially assumed from your question that you were expecting your containers to spin up on arbitrary hosts in a Docker farm. In fact, your current approach confirmed in the comments is a number of non-connected Docker hosts, so whenever you deploy, your containers are guaranteed to share a public IP. I understand the purpose behind your question a bit better now - you were wanting to specify a base URL for your backend, including a fully-qualified domain, non-standard port, and URL path prefix.
As I indicated in the discussion, this is probably not necessary, since you are able to put a proxy URL path prefix (/backend) in your frontend NginX. This negates the need for a non-standard port.
If you wanted to specify a custom backend prefix (e.g. /backend/v1 to version your API) then you could do that in env vars in your Docker Compose config.
If you need to refer to the backend's fully-qualified address in your JavaScript for the purposes of connecting to AJAX/WebSocket servers, you can just derive this from window.location.host. In your dev env this will be a bare IP address, and in your remote envs, it sounds like you have a domain.
Addendum
Some of the confusion on this question was about what sort of IP addresses we are referring to. For example:
I believe that the public IP of my-backend is equal to the docker daemon's IP
Well, your Docker host has several IP addresses, and the public address is just one of them. For example, the virtual network interface docker0 carries a LAN IP of your Docker host, and if you ask for the IP of your Docker host, that would indeed be a correct answer (though of course it is not accessible on the public internet).
In fact, I would say the LAN address belongs to the daemon (since Docker sets it up) and the public IP does not (it is a feature of the box, not Docker).
In any of your Docker hosts, try this command:
ifconfig docker0
That will give you some information about your host's IP, and is useful if a Docker container wishes to contact the host (e.g. if you want to connect to a service that is not running in a container). It is quite useful to pass this IP into a container as an env var, in order to allow this connection to take place.
my-backend:
  image: my-backend:latest
  ports:
    - 81:80

my-frontend:
  image: my-frontend:latest
  ports:
    - 80:80
  environment:
    - BACKEND=${BACKEND_ENV}

where BACKEND_ENV is an environment variable set to the Docker daemon's IP.
On the machine where docker-compose is executed, set the environment variable beforehand:
export BACKEND_ENV="http://remoteip..."
Or just start the frontend pointing to the remote address:
docker run -p 80:80 -e BACKEND='http://remote_backend_ip:81' my-frontend:latest

Nodejs Docker Development microservices

I'm building an application with a microservices architecture.
Basically, my app looks like this:
API GATEWAY (port 3000) => USERS-SERVICE (port 9090), AUTH-SERVICE (port 8080), SEND-SMS-SERVICE (port 7070).
All worked fine until now.
Now I am trying to use Docker in my project. I built an image for each service and ran a container instance for each on my local machine.
Now I want to develop a new service, Customer-Service, which runs on http://localhost:3030.
Question:
1) How can I request http://localhost:3030 from the API gateway, if in development I run the API gateway from a container?
You must understand the network concept: when you start independent Docker containers and you don't define a network, they will be unreachable from each other.
There is another thing: you CAN'T access one microservice hosted in one Docker container from another container using localhost. localhost is 127.0.0.1, a call to the local machine. The concept of Docker is like "different machines running on the same machine": it is like a virtual machine, but Docker shares the host machine's kernel.
You can access another Docker container in two ways:
- Configure it on the host network, which I do not recommend.
- Create a network, add every container instance to this network, and call other microservices using the container name, e.g. http://my-service-1:3400/api/v1/post.
I recommend you to use docker-compose (see the sketch below).
This is one of my repositories; I created it with the purpose of sharing a Node app using JWT, and this project uses Docker and docker-compose:
https://github.com/camiloperezv/jwt-template
As you can see, I define a networks attribute in the docker-compose.yml and use this network in all of my services.
In the services section you put all your microservices, and in the code you make HTTP requests using the container name instead of localhost or an IP address.
In my services I use build: . which is for development purposes; in production you should use a pre-built Docker image instead of building it on the production server.
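As a minimal sketch of that pattern (service names, ports, and the network name are placeholders, not taken from the repository):

version: "3"
services:
  api-gateway:
    build: .
    ports:
      - "3000:3000"               # only the gateway is exposed to the host
    networks:
      - microservices
  customer-service:
    build: ./customer-service     # assumes the service listens on 3030
    networks:
      - microservices
networks:
  microservices:
    driver: bridge

Inside the api-gateway container, the new service is then reachable at http://customer-service:3030 instead of http://localhost:3030.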
Feel free to use my GitHub code.
Regards
As far as I understand from the question, the new Customer-Service runs on http://localhost:3030 on the host machine.
If so, the api-gateway Docker container should be started in the host network:
docker run --network host -d <api-gateway_image_name>
After this, Customer-Service will be reachable on localhost:3030 from the api-gateway container.
