Docker container DNS - Resolve URL

I have a docker container that needs to access a network server on the LAN. This server is visible from the docker host machine, and I can access it from within the container when I reference its IP address directly.
However, I need to specify a URL and port (e.g. http://myserver:8080) rather than an IP address, and the docker container cannot resolve that hostname.
How can I configure the container to resolve it, ideally using the docker host's DNS? I have looked at many of the docs, but not being a DNS expert, it doesn't seem straightforward.
UPDATE:
I have tried this, which seems to work, but does this have any downsides or unintended consequences?
--network host
Thanks,

The right way to do this is to configure the Docker daemon's DNS, as described under the daemon DNS options.
Using the host network is not recommended, as it has some downsides: https://docs.docker.com/network/host/
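On Linux that usually means adding a dns entry to /etc/docker/daemon.json and restarting the daemon. A minimal sketch, where 192.168.1.1 is a placeholder for a LAN DNS server that can resolve myserver:
# /etc/docker/daemon.json
{
  "dns": ["192.168.1.1"]
}
# restart the daemon so newly started containers pick up the setting
sudo systemctl restart docker
If you don't want to change the daemon-wide default, the same server can be set per container with docker run --dns 192.168.1.1 ...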

Related

Nginx Proxy Manager is not able to serve the page from another docker container

I am trying to get Nginx Proxy Manager (running in a docker container) to connect to another docker container that has port 8080 open on it. When I set up the proxy to connect to 192.168.0.29:8080, the IP address of the host, it doesn't work; the browser just says that the site didn't send any data.
I tried setting up the reverse proxy with other services (that weren't running inside a docker container), and they worked flawlessly. So I've concluded the problem is something to do with the docker containers.
First, I tried replacing the IP address with the address of the container (shown in Portainer), which was 172.17.0.2, but that didn't work. I can confirm that both containers are on the same network, bridge.
I could not find any solutions for this problem either here on Stack Overflow or anywhere else. I hope there's enough data to solve this problem. Thanks ahead of time!
Edit:
running arp -na from within the container gives this output:
[root@docker-00244f7ab2cc:/app]# arp -na
? (172.17.0.1) at 02:42:d1:fc:fc:6b [ether] on eth0
I found the solution to my question after lots of searching and testing, and it's quite simple: start the Nginx Proxy Manager docker container on the host network instead of the bridge network. Then you can use localhost plus the port to refer to whichever service you want to redirect to.
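In docker-compose terms that looks roughly like this (a sketch, assuming the jc21/nginx-proxy-manager image; with network_mode: host no ports mapping is needed):
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest   # adjust to the image you actually run
    network_mode: host                       # shares the host's network stack
The proxy host inside Nginx Proxy Manager can then point at http://localhost:8080, i.e. the port the other container publishes on the host.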

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using docker to build both the UI and some backend microservices, and Spring Zuul as the proxy to pass RESTful API calls from the UI to the downstream microservices. My UI project needs an IP address specified in a JS file before the build, and the Zuul project also needs the IP addresses of the downstream microservices, so that after starting the containers I can access my application at my docker machine IP, http://192.168.10.1/myapp, and the RESTful API calls in the browser network tab go to http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my docker machine IP and build without issues. However, my colleagues in other countries have different docker machine IPs. How can I make docker use a specific IP, for example 192.168.10.50, which I can set in the UI project and the Zuul proxy project, so that the docker IP is the same for everyone, regardless of their actual docker machine IP?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI; however, the RESTful API calls fail.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First of all, to clarify a couple of things:
If you are doing docker run ..., then you are just starting a container in the Docker engine installed on the host machine, and there is no way Docker can change the IP of your host machine. So if your other services are running somewhere else, they will have to know something about the docker host machine: its IP or DNS name.
Basically, your container is reachable on 127.0.0.1 if you are trying it from the docker host machine itself, or on the host machine's IP if from outside of it. So Docker doesn't need the host's IP to start.
The other case is if you are doing docker-compose up/start, which means all the services are in that docker compose file. In this case docker compose creates a docker network for all the containers in it, and there you definitely can use fixed IPs for containers, though most often you don't need to, because Docker takes care of name resolution inside that network (see the sketch below).
If you are doing it the k8s way, then that is a third way (the production way), and it is another story.
If it is none of the above, then please provide more info on how you are doing things.
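To illustrate the name resolution point, here is a minimal compose sketch (the service and image names are made up): zuul can reach the backend at http://product-service:8080 no matter which internal IPs Docker assigns, and only the edge service is published to the host.
version: "3"
services:
  zuul:
    image: my-zuul-proxy            # hypothetical image
    ports:
      - "80:8080"                   # only the edge service is exposed to the host
  product-service:
    image: my-product-service       # hypothetical image, reachable from zuul as product-service:8080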
EDIT - to:
If you are using docker compose and need to expose any of your containers to the host machine, you can do it through port mapping:
web:
  image: some image here
  ports:
    - "8181:8080"
The left side is the host machine port, the right side is the container port.
Then, in a browser on the host, you can make a request to localhost:8181.
Here is the doc: https://docs.docker.com/compose/compose-file/#ports
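For example, from a shell on the host (assuming the mapping above is in place):
curl http://localhost:8181/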

Why can docker not access an endpoint that the VM has access to?

I have a container on my machine. My machine can access xxx.net, but the container can't unless I start it like this:
docker run --add-host xxx.net:192.xxx.xxx.x -p 8080:8080 -d image_id
or enter the container and add this line to the /etc/hosts file:
192.xxx.xxx.x xxx.net
Now, this would not be a problem if this address (192.xxx.xxx.x) did not change, but unfortunately, it does.
Can something be done about it?
Well, that's because in order to access the VM you are using port 8080, and when you start a container on your machine you need to expose whatever ports you want to use. Think of it this way: if your docker container were a ship, and there was water in the ship, how would you get rid of it? You would need to open a port from the ship to the sea, because to begin with no ports are exposed. So you can't connect to your VM because it doesn't have a port to access it by.
What you can try doing is passing your hostname in and editing the file from within the Dockerfile, so that when your container is created it already has the host in /etc/hosts.
EDIT: If the address is changing, what platform is this? If you are using a cloud platform, you could reserve a static IP address for this service.
I have a similar issue when using Filestore: if it is restarted, the IP changes. What I then did was query the API to get the IP address; since it was the only Filestore in my GCP project, it wasn't too bad.
What IP address is this? As in, what service, on what platform? Is there any way for you to get the IP address without manually looking it up?
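If the name does resolve on the host (just to a changing address), one workaround, sketched here with the values from the question, is to look the address up on the host at start time and pass it in with --add-host:
# resolve xxx.net on the host and inject the current answer into the container's /etc/hosts
docker run --add-host xxx.net:$(getent hosts xxx.net | awk '{ print $1; exit }') -p 8080:8080 -d image_id
Note that this captures the address only at start time; if it changes while the container is running, the container has to be restarted.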
I'm missing some details here
What is xxx.net? Does the host see that address because it is resolved by DNS, or is it a local service on another computer behind the NAT? I assume the latter (the container should have access to the host's DNS), so try running the container with --network=host and see if that helps (it will share the same network as the host).

How do I make Eureka client to use host machine's IP instead of Docker container's IP?

I'm trying to dockerize my SpringBoot application. When the application is deployed in the docker container, it gets registered with Eureka using the docker container's IP.
I want it to get registered with the host machine's IP.
I've set eureka.instance.preferIpAddress to true. I tried ignoring the network interfaces as mentioned in the documentation, but had no luck with it.
Is there any way to tell Eureka client to use host machine's IP?
If you start your container with --network=host, your container will have the host's IP address and you won't need any additional configuration, e.g. docker run -it --network=host your-container ...
But consider the drawbacks of this mode, such as the loss of container isolation: your container will have full access to the host's networking.
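If host networking isn't acceptable, another option (sketched here with a placeholder image name and an example IP) is to keep the bridge network and tell the client explicitly which address to advertise via eureka.instance.ip-address:
# 192.168.10.5 stands in for the docker host's LAN IP; my-springboot-app is a placeholder image
docker run -d -p 8080:8080 \
  -e EUREKA_INSTANCE_PREFER_IP_ADDRESS=true \
  -e EUREKA_INSTANCE_IP_ADDRESS=192.168.10.5 \
  my-springboot-app
Spring Boot's relaxed binding maps EUREKA_INSTANCE_IP_ADDRESS to eureka.instance.ip-address, so the instance should register with the host's address instead of the container's internal IP.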

Remote Docker container by hostname

How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS approach (have not found any clues),
importing /etc/hosts (probably impossible),
creating tunnels (only this works, but it is very time consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
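For example (the image and host names are placeholders):
# publish the container's port 80 as port 8080 on the Docker host
docker run -d -p 8080:80 my-web-image
# then, from any machine that can reach the Docker host:
curl http://docker-host.example.com:8080/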
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
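A record along these lines (BIND zone-file syntax, placeholder names) lets clients keep using a stable service name while you repoint it at whichever host currently runs the container:
myservice.example.com.    IN    CNAME    docker-host-01.example.com.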
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
