How do I make the Eureka client use the host machine's IP instead of the Docker container's IP?

I'm trying to dockerize my SpringBoot application. When the application is deployed in the docker container, it gets registered with Eureka using the docker container's IP.
I want it to get registered with the host machine's IP.
I've set eureka.instance.preferIpAddress to true. I also tried ignoring network interfaces as described in the documentation, but had no luck with it.
Is there any way to tell Eureka client to use host machine's IP?

If you start your container with --network=host, your container will have the host's IP address and you won't need any additional configuration, e.g. docker run -it --network=host your-container ....
But consider the drawbacks of this mode, such as the lack of container isolation: your container will have full access to the host's networking.
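A minimal sketch of that host-networking approach, assuming a Spring Boot image of your own; the image name and Eureka server URL are placeholders, and the environment-variable names rely on Spring Boot's relaxed property binding:

# With host networking the container shares the host's network stack,
# so the Eureka client registers with the host machine's IP.
docker run -d \
  --network=host \
  -e EUREKA_INSTANCE_PREFERIPADDRESS=true \
  -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka-server:8761/eureka/ \
  my-springboot-app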

Related

How to Make Docker Use Specific IP Address for Browser Access from the Host

I'm using docker for building both UI and some backend microservices, and using Spring Zuul as the Proxy to pass Restful API calls from UI to the downstream microservices. My UI project needs to specify an IP address in the JS file before the build, and the Zuul project also needs to specify the IP addresses for the downstream microservices. So that after starting the containers, I can access my application using my docker machine IP http://192.168.10.1/myapp and the restful API calls in the browser network tab will be http://192.168.10.1/mymicroservices/getProduct, etc.
I can set all the IPs to my docker machine IP and build them without issues. However for my colleagues located in other countries, their docker machine IP will be different. How can I make docker use a specific IP, for example, 192.168.10.50, which I can set in the UI project and Zuul Proxy project, so that the docker IP will be the same for everyone, regardless of what their actual docker machine IP is?
What I've tried:
I've tried port forwarding in VirtualBox. It works for the UI, however the restful API calls failed.
I also tried the solution mentioned in this post:
Assign static IP to Docker container
However I can't access the services from the browser using the container IP address.
Do you have any better ideas? Thank you!
First of all, to clarify a couple of things:
If you are doing docker run ..., then you are just starting a container on the Docker engine installed on the host machine. There is no way Docker can change the IP of your host machine. So if your other services are running somewhere else, they will have to know something about the Docker host machine: its IP or a DNS name.
Basically, a container is reachable on 127.0.0.1 if you are trying it from the Docker host machine itself, or on the host machine's IP from outside of it. So Docker doesn't need the host's IP to start.
The other case is if you are doing docker-compose up/start, which means all services are defined in that Compose file. In this case Docker Compose creates a Docker network for all the containers in it. Here you definitely can use fixed IPs for containers (see the sketch at the end of this answer), though most often you don't need to, because Docker takes care of name resolution within that network.
If you are doing it the Kubernetes way, then that is a third way (the production way), and it is another story.
If it is none of the above, please provide more info on how you are doing things.
EDIT:
If you are using Docker Compose and need to expose any of your containers to the host machine, you can do it through port mapping:
web:
  image: some image here
  ports:
    - 8181:8080
The left side is the host machine port, the right side is the container port.
Then, in a browser on the host, you can make a request to localhost:8181.
Here are the docs:
https://docs.docker.com/compose/compose-file/#ports
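As a hedged sketch of the fixed-IP option mentioned above (the service name, image, network name, and subnet are placeholders), a Compose file can pin container addresses on a user-defined network:

version: "3.7"
services:
  web:
    image: some-image-here          # placeholder image
    networks:
      app_net:
        ipv4_address: 172.28.0.10   # fixed container IP on the custom network
networks:
  app_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

Note that this fixes the container's address on the Compose network for container-to-container traffic; access from a browser on the host still goes through the mapped port.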

How to expose the docker container ip to the external network?

I want to expose the container IP to the external network where the host is running, so that I can ping the Docker container's IP directly from an external machine.
If I ping the Docker container's IP from an external machine that is on the same network as the machine hosting Docker, I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the webserver in this container (depending on Firewall settings etc. of course...)
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very-low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes. I suspect you aren’t actually. Don’t worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
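As a hedged sketch of the Kubernetes route (the names, labels, and ports are placeholders), a NodePort Service that routes traffic to the Pods running your container might look like this:

apiVersion: v1
kind: Service
metadata:
  name: web                # placeholder Service name
spec:
  type: NodePort           # reachable on every node's IP at an allocated high port
  selector:
    app: web               # must match the labels on your Pods
  ports:
    - port: 80             # port inside the cluster
      targetPort: 80       # the container's port

Clients outside the cluster then reach the app via <node-ip>:<node-port>, never via the Pod or container IPs directly.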
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static entry for the 172.17.0.0/16 network, with your Docker host's physical LAN IP as the gateway. You might also need to allow this forwarding in your Docker host's firewall configuration.
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped-port), you can remove port mapping from the container entirely.
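As a hedged sketch (192.168.1.50 stands in for the Docker host's LAN IP), on an external Linux machine or on the LAN's router the static route might look like this:

# Send traffic for the Docker bridge subnet via the Docker host
sudo ip route add 172.17.0.0/16 via 192.168.1.50

After that, connections to a bridge address such as 172.17.0.2 from that machine are forwarded through the Docker host, subject to its firewall rules.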
You need to create a new bridge Docker network and attach the container to it. You should be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10 my-new-bridge-network container-name
If the problem persists, just reload the Docker daemon and restart the service; this is a known issue.

Docker container DNS - Resolve URL

I have a Docker container that needs to access a network server on the LAN. This server is visible from the Docker host machine, and I can access it from within the container when I reference the IP address directly.
However, I need to be able to specify a URL and port (e.g. http://myserver:8080) rather than an IP address, which the Docker container cannot resolve.
How can I configure the container to resolve this, ideally using the Docker host's DNS? I have looked at many of the docs, but not being a DNS expert, it doesn't seem straightforward.
UPDATE:
I have tried this, which seems to work, but does this have any downsides or unintended consequences?
--network host
Thanks,
The right way to do this is to configure the Docker daemon's DNS, as specified under daemon-dns-options.
Using the host network is not recommended, as it has some downsides: https://docs.docker.com/network/host/
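A hedged sketch of that daemon-level DNS configuration (the server addresses are placeholders; on Linux the file is typically /etc/docker/daemon.json and the daemon must be restarted afterwards):

{
  "dns": ["10.0.0.2", "8.8.8.8"]
}

With the LAN's DNS server listed first, containers can then resolve internal names like myserver the same way the host does. The same can also be set per container with docker run --dns 10.0.0.2 ....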

Can't connect to ASP.Net site in Docker for Windows

I am having difficulty connecting from the host to an ASP.Net website running in a Windows container on Docker. I can connect to a website running in a Linux container without any problem.
I have tried connecting to both localhost and to the IP and port assigned to the container, but in both cases I just get a timeout error.
I have tried several ASP.Net examples which are already pre-built along with trying to build my own custom image. In every case I get the same timeout error. I have also tried uninstalling and re-installing docker but that didn't change anything.
I am running Windows 10 Pro and Docker Community Edition Version 17.03.1-ce-win12 (12058)
Ultimately I was able to completely reset my container network using a customized older version of the Microsoft Virtualization cleanup scripts: https://github.com/Microsoft/Virtualization-Documentation/tree/live/windows-server-container-tools/CleanupContainerHostNetworking. This reset my container network, and everything is now working as expected.
SUMMARY:
When the published port(s) for a container are defined using the EXPOSE directive in the container's Dockerfile, the -P argument must be used with the docker run command in order to "activate" those exposed port(s).
It is not possible for a Windows container host to access containers that it is running using localhost, 127.0.0.1 or its external host IP address. Access containers running on a given host, A, by using the IP address of A from a second host, B. Alternatively, you can use the IP address of a container directly.
FULL EXPLANATION:
So there are a few nuances to ensuring that the proper firewall rules are created and that your containers are actually accessible on their published port(s).
For instance, I'll assume that your containerized ASP.Net application is built from a container image, which in turn was built from a Dockerfile. If so, you probably defined the published port for the image/app using the Dockerfile EXPOSE directive. In this case, when you actually run the container you need to "activate" that published port using the -P argument to the docker run command.
For example, if your container image is web_app, and the Dockerfile for that image included the line, EXPOSE 80, then when you go ahead and run that image you need to do something like:
C:\> docker run -P web_app
Once the container is running, it should be available on container port 80. You can then go ahead and view the app via browser. To do that you have two options:
You can access the app from your container host, using the container IP and port
Find the container IP using docker network inspect nat, then looking for the endpoint/IP address that corresponds with your container.
You can also find the container IP by running docker exec <CONTAINER ID> ipconfig, where <CONTAINER ID> is the ID of your container.
You can get the ID of your container and the exposed port for your container by running docker ps on the container host.
You can access the app from another host machine, using the container host IP and host port
You can find the IP address of your host using ipconfig.
You can identify the host port upon which your app is exposed, by running docker ps from the host. Then, under PORTS you'll see a mapping of the form 0.0.0.0:<HOST PORT>-><CONTAINER PORT>/TCP. In this mapping <HOST PORT>, is the port upon which your app is available on the host.
Once you have the IP address of your container host, and the port upon which your app is available on the host, you can use that information to access your app from a browser on a separate host.
NOTE: Today you cannot access a container in this way from its own host--currently a Windows container host cannot access the containers it is running, despite whether localhost, 127.0.0.1 or the host IP address is used.
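A hedged end-to-end sketch of the steps above (the image name, container IP, and mapped host port are placeholders; your values will differ):

# Dockerfile contains: EXPOSE 80
docker build -t web_app .

# -P publishes the EXPOSE'd port on a host port chosen by Docker
docker run -d -P web_app

# Find the mapping under PORTS, e.g. 0.0.0.0:32768->80/tcp, and the container ID
docker ps

# Find the container's own IP on the nat network
docker network inspect nat

# From the container host itself: use the container IP and the container port
curl http://<container-ip>:80/

# From a second machine: use the container host's IP and the mapped host port
curl http://<host-ip>:32768/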

Cross container communication with Docker

An application server is running as one Docker container and database running in another container. IP address of the database server is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up a JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the relevant Docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am on the Weave engineering team.
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your DB is a container on docker server 2? If so, treat them like ordinary remote hosts: your DB port needs to be exposed on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It's perhaps possible to make those addresses routable, but it would be a lot of pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in your Dockerfile and -p in your docker run), then just go host-to-host (see the sketch below). You can resolve hosts by IP, environment variables, or good old DNS.
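A hedged sketch of that host-to-host setup (the image names, credentials, the DB host IP, and the DB_URL variable are placeholders standing in for whatever your application actually reads):

# On docker server 2: publish the database port on that host
docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme -p 3306:3306 mysql

# On docker server 1: point the application at docker server 2's IP and the published port
docker run -d --name app \
  -e DB_URL=jdbc:mysql://192.168.1.60:3306/mydb \
  my-app-server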
A few things were missing that were preventing cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was only accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was resolved using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
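As a hedged illustration of the "-b 0.0.0.0" fix (the image name and path follow the common jboss/wildfly image layout; adjust for your own build):

# Bind WildFly to all interfaces so other containers and hosts can reach it,
# and publish the port on the Docker host
docker run -d -p 8080:8080 jboss/wildfly \
  /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0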
