I'm using an nginx container on my VMs as a proxy to some services.
On most of the VMs it works just fine, but there is a single VM on which nginx does not work. I tried to make requests with curl inside the container and they fail, whereas the same curls work outside the container, on the VM itself.
Because it works on all the other VMs, I assumed it is a problem with the docker configuration on that specific VM.
The error I get from curl inside the container is:
Failed to connect to x.x.x.x port 443: no route to host
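For reference, this is roughly how I test from inside the container (a minimal sketch; nginx-proxy stands in for my actual container name):

docker exec -it nginx-proxy curl -v https://x.x.x.x
# fails on this VM only, with the "no route to host" error above

curl -v https://x.x.x.x
# the same request made directly on the VM succeeds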
(I also tried adding the --add-host parameter to the docker run command, but it didn't help either.)
Appreciate any help :)
Solved my problem by appending --net=host to the docker run command.
Apparently, for this specific VM, I had to explicitly set the network of the nginx container to host, which enables the container to send requests to the outside world.
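For anyone else who hits this, the full command looked roughly like this (a sketch; the image and options other than --net=host are placeholders):

docker run -d --net=host --name nginx-proxy nginx
# --net=host shares the VM's network stack with the container, so outbound
# requests use the VM's own routes; note that -p port mappings are ignored
# in this mode because the container's ports are the host's ports.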
I have a container for which I expose a port to access a service running within the container. I am not exposing the port outside the container, i.e. to the host (using host network on Mac). After getting inside the container using docker exec -it and running a curl POST request, I get this error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE command in my Dockerfile and do not want to publish ports to my host. My service is also up and running inside the container. I also have the property within the config set as
"ExposedPorts": {"19999/tcp": {}}
(obtained through docker inspect <container id/name>). Any idea why this is not working? I'm using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another odd issue: after disabling my proxies, it would run a very lightweight command for my custom service once, but wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port on which an application listens; since each container has its own IP, you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See "Known limitations, use cases, and workarounds" in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless except in one very specific configuration (calling from the same native-Linux host, from outside of Docker); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured, you still need to separately publish the port when you start the container in order to reach it from outside of Docker space.
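A minimal sketch of the difference, assuming your app really does listen on 19999 inside the container (myimage and myservice are placeholders):

# EXPOSE in the Dockerfile is metadata only; this container is NOT
# reachable from the Mac host:
docker run -d --name myservice myimage

# Publishing the port is what creates the host-side mapping on Docker for Mac:
docker run -d -p 19999:19999 --name myservice myimage
curl http://localhost:19999   # now works from the host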
A browser running in a docker container needs to make a POST to a login service running on a test API in our network. The service is very picky about where POSTs can come from, so it rejects the POST because it comes from host.docker.internal instead of localhost.company.com.
It's very unlikely I'd be able to get host.docker.internal added to the whitelist.
The POST will work fine if the browser is running on my local machine but fails when the browser is running inside a container on my local machine.
I've tried docker run --add-host='localhost.mycompany.com:127.0.0.1' and docker run --add-host='localhost:127.0.0.1', neither one worked. The latter seems silly; it was kind of a shot in the dark...
A possible further complication: the browser is running in testcafe inside a Docker container, so my request will have headers like 'Origin: http://172.17.0.2:1337' 'Referer: http://172.17.0.2:1337/WBrtZV38p/http://host.docker.internal:3000/app/'
Short of making a proxy of some sort on my local machine, is there a way to make a POST from the docker container appear to be coming from my local machine?
Start the container in the host OS network space with docker run --network host ... - the container will then run directly in your local machine's network. But you will lose container network isolation, so you should review the security implications of this approach.
Doc: https://docs.docker.com/network/host/
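A sketch of what that looks like for the testcafe setup described above (the image name is a placeholder for whatever you are actually running):

docker run --network host my-testcafe-image
# The container now shares the host's network namespace, so the POST
# originates from your machine's own network identity instead of a
# Docker bridge IP like 172.17.0.2.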
I have a web application running inside a php:7.1.8-apache docker container. The application has port 80 inside the container and port 8080 outside of it.
One part of the application sends requests to itself, but uses the outside hostname/port (for example to http://outsidehostname.local:8080).
This doesn't work because the port and the hostname do not exist inside the container.
I already tried the --hostname flag, but this doesn't solve the problem of the port being different inside and outside of my container. So I am looking for a different solution.
The hostname (outsidehostname.local) comes from the host OS (in my case macOS). I am using dnsmasq to resolve all *.local hostnames to 127.0.0.1.
Is there any way to configure docker so that this request works without changing the behavior of the application?
In docker you have various options to set hostnames that can be resolved from container to container: When to use --hostname in docker?
This doesn't work because the port and the hostname do not exist inside the container.
Why not? Where does this outside hostname come from?
Hostnames that cannot be resolved by Docker may still be resolved by other DNS servers configured at the OS or network level. In general, how a hostname gets resolved is not a trivial question, and you first need to understand how and where your outside hostname is defined and resolved.
UPDATE:
The hostname (outsidehostname.local) comes from the host OS (in my case macOS). I am using dnsmasq to resolve all *.local hostnames to 127.0.0.1.
This explains your problem: log in to your running container (assuming it's Linux-based) using docker exec -it <containerId> /bin/sh, then try to look up outsidehostname.local from inside; you should see that it cannot be resolved, because there is no such DNS info inside the container OS. Even if it could be resolved to 127.0.0.1, your next problem would indeed be the wrong port.
Basically, running the webserver inside the container defeats the purpose of running your own macOS DNS resolver outside of it. I don't know enough about your use case to suggest a good solution, but for Linux-based images you can always edit /etc/hosts or /etc/resolv.conf.
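To make both points concrete, a sketch (assumes a Linux-based image; the host-gateway value requires Docker 20.10+, so treat this as one possible workaround, not the only one):

# 1. Confirm the name is unknown inside the container:
docker exec -it <containerId> /bin/sh
getent hosts outsidehostname.local   # prints nothing: no DNS info in the container

# 2. Map the outside hostname to the host, so the request leaves the
#    container and comes back in through the published port 8080:
docker run --add-host=outsidehostname.local:host-gateway -p 8080:80 php:7.1.8-apache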
I'm trying to access Docker remote API from within a container because I need to start other containers.
The host address is 172.19.0.1, so I'm using http://172.19.0.1:2375/images/json to get the list of images (from the host, http://localhost:2375/images/json works as expected).
The connection is refused, I guess because Docker (for Windows) listens on 127.0.0.1 and not on 0.0.0.0.
I've tried to change configuration (both from UI and daemon.json) adding the entry:
"hosts": ["tcp://0.0.0.0:2375"]
but the daemon fails to start. How can I access the API?
You can set DOCKER_OPTS on Windows as shown below and try. On Windows, Docker runs inside a VM, so you have to ssh into the VM and make the changes.
DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'
Check if it works for you.
Update: To ssh into the VM (assuming default is the name of the VM you created with Docker Toolbox), enter the following command in the Docker Quickstart Terminal:
docker-machine ssh default
You can find more details here.
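For completeness, a sketch of the whole procedure (the profile path below is what Docker Toolbox's boot2docker VM uses; it may differ on other setups):

docker-machine ssh default
# inside the VM, add the DOCKER_OPTS line to the daemon profile:
sudo vi /var/lib/boot2docker/profile
exit
docker-machine restart default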
You could bind-mount the host's /var/run/docker.sock into the container where you need it. This way, you don't expose the Docker Remote API via an open port.
Be aware that this still provides root-like access to Docker.
-v /var/run/docker.sock:/var/run/docker.sock
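As a usage sketch, once the socket is mounted you can talk to the API over it with curl's --unix-socket option (curlimages/curl here is just a convenient image that ships curl):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock curlimages/curl \
  --unix-socket /var/run/docker.sock http://localhost/images/json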
You should use "tcp://host.docker.internal:2375" to connect to the host machine from a container. First, make sure that you can ping the "host.docker.internal" address.
https://github.com/docker/for-win/issues/1976
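For example, after enabling "Expose daemon on tcp://localhost:2375 without TLS" in the Docker Desktop settings (the setting name may vary slightly by version), from inside a container:

ping host.docker.internal
curl http://host.docker.internal:2375/images/json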
I'm experimenting with Docker containers and I'm having a problem resolving IPs from hostnames inside my server.
It works fine on my machine (Windows 10).
Basically I'm just pinging hostnames on our internal network from my server (Windows Server 2016 running in a VM on VMware), and it cannot find the host.
I run the container like this:
docker run -it microsoft/nanoserver
and when in the command prompt I ping one of our internal servers using its hostname.
This works fine on my windows 10 machine.
However, if I ping the IP directly, it works on the server.
If I ping the same hostname directly from the host it works fine.
I'm quite new at this and I've been trying to figure it out using various guides, but I haven't found anyone who has asked about this before.
Any ideas?
The Docker container does not know anything about "your" network. Docker uses virtual interfaces to build its container networks, so the container does not automatically pick up your LAN's DNS setup.
You can point the container at your internal DNS server (the IP below is a placeholder for your actual DNS server):
docker run --dns=192.168.66.1 ...
Alternatively, you can add some static "host" entries like:
docker run --add-host=myserver.local:192.168.66.66 ...
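Either way, you can check the result from inside the container (a sketch; which tools are available depends on the image):

docker run -it --add-host=myserver.local:192.168.66.66 microsoft/nanoserver
# then, inside the container:
ping myserver.local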