Docker cannot access exposed port inside container

I have a container in which I expose a port to access a service running within the container. I am not publishing the port outside the container, i.e. to the host (using the host network on Mac). After getting inside the container using docker exec -it and running curl for a POST request, I get the error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE instruction in my Dockerfile and do not want to publish ports to my host. My service is also up and running inside the container. I also have the property set within the config as
"ExposedPorts": {"19999/tcp": {}}
(obtained through `docker inspect <container id/name>`). Any idea why this is not working? I am using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another weird issue: on disabling my proxies it would run a very lightweight command for my custom service once, and then wouldn't run it again, returning the same error as above. The issue only occurs on my machine, not on others.

Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE instruction that you're using inside the Dockerfile does nothing by itself.
Usually there is no need to change the default port on which an application listens; since each container has its own IP, you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port on which your app would normally be listening (it's hard to guess what you are trying to run).

If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See Known limitations, use cases, and workarounds in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is what you're trying to attempt.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
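As a minimal sketch, assuming the service really does listen on 19999 inside the container (the image name here is hypothetical):

docker run -d -p 19999:19999 my-service-image   # publish container port 19999 on host port 19999
curl http://localhost:19999                     # now reachable from the Mac host

Note that publishing only fixes host access; curl from inside the container still fails unless the service is actually bound to that port.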

Related

Docker not exposing the port

There is a python application which I'm trying to run inside a docker container.
Inside the container, when I curl, I can see the output; but when I try to see the output on my host machine using curl, it says
curl: (56) Recv failure: Connection reset by peer
and I'm not able to see any output in the browser as well
The port is exposed on 8050.
The host machine is CentOS 7.
Firewall and SELinux are disabled.
It would help if you posted the docker command / docker-compose file you use.
From what you say, it looks like you used the expose option (or the image was built exposing that port).
I find the name "expose" a bit misleading.
Exposing a port simply means that the container listens to that port. It does not mean that this port is available ("exposed") to the host.
For that, you need to use publish (-p <host port>:<container port>).
How did you run the container?
Connection Reset to a Docker container usually indicates that you've defined a port mapping for the container that does not point to an application.
So, if you've defined a mapping of 8050:8050, check that your process inside the docker instance is in fact running on port 8050 (netstat -an|grep LISTEN).
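As a quick way to run that check (the container name here is hypothetical), and because an app bound to 127.0.0.1 inside the container is a common cause of exactly this symptom:

docker exec -it my-python-app sh -c "netstat -an | grep LISTEN"
# 127.0.0.1:8050 -> only reachable from inside the container itself
# 0.0.0.0:8050   -> reachable through the published port

If the listener shows 127.0.0.1:8050, reconfigure the app to bind to 0.0.0.0 so the published port has something to forward to.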

How to send a request from inside a docker container to the outside hostname/port of this container?

I have a web application running inside a php:7.1.8-apache docker container. The application has port 80 inside the container and port 8080 outside of it.
One part of the application sends requests to itself, but uses the outside hostname/port (for example to http://outsidehostname.local:8080).
This doesn't work because the port and the hostname do not exist inside the container.
I already tried the --hostname flag, but this doesn't solve the problem with the different port inside and outside of my container. So I am looking for a different solution.
The hostname (outsidehostname.local) comes from the host OS (in my case macOS). I am using dnsmasq to resolve all *.local hostnames to 127.0.0.1.
Is there any way to configure docker so that this request works without changing the behavior of the application?
In docker you have various options to set hostnames that can be resolved from container to container: When to use --hostname in docker?
This doesn't work because the port and the hostname do not exist inside the container.
Why not? Where does this outside hostname come from?
Hostnames that cannot be resolved by docker could be resolved by other DNS servers configured on OS or network level. In general how a hostname will be resolved is not a trivial question and you first need to understand how / where your outside hostname is defined and resolved.
UPDATE:
The hostname (outsidehostname.local) comes from the host OS (in my case macOS). I am using dnsmasq to resolve all *.local hostnames to 127.0.0.1.
This explains your problem: log in to your running container (assuming it's Linux based) using docker exec -it <containerId> /bin/sh. Then, inside the container, try to look up outsidehostname.local; you should see that it cannot be resolved, because there is no such DNS entry inside the container OS. Even if it could be resolved to 127.0.0.1, your next problem would indeed be the wrong port.
Basically running the webserver inside the container defeats the purpose of running your own OSX DNS resolver outside the container. I don't know enough about your use case to really suggest a good solution, but for Linux based images you can always edit /etc/hosts or /etc/resolv.conf.
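As one hedged sketch of the /etc/hosts approach: on Docker 20.10+ the --add-host flag accepts the special host-gateway value, which writes an /etc/hosts entry pointing at the host, so both the outside hostname and the published port work from inside the container:

docker run -d -p 8080:80 \
  --add-host "outsidehostname.local:host-gateway" \
  php:7.1.8-apache

A request from inside the container to http://outsidehostname.local:8080 then leaves the container, reaches the host, and re-enters through the published port 8080.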

How to use confluent/cp-kafka image in docker compose with advertising on localhost and my network container name kafka?

How to use confluent/cp-kafka image in docker compose with exposing on localhost and my network container name kafka?
Do not link this as duplicate of:
Connect to docker kafka container from localhost and another docker container
Cannot produce message to kafka from service running in docker
These do not solve my issue because the methods they use are deprecated by confluent/cp-kafka, and I want to connect both on localhost and on the docker network.
In the configure script on confluent/cp-kafka they do this annoying task:
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
export KAFKA_LISTENERS
KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
It always takes whatever I give in KAFKA_ADVERTISED_LISTENERS and sets the listener hosts to 0.0.0.0! Using the docker network, doing
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9093,PLAINTEXT://kafka:9093
I expect the listeners to be either localhost:9092 or 0.0.0.0:9092 and some docker ip PLAINTEXT://172.17.0.1:9093 (whatever kafka resolves to on the docker network)
Currently I can get only one or the other to work. So using localhost, it only works on the host system, no docker containers can access it. Using kafka, it only works in the docker network, no host applications can access it. I want it to work with both. I am using docker compose so that I can have zookeeper, kafka, redis, and my application start up. I have other applications that will startup without docker.
Update
So when I set PLAINTEXT://localhost:9092, I can access kafka running in docker from outside of docker.
When I set PLAINTEXT://kafka:9092, I cannot access kafka running in docker from outside of docker.
This is expected. However, doing this: PLAINTEXT://localhost:9092,PLAINTEXT://kafka:9093, I would expect to be able to access kafka running in docker both inside and outside of docker. The confluent/cp-kafka image is wiping out localhost and kafka, setting them both to 0.0.0.0, and then throwing an error that I set two different ports on the same IP...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
Maybe I'm just clashing into some opinionated docker image and should look for a different image...
The image is fine. You might want to read this explanation of the listeners.
tl;dr - you don't want to (and shouldn't?) use the same listener "protocol" in different networks.
Use advertised.listeners; there is no need to edit the listeners:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
When PLAINTEXT://localhost:9093 is being loaded inside of the container, you need to add a port mapping for 9093, which should be self-explanatory; you then connect to localhost:9093 and it should work.
Then, if you also had PLAINTEXT://kafka:9092, that will only work within the Docker Compose network overlay, not externally to your DNS servers, because that's how Docker networking works. You should be able to run other applications as part of that Docker network with the --network flag, or link containers using Docker Compose
Keep in mind that if you're running on Mac, the recommended way (as per the Confluent docs) is to run these containers in Docker Machine, in a VM, where you can manage the external port mappings correctly using the --net=host flag of Docker. However, using the blog above, it all works fine on a Mac outside a VM.
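As a rough sketch of that two-listener setup with docker run (the network and service names here are hypothetical; note the extra listener name must also appear in the protocol map):

docker run -d --name kafka --network my-net -p 29092:29092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka

Containers on my-net then bootstrap against kafka:9092, while applications on the host use localhost:29092.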

Can't connect to ASP.Net site in Docker for Windows

I am having difficulty connecting from the host to an ASP.Net website running in a Windows container on Docker. I can connect to a website running in a Linux container without any problem.
I have tried connecting both to localhost and to the IP address and port assigned to the container, but in both cases I just get a timeout error.
I have tried several ASP.Net examples which are already pre-built along with trying to build my own custom image. In every case I get the same timeout error. I have also tried uninstalling and re-installing docker but that didn't change anything.
I am running Windows 10 Pro and Docker Community Edition Version 17.03.1-ce-win12 (12058)
Ultimately I was able to completely reset my container network using a customized older version of the Microsoft Virtualization cleanup scripts (https://github.com/Microsoft/Virtualization-Documentation/tree/live/windows-server-container-tools/CleanupContainerHostNetworking). This reset my container network, and everything is now working as expected.
SUMMARY:
When the published port(s) for a container are defined using the EXPOSE directive in the container's Dockerfile, the -P argument must be used with the docker run command in order to "activate" those exposed port(s).
It is not possible for a Windows container host to access its own containers using localhost, 127.0.0.1, or its external host IP address. To access containers running on a given host, A, use the IP address of A from a second host, B. Alternatively, you can use the IP address of a container directly.
FULL EXPLANATION:
So there are a few nuances with ensuring that the proper firewall rules are created and that your containers are actually accessible on their published port(s).
For instance, I'll assume that your ASP.Net containerized application is defined by a container image, which was built from a Dockerfile. If so, you probably defined the published port for the image/app using the Dockerfile EXPOSE directive. In this case, when you actually run the container, you need to "activate" that published port using the -P argument to the docker run command.
For example, if your container image is web_app, and the Dockerfile for that image included the line, EXPOSE 80, then when you go ahead and run that image you need to do something like:
C:\> docker run -P web_app
Once the container is running, it should be available on container port 80. You can then go ahead and view the app via browser. To do that you have two options:
You can access the app from your container host, using the container IP and port
Find the container IP using docker network inspect nat, then look for the endpoint/IP address that corresponds to your container.
You can also find the container IP by running docker exec <CONTAINER ID> ipconfig, where <CONTAINER ID> is the ID of your container.
You can get the ID of your container and the exposed port for your container by running docker ps on the container host.
You can access the app from another host machine, using the container host IP and host port
You can find the IP address of your host using ipconfig.
You can identify the host port on which your app is exposed by running docker ps from the host. Then, under PORTS, you'll see a mapping of the form 0.0.0.0:<HOST PORT>-><CONTAINER PORT>/tcp. In this mapping, <HOST PORT> is the port on which your app is available on the host.
Once you have the IP address of your container host, and the port upon which your app is available on the host, you can use that information to access your app from a browser on a separate host.
NOTE: Today you cannot access a container in this way from its own host; currently a Windows container host cannot access the containers it is running, regardless of whether localhost, 127.0.0.1, or the host IP address is used.
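Pulling those steps together, a minimal sketch (the container ID abc123 here is hypothetical):

C:\> docker run -d -P web_app
C:\> docker ps                        # note the 0.0.0.0:<HOST PORT>-><CONTAINER PORT>/tcp mapping
C:\> docker exec abc123 ipconfig      # container IP, for direct access
C:\> docker network inspect nat       # alternative way to find the container IP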

Docker: Map host ports to several docker containers

I haven't fully understood the way port forwarding works with docker.
My scenario looks like this:
I have a Dockerfile that exposes a port (in my case it's 8000)
I have built an image using this Dockerfile (by using "docker build -t test_docker .")
Now I created several containers by using "docker run -p 808X:8000 -d test_docker"
The host reacts on calling its IP with the different ports I have assigned on "docker run"
What exactly does this EXPOSE command do in the Dockerfile? I understood that the docker daemon itself handles the network connections, and when calling "docker run" I also specify which image should be used...
Thanks
OK, I think I understood the reason.
If your application listens on a port, you need to expose exactly that port. E.g.
HttpServer.bind('127.0.0.1', 8000).then((server) { ... });
will need EXPOSE 8000. This way you can listen on several ports in your app, but then you need to expose them all.
Am I right?
Exposing ports in your Dockerfile allows you to spin up a container using the -P flag on the docker run command.
A simple use case might be that you have nginx sitting on port 80 on a load-balancing server, load balancing traffic across a few docker containers sitting on a CoreOS docker server. Since each of your apps uses the same port, 8000, you wouldn't be able to reach them individually. So docker maps each container's port to a high, random, non-conflicting port on the host. Then when you hit 49805, it goes to container 1's port 8000, and when you hit 49807, it goes to container 2's port 8000.
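As a minimal sketch of that mapping (image name taken from the question; the host ports Docker picks will vary):

docker run -d -P test_docker     # container 1
docker run -d -P test_docker     # container 2
docker ps                        # e.g. 0.0.0.0:49805->8000/tcp and 0.0.0.0:49807->8000/tcp

An nginx upstream could then list both host ports as backends and balance across the two containers.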
