Connection Refused when trying to access exposed port of docker container - docker

I created a simple Golang web application that listens on port 8080.
Dockerfile:
When I tried it from a web browser, I got an empty-response error.
Then I opened a bash shell in the container and found something strange.
When I ran curl http://localhost:8080 I received a response, but the same request against the eth0 IP failed.
(Ignore the 404; a 404 means my server app is responding.)
The application accepts traffic only on the container's localhost and is not reachable via the Docker/k8s IP.
Kindly suggest!

Did you start the container with the -p flag to publish port 8080? (Like this: docker run -p 8080:8080 [your_image])
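The symptom described in the question (curl succeeds against localhost inside the container but fails against the eth0 IP) is exactly what a server bound to 127.0.0.1 looks like. A minimal Python sketch of the difference, assuming a Linux host where the entire 127.0.0.0/8 range sits on the loopback interface; the port number is an arbitrary choice for the demo:

```python
import socket

def can_connect(host, port):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        socket.create_connection((host, port), timeout=1).close()
        return True
    except OSError:
        return False

def listener(bind_addr, port):
    """Open a listening TCP socket; the kernel completes handshakes
    into the backlog, so no accept() is needed for connects to succeed."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(5)
    return srv

PORT = 18080  # arbitrary free port for the demo

# Bound to 127.0.0.1: only that exact loopback address is reachable.
srv = listener("127.0.0.1", PORT)
print(can_connect("127.0.0.1", PORT))  # True
print(can_connect("127.0.0.2", PORT))  # False on Linux: connection refused
srv.close()

# Bound to 0.0.0.0: every local address works, loopback included.
srv = listener("0.0.0.0", PORT)
print(can_connect("127.0.0.2", PORT))  # True
srv.close()
```

This is why -p alone is not enough: Docker's port mapping delivers traffic to the container's eth0 address, and a server listening only on 127.0.0.1 never sees it. The Go app needs to listen on 0.0.0.0:8080 (or ":8080").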

Related

Docker container allows me to curl the host's port 80, but not another port

So I have a service running on my host at localhost:80 and localhost:8080, and I need to fetch it from within a Docker container.
So I enter my Docker container's CLI and try the following commands:
curl http://www.example.com  # a famous website --> works
curl 172.17.0.1  # works; this fetches my host's localhost port 80. It's the default docker0 interface and resolves on the host.
curl 172.17.0.1:1112  # I can fetch this too; I have a simple Express server running there on my local machine, returning a hello world. It can also be curled from within the host with curl localhost:1112.
As you can see, I'm using 172.17.0.1 to connect to my host from within my container, not localhost, because localhost would mean the loopback of said container, and that's not what I'm looking for.
Now the issue comes with the following.
I create an SSH tunnel on my port 8888 to another machine, which can only be accessed using a VPN running on my host, with the following command:
ssh -fN myname@database.pl -L 8888:db.env.prise:80
This creates a tunnel that I can curl on my host machine at localhost:8888.
If I try this from within my host:
curl -s http://localhost:8888/test | jq
It correctly fetches a JSON. So the idea is to do something similar from within my container.
I go ahead to my container and type:
curl http://172.17.0.1:8888/test
Failed to connect to 172.17.0.1 port 8888: Connection refused
And that's the error that I receive.
Why can I fetch every single port except that one? I suspect it might have something to do with my Docker not being in the VPN?
How can I fix this? I have an OpenVPN file for the connection, but that's it.
Although I don't really think it's the VPN's fault: if I curl from my host with the VPN disconnected, the curl still fails, but at least it attempts the connection and hangs for a while, whereas trying to curl that port from within Docker gets rejected instantly.
It looks like you are attempting to connect from Docker to a resource that you can only access via SSH on your host.
A valid approach is to forward the port of the external machine to your machine via:
ssh -fN myname@database.pl -L 8888:db.env.prise:80
This redirects the external port to a local port. The problem is that Docker cannot reach this port, because it is bound to localhost only.
With socat, you can open a new port that listens on all interfaces instead of just loopback:
socat TCP-LISTEN:8889,fork,bind=0.0.0.0 TCP:localhost:8888
Connections through this port will be redirected to your target machine:
8889 -> 8888 -> 80
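For illustration, here is a minimal Python sketch of what that socat relay does: accept on all interfaces and shuttle bytes to a service that is reachable only on localhost. It is a demonstration of the mechanism, not a replacement for socat; the port numbers mirror the answer's example:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def relay(listen_port, target_host, target_port):
    """Rough Python equivalent of:
    socat TCP-LISTEN:<listen_port>,fork,bind=0.0.0.0 TCP:<target_host>:<target_port>
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))  # all interfaces, so Docker can reach it
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        # one thread per direction, analogous to socat's fork option
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

Calling relay(8889, "localhost", 8888) reproduces the 8889 -> 8888 hop from the answer; the container then connects to the host's 8889 via 172.17.0.1.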

How to setup docker squid container and host to route requests from container host to internet?

Right now I have a Docker container running Squid, listening on a range of ports. I ran it with the following command so that the port range is published to the host as well:
docker run -ti --name squid -v /var/log/squid:/var/log/squid -p 3133-3168:3133-3168 my_image/squid_test4
I am trying to set this up so clients can hit the container host on a port within the range above and still get out to the internet.
From the container host I can run curl -x http://172.17.x.x:3134 http://ipinfo.io and get out no problem. Whenever I try to use the host's IP (i.e. curl -x http://host_ip:3134 http://ipinfo.io) from a client, it hangs and times out. I can see the request hit the host via tcpdump, but nothing is returned.
When I run netstat -tlpn on the host I can see entries saying that Docker is listening on the port range I specified. When I am on a client and do something like telnet host_ip 3134, it connects and tells me something is listening there.
Do I need to set up an iptables PREROUTING NAT rule on the host to forward traffic to those ports, or could I use something like HAProxy on the host with the Squid container as a backend? Kind of stumped here...
Ugh, a simple check showed iptables/ufw was not running. However, when I ran iptables -L or iptables-save, they still showed the current in-memory rules. Restarted UFW and all is good now... now I feel dumb.

How to expose app running on localhost inside docker?

I have an app running inside a docker container on localhost:4200 (not 0.0.0.0:4200). How can I expose it to the host?
When I use -p 4200:4200 on docker run, I am not able to get to the app.
i.e. none of curl localhost:4200, curl http://127.0.0.1:4200, curl http://<ip from ifconfig>:4200, or curl http://0.0.0.0:4200 works.
However, docker exec <container-id> curl localhost:4200 works. This means the app started successfully, but port 4200 on the container's localhost is not exposed to the host.
I stopped the app and modified it (on the same running container) to listen on 0.0.0.0:4200 instead of localhost:4200. After that, I am able to reach it with curl http://0.0.0.0:4200.
I am trying to figure out how I can expose a port bound to localhost inside the container to the host (so that I don't have to modify my app).
You can be more explicit and use:
docker run --publish=127.0.0.1:4200:4200/tcp ....
See the documentation.
However:
127.0.0.1 corresponds to your host's loopback adaptor
0.0.0.0 is effectively any network adaptor on this host (including 127.0.0.1)
localhost is the DNS name that is customarily mapped to 127.0.0.1
So the result should be the same whichever of these approaches you use.
HTH!

Why can't I access open HTTP port of NiFi flow via Docker?

I'm trying to do something very simple: use the official NiFi docker image (https://hub.docker.com/r/apache/nifi/) to run a very simple NiFi "Hello World" tutorial (https://github.com/drnice/NifiHelloWorld).
The problem is that I cannot access the port of the HandleHttpRequest processor from that tutorial (called Nifi-WebServer-HandleHTTP). The port is 6688.
I've mapped port 6688 to localhost, which I've confirmed in Portainer:
Portainer snapshot showing port mappings
The URL localhost:8080 works fine; I can access the NiFi UI and interact with it.
But when I try localhost:6688, I get an error (empty response from server).
More info
1) When I log in through Portainer to the "nifi3" container console, I can run curl localhost:6688 and get the expected result (some HTML coming back).
2) I've confirmed via netstat that nothing else is using 6688 on my host.
3) Full container run command:
docker run --name nifi4 -d -p 8080:8080 -p 6688:6688 -p 9998:9998 -v C:/temp/GitHub/NifiHelloWorld/Archive:/mnt/nifi_hello_world -v C:/temp/nifi_out:/mnt/nifi_out -v 4a8bd6cab08f08af457001810a312816757f40a7c16d2583dd6a9eabfd76db78:/opt/nifi/nifi-current/conf nifi3
So the HTTP server is up on the correct port inside the container and the port mapping is there, but I cannot access it from outside.
Anyone know why?
It looks like you were hit with a bit of container inception. The template you are using sets the hostname of the HandleHttpRequest processor to "localhost". Accordingly, it will only accept requests on the loopback interface inside the container instance.
You will need to remove this from your template instance so that the processor binds to all interfaces, allowing the port forwarding configured via the docker command arguments to work as expected.

Docker container can't connect to host application using IP whitelist

I have an application running on my host with the following behavior: it listens on port 4001 (configurable) and only accepts connections from a whitelist of trusted IP addresses (127.0.0.1 only by default; other addresses can be added, but one by one, not with a mask).
(It's the interactive brokers gateway application which is run in java but I don't think that's important)
I have another application running inside a docker container which needs to connect to the host application.
(It's a python application accessing the IB API, but again I don't think that matters)
Ultimately I will have multiple containers on multiple machines trying to do the same thing, but I can't even get it working with one running on the same machine.
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(No response from IB Gateway on host machine)
Ideally I'd be able to set up the Docker containers/bridge so that all the containers appear to come from one specific IP address, add that address to the whitelist, and voilà.
What I've tried:
1) using -p and EXPOSE
sudo docker run -t -p 4001:4001 myimage
Bind for 0.0.0.0:4001 failed: port is already allocated.
(No response from gateway)
This either doesn't work or leads to a "port already in use" conflict. I gather that these settings are designed for the opposite problem (the host can't see a particular port on the container).
2) setting --net=host
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
This should work since the docker container should now look like it's 127.0.0.1... but it doesn't.
3) setting --net=host and adding the local host's real IP address 192.168.0.12 (as suggested in comments) to the whitelist
sudo docker run -t --net=host myimage
Exception caught while reading socket - Connection reset by peer
(no response from gateway)
4) adding 172.17.0.1, ...2, ...3 to the whitelist on the host application (the bridge network is 172.17.0.0/16 and subsequent containers get allocated addresses in this range)
sudo docker run -t myimage
Error: Couldn't connect to TWS. Confirm that "Enable ActiveX and Socket Clients" is enabled on the TWS "Configure->API" menu.
(no response from host)
This is horribly hacky but doesn't work either.
PS Note this is different from the problem of trying to run the host application IB Gateway inside a container - I am not doing that.
I don't want to run the host application inside another container, although in some ways that might be a neater solution.
Running the IB gateway is tricky on a number of different levels, including connecting to it, and especially if you want to automate the process.
We took a close look at connecting to it from other IPs and finally gave up; a gateway bug, as far as we could tell. There is a setting to whitelist the IPs that may connect to the gateway, but it does not work and cannot be scripted.
In our build process we create a docker base image, then add the gateway and any/all of the gateway's clients to that image. Then we run that final image.
(Posted on behalf of the OP).
Set --net=host and change the port from 4001 so it doesn't conflict with a live version of the gateway on the same network. The only IP address required in the whitelist is 127.0.0.1.
sudo docker run -t --net=host myimage
Use socat to forward the gateway's port to a new port that can listen on any address. For example, set the gateway to listen on port 4002 (localhost only) and, in the container, use the command
socat tcp-listen:4001,reuseaddr,fork tcp:localhost:4002
to forward it to port 4001.
Then you can connect to the gateway from outside the container on port 4001 when running the container with the parameter -p 4001:4001.
In case this is useful for someone else: I tried a couple of the suggestions posted here to connect from my Python app running in a Docker container to a TWS IBGateway instance running on another server, and none of them were 100% working. The socat option was connecting, but the connection was then being dropped due to an issue with the socat buffer that we couldn't fix.
The solution we found was to create an SSH tunnel from the machine running the Docker container to the machine running the TWS IBGateway:
ssh -i ib-gateway.pem <ib-gateway-server-user>@<ib-gateway-server-ip> -f -N -L 4002:127.0.0.1:4001
After you establish this SSH tunnel, you can test it by running
telnet 127.0.0.1 4002
If this command runs successfully, your SSH tunnel is ready. The next step is to configure your Python application to connect to 127.0.0.1 on port 4002 and start your Docker container with --net=host so it can reach the SSH tunnel running on the Docker host machine.
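The telnet check can also be done from the Python side. A small helper, sketched here with the host/port values mirroring the answer's example, that reports whether anything is listening on the tunnel's local end:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) completes,
    i.e. something (such as the ssh tunnel's local end) is listening there."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Mirrors "telnet 127.0.0.1 4002" from the answer:
print("tunnel ready:", port_open("127.0.0.1", 4002))
```

Running this inside the app's startup path gives a clearer failure message than letting the IB client library time out on its own.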
