Not able to connect to docker port from host machine

I'm not able to access docker port 8080 from the host machine. We have a docker container with a React application. We are able to get the landing page from inside the container but not from the host.
From the container:
root@d4947f7b1710:/# wget localhost:8080
--2019-04-01 19:38:00-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1, ::1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 492 [text/html]
Saving to: 'index.html'
index.html 100%[===============================================================================================>] 492 --.-KB/s in 0s
2019-04-01 19:38:00 (49.5 MB/s) - 'index.html' saved [492/492]
From the host:
wget localhost:8000
--2019-04-01 19:38:59-- http://localhost:8000/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.
Also tried wget 0.0.0.0:8000 but got the same result.
The ports seem to be mapped correctly:
docker port d4947f7b1710
8080/tcp -> 0.0.0.0:8000
Command used to start the container:
docker run -d -p 8000:8080 <docker repo>:<version>

It might be that you inverted the ports, judging from the last part of your post.
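For reference, the -p flag reads hostPort:containerPort, so the order matters. A minimal sketch reusing the question's placeholders:

# -p <hostPort>:<containerPort> -- the host-side port comes first
docker run -d -p 8000:8080 <docker repo>:<version>
# the service is then reached from the host on the host-side port
wget -qO- http://localhost:8000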

Related

Understanding Docker overlay network

I am using an overlay network to deploy an application on multiple VMs on the same LAN. I am using nginx as the front end for this application and this is running on host_1. All the containers that are part of the application are communicating with each other without any issues. But HTTP requests to the published port 80 of the nginx container (mapped to port 8080 on host_1) from a different VM on the same LAN, say host_2, time out[1]. But HTTP requests to localhost:8080 on host_1 succeed[2]. If I start the nginx container by removing the overlay network, I am able to send HTTP requests[3].
Output of curl -vvv <host_1 IP>:8080 on host_2.
ubuntu@host_2:~$ curl -vvv <host_1 IP>:8080
Rebuilt URL to: <host_1 IP>:8080/
Trying <host_1 IP>...
TCP_NODELAY set
connect to <host_1 IP> port 8080 failed: Connection timed out
Failed to connect to <host_1 IP> port 8080: Connection timed out
Closing connection 0
curl: (7) Failed to connect to <host_1 IP> port 8080: Connection timed out
Output of curl localhost:8080 on host_1.
nginx welcome page
Output of curl -vvv <host_1 IP>:8080 on host_2 when I recreate the container without the overlay network:
nginx welcome page
The docker-compose file for the front end is as below:
version: '3'
services:
  nginx-frontend:
    hostname: nginx-frontend
    image: nginx
    ports: ['8080:80']
    restart: always
networks:
  default:
    external: {name: overlay-network}
I checked that nginx and the host are listening on 0.0.0.0:80 and 0.0.0.0:8080 respectively.
Since port 80 of the nginx container is published by mapping it to port 8080 of the host, I should be able to send HTTP requests from any VM on the same LAN as the host of this container. Can someone please explain what I am doing wrong, or where my assumptions are wrong?
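One way to double-check each hop on host_1 before suspecting the overlay network itself (a minimal sketch; ss and docker port are standard tools, but the container name here is an assumption):

# is anything bound to the published port on the host?
ss -tlnp | grep :8080
# does Docker agree on the mapping? (container name assumed)
docker port nginx-frontend 80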

Expose port of SSH tunnel that is running inside a docker container

Inside a docker container I create the following tunnel in an interactive shell:
ssh -4 root@remotehost.com -L 8443:127.0.0.1:80
In another shell on the same container I can successfully run the following:
curl http://localhost:8443
The server (remotehost.com) does respond with HTML content.
(Note: I'm using plain HTTP for now to make it easier to debug. In the end I need to use HTTPS; that's why I chose the local port to be 8443.)
This docker container does expose its port 8443:
# docker port be68e57bc3e0
8443/tcp -> 0.0.0.0:8443
But when I try to connect from the host to that port I get the following:
# curl --verbose http://localhost:8443
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8443 (#0)
> GET / HTTP/1.1
> Host: localhost:8443
> User-Agent: curl/7.64.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
* Closing connection 0
Here I'm lost. Why doesn't it behave exactly the same way as when connecting from inside the container? Am I misunderstanding something about SSH tunnels?
The solution was to add the -g flag to the ssh line that creates the tunnel. By default, ssh binds the local end of a -L forward to the loopback interface only, so the traffic Docker forwards in from the host arrives on the container's external interface and never reaches the tunnel.
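A minimal sketch of the two equivalent fixes, with the host and ports taken from the question:

# -g allows remote hosts to connect to locally forwarded ports
ssh -4 -g root@remotehost.com -L 8443:127.0.0.1:80
# or bind the forward explicitly to all interfaces
ssh -4 root@remotehost.com -L 0.0.0.0:8443:127.0.0.1:80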

Docker Kong admin API is unreachable

I've upgraded the Docker Kong image to version 0.14.0 and it stopped responding to connections from outside the container:
$ curl 127.0.0.1:8001 --trace-ascii dump.txt
== Info: Rebuilt URL to: 127.0.0.1:8001/
== Info: Trying 127.0.0.1...
== Info: Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
=> Send header, 78 bytes (0x4e)
0000: GET / HTTP/1.1
0010: Host: 127.0.0.1:8001
0026: User-Agent: curl/7.47.0
003f: Accept: */*
004c:
== Info: Recv failure: Connection reset by peer
== Info: Closing connection 0
The port mapping is:
0.0.0.0:8000-8001->8000-8001/tcp, 0.0.0.0:8443-8444->8443-8444/tcp
Everything is ok when trying to connect from inside the container:
/ # curl 127.0.0.1:8001
{"plugins":{"enabled_in_cluster":[], ...
Port 8000 is reachable from both outside and inside the container. What could the cause be?
I have encountered the same issue. The reason is that the Kong admin API is bound to the loopback address by default. I didn't have to modify the configuration file, because the Kong Docker image provides an environment variable for exposing the admin port:
KONG_ADMIN_LISTEN="0.0.0.0:8001, 0.0.0.0:8444 ssl"
This binds the admin ports to all interfaces, so they are reachable through the host machine's port mapping.
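A minimal sketch of starting the container with that variable (the image tag and port ranges are assumptions based on the question; other required Kong settings such as the database connection are omitted):

docker run -d --name kong \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
  -p 8000-8001:8000-8001 \
  -p 8443-8444:8443-8444 \
  kong:0.14.0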
The problem was in the binding of the admin server to localhost in /usr/local/kong/nginx-kong.conf
server {
server_name kong_admin;
listen 127.0.0.1:8001;
listen 127.0.0.1:8444 ssl;
...
I added the following to my custom entrypoint; it rewrites that binding just before starting nginx:
echo "Remove the admin API localhost binding..."
sed -i "s|^\s*listen 127.0.0.1:8001;|listen 0.0.0.0:8001;|g" /usr/local/kong/nginx-kong.conf && \
sed -i "s|^\s*listen 127.0.0.1:8444 ssl;|listen 0.0.0.0:8444 ssl;|g" /usr/local/kong/nginx-kong.conf
echo "Starting nginx $PREFIX..."
exec /usr/local/openresty/nginx/sbin/nginx \
-p $PREFIX \
-c nginx.conf
Of course, in production the admin ports must then be secured by some other means.

Can't make outbound connections from haproxy-exporter docker

I am using Docker for macOS, latest version (1.12.6), in particular the haproxy-exporter container (for Prometheus monitoring of haproxy).
It won't connect to my haproxy; I get timeouts. As a basic test I use telnet. When I get into the container and run telnet, I get:
/ # telnet MY_IP_ADDRESS 80
HTTP/1.0 408 Request Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
Connection closed by foreign host
If I do this from my Mac shell, it connects:
MacBook-Pro:~ icordoba$ telnet MY_IP_ADDRESS 80
Trying MY_IP_ADDRESS...
Connected to MY_IP_ADDRESS.
Escape character is '^]'.
^CConnection closed by foreign host.
It happens with some containers... this one is https://github.com/prometheus/haproxy_exporter
Thanks for any idea about what I'm missing...
If you use the official haproxy image it listens on port 80, but in your case port 9101 has been exposed.
Try running the exporter with:
docker run -p 80:80 prom/haproxy-exporter -haproxy.scrape-uri="user:pass@haproxy.example.com/haproxy?stats;csv"
-p 80:80 publishes port 80 on the container host to port 80 in the container. Make sure the port you're using is free.
Then run telnet MY_IP_ADDRESS 80.
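Alternatively, since the exporter exposes 9101 by default, a sketch of publishing that port instead and probing it from the host (scrape URI reused from above; /metrics is the standard Prometheus exporter path):

docker run -d -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="user:pass@haproxy.example.com/haproxy?stats;csv"
curl http://localhost:9101/metrics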

Bridge docker container port to host port

I run a docker container with the following command:
docker run -d --name frontend_service --net host --publish=3001:3000 frontend_service
As I understand it, this maps local port 3001 to container port 3000.
I already ssh'd into the container and checked curl localhost:3000; it works. But outside, on the host, I can't curl localhost:3001.
I checked with nmap. The port is open:
nmap -v -sT localhost
Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-19 01:24 UTC
Initiating Connect Scan at 01:24
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 5051/tcp on 127.0.0.1
Discovered open port 3001/tcp on 127.0.0.1
Completed Connect Scan at 01:24, 0.06s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0011s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
3001/tcp open nessus
5051/tcp open ida-agent
How can I connect the container port to my host port?
When you specify --net=host, you are completely turning off Docker's network setup steps. The container won't get its own network namespace, won't get its own interfaces, and the port publishing system will have nothing to route to.
If you want your -p 3001:3000 to work, don't use --net=host.
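A minimal sketch of the corrected invocation, reusing the names from the question:

# drop --net host so the port mapping can take effect
docker run -d --name frontend_service -p 3001:3000 frontend_service
# now reachable from the host
curl localhost:3001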
