Why is it not possible to connect to my container from outside - docker

I'm trying to run a small Scala server on my computer. The code works well, so in my opinion the problem is on the Docker side.
Here you can see my Dockerfile:
FROM java:8-jdk-alpine
RUN apk add --update \
curl \
&& rm -rf /var/cache/apk/*
COPY ./target/scala-2.13/hello-world-assembly-1.0.jar /usr/app/
EXPOSE 8080
WORKDIR /usr/app
CMD ["java", "-jar", "hello-world-assembly-1.0.jar"]
Building command: docker build -t carloshn90/first-scala-server:latest .
Executing command: docker run -p 8080:8080 --name scala-server -it carloshn90/first-scala-server:latest
The problem is that a curl inside the container works well (docker exec scala-server curl localhost:8080), but the same request fails from outside.
(Screenshots omitted here: the container status, the successful curl inside the container, and the failing curl from outside.)
My Docker version is 19.03.08 and the operating system is macOS Catalina.
I would appreciate it if someone has an idea how to solve this problem.
-------- Solution --------
Maybe this information is useful for others. In my case the issue was that the local address was localhost instead of 0.0.0.0:
/usr/app # netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:http-alt 0.0.0.0:* LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ] STREAM CONNECTED 172119
The correct local address should be 0.0.0.0:
/usr/app # netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:http-alt 0.0.0.0:* LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ] STREAM CONNECTED 215241

Ensure that the app inside the container listens on the external interface, not only on localhost (127.0.0.1). This is typically done by listening on *:8080 or 0.0.0.0:8080.

Run this from outside the container:
curl -L http://localhost:8080
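The difference between the two netstat outputs above comes down to the bind address the server passes when it opens its socket. A minimal sketch with plain sockets (Python here for brevity; the Scala server would need the equivalent bind setting, and make_listener is just an illustrative helper name):

```python
import socket

def make_listener(host, port=0):
    """Open a TCP listening socket bound to the given host/interface.
    port=0 lets the OS pick a free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)
    return s

# Loopback only: reachable from inside the container, invisible to
# Docker's port mapping (the "localhost:http-alt" netstat case above).
loopback = make_listener("127.0.0.1")

# All interfaces: reachable through -p 8080:8080 from the host
# (the "0.0.0.0:http-alt" netstat case above).
anyaddr = make_listener("0.0.0.0")

print(loopback.getsockname()[0])  # 127.0.0.1
print(anyaddr.getsockname()[0])   # 0.0.0.0
```

Only the host string changes; everything else about the server stays the same.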

Related

Can't connect to docker container port of some images

I am trying to investigate an issue with a docker container. I lost a day thinking the problem was in my Node.js code (it runs a server I am trying to connect to). After investigating, I found something interesting.
For example, let's run a test docker image:
docker run -p 888:888 -it ubuntu:16 /bin/bash
After that, inside the container, install a simple server to listen on our port:
apt-get update
apt-get install -y netcat
nc -l 888
After that I try telnet localhost 888 from my host system and get telnet: connect to address 127.0.0.1: Connection refused. The same happens with the Node.js image.
But if I use, for example, the nginx container:
docker run -p 888:888 -it nginx /bin/bash
the connection succeeds:
$telnet 127.0.0.1 888
Trying 127.0.0.1...
Connected to localhost.
How is this possible? What am I missing? Why can I bind and use the port with nginx but not with other images?
When you run nc -l 888, you are creating a port that is listening explicitly for IPv4 connections. If we run ss -tln, we will see:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 1 0.0.0.0:888 0.0.0.0:*
When you run telnet localhost 888 on your host, there's a good chance it tries to connect to the IPv6 localhost address, ::1. This connection fails if you're trying to connect to an IPv4-only socket.
If you explicitly use the IPv4 loopback address by typing telnet 127.0.0.1 888, it should work as expected.
If you enable IPv6 support in nc by adding the -6 parameter:
nc -6 -l 888
then you get a socket that listens for both IPv4 and IPv6 connections:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 1 *:888 *:*
And if you attempt to connect to this socket using telnet localhost 888, it will work as expected (as will telnet 127.0.0.1 888).
Most programs (like nginx) open multi-protocol sockets by default, so this isn't normally an issue.
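The IPv4-vs-IPv6 behaviour described above is easy to reproduce with a small Python sketch (Python stands in for nc and telnet here): an AF_INET listener accepts an explicit 127.0.0.1 connection but refuses ::1, which is exactly what telnet localhost can run into.

```python
import socket

# IPv4-only listener, like `nc -l 888` inside the container.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))   # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# Connecting with an explicit IPv4 address succeeds...
v4 = socket.create_connection(("127.0.0.1", port))
print("IPv4 connect: ok")

# ...but "localhost" may resolve to ::1 first, and that IPv6 attempt is
# refused because nothing listens on the IPv6 loopback address.
try:
    v6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    v6.connect(("::1", port))
    print("IPv6 connect: ok")
except OSError:
    print("IPv6 connect: refused")
```

With a dual-stack listener (AF_INET6 with IPV6_V6ONLY cleared, which is what nginx effectively gives you) both attempts would succeed.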

Port mapping problems with VScode OSS running inside a docker container

I would like to run the VSCode OSS Web Server within a Docker Container, as described here: https://github.com/microsoft/vscode/wiki/How-to-Contribute#vs-code-for-the-web
The container is running, but the port mapping doesn't work. I run my image with
docker run -it -p 9888:9888 -p 5877:5877 vscode-server
but I get nothing from curl -I http://localhost:9888 on my machine. The VS Code server is running, but the mapping to the host does not work. I think the problem is the binding: it looks like the VS Code server binds to 127.0.0.1 when it should bind to 0.0.0.0:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:9888 0.0.0.0:* LISTEN 870/node
tcp 0 0 127.0.0.1:5877 0.0.0.0:* LISTEN 881/node
Can anybody help here?

How can I get my dockerized Python app to output on 2 separate ports?

I have a dockerized Python app that outputs data on port 8080 and port 8081.
I am running the code on a Ubuntu system.
$ docker version | grep Version
Version: 18.03.1-ce
The app responds on port 8080
$ curl -k localhost:8080 | tail -4
# TYPE hello_world_total counter
hello_world_total 3.0
# TYPE hello_world_created gauge
hello_world_created 1.5617357381235116e+09
The app returns an ERROR on port 8081
$ curl -k localhost:8081
curl: (56) Recv failure: Connection reset by peer
Although I am not familiar with netstat, I used it to check that ports 8080 and 8081 were both in the LISTEN state ...
root@1d1ac2974893:/# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1/python3
tcp 0 0 127.0.0.1:8081 0.0.0.0:* LISTEN 1/python3
tcp 0 0 172.17.0.2:58220 16.46.41.11:8080 TIME_WAIT -
tcp 0 0 172.17.0.2:58218 16.46.41.11:8080 TIME_WAIT -
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node PID/Program name Path
root@1d1ac2974893:/#
My Dockerfile looks as follows ...
$ cat Dockerfile
FROM python:3
RUN pip3 install prometheus_client
COPY sampleapp.py /src/sampleapp.py
EXPOSE 8081
CMD [ "python3", "/src/sampleapp.py" ]
When I run the application, I map both ports 8080 and 8081 from the Docker container to the same ports on the host as follows ...
$ docker run -p 8081:8081 -p 8080:8080 sampleapp
If I go into the container and repeat the above curl commands, they work as I expect.
root@1d1ac2974893:/# curl -k localhost:8081 | tail -4
# TYPE hello_world_total counter
hello_world_total 3.0
# TYPE hello_world_created gauge
hello_world_created 1.5617357381235116e+09
root@1d1ac2974893:/#
and
$ docker exec -it 1d1ac2974893 /bin/bash
root@1d1ac2974893:/# curl -k localhost:8081
Hello World
So the question is why the latter curl command does NOT work from the host system.
$ curl -k localhost:8081
curl: (56) Recv failure: Connection reset by peer
The solution was as follows.
Expose both ports in the Dockerfile
$ grep EXPOSE Dockerfile
EXPOSE 8080
EXPOSE 8081
Use 0.0.0.0 rather than 127.0.0.1
import http.server
from prometheus_client import start_http_server
from prometheus_client import Counter

HOST = '0.0.0.0'
HELLO_WORLD_PORT = 8080
HELLO_WORLD_METRICS_PORT = 8081

REQUESTS = Counter('hello_world_total', 'Hello World Requested')

class MyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        REQUESTS.inc()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World\n")

if __name__ == "__main__":
    start_http_server(HELLO_WORLD_METRICS_PORT)
    server = http.server.HTTPServer((HOST, HELLO_WORLD_PORT), MyHandler)
    server.serve_forever()
The container now gives the expected results when run from the host.
$ curl -k localhost:8080
Hello World
$
$ curl -k localhost:8081 | tail -4
...
# TYPE hello_world_total counter
hello_world_total 1.0
# TYPE hello_world_created gauge
hello_world_created 1.5619773258069074e+09
$
Xref: Docker Rails app fails to be served - curl: (56) Recv failure: Connection reset by peer, for details of a similar issue.

Process owner of a docker program

I have started an nginx container bound on the host network as follows:
docker run --rm -d --network host --name mynginx nginx
However, when querying process information with the ss command, it appears to be a plain nginx process rather than a docker process:
$ ss -tuap 'sport = :80'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 0.0.0.0:http 0.0.0.0:* users:(("nginx",pid=16563,fd=6),("nginx",pid=16524,fd=6))
why is that?
You configured the nginx process to run in the host networking namespace with --net host. In that mode you do not set up port forwarding from the host to the container network (e.g. -p 80:80). Had you done the port forwarding, you would see a docker process on the host forwarding to the same port in the container namespace for the nginx process.
Keep in mind that containers are a method to run an application with kernel options for things like namespacing, it is not a VM running under a separate OS, so you will see processes running and ports opened directly on the host.
Here's an example of what it would look like if you forwarded the port instead of using the host network namespace, and how you can also look at the network namespace inside the container:
$ docker run --rm -d -p 8000:80 --name mynginx nginx
d177bc43166ad59f5cdf578eca819737635c43b2204b2f75f2ba54dd5a9cffbb
$ sudo ss -tuap 'sport = :8000'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 :::8000 :::* users:(("docker-proxy",pid=25229,fd=4))
$ docker run -it --rm --net container:mynginx --pid container:mynginx nicolaka/netshoot ss -tuap 'sport = :80'
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:http *:* users:(("nginx",pid=1,fd=6))
The docker-proxy process there is the default way that docker forwards a port to the container.
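What docker-proxy does can be sketched as a tiny userspace TCP forwarder. This is a simplified illustration, not Docker's actual implementation (the real docker-proxy also handles UDP and hairpin NAT), and the names start_proxy and pump are made up for the sketch:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way between two sockets until EOF."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def start_proxy(target_host, target_port, listen_port=0):
    """Accept connections on the host side, like docker-proxy, and relay
    each one to the target (the container). Returns the listening port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)

    def accept_loop():
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((target_host, target_port))
            # Relay in both directions, one thread per direction.
            for a, b in ((client, upstream), (upstream, client)):
                threading.Thread(target=pump, args=(a, b), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

Connecting to the proxy's port then behaves as if you had connected to the target directly, which is why the host-side socket shows up under docker-proxy in ss rather than under the application.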
I am afraid there is some misunderstanding here about the so-called docker process.
First of all, the ss command doesn't show what kind of process it is. It may show the application name (nginx here), but we cannot conclude that it is a plain nginx process.
You could try pwdx nginx_pid. Also, each running container is a process that you can inspect with ps -ef on its host machine.
In short, use ps -ef | grep nginx and pwdx nginx_pid to find out what kind of process it is.

Application running within a docker container is not accessible?

I have created a docker image with all the setup to run my Django application.
Step 1:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
sidhartha03/django latest c4ba9ec8e613 About an hour ago 704 MB
Step 2:
docker run -i -t c4ba9ec8e613 /bin/bash
Step 3:
root@257f4e73ffa0:/# cd /home
Step 4: Activate the virtual env
root@257f4e73ffa0:/home# source my_env/bin/activate
Step 5:
root@257f4e73ffa0:/home# cd my_project_directory
Step 6: Gunicorn bind command to deploy the Django application
root@257f4e73ffa0:/home/my_project_directory# gunicorn OPC.wsgi:application --bind=0.0.0.0:8000 --daemon
Step 7: Restart Nginx
root@257f4e73ffa0:/home/my_project_directory# sudo service nginx restart
Step 8: Check whether the application is running
root@257f4e73ffa0:/home/my_project_directory# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 22/python
But the application is not accessible at 127.0.0.1:8000. I get the following in the browser:
This site can’t be reached
127.0.0.1 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
You should bind the container port where you run your gunicorn to the host. To do this, use the following command.
docker run -i -t -p 8000:8000 c4ba9ec8e613 /bin/bash
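With the port published, you can check reachability from the host before reaching for a browser. A small sketch (port_open is a hypothetical helper; 8000 is just this question's port):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `docker run -p 8000:8000 ...`, from the host:
# port_open("127.0.0.1", 8000) should be True once gunicorn is
# bound to 0.0.0.0:8000 inside the container.
```

If this returns False while curl works inside the container, the usual suspects are a missing -p mapping or the app binding to 127.0.0.1, as in the other questions above.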
