Docker process opens TCP port, but connection is refused

How can I run a simple server listening on a port, inside a Docker container?
(In case it matters, this is a macOS 10.13.6 host.)
When I run a simple HTTP Server:
python3 -m http.server 8001
the process starts correctly, and it listens correctly on that port (confirmed with telnet localhost 8001).
When I run a Docker container, it also runs the process correctly, but now the connection is refused.
web-api/Dockerfile:
FROM python:3.7
CMD python3 -m http.server 8001
docker-compose.yaml:
version: '3'
services:
  web-api:
    hostname: web-api
    build:
      context: ./web-api/
      dockerfile: Dockerfile
    expose:
      - "8001"
    ports:
      - "8001:8001"
    networks:
      - api_network
networks:
  api_network:
    driver: bridge
When I start the container(s), the HTTP server is running:
$ docker-compose up --detach --build
Building web-api
[…]
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1db4cd1daaa4 ipsum_web-api "/bin/sh -c 'python3…" 8 minutes ago Up 8 minutes 0.0.0.0:8001->8001/tcp ipsum_web-api_1
$ docker-machine ssh lorem
docker@lorem:~$ ps -ef | grep http.server
root 12883 12829 0 04:40 ? 00:00:00 python3 -m http.server 8001
docker@lorem:~$ telnet localhost 8001
[connects successfully]
^D
docker@lorem:~$ exit
$ telnet localhost 8001
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
What is configured incorrectly here, such that the server works inside the Docker container, but I get Connection refused on port 8001 when connecting from outside the container on its exposed port?

Try telnet lorem 8001.
Your Python container's port 8001 is published on its Docker host, which is the machine you SSH into (lorem).
When you run docker, docker-compose or docker-machine commands, they use the value of DOCKER_HOST to figure out which host to connect to. I presume that your DOCKER_HOST points at the same IP as lorem.
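For example, since the container's port is published on the lorem machine rather than on the Mac itself, you can resolve that machine's address with the standard docker-machine ip command and point telnet at it (a minimal sketch, assuming the machine is still named lorem as in the transcript above):
$ telnet $(docker-machine ip lorem) 8001
If that connects, the port mapping is working and only the address you were connecting to from the Mac was wrong.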

Related

How to expose redis-server port started using "webdis docker image" to host machine

I want to monitor redis running in the webdis Docker container.
I use Telegraf, which collects redis stats, but Telegraf is installed on the host machine and it cannot connect to redis because redis runs inside Docker on port 6379.
I tried to map port 6379, on which redis runs inside Docker, to the host's port 6379 so that Telegraf could read the redis metrics, but Telegraf cannot connect because the connection breaks.
When I use telnet on the host, I get a "connection closed by foreign host" error.
telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Also, I am able to connect to the webdis port on the host machine, which runs on port 7379 inside the webdis container.
To start webdis I am using the following command: "docker run -d -p 8080:7379 -p 6379:6379 webdis"
Debugging further, I found that redis inside the webdis container is listening on interface 127.0.0.1:6379.
It should be listening on 0.0.0.0:6379 for the port mapping to work properly.
How can I make redis inside the webdis image listen on 0.0.0.0:6379?
Is there any other way I can monitor the redis server running inside the webdis container?
I tried to start redis-server inside the webdis container bound to 0.0.0.0 using the redis.conf file, but it still binds to 127.0.0.1.
To which Docker image are you referring? Is it this one? https://github.com/anapsix/docker-webdis/
If yes: its Dockerfile does not include redis itself, but its docker-compose.yaml includes a redis service. That service does not publish the ports you need in order to connect to redis from outside the container.
You need to change the redis service to the following:
...
  redis:
    image: 'anapsix/redis:latest'
    ports:
      - '6379:6379'
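After that change, recreating the stack and probing the mapped port from the host should confirm that redis is reachable (a quick check; the service name redis comes from that compose file):
$ docker-compose up -d
$ telnet 127.0.0.1 6379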
I had this problem recently and solved it.
webdis.Dockerfile
FROM nicolas/webdis:0.1.19
EXPOSE 6379
EXPOSE 7379
RUN sed -i "s/127.0.0.1/0.0.0.0/g" /etc/redis.conf
docker-compose.yaml
version: "3.8"
services:
webdis:
build:
context: .
dockerfile: webdis.Dockerfile
image: webdis_with_redis_expose
container_name: webdis
restart: unless-stopped
ports:
- "6379:6379"
- "7379:7379"
Then execute docker-compose up.
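To verify from the host that redis is now reachable on the mapped port (a quick check, assuming redis-cli is installed on the host; the telnet test from the question works as well):
$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG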

Accessing to gitlab docker container outputs connection refused

I have a docker container running this configuration for the gitlab-ce image:
version: "3"
services:
gitlab:
hostname: gitlab.mydomain.com
image: gitlab/gitlab-ce:latest
container_name: gitlab
restart: always
ports:
- 3000:80
volumes:
- /opt/gitlab/config:/etc/gitlab
- /opt/gitlab/logs:/var/log/gitlab
- /opt/gitlab/data:/var/opt/gitlab
networks:
default:
external:
name: custom_network
When running docker ps I see my container up and running, with container port 80 mapped to host port 3000 as intended.
Although when running wget -O- https://172.25.0.2:3000 I get this error message:
Connecting to 172.25.0.2:3000... failed: Connection refused.
When you map a port, you reach the mapped port through the host's IPs, not the container's.
So if you want to reach port 80, use the container IP.
If you want to reach port 3000, use the host IP, localhost on the host itself, or any private interface the host has.
The command wget -O- https://172.25.0.2:3000 therefore talks to the container directly, not through the mapped port, and asks for a service listening on port 3000 inside the container; nothing listens there, so the result is connection refused.
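Concretely, either of the following should work instead (a sketch; it assumes GitLab is serving plain HTTP on container port 80, as the 3000:80 mapping suggests, so the scheme is http rather than https):
wget -O- http://localhost:3000   # host side, through the published port
wget -O- http://172.25.0.2:80    # container IP, container port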

Docker container not accessible through localhost, but accessible through 127.0.0.1

Problem
I have a Docker service container exposed to *:8080.
I cannot access the container through localhost:8080. Chrome / curl hangs up indefinitely.
But I can access the container if I use any other local IP, such as 127.0.0.1
This is tripping me up because in my hosts file, localhost redirects to 127.0.0.1.
Why is this happening? And is it IPv4/IPv6 dual-stack related somehow?
Environment
I am on PopOS (Ubuntu-based), with Docker Swarm enabled.
I am using this test stack file, traefik.docker-compose.yml:
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
To run the stack, I use:
docker stack deploy -c traefik.docker-compose.yml traefik
Once the service is up, I confirm that it is listening on 8080 in two ways:
docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
4ejfsvenij3p traefik_reverse-proxy replicated 1/1 traefik:latest *:80->80/tcp, *:8080->8080/tcp
sudo ss -pnlt | grep 8080
LISTEN 3 128 *:8080 *:* users:(("dockerd",pid=2119,fd=45))
For reference, here is the contents of my /etc/hosts:
127.0.0.1 localhost
::1 localhost ipv6-localhost
127.0.1.1 pop-os.localdomain pop-os pop1810x220
I just use curl for the tests:
Works: curl http://127.0.0.1:8080
Hangs up: curl http://localhost:8080
Try editing your /etc/hosts (the one shown above) to remove the second 'localhost' reference, the one that follows '::1' on the second line, so that the line reads:
::1 ipv6-localhost
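To confirm that this is a dual-stack resolution issue before editing anything, you can force the address family with curl's standard -4 and -6 flags (a quick diagnostic):
curl -4 http://localhost:8080   # forces IPv4 (127.0.0.1), should respond like the working case
curl -6 http://localhost:8080   # forces IPv6 (::1), should reproduce the hang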

Docker Compose: Expose not working

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b1503d2e7c app_nginx "nginx -g 'daemon ..." 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp app_nginx_1
c9dd2231e554 app_web "/home/start.sh" 2 hours ago Up 2 hours 8000/tcp app_web_1
baad0fb1fabf app_gremlin "/start.sh" 2 hours ago Up 2 hours 8182/tcp app_gremlin_1
b663a5f026bc postgres:9.5.1 "docker-entrypoint..." 25 hours ago Up 2 hours 5432/tcp app_db_1
They all work fine:
app_nginx connects well with app_web
app_web connects well with postgres
Not working:
app_web is not able to connect with app_gremlin
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh
Errors:
Basically, I am not able to connect to the gremlin container from my app_web container.
All of the commands below were executed inside the app_web container.
curl:
root@49a8f08a7b82:/# curl 0.0.0.0:8182
curl: (7) Failed to connect to 0.0.0.0 port 8182: Connection refused
netstat
root@49a8f08a7b82:/# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:42681 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
udp 0 0 127.0.0.11:54232 0.0.0.0:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
nmap
root@49a8f08a7b82:/# nmap -p 8182 0.0.0.0
Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-22 09:28 UTC
Nmap scan report for 0.0.0.0
Host is up.
PORT STATE SERVICE
8182/tcp filtered vmware-fdm
Nmap done: 1 IP address (1 host up) scanned in 2.19 seconds
nslookup
root@88626de0c056:/# nslookup app_gremlin_1
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: app_gremlin_1
Address: 172.19.0.3
Experimenting:
For the gremlin container I added:
ports:
  - "8182:8182"
Then from the host I can connect to the gremlin container, BUT there is still no connection between the web and gremlin containers.
I am working on a minimal sample to recreate the issue; meanwhile, does anyone have an idea what the problem might be?
curl 0.0.0.0:8182
The 0.0.0.0 address is a wildcard that tells an app to listen on all network interfaces; you do not connect to it as a client. For container-to-container communication, you need:
containers on the same user-defined network (compose does this for you by default)
to connect to the name of the service (or container name)
to connect to the port inside the other container, not the published port.
In your case, the command should be:
curl http://gremlin:8182
Networking is namespaced for apps running inside containers, so each container gets its own loopback interface and its own IP address on a bridge network. Moving an app into containers therefore means you need to listen on 0.0.0.0 and connect to the bridge IP using DNS.
You should also remove links and depends_on from your compose file; they don't apply in version 3. Links have long since been deprecated in favor of shared networks, and depends_on doesn't work in swarm mode, and it probably doesn't do what you want anyway, since it never checked that the target app was running, only that the start of that container had been kicked off.
One last note: expose doesn't affect the ability of containers on common networks to communicate with each other, nor does it publish ports on the host. Expose simply sets metadata on the image, as documentation between the person creating the image and the person running it. Applications are not required to use that value, but it's a good habit to make your app default to it for the benefit of downstream users. Because of that role, unless you have another app that inspects the exposed port list, like a self-updating reverse proxy, there's no need to expose the port in the compose file unless you're handing the compose file to someone else who needs the documentation.
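As a quick way to run that check without manually shelling into the container, you can invoke curl through compose (assuming curl is available in the web image, as your transcript shows):
$ docker-compose exec web curl http://gremlin:8182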
There is no link configured in the docker-compose.yaml between web and gremlin. Try to use the following:
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    links:
      - gremlin
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh

How to setup xdebug and PhpStorm with docker for Windows (beta)

I'm a bit confused using Xdebug, Docker for Windows, and PhpStorm...
I have Xdebug configured in a container with PHP. Here is what appears in my php.ini from within this container :
xdebug.remote_enable=on
xdebug.remote_autostart=off
xdebug.idekey=PHPSTORM
xdebug.remote_port=9000
xdebug.remote_host=10.0.75.1
# xdebug.remote_connect_back=1
My Windows IP as seen by docker seems to be 10.0.75.1 (PHP shows 10.0.75.1 for $_SERVER['REMOTE_ADDR'] when visited from Windows). It's also the DockerNAT virtual device IP.
PHPStorm (on Windows) is listening to Xdebug on port 9000.
I have bound port 9000 to 9000 for this container:
web:
  build: php5
  ports:
    - "80:80"
    - "9000:9000"
Windows firewall is off. Still, PHPStorm doesn't get any incoming connection from this container.
Therefore I tried to telnet to it from different places:
When I run telnet 10.0.75.1 9000 from Windows, it connects successfully while PhpStorm is listening, and returns "Could not open connection to the host on port 9000: connect failed" when it is not. This makes perfect sense.
Same thing when I try from another computer on my local network: telnet 192.168.1.4 9000 works fine.
But from my Docker web container, even though I can successfully ping 10.0.75.1 and telnet to that IP on port 80 (it connects), port 9000 returns an error whether PhpStorm is listening or not:
root@fd60276df273:/var/www/html# telnet 10.0.75.1 9000
Trying 10.0.75.1...
telnet: Unable to connect to remote host: Connection timed out
I've tried changing the Xdebug port to other random numbers and it doesn't change anything...
Do you have any idea what could cause this issue?
I've finally got it to work! The key was to set network_mode to host:
https://docs.docker.com/compose/compose-file/
docker-compose.yml:
version: '2'
services:
  web:
    build: php5
    ports:
      - "80:80"
    #links:
    #  - db:db
    network_mode: "host"
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
After trying this I noticed that my container had an interface with IP 192.168.65.2, so I telnetted to 192.168.65.1 9000 and it worked!
php.ini:
xdebug.idekey=PHPSTORM
xdebug.remote_port=9000
xdebug.remote_host=192.168.65.1
I've selected "Expose container ports on localhost" (a new option) in the Docker settings.
I can't use links any more because of the specified network_mode, so I've opened port 3306 and have to use 192.168.65.1 as the MySQL host. I will probably find a workaround for this, but for now it works!
Instead of using an IP address you can use a standard hostname defined by docker:
xdebug.remote_host=host.docker.internal
This will prevent cross-platform (mac/win) issues.
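To check from inside the container that the IDE is reachable under that hostname while PhpStorm is listening, you can reuse the telnet test from the question (a sketch; docker-compose exec addresses the service named web from the compose file above, and assumes telnet is still installed in that image):
$ docker-compose exec web telnet host.docker.internal 9000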
