I run a docker container with the following command:
docker run -d --name frontend_service -net host --publish=3001:3000 frontend_service
As I understand it, this maps host port 3001 to container port 3000.
I already ssh'd into the container and checked curl localhost:3000, which works. But outside, on the host, I can't curl localhost:3001.
I checked with nmap. The port is open:
nmap -v -sT localhost
Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-19 01:24 UTC
Initiating Connect Scan at 01:24
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 5051/tcp on 127.0.0.1
Discovered open port 3001/tcp on 127.0.0.1
Completed Connect Scan at 01:24, 0.06s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0011s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
3001/tcp open nessus
5051/tcp open ida-agent
How can I connect the container port to my host port?
When you specify --net=host, you are completely turning off Docker's network setup steps. The container won't get its own network namespace, won't get its own interfaces, and the port publishing system will have nothing to route to.
If you want your -p 3001:3000 to work, don't use --net=host.
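For example, dropping the host-network flag and keeping the publish option should make the mapping work (a sketch, assuming the same image and container name as in the question):
# publish container port 3000 as host port 3001 via the default bridge network
docker run -d --name frontend_service -p 3001:3000 frontend_service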
My setup is:
Debian, Docker
Host machine running Protonmail Bridge as a service
Docker container running Discourse with their default recommended setup
Issue: From the Docker container, I cannot connect to the SMTP server exposed by the Protonmail Bridge on the host machine.
I checked open ports on the host machine, all good:
ss -plnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:1025 0.0.0.0:* users:(("proton-bridge",pid=953,fd=12))
How I test
Host machine:
openssl s_client -connect 127.0.0.1:1025 -starttls smtp
Works.
Docker container:
openssl s_client -connect 172.17.0.1:1025 -starttls smtp
Connection refused.
I’m wondering if the Protonmail Bridge service that’s listening on 127.0.0.1:1025 is not accepting connections from the Docker container because they are not coming from 127.0.0.1 exactly? If this is the problem, how to validate and fix? If this is not the problem, what am I doing wrong?
Other tests
nmap 127.0.0.1 on the host machine outputs:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
1025/tcp open NFS-or-IIS
1042/tcp open afrog
Note that it lists the open port 1025.
nmap 172.17.0.1 in the Docker container does not show port 1025 at all. I'm not sure if this is the problem either.
Output of route in the Docker container:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
This may be impossible currently, but should be solved by this pull request.
If you're comfortable compiling the proton-bridge package from source, you only have to change one line in the internal/bridge/constants.go file, from
Host = "127.0.0.1"
to
Host = "0.0.0.0"
Then recompile with make build-nogui (to build the "headless" version).
And you should be good to go!
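For reference, the rebuild and the check from the container might look something like this (a sketch; the make target and the file path come from the answer above, the rest is assumed):
# after editing internal/bridge/constants.go, rebuild the headless bridge
make build-nogui
# from inside the Discourse container, the bridge should now answer on the
# docker0 gateway address
openssl s_client -connect 172.17.0.1:1025 -starttls smtp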
I am running the container and mapping the port like so:
docker run -d --expose 4242 -p 4242:4242 42wim/matterbridge:stable --debug
I've created a firewall rule to allow TCP connections over port 4242 to my VM. When I send an HTTP request to the public IP of my VM, the connection is refused:
http://{public-ip}:4242/api/messages
However, if I open a shell into the container and curl the path, I get the expected response: curl localhost:4242/api/messages
What is the correct way to map TCP requests on port 4242 from my host to my container? I'm running an Ubuntu VM on GCP that hosts my Docker container.
Update: if I use docker run --network="host", I can curl from the host to the Docker container with curl localhost:4242/api/messages and get the expected response. Yet when I make the same curl request against the public IP, the connection is still refused.
If I run ss -na | grep :4242:
tcp LISTEN 0 4096 127.0.0.1:4242 0.0.0.0:*
it shows it's listening. Is there additional mapping I need to do? I have validated from the Google firewall logs that it is allowing and forwarding TCP connections on port 4242 to the VM.
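One way to narrow this down (a diagnostic sketch, not from the original question) is to check which address the listener on 4242 is actually bound to, and to test against the VM's own interface before involving the public IP:
# on the VM: 0.0.0.0:4242 is reachable externally, 127.0.0.1:4242 is loopback only
ss -plnt | grep 4242
# then test against the VM's primary interface address instead of localhost
curl http://$(hostname -I | awk '{print $1}'):4242/api/messages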
I want to use the docker eclipse-mosquitto image just for communication on a local machine. Which settings do I need in mosquitto.conf to make the mosquitto broker visible only on localhost and not from outside? Since a second mosquitto is already running, port 1883 is taken and I'm using port 1884.
This is what I have:
port 1884
bind_address 127.0.0.1
With this, the broker is still visible from outside.
port 1884
bind_address localhost
gives error Error: Address not available.
Binding to the Docker IP:
port 1884
bind_address 172.17.0.1
gives error Error: Address not available.
What can I do?
Your answer is the wrong approach; you should only really be using --network="host" for things that need to open raw sockets or receive broadcast messages from the local network.
The correct answer is to not use the bind_address option in the mosquitto.conf file and use the docker -p option to do the port mapping correctly (docs).
e.g.
docker run --rm -p 127.0.0.1:1884:1884/tcp eclipse-mosquitto
Here the -p 127.0.0.1:1884:1884 maps port 1884 in the container to port 1884 bound to the loopback ip (127.0.0.1) on the host.
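A minimal mosquitto.conf sketch for this setup (assuming the same port 1884 as above; no bind_address, since the restriction to localhost comes from the -p binding on the host side):
# listen on 1884 on all interfaces inside the container; the host-side
# -p 127.0.0.1:1884:1884 keeps it reachable from localhost only
listener 1884
# assumption: clients connect without credentials (must be set explicitly in mosquitto 2.x)
allow_anonymous true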
Ok, solved it myself:
Running docker with the additional option --network="host" and then in mosquitto.conf:
port 1884
bind_address 127.0.0.1
does the job.
How can I open specific ports in order to use an SDK for a project?
I have already tried netcat, but it seems that you can only listen on a specific port, or open a specific port if you have a hosting website.
To open a port and keep listening on it, on macOS this should work:
nc -lk 8080
To test, you can connect to the opened port by doing:
nc -vt 0 8080
To use UDP, you just need to use option -u, for example:
nc -u -lk 8080
To test, you can connect with:
nc -u -vt 0 8080
Output:
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif (null)
src 127.0.0.1 port 63214
dst 127.0.0.1 port 8080
rank info not available
Connection to 0 port 8080 [udp/http-alt] succeeded!
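To actually push a test message through the UDP listener above (a small usage sketch, not from the original answer):
# send one datagram to the listener started with nc -u -lk 8080
echo "hello" | nc -u -w1 127.0.0.1 8080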
I have installed a fresh CentOS 7 box on VirtualBox running with a bridged network adapter. I have installed Ruby on Rails and set up a simple app. I started the server on port 3000, but when I try to reach it from my host machine by hitting the IP, I get no response.
On the server I can do a
wget "http://127.0.0.1:3000"
and I get the right index.html file. So I figured my port was getting blocked.
So I installed firewalld and issued the following commands
sudo firewall-cmd --zone=public --add-port=3000/tcp --permanent
sudo firewall-cmd --reload
firewall-cmd --list-all
The list-all output shows the following:
public (default, active)
interfaces: enp0s3
sources:
services: dhcpv6-client ssh
ports: 3000/tcp 80/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
However, when I run nmap I see it's closed:
sudo nmap -p 3000 127.0.0.1
Starting Nmap 6.40 ( http://nmap.org ) at 2016-11-26 01:47 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000071s latency).
PORT STATE SERVICE
3000/tcp closed ppp
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
From my host machine I can ping the machine, but when I nmap port 3000 it says the host is unreachable.
I don't know how to go any further. Any thoughts?
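One thing worth checking (a diagnostic sketch, not part of the original question) is whether the Rails server is bound to 127.0.0.1 only, which would explain why wget against 127.0.0.1 works while the bridged interface gets nothing even with the firewall port open:
# show which address the process on port 3000 is actually listening on
ss -plnt | grep 3000
# if it shows 127.0.0.1:3000, restart the dev server bound to all interfaces
rails server -b 0.0.0.0 -p 3000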