I am setting up a blockchain network. One of my nodes is running on port 11628 of my machine, and another node is running inside a Docker container on port 11625. I want to access the node running on my machine from inside the Docker container. When I do
>curl localhost:11628
it works fine on the machine. I understand that I cannot execute this command inside the container, as localhost there would refer to the container itself.
When I execute
route
inside the Docker container, it gives this output:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
So I also tried curl 172.17.0.1:11628, but it gives
curl: (7) Failed to connect to 172.17.0.1 port 11628: Connection refused
What should I do now?
Is there a problem with how the port is exposed?
I started the container with the command
docker run -it --name container_name image
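One likely cause, assuming the node on the host is bound only to 127.0.0.1: connections arriving on the docker0 address 172.17.0.1 are refused because nothing is listening on that interface. A minimal sketch of a fix, assuming your node software can be told to listen on 0.0.0.0 and that you are on Docker 20.10 or newer (which supports the host-gateway alias):
# on the host: configure the node to listen on 0.0.0.0:11628 instead of 127.0.0.1:11628
# (the exact option depends on your node software)
# then start the container with a host alias that resolves to the host's gateway IP
docker run -it --add-host=host.docker.internal:host-gateway --name container_name image
# inside the container
curl http://host.docker.internal:11628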
My setup is:
Debian, Docker
Host machine running Protonmail Bridge as a service
Docker container running Discourse with their default recommended setup
Issue: From the Docker container, I cannot connect to the SMTP server exposed by the Protonmail Bridge on the host machine.
I checked open ports on the host machine, all good:
ss -plnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:1025 0.0.0.0:* users:(("proton-bridge",pid=953,fd=12))
How I test
Host machine:
openssl s_client -connect 127.0.0.1:1025 -starttls smtp
Works.
Docker container:
openssl s_client -connect 172.17.0.1:1025 -starttls smtp
Connection refused.
I'm wondering if the Protonmail Bridge service listening on 127.0.0.1:1025 is not accepting connections from the Docker container because they are not coming from 127.0.0.1 exactly. If this is the problem, how do I validate and fix it? If this is not the problem, what am I doing wrong?
Other tests
nmap 127.0.0.1 on the host machine outputs:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
1025/tcp open NFS-or-IIS
1042/tcp open afrog
Note that it lists the open port 1025.
nmap 172.17.0.1 in the Docker container does not list port 1025 at all. I'm not sure if this is the problem either.
Output of route in the Docker container:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
This may be impossible currently, but should be solved by this pull request.
If you're comfortable compiling the proton-bridge package from source, you only have to change one line in the internal/bridge/constants.go file, from
Host = "127.0.0.1"
to
Host = "0.0.0.0"
Then recompile with make build-nogui (to build the "headless" version).
And you should be good to go!
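If you would rather not rebuild the bridge, a possible workaround (a sketch, assuming socat is installed on the host and docker0 has the default 172.17.0.1 address) is to relay the port on the host so it becomes reachable from the bridge network:
# forward connections arriving on the docker0 address to the bridge listening on loopback
socat TCP-LISTEN:1025,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:1025
The container can then point its SMTP configuration at 172.17.0.1:1025.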
I am running the container and mapping the port like so:
docker run -d --expose 4242 -p 4242:4242 42wim/matterbridge:stable --debug
I've created a firewall rule that allows TCP connections over port 4242 to my VM. When I send an HTTP request to the public IP of my VM, the connection is refused:
http://{public-ip}:4242/api/messages
However, if I open a shell into the container and curl the path, I get the expected response: curl localhost:4242/api/messages
What is the correct way to map TCP requests on port 4242 from my host to my container? I'm running an Ubuntu VM on GCP that hosts my Docker container.
Update: if I use docker run --network="host", I can curl from the host to the Docker container with curl localhost:4242/api/messages and get the expected response. Yet when I make the same curl request with the public IP, the connection is still refused.
If I run ss -na | grep :4242
tcp LISTEN 0 4096 127.0.0.1:4242 0.0.0.0:*
it shows it is listening. Is there additional mapping I need to do? I have verified from the Google Cloud firewall logs that it is allowing and forwarding TCP connections on port 4242 to the VM.
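One likely explanation, given the ss output: the matterbridge API is bound to 127.0.0.1:4242 rather than to all interfaces, so neither Docker's port mapping nor external clients can reach it. A minimal sketch, assuming the api section of matterbridge.toml is where the bind address lives and that the image reads its config from /etc/matterbridge/matterbridge.toml (both are assumptions to verify against your setup):
# in matterbridge.toml, bind the API to all interfaces instead of loopback:
#   [api.myapi]
#   BindAddress="0.0.0.0:4242"
# then run with the original port mapping (no --network=host needed):
docker run -d -p 4242:4242 -v /path/to/matterbridge.toml:/etc/matterbridge/matterbridge.toml 42wim/matterbridge:stable --debug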
I'm trying to configure a static IP address for a docker machine.
Thanks to VonC's answer, I managed to get started.
However, I'm facing a problem: boot2docker seems to ignore the "route add default gw 192.168.0.1" line, no matter what.
To reproduce:
1. Create a new docker machine: OK
docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" Swarm-Worker1
2. Apply your batch: OK
dmvbf Swarm-Worker1 0 108
3. SSH into the machine and check that bootsync.sh is fine: OK
cat /var/lib/boot2docker/bootsync.sh
output:
kill $(more /var/run/udhcpc.eth0.pid)
ifconfig eth0 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add default gw 192.168.0.1
4. Exit SSH, restart the machine then regenerate certs: OK
docker-machine restart Swarm-Worker1
docker-machine regenerate-certs Swarm-Worker1
5. Check that the IP matches the desired one: OK
docker-machine env Swarm-Worker1
6. SSH into the machine and check its routes: KO
route -n
output:
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
7. Try to set the gateway manually and check if it works: OK
route add default gw 192.168.0.1
OR
ip route add default via 192.168.0.1
Does anyone know what's happening? Why would boot2docker ignore only the route instruction? How can I solve that?
P.S.: My docker-machines run Docker Engine - Community 18.09.6.
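One hedged thing to try (an assumption, not a confirmed boot2docker fix): bootsync.sh may run before eth0 has finished switching to the static address, in which case the route add fails at boot even though the same command succeeds later from an interactive shell. Retrying the route until it is accepted would test that theory; in /var/lib/boot2docker/bootsync.sh:
kill $(more /var/run/udhcpc.eth0.pid)
ifconfig eth0 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255 up
# keep retrying until the interface is ready and the default route is accepted
while ! route add default gw 192.168.0.1; do sleep 1; done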
I am new to Docker and have installed the latest version on my VM as per the official Docker docs, but when I execute "sudo docker run hello-world" I get the error message below.
Error message:
sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: http: error connecting to proxy http://127.0.0.1:3128/: dial tcp 127.0.0.1:3128: getsockopt: connection refused.
See 'docker run --help'.
Note: I have modified the Docker configuration file at /etc/default/docker and enabled the proxy setting as well.
/etc/default/docker:
# If you need Docker to use an HTTP proxy, it can also be specified here.
export HTTP_PROXY="http://127.0.0.1:3128/"
Please find below the output when I execute the route command on my VM.
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
10.0.2.0 * 255.255.255.0 U 1 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
Please give your valuable comments on this issue.
Thank you in advance!
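A hedged diagnosis: the daemon is being told to use a proxy at 127.0.0.1:3128, and the "connection refused" suggests nothing is listening there. If the real proxy runs elsewhere, point Docker at it; note also that on systemd-based distributions /etc/default/docker is not read by the daemon, and a systemd drop-in is used instead. A sketch (proxy.example.com is a placeholder for your actual proxy host):
# check whether anything is actually listening on the configured proxy port
ss -plnt | grep 3128
# on systemd-based systems, configure the daemon's proxy via a drop-in
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '%s\n' '[Service]' \
  'Environment="HTTP_PROXY=http://proxy.example.com:3128/"' \
  'Environment="HTTPS_PROXY=http://proxy.example.com:3128/"' \
  | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker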
I ran the Docker daemon with global IPv6 enabled for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After it I ran docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container), but I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping docker0 bridge by it's IPv6 address.
The output of route -n -6 contains the following lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it is the global IPv6 address of my eth0 interface.
Does Docker require any additional configuration to make my containers reachable via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not set up: other hosts do not know that your container sits behind your host. I do the following after starting a container with IPv6 (on the host OS, in ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
Beware of IPv6: in replies to bug reports, Docker developers say they do not have enough time to make IPv6 production-ready in version 1.10, and they say nothing about 1.11.
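For illustration, a sketch of how those lines might sit in a systemd unit; the unit name, the interface name eth0, and the container name my-container are placeholders, not taken from the question:
# /etc/systemd/system/my-container.service (hypothetical unit)
[Unit]
Description=my-container with IPv6 NDP proxying
Requires=docker.service
After=docker.service
[Service]
ExecStart=/usr/bin/docker start -a my-container
# allow the host to answer neighbor solicitations on behalf of the container's address
ExecStartPost=/usr/sbin/sysctl net.ipv6.conf.eth0.proxy_ndp=1
ExecStartPost=/bin/sh -c '/usr/bin/ip -6 neigh add proxy $(docker inspect --format "{{.NetworkSettings.GlobalIPv6Address}}" my-container) dev eth0'
ExecStop=/usr/bin/docker stop my-container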
Maybe you are using the wrong ping command. For IPv6 it's ping6:
$ ping6 2607:f0d0:1002:51::4