I'm on macOS and I have a service running on my machine on localhost:8000.
Now I want to launch a Docker container and hit this service from inside it.
I created a Docker bridge network and use it from inside the container, but it is not working.
Here are my steps:
My host ip:
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 98:01:a7:b0:2b:41
inet 192.168.0.70 netmask 0xffffff00 broadcast 192.168.0.255
media: autoselect
status: active
I hit my service from host:
curl localhost:8000 #this is working!
I create a bridge network and use it:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run --rm -it -v "$(pwd):/src" --network=dockernet qatests /bin/bash
From inside the container, I run curl, but it is not working:
curl 192.168.0.1:8000 #it's not working :-(
any ideas?
You don't need to create a new network; you can use the default one (bridge).
Just check which IP is associated with the docker0 interface on your host with ip or ifconfig (in my case it is 172.17.42.1), and use that IP from inside the container:
$ curl 172.17.42.1:8000
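To find that address on a Linux host, a quick check like this works (the exact address varies per installation):
ip -4 addr show docker0 | grep inet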
In the end, I discovered that if I ping my PC's IP, I can reach it even from inside the Docker container.
For convenience, I wrote a launch script which gets my current IP and launches the Docker container, making my IP accessible under the hostname "mymac".
So what I run is:
MY_IP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}')
docker run --rm -it -v "$(pwd):/src" --add-host=mymac:$MY_IP qatests /bin/bash
Inside the container I can now run:
curl mymac:8000 # it works! mymac now resolves to my Mac outside Docker
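The --add-host flag simply writes that mapping into the container's /etc/hosts, so it can be double-checked from inside the container:
cat /etc/hosts    # should now contain a line mapping the host IP to mymac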
Related
When I start a simple docker container (e.g. Portainer) with
docker run -d --name portainer -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
the container is accessible from the internet as expected.
When I stop (docker stop portainer) and start (docker start portainer) the container, port 9000 is open again (verified with nmap), but the Portainer web interface loads forever.
# first run
networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier configured
2 enp35s0 ether routable configured
3 enp36s0 ether no-carrier configuring
5 br-1815f2210327 bridge no-carrier configuring
6 br-7f9b2f2637a1 bridge no-carrier configuring
7 br-a9ae27884558 bridge no-carrier configuring
6552 br-39aac8ad8ef3 bridge routable configuring
6559 docker0 bridge no-carrier configuring
# next run
networkctl
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier configured
2 enp35s0 ether routable configured
3 enp36s0 ether no-carrier configuring
5 br-1815f2210327 bridge no-carrier configuring
6 br-7f9b2f2637a1 bridge no-carrier configuring
7 br-a9ae27884558 bridge no-carrier configuring
6552 br-39aac8ad8ef3 bridge no-carrier configuring
6559 docker0 bridge no-carrier configuring
I already tried different workarounds that I found on the internet, like
nano /etc/docker/daemon.json
{ "debug": true, "bip": "172.20.0.1/16" }
and this config file in various configurations
nano /etc/systemd/network/docker0.network
#[Match]
#Name=docker0
#[Network]
#IPForward=yes
#[Network]
#Address=172.17.0.1/16
#[Link]
#Unmanaged=yes
(Currently everything is commented out.)
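For illustration, the "leave docker0 alone" combination from that file, uncommented, would look roughly like this:
[Match]
Name=docker0
[Link]
Unmanaged=yes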
When I restart the docker daemon with
systemctl restart docker
and then start the docker container
docker start portainer
it's working fine again.
My system is a linux root server hosted by strato.de:
docker -v
Docker version 20.10.6, build 370c289
cat /etc/issue
Ubuntu 20.04.2 LTS
uname -r
5.4.0-73-generic
The problem occurs with all of my docker containers on that server.
I would be very grateful for any further tips.
UPDATE
Docker on Ubuntu doesn't connect to localhost
The solution mentioned there does not seem to work on my server with Ubuntu 20.04.
Yesterday I installed the same OS and docker version in a VM. Everything is working fine there.
My problem was that I could ping google.com only once from a container (docker run --rm alpine ping google.com); after exiting, it would not ping the next time I ran the same command. In ifconfig docker0 the inet address was gone after exiting the container, while the inet6 address was still there after running the command once.
When running networkctl status, the docker0 link was stuck at configuring.
This might do the trick:
The default Netplan config (/etc/netplan/01-netcfg.yaml) in my Ubuntu 22.04 server from Strato (a dedicated server) is:
network:
  version: 2
  ethernets:
    mainif:
      match:
        name: '*'
      dhcp4: yes
replace it with something like this:
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: yes
      dhcp6: no
    enp2s0:
      dhcp4: yes
      dhcp6: no
Apply the Netplan config:
sudo netplan try or sudo netplan apply
Restart Docker:
sudo systemctl restart docker
When you now run networkctl, the docker0 link should show up as unmanaged.
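A quick way to confirm is to list the link; the SETUP column should now read unmanaged instead of configuring:
networkctl list docker0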
For your interest:
I know this is not the best answer, but in my case I solved the problem by downgrading the OS on the root server :(
cat /etc/issue
Ubuntu 18.04.4 LTS
docker -v
Docker version 20.10.7, build f0df350
I am creating a container with the following command:
docker run -it -p 81:80 -p 3307:3306 --net mynet123 --ip 172.18.0.22 -v /opt/lampp/htdocs:/var/www/html lamp-setia bash
It fails with an error. Can someone share the solution?
Thanks in advance.
You can check what is holding the port by running:
lsof -i tcp:81
and
lsof -i tcp:3307
If necessary, you can kill that process with:
kill -9 [pid number]
After that, you can try to re-run that docker command.
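As a shortcut, lsof -t prints only the PID, so the lookup and the kill can be combined (add sudo if the process belongs to another user):
kill -9 $(lsof -t -i tcp:81)
kill -9 $(lsof -t -i tcp:3307)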
Another scenario that has the exact same error is when the IP address is already in use. In my setup, I had a network created like this:
docker network create --subnet 172.28.5.0/24 cluster-test-net
and I was trying to start my docker container as below:
docker run -d --name wildfly1 --ip 172.28.5.1 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
The reason I got the error was that Docker had already assigned the IP address 172.28.5.1 to the host itself. I noticed that when I ran ifconfig on my host and found this row in the result:
br-bb89994f6a73: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.28.5.1 netmask 255.255.255.0 broadcast 172.28.5.255
inet6 fe80::42:a2ff:fecd:81e9 prefixlen 64 scopeid 0x20<link>
ether 02:42:a2:cd:81:e9 txqueuelen 0 (Ethernet)
RX packets 4394 bytes 4695729 (4.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2342 bytes 175071 (175.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So I just fixed it by choosing a different IP address for my docker container:
docker run -d --name wildfly1 --ip 172.28.5.10 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
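Alternatively, creating the network with an explicit --gateway makes the reserved address visible up front (a sketch using the same subnet):
docker network create --subnet 172.28.5.0/24 --gateway 172.28.5.1 cluster-test-net
That way it is obvious that 172.28.5.1 belongs to the bridge and containers should use .2 and above.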
It seems that some other process is already holding the host ports that you are trying to map to the container. You may consider using netstat -aon to find out whether there are existing processes holding ports 81 and 3307 on the Docker host.
The port you have given in the docker run command might be assigned to some other process. Find out what is running there; if it is something unimportant, kill it, or proceed with other available ports.
This means another container is already using the IP address your container requested.
Stop all containers and then start your container again:
docker stop x
docker network connect --ip 172.24.0.4 yournetwork y
docker start y
docker start x
The start order will indicate which containers conflict.
Or use docker network inspect network_name to check whether the containers have the correct IPs.
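For example, something like this lists each container on the network together with its IPv4 address (assuming the network from the question, mynet123; the Go template just walks the Containers map):
docker network inspect mynet123 --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'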
I ran the Docker daemon to use global IPv6 addresses for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After that I ran a Docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container). But I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping the docker0 bridge by its IPv6 address.
The output of route -n -6 contains these lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it is the global IPv6 address of my eth0 interface.
Does Docker require any additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the Neighbor Discovery Protocol is not configured: other hosts do not know that your container sits behind your host. I do this after starting a container with IPv6 (on the host OS, in the ExecStartPost clauses of a systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format {{.NetworkSettings.GlobalIPv6Address}} container_name) dev interface_name
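To confirm the proxy entry got added, it can be listed afterwards with plain iproute2:
ip -6 neigh show proxy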
Beware of IPv6: Docker developers say in replies to bug reports that they do not have enough time to make IPv6 production-ready in version 1.10, and say nothing about 1.11.
Maybe you are using the wrong ping command. For IPv6 it is ping6:
$ ping6 2607:f0d0:1002:51::4
I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing a welcome page. I have added the IP routes needed.
Host is Ubuntu.
When this happened I thought it had something to do with the IP routes. So on the same system I installed Apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and served the correct documents.
So now I am stuck. What do I do now?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
    address 172.31.118.182
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 172.31.119.23
    netmask 255.255.255.0

auto eth1:1
iface eth1:1 inet static
    address 172.31.119.11
    netmask 255.255.255.0
It goes like this. The IPs are private IPs, so I guess there is no problem sharing them here.
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working it will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Here web1 and web2 become immediately accessible, but the containers bound to eth1 and eth1:1 are not. So I figured IP routes must be the issue and went ahead and added some routes.
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not the one on eth1:1. This is where I am stuck at the moment.
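For reference, the policy routing can be inspected like this (the table names assume the matching entries I added to /etc/iproute2/rt_tables):
ip rule show
ip route show table eth1
ip route show table eth11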
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (same command, with node002)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.3 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In Docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In Docker 1.9.0 I don't see anything being added to /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network...
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to represent the Docker host itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1; with bridge in Docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env|grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
You need to check the routes in your client container with ip route. Do you have a route for 172.18.0.2/16? If you only have a default route, it could try to send everything through the Docker host, and it might get confused and do masquerading as if it were talking with the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.
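For a quick check without attaching to the container, something like this can be used (assuming the iproute2 tools are present in the image):
docker exec node001 ip route
docker exec node001 ip route get 172.18.0.3   # shows which route would be used to reach node002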