I have an Nginx Proxy Manager container, which proxies docker containers as well as some physical devices on the host's external network.
For NPM to get access to them, I've created a network:
sudo docker network create -d macvlan \
--subnet=192.168.0.0/23 \
--gateway=192.168.0.1 \
-o parent=enp2s0 \
npm
and added NPM to it with:
sudo docker network connect --ip 192.168.0.12 npm npm_nginxproxymanager_1
The issue with this is that after rebooting the host machine, the IP is not persistent.
NPM is still in that network, but for some reason the IP it gets is automatically assigned and becomes 192.168.0.1. How can I make the container's IP stay 192.168.0.12 after a reboot?
As I discussed before, you are already using the --ip network setting to set the IP.
To keep it persistent across sessions, you would need to add that docker network connect command to a .bashrc or .profile file, to be executed when you log in.
Or set it up as a service, like chung1905/docker-network-connector does.
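For example, a rough sketch of such a service (the unit name and ordering are assumptions; you may need to start it after whatever brings up the NPM container):
[Unit]
Description=Re-attach NPM container to the macvlan network with a static IP
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
# the leading "-" lets the unit carry on even if the container is not currently attached
ExecStartPre=-/usr/bin/docker network disconnect npm npm_nginxproxymanager_1
ExecStart=/usr/bin/docker network connect --ip 192.168.0.12 npm npm_nginxproxymanager_1
[Install]
WantedBy=multi-user.target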
I have set up a new Ubuntu 22.04.1 server with Docker version 20.10.21, using docker images from the exact same dockerfiles that work without any problems on another Ubuntu server (20.04 though).
In my new docker installation, I have problems reaching into the docker containers, and I can't reach the outside world from within the docker containers either.
For example, issuing this from a bash within the docker container:
# wget google.com
Resolving google.com (google.com)... 216.58.212.142, 2a00:1450:4001:82f::200e
Connecting to google.com (google.com)|216.58.212.142|:80...
That's all, it just hangs there forever. Doing the same in the other installation works just fine. So I suspect there is some significant difference between those installations, but I can't find out what it is.
I'm also running a reverse proxy docker container within the same docker network, and it cannot reach the app container in the broken environment. However, I feel that if I knew what blocks my outgoing requests, this would explain the other issues as well.
How can I find out what causes the docker container requests to be blocked?
This is my docker network setup:
Create the network
docker network create docker.mynet --driver bridge
Connect container #1
docker network connect docker.mynet container1
Run and connect container 2
docker run --name container2 -d -p 8485:8080 \
--network docker.mynet \
$IMAGE:$VERSION
Now
I can always wget outside from container1
I can wget outside from container2 on the old server, but not on the new one
It turned out that, while the default bridge worked as expected, any user-defined network (although defined with the bridge driver) did not work at all:
requests from container to outside world not possible
requests into container not possible
requests between containers in the same network not possible
Because container1 was created first and then connected to the user-defined network, it was still connected to the default bridge, too, and thus was able to connect to the outside while container2 wasn't.
The solution is actually in the Docker docs under Enable forwarding from Docker containers to the outside world:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I don't think I had to make these changes on my Ubuntu 20.04 server, but I'm not 100% sure. However, after applying these changes, the connection issues were resolved.
I'm still looking into how to make these configuration changes permanent (so they survive a reboot). Once I know, I'll update this answer.
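A likely approach (untested on this exact setup) is to drop the sysctl setting into /etc/sysctl.d/ and save the iptables policy with iptables-persistent:
# keep forwarding enabled across reboots (the file name is arbitrary)
echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-docker-forwarding.conf
# persist the FORWARD policy on Debian/Ubuntu
sudo apt-get install iptables-persistent
sudo iptables -P FORWARD ACCEPT
sudo netfilter-persistent save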
I am running devpi in a docker container like so:
[Unit]
Description=devpi docker-container
Requires=docker.service
After=docker.service
[Service]
Restart=always
RestartSec=3
ExecStart=/usr/bin/docker run --rm -p 3141:3141 --name devpi -v /devpi_data:/data -e DEVPI_PASSWORD='********' akailash/docker-devpi
ExecStop=/usr/bin/docker stop -t 2 devpi
[Install]
WantedBy=multi-user.target
It runs fine. I can access it via URL on the host as well as install packages from it as expected.
6f663ba131a1 akailash/docker-devpi "/docker-entrypoint.…" 3 hours ago Up 3 hours 0.0.0.0:3141->3141/tcp devpi
However, if I want to build another docker image that installs packages from this container, there is a ConnectTimeout. If I try a curl, the connection times out after a while.
I can do a pip install if I use the --net=host option as described in this issue. However, I don't want to have to use host networking. I have tried 0.0.0.0:3141 as well as 172.17.0.1:3141 and I have the same results. Adding --ip=0.0.0.0 in the docker daemon service doesn't work for me. How can I access the devpi container from another container without having to use --net=host every time?
If you don't want to use --net=host, then you need to open the ports on the machine that is running devpi to allow external clients to connect and use it.
The point is that when a container uses the host network, it takes the host's own IP address and can bind as many ports as it needs on that address; but if you are not using it, your computer acts as a router for the container, applying NAT that allows outgoing traffic to the internet but denies incoming traffic.
Because of that, if you don't want to use the host network, you have to modify the firewall to add a destination NAT rule and allow the traffic to reach the service.
There are some good examples of how to allow ports with iptables here.
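For instance, a DNAT rule along these lines should forward incoming traffic on port 3141 to the devpi container (172.17.0.2 is only an illustrative container IP; check the real one with docker inspect):
sudo iptables -t nat -A PREROUTING -p tcp --dport 3141 -j DNAT --to-destination 172.17.0.2:3141
sudo iptables -A FORWARD -p tcp -d 172.17.0.2 --dport 3141 -j ACCEPT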
Since I need access to devpi only while building the docker images required in my docker-compose file, I used host networking within the build context:
build:
  network: host
  context: .
  dockerfile: Dockerfile.local
This helps access devpi correctly.
I have lots of PPPoE accounts and want to build a small spider network with them.
So I want to use docker to virtualize multiple CentOS machines and do PPPoE dialup within them.
My machine has two adapters: em1 for PPPoE dialup, and em2 with a static IP address. When I run a container with the bridge network, it uses em2 and can access the Internet.
I have tried macvlan:
docker network create -d macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.1 -o parent=em1 -o macvlan_mode=bridge pppoe
and host mode:
docker run --net=host --cap-add=NET_ADMIN -it --rm pppoe
Nothing seems to work...
How can I dial up inside the containers and bind them to em1?
The PPPoE dialup failed because the container can't access the /dev/ppp device. You can fix this by running the container with:
--privileged --cap-add=NET_ADMIN
I just solved this problem yesterday: I created an OpenWRT 18.06.2 container to serve as the primary router for my home LAN, using macvlan to create the WAN network.
The main problem is that the pppoe module is not loaded on the host side, so on the container (OpenWRT) side you will see error messages like "/dev/ppp doesn't exist, create it by mknod /dev/ppp ...". After you create /dev/ppp as instructed, the problem is solved, but only temporarily: after you reboot the system, you have to create /dev/ppp again.
To solve this problem completely, just load the pppoe module at boot time on the host side:
echo pppoe >> /etc/modules
then /dev/ppp will be created automatically on the container (OpenWRT) side.
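If you don't want to reboot right away, loading the module manually on the host should have the same effect for the current session:
sudo modprobe pppoe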
Tested in the following environment:
hardware: Phicomm N1
host os: armbian_5.60_aml-s9xxx_debian_stretch_default_4.18.7
container: openwrt-18.06.2-armvirt-64-default-rootfs.tar.gz
I really don't understand what's going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# Create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# Start container A
docker run -d --name=A --network=my-net producer:latest
# Start container B
docker run -d --name=B --network=my-net consumer:latest
# Magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling, etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the docker containers behave as described in the question
Docker is there to provide a lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine now also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
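For example, something along these lines should work (curlimages/curl is only an illustrative image; any image with curl in it would do):
docker run --rm --add-host=host.docker.internal:host-gateway \
    curlimages/curl http://host.docker.internal:3000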
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some php script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I was always getting host unreachable after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked - but this is a hack, and as far as I know such manual updates are not encouraged. You should use extra_hosts in docker compose instead, which requires an explicit IP address rather than a container name to specify the address.
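For reference, an extra_hosts entry would look roughly like this in the compose file (the hard-coded IP is exactly the fragile part):
services:
  cron:
    extra_hosts:
      - "app1.example.com:1.2.3.4"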
I tried to use the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do it, I promise to update this answer.
Finally I used curl's ability to hit the server directly and pass the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work (here web is the name of my nginx container).
I'm trying to expose a docker container to the outside world, not just the host machine. I created the image from a base CentOS image, and it looks like this:
# install openssh server and ssh client
RUN yum install -y openssh-server
RUN yum install -y openssh-clients
RUN echo 'root:password' | chpasswd
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN sed -ri 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I run this image like so:
sudo docker run -d -P crystal/ssh
When I try to look at the container with sudo docker ps, I see Ports:
0.0.0.0:49154->22/tcp
If I ifconfig on the host machine (ubuntu), I see docker0 inet addr:172.17.42.1. I can ping this from my host machine, but not from any other machine. What am I doing wrong in setting up the container to look at the outside world? Thanks.
Edit:
I have tried inspecting the IPAddress of the container and I see IPAddress: 172.17.0.28, but I cannot ping that either...
If I try nmap, that seems to return the ports. So does that mean it is open and I should be able to ssh into it if I have ssh set up? Thanks.
nmap -p 49154 10.211.55.1 shows that the port is open with an unknown service.
I tried to ssh in by ssh -l root -p 49154 10.211.55.1 and I get
Read from socket failed: Connection reset by peer.
UPDATE
Your Dockerfile is wrong. Your sshd is not properly configured and does not start properly, and that's the reason why the container does not respond correctly on port 22. See the errors:
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
You need to generate host keys. This line will do the magic:
RUN ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key
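If sshd also complains about the RSA key, the same trick applies; on OpenSSH versions that support it, you can also generate every missing host key type at once:
RUN ssh-keygen -P "" -t rsa -f /etc/ssh/ssh_host_rsa_key
# or, alternatively, generate all default host key types in one go:
RUN ssh-keygen -A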
PREVIOUS ANSWER
You probably need to look up the IP address of the eth0 interface (which is accessible from the network) and connect to your container via that IP address. Traffic from/to the docker0 bridge should be forwarded to your eth interfaces by default.
Also, you'd better check whether you have IP forwarding enabled:
cat /proc/sys/net/ipv4/ip_forward
This command should return 1, otherwise you should execute:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
Q: Why can you connect to the container this way?
If you have IP forwarding enabled, packets incoming on the eth0 interface are forwarded to the virtual docker0 interface. Magic happens and the packet is received by the correct container. See Docker Advanced Networking for more details:
But docker0 is no ordinary interface. It is a virtual Ethernet bridge
that automatically forwards packets between any other network
interfaces that are attached to it. This lets containers communicate
both with the host machine and with each other. Every time Docker
creates a container, it creates a pair of “peer” interfaces that are
like opposite ends of a pipe — a packet sent on one will be received
on the other. It gives one of the peers to the container to become its
eth0 interface and keeps the other peer, with a unique name like
vethAQI2QT, out in the namespace of the host machine. By binding every
veth* interface to the docker0 bridge, Docker creates a virtual subnet
shared between the host machine and every Docker container.
You can't ping 172.17.42.1 from outside your host because it is a private IP, so it can only be reached from within the private network created by the host you run the docker container on: the virtual switch docker0 and the docker containers attached to the docker0 bridge via virtual interfaces...
Moreover, 172.17.42.1 is the IP of the docker0 bridge, not the IP of your docker instance. If you want to know the IP of the docker instance, you have to run ifconfig inside it, or you can use docker inspect.
I'm not an expert on port mapping, but to me that means that to reach the docker container on port 22 you have to connect to port 49154 on the host, and all the traffic will be forwarded.