I have lots of PPPoE accounts and want to build a small spider network with them.
So I want to use Docker to virtualize multiple CentOS machines and do PPPoE dialup inside them.
My machine has two adapters: em1 for PPPoE dialup, and em2 with a static IP address. When I run a container on the default bridge network, it uses em2 and can access the Internet.
I have tried macvlan:
docker network create -d macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.1 -o parent=em1 -o macvlan_mode=bridge pppoe
and host mode:
docker run --net=host --cap-add=NET_ADMIN -it --rm pppoe
Nothing seems to work...
How can I dial up inside the containers and bind them to em1?
The PPPoE dialup failed because the container can't access the /dev/ppp device. You can fix this by running the container with:
--privileged --cap-add=NET_ADMIN
I just solved this problem yesterday: I created an OpenWRT 18.06.2 container to serve as the primary router for my home LAN, using macvlan to create the WAN network.
The main problem is that the pppoe kernel module is not loaded on the host side, so on the container (OpenWRT) side you will see error messages like "/dev/ppp doesn't exist, create it by mknod /dev/ppp ...". After you create /dev/ppp as instructed, the problem is solved, but only temporarily: after you reboot the system, you have to create /dev/ppp again.
To solve this problem permanently, just load the pppoe module at boot time on the host side:
echo pppoe >> /etc/modules
Then /dev/ppp will be created automatically on the container (OpenWRT) side.
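As a sketch, the host-side setup might look like this (the /etc/modules path is the Debian/Armbian convention used above; adjust for other distributions):

```shell
# Load the pppoe module immediately, without rebooting (host side)
modprobe pppoe

# Make the module load persistent across reboots
echo pppoe >> /etc/modules

# Verify: the module is loaded and the device node exists
lsmod | grep pppoe
ls -l /dev/ppp
```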
Tested in the following environment:
hardware: Phicomm N1
host os: armbian_5.60_aml-s9xxx_debian_stretch_default_4.18.7
container: openwrt-18.06.2-armvirt-64-default-rootfs.tar.gz
I have set up a new Ubuntu 22.04.1 server with Docker version 20.10.21, using Docker images built from the exact same Dockerfiles that work without any problems on another Ubuntu server (20.04, though).
On the new Docker installation, I have problems reaching into the Docker containers, and I also cannot reach the outside world from within the containers.
For example, issuing this from a bash within the docker container:
# wget google.com
Resolving google.com (google.com)... 216.58.212.142, 2a00:1450:4001:82f::200e
Connecting to google.com (google.com)|216.58.212.142|:80...
That's all, it just hangs there forever. Doing the same in the other installation works just fine. So I suspect there is some significant difference between those installations, but I can't find out what it is.
I'm also running a reverse proxy docker container within the same docker network, and it cannot reach the app container in the broken environment. However, I feel that if I knew what blocks my outgoing requests, this would explain the other issues as well.
How can I find out what causes the docker container requests to be blocked?
This is my docker network setup:
Create the network
docker network create docker.mynet --driver bridge
Connect container #1
docker network connect docker.mynet container1
Run and connect container 2
docker run --name container2 -d -p 8485:8080 \
--network docker.mynet \
$IMAGE:$VERSION
Now
I can always wget outside from container1
I can wget outside from container2 on the old server, but not on the new one
It turned out that, while the default bridge network worked as expected, any user-defined network (although defined with the bridge driver) did not work at all:
requests from container to outside world not possible
requests into container not possible
requests between containers in the same network not possible
Because container1 was created first and then connected to the user-defined network, it was still connected to the default bridge, too, and thus was able to reach the outside, while container2 wasn't.
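One way to verify this (a sketch, using the container names from the question) is to list the networks each container is attached to:

```shell
# Print the names of all networks a container belongs to
docker inspect -f '{{range $name, $cfg := .NetworkSettings.Networks}}{{$name}} {{end}}' container1
# container1 should list both "bridge" and "docker.mynet";
# container2, started with --network, should list only "docker.mynet"
```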
The solution is actually in the Docker docs under Enable forwarding from Docker containers to the outside world:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I don't think I had to make these changes on my Ubuntu 20.04 server, but I'm not 100% sure. However, after applying these changes, the connection issues were resolved.
I'm still looking into how to make these configuration changes permanent (so they survive a reboot). Once I know, I'll update this answer.
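As a sketch of one way to persist both settings on Ubuntu (the file name and the iptables-persistent package are my assumptions, not from the original answer):

```shell
# Persist the sysctl setting in a drop-in file and reload
echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-docker-forward.conf
sudo sysctl --system

# Persist the iptables FORWARD policy across reboots
sudo apt-get install -y iptables-persistent
sudo iptables -P FORWARD ACCEPT
sudo netfilter-persistent save
```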
I am trying to connect and run a device (LiDAR) through a Docker container, since it needs Ubuntu 16 while my computer runs Ubuntu 20.
I got the device to ping inside the docker container, but it is not recognised when I try to use it.
What I did:
Made a Dockerfile with the requirements (added EXPOSE to expose all needed ports)
Built docker image using:
docker build -t test_lidar .
I then made a container using
docker run -d -P --name test_Lidar (imagename)
Then
docker exec -t test_Lidar ping (device_ip) works
I am able to ping my LiDAR IP inside the container, but when I do ip a I cannot see the interfaces connected to my machine.
Been stuck on this for 3 days, any suggestions?
Note: I have done the exact same steps on an Ubuntu 16 machine. The only change was that the docker run command had --net host instead of the -P flag, and my device worked perfectly. I feel like this is the root of my problem.
Use the --net host flag with docker run to attach the container to your host's networking stack and make it reachable from other hosts on your network.
When you use --net host, you actually attach the container to your host's networking stack. By default, containers are attached to the default network of type bridge and can communicate with each other; you can then reach them only from the host, using IP addresses typically in the subnet 172.17.0.0/16.
Using -P binds the ports exposed by the container to randomly selected free ports on your host. It should be used for exposing network services (e.g. a web server on port 80), but not for raw interface access or ICMP ping.
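For illustration (the nginx image here is my example, not from the question), -P only publishes ports the image EXPOSEs:

```shell
# Publish all EXPOSEd ports to random free high ports on the host
docker run -d -P --name web nginx

# Show which host port each exposed container port was mapped to
docker port web
# e.g. 80/tcp -> 0.0.0.0:49153
```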
I run RancherOS to host my Docker containers.
I created a container on the GUI to run my databases (image: mysql, name: r-mysql-e4e8df05). Different containers use it.
I can link other containers to it on the GUI
This time I would like to automate the creation and starting of a container from Jenkins, but the linking is not working well.
My command:
docker run -d --name=app-that-needs-mysql --link mysql:mysql myimages.mycompany.com/appthatneedsmysql
I get error:
Error response from daemon: Could not get container for mysql
I tried different things:
1)
--link r-mysql-e4e8df05:mysql
Error:
Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
2)
Try to use --net options
Running: docker network ls
NETWORK ID NAME DRIVER SCOPE
c..........e bridge bridge local
4..........c host host local
c..........a none null local
With --net none it succeeds, but it doesn't actually work: the app cannot connect to the DB.
With --net host, the error message is: conflicting options: host type networking can't be used with links. This would result in undefined behavior.
With --net bridge, the error message is: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
I also checked on the Rancher GUI where this mysql container runs:
It gets a container IP starting with 10.X.X.X.
I also tried to add --net managed, but got the error: network managed not found.
I believe I'm misunderstanding something in this Docker linking process. Please give me some idea of how I can make this work.
(Previously it worked when I created the same container and linked it to the mysql in the GUI.)
Hey @Tomi, you can expose the mysql container on whatever port you like from Rancher. That way you don't have to link the container; your Jenkins-spawned container can connect to it on the exposed port on the host. You could also use Jenkins to spin up the container within Rancher, using the Rancher CLI. That way you don't have to surface mysql on the host's network... a few ways to skin that cat with Rancher.
At first glance it seems that Rancher uses a managed network, which docker network ls does not show.
Reproducing the problem
I used dummy alpine containers to reproduce this:
# create some network
docker network create your_invisible_network
# run a container belonging to this network
docker container run \
--detach \
--name r-mysql-e4e8df05 \
--net your_invisible_network \
alpine tail -f /dev/null
# trying to link this container
docker container run \
--link r-mysql-e4e8df05:mysql \
alpine ping mysql
Indeed I get docker: Error response from daemon: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
Possible Solution
A workaround would be to create a user-defined bridge network and simply add your mysql container to it:
# create a network
docker network create \
--driver bridge \
a_workaround_network
# connect the mysql to this network (and alias it)
docker network connect \
--alias mysql \
a_workaround_network r-mysql-e4e8df05
# try to ping it using its alias
docker container run \
--net a_workaround_network \
alpine \
ping mysql
# yay!
PING mysql (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.135 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.084 ms
As you can see in the output pinging the mysql container via its DNS name is possible.
Good to know:
With user-created bridge networks, DNS resolution works out of the box, without having to explicitly --link containers :)
Containers can belong to several networks, which is why this works. In this case the mysql container belongs to both your_invisible_network and a_workaround_network
I hope this helps!
I have a program with two mandatory arguments, -d and -t, each of which binds to a specific network device (IP address), e.g. ./myprogram -d 172.17.0.2 -t 172.17.0.3, and they can't be the same.
Now I need to run this program in a Docker container. How should I configure the container so that, from the peer endpoint's point of view, running the program inside the container looks the same as running it on the host?
Thanks!
If your container needs to access your host's network devices, you need to share the host's networking stack:
docker run --net=host ...
Extract from
docs.docker.com/engine/reference/run/#ipc-settings---ipc
Network: host. With the network set to host, a container will share the host's network stack, and all interfaces from the host will be available to the container.
An example, extracted from this image that uses nethogs for network monitoring:
https://hub.docker.com/r/k3ck3c/nethogs/
docker run -it --net=host --rm k3ck3c/nethogs
I really don't understand what's going on here. I just simply want to perform a http request from inside one docker container, to another docker container, via the host, using the host's public ip, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
It's not possible; the request just times out. Ping is fine, though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly answer to the question but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead create an overlay network using docker swarm. You can find the full guide here but in essence you do the following:
# create the overlay network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# start container A
docker run -d --name=A --network=my-net producer:latest
# start container B
docker run -d --name=B --network=my-net consumer:latest
# magic has occurred
docker exec -it B /bin/bash
> curl A:3000  # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start scaling, etc.)
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explains why the Docker containers behave as described in the question.
Docker is there to provide lightweight isolation of the host's resources to one or several containers.
The Docker network is by default isolated from the host network and uses a bridge network (again, by default; you can also have an overlay network) for inter-container communication.
and how to fix the problem without docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
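Put together, a minimal sketch looks like the following (the alpine image and target port 3000 are my assumptions, matching the service in the question):

```shell
# Requires Docker Engine 20.10+ on Linux; host-gateway resolves to the host's IP
docker run --rm \
  --add-host=host.docker.internal:host-gateway \
  alpine \
  wget -qO- http://host.docker.internal:3000
```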
I had a similar problem. I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use Docker Compose. I wanted to use curl from cron to web from time to time to execute some PHP script in one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got host unreachable after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. It worked, but this is a hack, and as far as I know such manual updates are discouraged. You should use extra_hosts in Docker Compose instead, which requires an explicit IP address rather than a container name.
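As a sketch, the extra_hosts entry in a Compose file would look like this (the image name is hypothetical; note the IP must be a literal address, which is exactly the limitation mentioned above):

```yaml
services:
  cron:
    image: my-cron-image   # hypothetical image name
    extra_hosts:
      - "app1.example.com:1.2.3.4"
```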
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it, I promise to update this answer.
Finally I used curl capability to specify IP address of the server, and I pass domain name as a header in separate parameter:
curl -H 'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it works.
(Here web is the name of my nginx container.)