Docker container extremely slow when network mode host - docker

I’ve had a working setup with a docker-compose and especially a wildfly image running in network mode = host.
Since the company stopped the internet connection, the startup of the container is extremely slow and ends with a timeout.
I found out that when I run the container in network mode = bridge, it is working just normally.
I tried with a docker-hub wildfly empty image to be sure the issue is not on my side and it’s the same problem.
It starts in 5 s in bridge mode and 33 s in host mode…
I use the command:
docker run --network host jboss/wildfly:18.0.1.Final
to start the container in network host mode.
My docker version is 19.03.15 and it’s running in a VM in bridge mode.
I need the network mode host because we access the containers from outside the VM and they need to communicate with each other.
I can’t use the internet anymore on the VM nor on the host machine because of the security policy of the company.
So I’m looking for a solution to still use network host without this incomprehensible slowness…
I’m not sure if it’s coming from wildfly or Docker itself?
Thanks in advance,
Loïc.

Related

Unable to make Docker container use OpenConnect VPN connection

I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request towards a server under VPN (e.g. server1.vpn-remote.com).
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error displayed in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, goes through my network directly.
Hat tip to this answer for guiding me down the correct path.
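Added example, not part of the thread above: the restart workaround works because the daemon snapshots the host resolvers when it starts, so a daemon booted before the VPN never learns the VPN's nameserver. The script below predicts what a default-bridge container will be handed from a given host /etc/resolv.conf (Docker drops loopback nameservers such as a systemd-resolved stub and, if nothing is left, substitutes 8.8.8.8/8.8.4.4) and emits the explicit daemon.json or `docker run --dns` settings that pin the VPN resolver instead of depending on daemon start order. The two resolv.conf samples, the 10.8.0.1 resolver and the vpn-remote.com search domain are constructed for the example.

```python
"""Predict a bridged container's resolvers from the host resolv.conf and emit
explicit DNS settings (added example; addresses and domains are constructed)."""
import ipaddress
import json

# Docker's documented fallback when every host nameserver is a loopback address
# (e.g. a systemd-resolved stub on 127.0.0.53), which a bridged container cannot reach.
DOCKER_FALLBACK_DNS = ["8.8.8.8", "8.8.4.4"]


def parse_resolv_conf(text):
    """Return (nameservers, search_domains) from resolv.conf text; comments ignored."""
    nameservers, search = [], []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].split(";", 1)[0].strip()
        if not line:
            continue
        key, *values = line.split()
        if key == "nameserver" and values:
            nameservers.append(values[0])
        elif key == "search":
            search.extend(values)
        elif key == "domain" and values:
            search.append(values[0])  # treated as a one-entry search list (simplification)
    return nameservers, search


def is_loopback(addr):
    try:
        return ipaddress.ip_address(addr).is_loopback
    except ValueError:
        return False


def container_resolvers(host_resolv_conf):
    """Resolvers a container on the default bridge is given: loopback servers are
    dropped, and if nothing is left Docker substitutes its public fallback."""
    nameservers, search = parse_resolv_conf(host_resolv_conf)
    usable = [ns for ns in nameservers if not is_loopback(ns)]
    return {
        "nameservers": usable or list(DOCKER_FALLBACK_DNS),
        "search": search,
        "fallback": not usable,
    }


def daemon_json(resolvers):
    """/etc/docker/daemon.json fragment pinning container DNS to explicit servers."""
    cfg = {"dns": resolvers["nameservers"]}
    if resolvers["search"]:
        cfg["dns-search"] = resolvers["search"]
    return json.dumps(cfg, indent=2)


def docker_run_dns_args(resolvers):
    """Per-container equivalent: extra arguments for `docker run`."""
    args = []
    for ns in resolvers["nameservers"]:
        args += ["--dns", ns]
    for domain in resolvers["search"]:
        args += ["--dns-search", domain]
    return args


# Constructed samples: the host resolv.conf before and after openconnect pushes
# the VPN resolver (addresses and domains are made up for this example).
RESOLV_STUB_ONLY = """# systemd-resolved stub (constructed)
nameserver 127.0.0.53
options edns0
search lan
"""
RESOLV_ON_VPN = """# written by openconnect (constructed)
nameserver 10.8.0.1
nameserver 127.0.0.53
search vpn-remote.com
"""

if __name__ == "__main__":
    for label, text in (("before VPN", RESOLV_STUB_ONLY), ("on VPN", RESOLV_ON_VPN)):
        r = container_resolvers(text)
        print(label, "->", r["nameservers"], "(Docker fallback)" if r["fallback"] else "")
    print(daemon_json(container_resolvers(RESOLV_ON_VPN)))
    print(" ".join(docker_run_dns_args(container_resolvers(RESOLV_ON_VPN))))
```

Pinning the VPN resolver in daemon.json applies to every container and only takes effect after the daemon is restarted, and it means container DNS fails whenever the VPN is down; the per-container `--dns` form (or `dns:` in a compose service) keeps the blast radius to the one service that actually needs the VPN names.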

Confused about container network isolation with Docker

I have Docker running on a couple of Ubuntu 18.04 vms and also on a Synology DS218+. There is inconsistent behaviour between the two with regards to the ability for containers to access other hosts on the local network. I am confused as to what exactly is happening here:
My method of testing this is to launch an interactive shell with bash into a container that has ping installed. From that I can test connectivity to other containers and to the docker hosts. The containers are using default bridge networks. I understand this should mean that they can only communicate with other containers that are on the same bridge network.
On the Synology, when I bash into a container, I am not able to ping the host that the machine is running on. That seems to be desired behaviour. However, if I try to ping other machines on the host network, the container receives a response. This shouldn't happen as the default bridge network is supposed to be isolated from the host network. I tried taking down all containers, stopping the docker daemon, flushing iptables and bringing everything back up again. Same result.
On the Ubuntu hosts, I can not only ping other machines on the host network, but the host that the container is running on.
Is there something I am missing here?
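Added example, not part of the question or of any answer on this page. The Ubuntu behaviour described above (containers on the default bridge reaching other machines on the LAN) is what a stock Docker install does: outbound traffic from the bridge subnet is source-NATed by a MASQUERADE rule in the nat table, so the bridge keeps the container subnet unroutable from the LAN but does not stop containers from initiating connections outward. The check below reads iptables-save output, reports which subnets are masqueraded, tells whether the DOCKER-USER chain (the chain Docker leaves for administrator rules and consults first in FORWARD) already blocks container-to-LAN traffic, and prints the rule that would. The iptables-save text, the docker0 bridge name and the 192.168.1.0/24 LAN are constructed assumptions; the example does not address the Synology difference, which the page leaves open.

```python
"""Explain and check container-to-LAN reachability from iptables-save output
(added example; the sample rule set and addresses are constructed)."""
import ipaddress


def parse_rules(iptables_save_text):
    """Return (table, chain, tokens) for every -A rule in iptables-save format."""
    table, rules = None, []
    for raw in iptables_save_text.splitlines():
        line = raw.strip()
        if line.startswith("*"):
            table = line[1:]
        elif line.startswith("-A "):
            tokens = line.split()
            rules.append((table, tokens[1], tokens[2:]))
    return rules


def option(tokens, flag):
    """Value after `flag`, or None if absent or negated with '!' (negated matches are ignored)."""
    for i, tok in enumerate(tokens):
        if tok == flag and i + 1 < len(tokens):
            return None if i > 0 and tokens[i - 1] == "!" else tokens[i + 1]
    return None


def masquerade_sources(rules):
    """Source ranges Docker SNATs on the way out: traffic from these container
    subnets reaches the LAN with the host's own address as source."""
    return [option(t, "-s") for table, chain, t in rules
            if table == "nat" and chain == "POSTROUTING"
            and option(t, "-j") == "MASQUERADE" and option(t, "-s")]


def lan_blocked_in_docker_user(rules, bridge_if, lan_cidr):
    """True if a DROP/REJECT in filter/DOCKER-USER covers bridge_if -> lan_cidr."""
    lan = ipaddress.ip_network(lan_cidr)
    for table, chain, t in rules:
        if table != "filter" or chain != "DOCKER-USER":
            continue
        if option(t, "-j") not in ("DROP", "REJECT"):
            continue
        in_if = option(t, "-i")
        if in_if not in (None, bridge_if):
            continue
        dst = option(t, "-d")
        if dst is None or lan.subnet_of(ipaddress.ip_network(dst)):
            return True
    return False


def suggested_rule(bridge_if, lan_cidr):
    """Block new connections from containers to the LAN while still letting replies
    to published ports through (those packets are ESTABLISHED, not NEW)."""
    return (f"iptables -I DOCKER-USER -i {bridge_if} -d {lan_cidr} "
            f"-m conntrack --ctstate NEW -j DROP")


def assess(iptables_save_text, bridge_if="docker0", lan_cidr="192.168.1.0/24"):
    rules = parse_rules(iptables_save_text)
    return {
        "masqueraded_subnets": masquerade_sources(rules),
        "lan_blocked": lan_blocked_in_docker_user(rules, bridge_if, lan_cidr),
        "fix": suggested_rule(bridge_if, lan_cidr),
    }


# Constructed, trimmed iptables-save of a stock Docker host (no admin rules yet).
SAMPLE_IPTABLES_SAVE = """\
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
COMMIT
*filter
:FORWARD DROP [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A DOCKER-USER -j RETURN
COMMIT
"""

if __name__ == "__main__":
    report = assess(SAMPLE_IPTABLES_SAVE)
    print("masqueraded container subnets:", report["masqueraded_subnets"])
    print("LAN already blocked in DOCKER-USER:", report["lan_blocked"])
    print("to block:", report["fix"])
```

Run it against real output with `assess(open("/tmp/rules.txt").read())` after `iptables-save > /tmp/rules.txt`. The rule is inserted in DOCKER-USER rather than FORWARD because Docker rewrites its own chains on restart and leaves DOCKER-USER alone; the `--ctstate NEW` match is what keeps published ports working for LAN clients after the block is in place.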

What's the host in Docker + Rancher context?

So I see host mentioned a few times in the docs. There's also networking_mode=host you can add in the yml file.
So what I assume the host is, is the machine the VM (Docker) is run on?
So if I set networking mode to host, the port mapping etc. will be handled on my local machine. Where in the yml I could do 3001:3000 that'll map port 3001 to the container port of 3000. With networking mode host that mapping will be handled on my local machine.
Now, when we're hosting containers on rancher. And we set the networking_mode=host. What's host in that context? Is it the VM or ec2 or whatever that is running my rancher? Or the VM/ec2 that's running my host stack?
I can't grasp it from the docs.
A container runs on a single server, a.k.a host, running Docker.
A host can be either a bare-metal server, a virtual machine running on your laptop or an EC2 instance.
Rancher itself is a container running on a host. Now when you build a cluster, you can add the host that's running the Rancher container or you can choose to keep things isolated and start adding totally different hosts.
If you choose networking_mode=host, the container is using the host networking stack and if you don't the container gets its own networking stack. When running in host networking mode, the application running inside the container binds directly to the host network interfaces, so there is no port mapping happening.
In case you are interested in more details, I have discussed a lot about networking in the first half of this talk: https://www.youtube.com/watch?v=GXq3FS8M_kw. Let me know if you have more questions.
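Added example, not part of the answer above: a small lint that computes what each compose service actually exposes on the host. In bridge mode the exposure is exactly the published ports, bound on every host interface unless the mapping names an address; under `network_mode: host` there is no mapping at all (Docker discards published ports with a warning) and every port the process binds is a port on the host, including anything it binds on the host's loopback. The service definitions, image names and port numbers are constructed; the parser covers the compose short syntax with IPv4 host addresses only.

```python
"""Compute effective host exposure per compose service and flag discarded port
mappings (added example; the sample services are constructed)."""

ALL_INTERFACES = "0.0.0.0"


def parse_port_mapping(spec):
    """Parse compose short syntax [[host_ip:]host_port:]container_port[/protocol].
    Assumption: IPv4 host addresses only (bracketed IPv6 is not handled)."""
    spec = str(spec)
    proto = "tcp"
    if "/" in spec:
        spec, proto = spec.rsplit("/", 1)
    parts = spec.split(":")
    if len(parts) == 1:       # "3000": ephemeral host port on all interfaces
        host_ip, host_port, container_port = ALL_INTERFACES, None, parts[0]
    elif len(parts) == 2:     # "3001:3000"
        host_ip, host_port, container_port = ALL_INTERFACES, parts[0], parts[1]
    elif len(parts) == 3:     # "127.0.0.1:3001:3000"
        host_ip, host_port, container_port = parts
    else:
        raise ValueError(f"unrecognised port mapping: {spec!r}")
    return {"host_ip": host_ip, "host_port": host_port,
            "container_port": container_port, "proto": proto}


def effective_exposure(services):
    """Return (exposure, findings).

    exposure maps a service name to its parsed port mappings, or to a string
    for host-mode services, where compose cannot know what gets bound.
    findings lists (service, message) pairs a reviewer would want to see."""
    exposure, findings = {}, []
    for name, svc in services.items():
        ports = svc.get("ports", [])
        if svc.get("network_mode") == "host":
            exposure[name] = "host network namespace (every port the process binds)"
            findings.append((name, "shares the host network namespace; "
                                   "can reach host-local services and bind any host port"))
            if ports:
                findings.append((name, "ports: are discarded under network_mode: host"))
            continue
        mappings = [parse_port_mapping(p) for p in ports]
        exposure[name] = mappings
        for m in mappings:
            if m["host_ip"] == ALL_INTERFACES:
                findings.append((name, f"{m['host_port'] or 'ephemeral'}/{m['proto']} "
                                       "published on all host interfaces"))
    return exposure, findings


# Constructed sample: the dict form of a compose file's `services:` section.
SAMPLE_SERVICES = {
    "web":   {"image": "web:latest", "ports": ["3001:3000", "127.0.0.1:9229:9229"]},
    "agent": {"image": "agent:latest", "network_mode": "host", "ports": ["8080:8080"]},
}

if __name__ == "__main__":
    exposure, findings = effective_exposure(SAMPLE_SERVICES)
    for name, what in exposure.items():
        print(name, "->", what)
    for name, msg in findings:
        print(f"[{name}] {msg}")
```

Feed it a real file with `effective_exposure(yaml.safe_load(open("docker-compose.yml"))["services"])`. The practical consequence for the question above: with `network_mode: host` the `3001:3000` line does nothing, the process listens on 3000 on the host itself, and which host that is depends only on where the scheduler placed the container.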

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on the swarm that uses overlay networking. I am using Docker for Windows with Windows containers. Both Windows and Docker are fully upgraded. After Windows' 1709 update, I was hoping this issue would be resolved. I looked for information on the Internet to see if I was doing something wrong to no avail. I would like to know if anyone was successfully able to get it working.
On a side note, when I direct the port on my machine in docker run -p 80:80 without using swarm, "localhost" does not work either. I think this is a known limitation though. Both issues work when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout, Opera straight away returns connection refused. Cannot telnet into it either. Container IPs assigned by the overlay network do not work either.
Information
docker service ls gives me this:
Ports cannot be seen there; is it because publish mode is host? Ports information is available in the output of docker service ps.
And when I change the publish mode, I can scale it as well and the port information is seen in docker service ls, albeit I still cannot connect. The one below is without the publish mode=host parameter:
For more info, this is the output of docker network ls. I wonder if I need some sort of bridge network like in Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service, in my case: a simple web service built using aspnetcore:latest image. I tried different parameters, even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest (in the case above, I was unable to scale it on the same machine, which is normal I guess)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using one of http://localhost or another IP. I tried connecting over VPN, from another machine as well as Internet IP.

Access devices on local network when running Docker for Mac

I have some smart wifi devices on my network I can see from a script on my Mac. But running the same script from within a Docker container those devices are not visible.
I assume this is related to Docker for Mac's inability to connect to the host's network using --network host or network_mode: host. I also assume this issue wouldn't exist on a Linux machine but I don't have one to test on.
What is the workaround?
Edit:
Confirmed this worked fine when running inside an Ubuntu VirtualBox VM, but I'd really rather not have to develop inside it.
If you start the container with the network option as host, the container will share the network stack of the host. Thus any device reachable from your host should be reachable by the container.
docker run --network host ...
Adding containers to a network would allow them to communicate with each other, but if you want to access other services running on the host, use host.docker.internal (available from 18.03+). I had to do the same in a Mac mini setup to access an external service.
[https://docs.docker.com/docker-for-mac/networking/]
If you have to access a service on another host then you can setup an nginx server on the docker host and a proxy pass rule to direct it to the remote service.