I have a Docker container with the NAT mapping 0.0.0.0:9055->80/tcp. From what I can tell, this should mean I can go to http://localhost:9055/ on my host machine and be forwarded to port 80 in the running container. However, when I try this it times out.
If I connect to the instance and run docker exec -i 52806ceaf166 "ipconfig" to see what the container's private IP is, I get 172.28.27.31. When I go to http://172.28.27.31/ on the host machine, it works!
I'd like to get the NAT mapping working, since that's what all the tools assume works (Visual Studio, Kitematic, etc.), and besides, I don't want to have to worry about which containers use which IPs. Is there a way to fix this? Thanks!
PS: I'm new to Docker (just installed it today) so if any more info is needed (settings, versions, etc) just let me know how to get them and I'll add them to the post.
Looking at the Docker image I'm using, I think this is what I'm running into:
This is a known issue that'll be addressed in the near future. The workaround is fairly easy though.
Update: This was fixed in a recent Windows patch available through Windows Update.
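In the meantime, the workaround amounts to what I stumbled onto above: talk to the container's internal IP directly. If you don't want to exec into the container to find it, it can be read with docker inspect (a quick sketch, assuming the container is attached to a single network; the ID is the one from above):
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" 52806ceaf166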
I am somewhat new to Docker. I'm trying to get it set up on my machine, but I can't seem to connect from the host.
My run command
docker run -p 8080:80 drupal:9.1-php7.4-fpm-alpine3.13
Expected result
Based on the image documentation, I would expect to see some kind of default Drupal page on port 8080.
Actual result
$ curl http://localhost:8080
curl: (52) Empty reply from server
In Firefox this renders as, "The connection was reset."
What I've tried
There are other questions that have similar symptoms, but the solutions don't seem to work for me.
One common suggestion is to curl a different address such as 0.0.0.0:8080. I'm a little skeptical because that conflicts with the image-specific instructions above, but I tried it and didn't find any evidence that something is listening there. Also, when the container isn't running I can't connect to that URL at all, which is slightly different from getting an empty reply, so I think I'm on the right track with http://localhost:8080/.
The other common suggestion is to make sure I'm binding a port outside the container, but in my case it's right there as -p 8080:80.
Always double-check that image tag, kids!
There are a ton of variants of the official Docker image, and I accidentally pulled the wrong one. You'll notice the image tag includes "fpm." I meant to use the Apache variant. When I run an Apache version of the image, it works out of the box. Facepalm.
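For reference, a sketch of what I should have run instead (the exact Apache tag may differ; check the drupal tags on Docker Hub):
docker run -p 8080:80 drupal:9.1-apache
curl -I http://localhost:8080    # Apache answers here; the fpm variant speaks FastCGI and expects a separate web server in front of it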
That'll do it!
I am leaving this here as a monument to my shame.
If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears inside Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching the NAT from DockerNAT to the Default Switch. That doesn't take effect without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start with the basics from the official documentation:
Please make sure you meet all the prerequisites and have followed all the other instructions.
You can also use this guide; it has more detail on what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
If you are using a virtual machine, make sure that the IP you are referring to is that of the Docker engine's host and not the machine the client is running on.
Try to add tmpnginx in docker-compose.
Try to delete the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the dir and then start Docker; see the sketch just below). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
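A sketch of that last step from an elevated PowerShell prompt, assuming Docker Desktop keeps its data in the default location:
# Quit Docker Desktop first, then:
Remove-Item -Recurse -Force "C:\ProgramData\DockerDesktop\pki"
# Start Docker Desktop again; the pki directory is recreated on startup.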
Please let me know if that helped.
I have docker-compose version 1.11.2 on Windows and am using a version 2.1 docker-compose.yml, but whenever I run something like docker-compose up or docker-compose run a second time, I get an error saying the network needs to be recreated because configuration options changed (even though I didn't change anything). I can use docker network rm to remove the network, but from other documentation and posts about docker-compose on Linux this seems unnecessary.
I can reproduce this reliably but can't really find any further information. Can anyone explain why I keep getting told to recreate the network (I'm using a transparent driver to download some things while building the image, but even the nat driver gives a similar error), or at least how to work around it? One of my scenarios is to be able to run docker-compose run on one of the services a couple of times on the same machine as part of a cloud build/test.
Turns out this was a bug and was fixed in a subsequent update several weeks ago. I was told by one of the Docker developers that Windows 10 Creators Update was required as well.
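Until you are on the fixed versions, the manual cleanup described in the question is the workaround, roughly as follows (the network name depends on your compose project; "myproject" is just an example, since compose names its default network <project>_default):
docker network rm myproject_default
docker-compose up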
I'm not sure if this is an issue with the current version of Windows Docker network or poor configuration and misunderstanding on my part, but I have the following setup:
2 Docker containers (built using the Microsoft/ASP.NET image as a base), each running a .NET MVC application.
1 Docker container running SQL Server (built using the Microsoft/mssql-server-windows image).
When I create all 3 containers everything works great: I can attach and ping all the other containers using their names without any issue. The applications run and can communicate with each other as I hoped.
However, when I reboot my machine and start all the containers again they can no longer ping/communicate with each other using their names (using IP addresses is fine).
I've tried this on the default NAT network and also tried replacing the NAT network with my own custom NAT network.
To resolve the issue I have to run the force network-disconnect command for each container, like so:
docker network disconnect nat <containername> --force
And then I have to reconnect each container to the network before starting them up. All containers can then ping/communicate with each other using their names as well as their IP addresses.
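In full, the sequence I run for each container after a reboot looks something like this (the container name is just an example):
docker network disconnect --force nat web1
docker network connect nat web1
docker start web1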
FYI, this is a development environment but I was hoping to do something similar in Azure using a Windows Server 2016 VM, although I don't quite know what the best network configuration is for live production yet as I need to have multiple applications (in separate containers) on the same node accessed via their own subdomains.
Any help or guidance would be great.
I'm not sure, in part because this question was asked several months before any other example I've run into, but this sounds very similar to the problem described at https://github.com/docker/for-win/issues/1038.
Basically, there appears to be a problem introduced with the 1709 update to Windows 10 which results in a scenario where Hyper-V networking doesn't work the way it ought to.
There appear to be two common ways of working around this problem: turning off "Fast Start" under Control Panel => Power Options => System Settings, or restarting Docker for Windows and any containers after booting. I also thought I saw a Microsoft blog post indicating that the underlying problem has been resolved and will ship in a Windows 10 update, but alas I can no longer find that post or the specific version in which the problem was (theoretically) fixed. It may well be the delayed 1803 "Spring Creators Update" release.
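If you'd rather script the "Fast Start" change than click through Control Panel, that checkbox maps (as far as I know) to the HiberbootEnabled registry value, so the following from an elevated prompt should have the same effect:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f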
I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean: create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container, and pull it on my laptop.
This solves the problem of having to keep and synchronize my dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes, it is a good idea, with the correct set-up. You'll be running code as if it were in a virtual machine.
The Dockerfile configuration for creating a build system isn't polished and won't expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image with the users and working environment you need, you won't have to build it again, and you can mount your host file system with the -v parameter of the run command, so the files you need are available both on the host and inside the container. It's versatile.
> sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-name>
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow that pushes images around to sync between dev environments. IMHO, keeping the Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place to build the image(s) / run the container(s).
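A minimal sketch of that direction, with the image name and paths purely illustrative: the Dockerfile lives in the repository, and a couple of one-liners (or Makefile targets) rebuild the image and drop you into a shell with the sources mounted:
docker build -t mydev .                                   # Dockerfile sits next to the code
docker run -it --rm -v "$PWD":/workspace -w /workspace mydev bash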