docker-compose can't connect to Jupyter Notebook on WSL

I run docker-compose on WSL with a Jupyter Notebook, and it gives me the following information:
[I 00:28:20.921 NotebookApp] Jupyter Notebook 6.1.3 is running at:
[I 00:28:20.921 NotebookApp] http://docker-desktop:3000/?token=...
[I 00:28:20.921 NotebookApp] or http://127.0.0.1:3000/?token=...
[I 00:28:20.921 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Since Docker is running on WSL, I can't access it via localhost on my Windows machine. I looked up the IP of the network adapter, which is 172.23.16.1, and tried to access the notebook via 172.23.16.1:3000, but I get a "connection refused" error.
I also opened port 3000 for incoming and outgoing traffic on my Windows machine.
What have I missed?

Have you mapped your container port so the host machine can reach it?
Another common problem is that, by default, Jupyter Notebook only allows traffic coming from localhost (note that this localhost is the container itself), so you can't access it from anywhere outside of the container. To resolve this, make sure you start Jupyter Notebook allowing traffic from all IPs:
jupyter notebook --ip 0.0.0.0
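As a rough sketch (the service name and image here are placeholders, assuming a stock Jupyter image), a docker-compose.yml that both publishes port 3000 and makes Jupyter listen on all interfaces might look like:

services:
  notebook:
    image: jupyter/base-notebook          # hypothetical image
    command: jupyter notebook --ip 0.0.0.0 --port 3000 --no-browser
    ports:
      - "3000:3000"                       # publish the container port on the host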

Long story short, you are almost certainly running into the same problem documented in this, this, and this question, among others. The last one is most similar, since it is about accessing a WSL2 instance from a Docker container, but they all have the same root cause. To quote my answer (slightly modified) from one of those:
The core issue here is that WSL2 operates in a Hyper-V VM with its own virtual NIC, running NAT'd behind the Windows host. WSL1, on the other hand, ran bridged with the Windows NIC.
On localhost, Windows does seem to do an automatic mapping, but it does not for the host IP address (and thus not on the local network, including Docker containers, since they are on their own network). Even with the Docker network in bridged mode, it still does not see the WSL2 IP without additional effort.
You'll find a lot of information on this particular topic in this GitHub thread, along with several workarounds that I documented in answers to the other questions.
In your case, I would propose running the Jupyter notebook in a WSL1 instance, rather than WSL2. To my knowledge, there's nothing special in Jupyter which would require WSL2 capabilities, right?
Again, with a copy/paste here -- you can convert the WSL2 instance to WSL1 either by doing (from PowerShell) a wsl --set-version <distroname> 1 or by cloning the existing instance with a wsl --export <distroname> <archivename>.tar and then wsl --import <distroname> <installlocation> <archivename>.tar. I prefer cloning since it gives you a backup.
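As a concrete sketch (the distro and path names here are hypothetical; adjust them to your setup):

wsl --export Ubuntu-20.04 C:\wsl-backup\ubuntu.tar
wsl --import Ubuntu-WSL1 C:\wsl\Ubuntu-WSL1 C:\wsl-backup\ubuntu.tar
wsl --set-version Ubuntu-WSL1 1

After that, wsl -d Ubuntu-WSL1 starts the WSL1 clone, and the original WSL2 instance is left untouched.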

Related

Updated windows docker and now it doesn't work outside localhost

I have been running a media cluster for some time without any issues. I have everything networked into two different Docker networks... the first network just bridges the Docker instance to the local machine; the second network is a Docker VPN container that I use for the other media services (an earlier version of what I am working on can be found here: https://github.com/Xander-Rudolph/MediaDocker).
The strangest thing happened today, though. I ran the Docker update for Windows, and now Docker spools up without any errors or issues; however, none of the services work outside of the machine running Docker. Usually I have a couple of the services poked through my router (namely WordPress/Joomla, which is on the bridge) and they work outside of my local network, but none of them are working anymore. I was able to confirm it's not the DNS A record, because I'm able to use the RDP ports I have mapped in my router, and when I test on another machine on the same network, it can't access the services via the internal IP (but it can RDP).
Does anyone have any idea what could have changed to break this? I've already updated all my Docker images and even rebuilt my VPN container (before I realized it's a networking issue). What are some steps I can take to troubleshoot what is going wrong in Docker that is preventing access from outside localhost?
Update
I've been able to rule out the Docker update as the root cause... I upgraded Docker on my laptop (which was previously running the same version as my desktop) and it's not having the same issue... so the problem must be specific to this desktop... No idea what the issue is... Will try a Linux VM on the desktop instead of Docker for Windows...
Update 2
After a lot of screwing around in both a VM and in WSL, I'm still only able to access the Docker services from localhost, but not from a different machine on my network or via the IP on the host machine (perhaps something similar to this: Can't access localhost via IP address). RDP does work, so the computer is accessible but the services are not.
I'm not sure if this is a result of a Docker networking config or a Windows network config (I'm using WSL with Docker installed on Ubuntu 20.08), but I'm not seeing anything that sticks out. I'm going to remove the docker-windows tag, but this is definitely a networking issue, and I suspect it has something to do with the fact that the containers are running behind a VPN... although I don't know why I would be able to access them on localhost but not via the IP from another VM...
When I run
netstat -a -o
on WSL, I can see the established connections on localhost, e.g.:
tcp 0 0 localhost:7878 localhost:37520 ESTABLISHED
but when I look on the host machine (the WSL host) I don't see the connection. I tried using netsh to create a firewall rule to see if that would help:
netsh advfirewall firewall add rule name="TCP Port 7878" dir=in localport=7878 protocol=TCP action=allow
but it didn't have any effect.
Any suggestions for ways to trace the network to see where/how it's failing or getting blocked would be extremely helpful.
Your question: "...What are some steps I can take to troubleshoot what is going wrong in Docker that is preventing access from outside localhost?..."
Some troubleshooting help: first, do you have multiple network adapters (Ethernet, Wi-Fi, etc.) present on the host? The priority of these adapters needs to be configured in the correct order so the Windows networking stack can choose gateway routes correctly.
Now, to fix this, set your primary internet-connected network adapter to have the lowest InterfaceMetric value. You can use these PowerShell commands from an elevated console:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
In the output, ensure that the host's primary internet-connected network adapter has the lowest InterfaceMetric value.
# Use this command to make the change; e.g. let's say your
# primary adapter's InterfaceAlias is 'Wi-Fi'
Set-NetIPInterface -InterfaceAlias 'Wi-Fi' -InterfaceMetric 3
Step two: if your host's primary network adapter is bridged because you have an external virtual switch set up in Hyper-V, then set that external virtual switch to have the lowest InterfaceMetric value.
Lastly, verify your routing tables. When you run this, the last line should show the primary adapter's gateway address along with its ifMetric value:
Get-NetRoute -AddressFamily IPv4
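If the routing table is long, one way to narrow it down is to list only the default routes, sorted by metric:

Get-NetRoute -AddressFamily IPv4 -DestinationPrefix "0.0.0.0/0" | Sort-Object -Property RouteMetric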
If you’re using Docker Toolbox then any port you publish with docker run -p will be published on the Toolbox VM’s private IP address.
docker-machine ip will tell you.
It is frequently 192.168.99.100.
Taken from: https://forums.docker.com/t/cant-connect-to-container-on-localhost-with-port-mapping/52716/25
After several attempts using the references below, I was still not getting anywhere. The recommendation by #derple didn't get me anywhere (since I was in WSL), but in the article he linked, someone said they had switched to Linux and uninstalled and reinstalled Docker Desktop... and for some stupid reason that works.
These are the exact steps I took to fix it:
Uninstall Docker Desktop
Install WSL and Docker inside an Ubuntu 18.04 instance in WSL
Test Docker in WSL with localhost (it still worked only on localhost)
Uninstall WSL using Windows Add/Remove Features
Reinstall Docker Desktop
Oddly, the Get-NetIPInterface and Get-NetRoute output looks exactly the same as it did before the uninstall and reinstall, but things seem to be working now... I have no idea why the above worked...

How can I access a service running on WSL2 from inside a Docker container?

I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker 19.03.13-beta2, having installed the Docker for Windows Edge version using the WSL2 option. The integration is working pretty great, but I have one issue which I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connect just times out. I have also tried using host.docker.internal, which resolves to something else than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, or is this kind of routing currently not supported, but will be, or is this for some other reason not possible?
The other routings work - i.e. I can access all the service ports coming from the node.js processes inside WSL2 from the Windows browser, and also I can access the exposed service ports from the containers both from inside WSL2 and from Windows. It's just this missing link I cannot make work.
What you need to do on the Windows machine is port-forward the port you are running on the WSL machine; this script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
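To double-check that the forwarding rule was created, you can list the active proxy rules:

netsh interface portproxy show v4tov4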
And to the container's docker run command you have to add:
--add-host=host.docker.internal:host-gateway
or if you are using docker-compose:
extra_hosts:
- "host.docker.internal:host-gateway"
Then inside the container you should be able to curl to
curl host.docker.internal:4000
and get a response!
For what it's worth: this scenario works if you use the WSL2 subsystem's IP address.
It does not work if you use host.docker.internal - this DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not of the WSL2 host, and routing from there back inside the WSL2 host does not work.
The reason why this (probably temporarily) did not work is somewhat unclear - I will revisit this answer if the problem should reappear and I manage to track down what the actual problem may have been.
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running on the local Linux host (I got its IP address using the ifconfig command), and my other containers, running on Docker Desktop with the WSL2-based engine, can communicate with my Java service using that IP address.
This sounds like the issue discussed here. For me, the only thing that worked was running the Docker container with --net=host and then using [::1] instead of localhost inside the container to access other containers running in WSL.
So, for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust the port and path to your specific application).
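A rough sketch of that setup (container name, image, and port are placeholders):

docker run --net=host --name container1 my-app:latest
# inside container1, reach the service listening on port 8000 in WSL via the IPv6 loopback:
curl http://[::1]:8000/container2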

Docker: able to telnet to remote machines from host but not from container

We have a couple of Docker containers deployed on ECS. The application inside the container uses remote services, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7, and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem is that we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled in the host machines (where the container runs) but disabled in the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
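For example, against the kind of target in the question (the IP here is a placeholder):

nc -vz -w2 10.0.0.5 9999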
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
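One alternative worth trying (assuming curl is available in the image) is curl's built-in telnet support, which sidesteps the busybox applet entirely; against a placeholder host and port:

curl -v telnet://10.0.0.5:9999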

How do I give a Docker for Windows container its own IP address?

I want to export the complete IP connectivity (UDP and TCP) from a Docker container with a Linux app, i.e. give it its own IP address (in the same subnet as the host) that can be accessed from the host and from other physical machines on the network.
What do I need to configure in Windows, what in docker, what inside the container?
(NB: I do NOT want to expose ports as part of the host.)
I finally solved the problem (for me) by installing Ubuntu in VirtualBox and using the Docker containers from there. Not the most elegant solution, but it worked on the first try.

Apache Kafka in docker AND VirtualBox VM

I'm trying to use Apache Kafka, i.e. a version usable in connection with Docker (wurstmeister/kafka-docker), in a virtual machine, and to connect to the Kafka broker in the Docker containers from the host system of the VM.
I will describe the setup in more detail:
My host system is Ubuntu 64-bit 14.04.3 LTS (kernel 3.13) running on an ordinary computer. I have a complete and complex structure of various Docker containers interacting with each other. In order not to disturb, or better said, to encapsulate this whole structure, it is not an option to run the Docker images directly on the host system. Another reason for that is the need for different Python libraries on the host, which interfere with the Python library version required by docker-compose (which is used to start the different Docker images).
Therefore the desired solution is to set up a virtual machine with VirtualBox (guest system: Ubuntu 16.04.1 LTS) and to run the Docker environment completely inside this VM. This has the distinct advantage that the VM itself can be configured exactly according to the requirements of the Docker structure.
As mentioned above, one of the Docker images provides Kafka and ZooKeeper functionality for communication and messaging. That means the .yaml file, which sets up a container running this image, forwards all necessary ports of Kafka and ZooKeeper to the host system of the Docker environment (which is the guest system of the VirtualBox VM). In order to also make the Docker environment visible to the host system, I forwarded all the ports in the VirtualBox network settings (Network -> Adapter NAT -> Advanced -> Port Forwarding). The behaviour of the system is as follows:
When I run the Docker environment (including Kafka), I can connect, consume and produce from the VM's Kafka using the recommended standard Kafka shell scripts (producer & consumer API).
When I run a Kafka and ZooKeeper server ON THE VM guest system, I can connect from outside the VM (host), and produce and consume over the producer & consumer API.
When I run Kafka in the Docker environment, I can connect from the host system outside the VM, meaning I can see all topics and get information about topics, while also seeing some debug output from Kafka and ZooKeeper in the Docker logs.
What is unfortunately not possible is to produce or consume any message from the host system to/from the Docker Kafka. The producer API throws a "Batch expired" exception, and the consumer returns with a "ClosedChannelException".
I have done a lot of googling and found a lot of hints on how to solve similar problems. Most of them refer to the advertised.host.name parameter in kafka-server.properties, which is accessible over KAFKA_ADVERTISED_HOST_NAME in the .yaml file. For example, https://stackoverflow.com/a/35374057/3770602 refers to that parameter when both of the errors mentioned above occur. Unfortunately none of those scenarios feature both Docker AND VM functionality.
Furthermore, trying to modify this parameter does not have any effect at all. Although I am not very familiar with Docker and Kafka, I can see a problem here: the Kafka consumer and producer would get the local IP of the Docker environment, which is 10.0.2.15 in the NAT case, to use as the broker address. But unfortunately this IP is not visible from outside the VM. Consequently the solution would probably be a changed setup, using bridged mode in VirtualBox networking. The strange thing here is that a bridged connection (of course) gives the VM its own IP over DHCP, which then leads to a Docker Kafka that is not accessible from either the VM OR the host system. This last behaviour seems very awkward to me.
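For reference, a sketch of the relevant part of a wurstmeister-style compose file (the IP shown is a hypothetical bridged VM address, not my actual setup):

kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: 192.168.1.50   # hypothetical address that clients outside the VM can reach
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181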
To conclude, my question is whether somebody has experience with this or a similar scenario and can tell me how to configure the Docker Kafka, the VirtualBox VM and the host system. I have really tried a lot of different settings for the consumer and producer calls with no success. Of course, Docker, Kafka or both Docker & Kafka experts are welcome to answer as well.
If you need additional information or you can provide some hints, please let me know.
Thanks in advance
