Xdebug inside Colima docker container doesn't connect to PhpStorm debugger on Mac

I am trying to use Colima to run an Apache+PHP Docker container. My university provides Docker images, derived from upstream ones and configured for our course, which we run via docker-compose.
The container works as it should but I can't get its Xdebug to connect to my PhpStorm.
This is what it says in the Xdebug log:
Creating socket for 'host.docker.internal:9003', poll success, but error: Operation now in progress (29).
This tells me absolutely nothing.
The setup is admittedly quite complex (x86 Apache run via QEMU, inside Docker, inside a Linux VM, on macOS on an ARM CPU), but I can run nc host.docker.internal 9003 from any Docker container, so I have no idea why Xdebug isn't able to reach my host. (The nc connection only succeeds while the IDE is running and on no other port, so it's definitely reaching PhpStorm.)
Any idea what could be going on here?

On Colima, the host's IP address is hard-coded to "192.168.5.2", so setting xdebug.client_host=192.168.5.2 should do the trick. There is now also an alias for it, called host.lima.internal.
As per this documentation page.
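For reference, a minimal sketch of the relevant Xdebug 3 settings (the ini file name and any settings already baked into the uni's image are assumptions; only client_host and client_port matter here):

; e.g. in a file like xdebug.ini loaded by PHP
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_host=host.lima.internal   ; or the hard-coded 192.168.5.2
xdebug.client_port=9003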

The problem turned out to be the uni's docker-compose.yml, which configures the container with:
extra_hosts:
- "host.docker.internal:host-gateway"
and apparently that can break host.docker.internal in some situations: https://github.com/docker/for-linux/issues/264#issuecomment-759737542
The solution is to remove those two lines.
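For illustration, a hedged sketch of what the relevant service section might look like after removing those lines (the service and image names are made up; the XDEBUG_CONFIG line is optional, it uses the environment variable that Xdebug itself reads to point it at the Lima host alias explicitly):

services:
  web:                                   # hypothetical service name
    image: uni/apache-php-course:latest  # hypothetical image name
    ports:
      - "8080:80"
    # the extra_hosts entry for host.docker.internal:host-gateway is simply gone
    environment:
      XDEBUG_CONFIG: "client_host=host.lima.internal client_port=9003"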

Related

How can I access a service running on WSL2 from inside a Docker container?

I am using Windows 10 1909 and have installed WSL2 with Ubuntu 20.04 and Docker 19.03.13-beta2, installed via the Docker for Windows Edge release with the WSL2 option. The integration works pretty well, but there is one issue I cannot solve.
On the WSL2 instance, there are services running, exposing some ports (3000, 3001, 3002,...). From one of the docker containers, I need to access the services for a specific development scenario (API Gateway), and this I cannot get to work.
I have tried using the WSL2 IP address directly, but then the connect just times out. I have also tried using host.docker.internal, which resolves to something else than the WSL2 IP address, but it still doesn't work.
Is there a special trick I need to pull, or is this kind of routing currently not supported, but will be, or is this for some other reason not possible?
(A diagram illustrating the setup was attached here.) The other routings work: I can access all the service ports exposed by the Node.js processes inside WSL2 from the Windows browser, and I can also reach the containers' exposed ports both from inside WSL2 and from Windows. It's just this one missing link I cannot make work.
What you need to do on the Windows machine is forward the port that the service listens on inside WSL2. This script forwards port 4000:
netsh interface portproxy delete v4tov4 listenport="4000" # Delete any existing port 4000 forwarding
$wslIp=(wsl -d Ubuntu -e sh -c "ip addr show eth0 | grep 'inet\b' | awk '{print `$2}' | cut -d/ -f1") # Get the private IP of the WSL2 instance
netsh interface portproxy add v4tov4 listenport="4000" connectaddress="$wslIp" connectport="4000"
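After running it, you can check that the proxy is in place and, if needed, open the port in Windows Firewall; both are standard netsh commands, shown here as a sketch (the rule name and port are placeholders):

netsh interface portproxy show v4tov4    # list the active port forwards
netsh advfirewall firewall add rule name="WSL2 port 4000" dir=in action=allow protocol=TCP localport=4000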
And to the container's docker run command you have to add
--add-host=host.docker.internal:host-gateway
or if you are using docker-compose:
extra_hosts:
- "host.docker.internal:host-gateway"
Then inside the container you should be able to curl to
curl host.docker.internal:4000
and get a response!
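As a quick end-to-end check, a throwaway container can do the same in one line (a sketch: it assumes Docker 20.10+ for host-gateway support and that the portproxy above is in place; curlimages/curl is used simply because it ships with curl as its entrypoint):

docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl -s http://host.docker.internal:4000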
For what it's worth: This scenario is working if you use the WSL2 subsystem IP address.
It does not work if you use host.docker.internal: that DNS alias is defined in the containers, but it maps to the IP address of the Windows host, not that of the WSL2 VM, and that route back into the WSL2 VM does not work.
Why this (probably temporarily) did not work is somewhat unclear; I will revisit this answer if the problem reappears and I manage to track down the actual cause.
I ran into this problem with the latest Docker Desktop. I rolled it back to 4.2 and it worked.
Docker Desktop 4.2
Windows 19044.1466
Ubuntu 20.04
I have a Java service running directly on the local Linux host (I get its IP address with the ifconfig command); my other containers run on Docker Desktop with the WSL2-based engine and can communicate with the Java service using that IP address.
This sounds like the issue which is discussed here. For me the only thing that worked was running the docker container with --net=host and then using [::1] instead of localhost in the container to access other containers running in WSL.
So for example, container1 is started with docker run --net=host and then calls container2 like this: http://[::1]:8000/container2 (adjust port and path to your specific application)
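A quick way to reproduce that check without building anything (a sketch: port 8000 and the /container2 path are the placeholders from above, and curlimages/curl is only used for convenience):

docker run --rm --net=host curlimages/curl -s http://[::1]:8000/container2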

Accessing Local Kafka from within Services deployed in Local Docker For Mac (incl. Kubernetes extension)

I'm working locally on OSX.
I'm using Kafka and ZooKeeper in local mode, meaning the ZooKeeper bundled with my Kafka installation, as a one-node cluster.
Both listen on the loopback address (localhost):
zookeeper.connect=localhost:2181
My /etc/hosts looks as follows:
127.0.0.1 localhost MaatPro.local
255.255.255.255 broadcasthost
::1 localhost MaatPro.local
fe80::1%lo0 localhost MaatPro.local
I have docker for Mac set on my machine, with the Kubernetes extension.
My scenario
I have a dockerized Akka Streams micro-service that reads data from an external database and writes it to a Kafka topic. It uses as bootstrap server:
"localhost:9092"
Issue
When I run my service directly on my machine (e.g. from the command line or from within IntelliJ), everything works fine. When I run it in my local Docker or Kubernetes, I get the following error:
(o.a.k.clients.NetworkClient) [Producer clientId=producer-1] Connection to node 0 could not be established. The broker may not be available.
With Kubernetes I build the following YAML File to deploy my pod:
apiVersion: v1
kind: Pod
metadata:
  name: fetcher
spec:
  hostNetwork: true
  containers:
  - image: entellectextractorsfetchers:0.1.0-SNAPSHOT
    name: fetcher
I took the precaution of setting hostNetwork: true.
With the Docker daemon directly, I originally tried to set the network to host as well, but discovered that this does not work with Docker for Mac, so I abandoned that route. I understood that it has to do with virtualization.
1) Is the virtualization issue that happens with Docker actually the same for my local Kubernetes? Basically, is the host network the virtual machine and not my Mac?
2) I tried to change my code and add host.docker.internal as a bootstrap server, as per the documentation, but the problem persists. Is the fundamental problem the fact that I am working on a loopback address? Should I indeed use my network address instead? What address does host.docker.internal point to? How can I make it work with the loopback address? If I'm completely off, any idea of what I need to do to get this working?
Thank you so much for any help with this.
Based on #cricket_007's guidance (Kafka Listeners - Explained) and the reading I have done here and there on this issue, including the official Docker documentation (I want to connect from a container to a service on the host),
I came up with the following solution.
I added the following to my default local Kafka configuration (i.e. server.properties):
listeners=EXTERNAL://0.0.0.0:19092,INTERNAL://0.0.0.0:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://127.0.0.1:9092,EXTERNAL://docker.for.mac.localhost:19092
inter.broker.listener.name=INTERNAL
In fact, EXTERNAL here is expected to be the Docker network. This config lives only on my OSX machine, for local development purposes. I do not expect people connecting to my laptop to use my local Kafka, hence I can use EXTERNAL://docker.for.mac.localhost:19092. This is what is advertised to my containers in Docker/Kube; from within that network, docker.for.mac.localhost is reachable.
Note this would probably not work with Minikube. This is specific to Docker for Mac; the Kubernetes I run on my machine is the one that comes with Docker for Mac, not Minikube.
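To confirm that the advertised EXTERNAL listener is reachable from inside a container before touching any client code, a quick throwaway check could look like this (a sketch; any image that can install netcat would do):

docker run --rm alpine sh -c "apk add --no-cache netcat-openbsd >/dev/null && nc -vz docker.for.mac.localhost 19092"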
Finally, in my code I use both in a list:
"localhost:9092, docker.for.mac.localhost:19092"
I use Typesafe Config, so in prod this default is overridden by an environment variable; when the env variable is not set, this is what is used. When I run my micro-service from IntelliJ, localhost:9092 is used, because in that case I am on the same network as my Kafka/ZooKeeper. When I run the same micro-service from Docker/Kube, docker.for.mac.localhost:19092 is used.
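The Typesafe Config pattern described above could look roughly like this (a sketch; the key and environment variable names are hypothetical, only the optional-override mechanism matters):

# application.conf
kafka {
  bootstrap-servers = "localhost:9092, docker.for.mac.localhost:19092"   # local default
  bootstrap-servers = ${?KAFKA_BOOTSTRAP_SERVERS}                        # wins only if the env variable is set
}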
Answers to the side questions I had
Yes. Docker for Mac uses HyperKit as a lightweight virtual machine running Linux, and the Docker Engine runs inside it. The Docker for Mac Kubernetes extension is basically about running the Kubernetes cluster services/infrastructure as containers in the Docker daemon (see Docker for Mac vs. Docker Toolbox). In other words, the host is HyperKit and not OSX. But, as the above doc explains, the Docker for Mac implementation is all about making it appear to the user as if there were no virtualization involved between OSX and Docker.
Connecting to the host via a loopback address is an issue that has not been solved; I'm not even sure it works reliably even when the host is Linux (it may have been resolved by now). In any case, it would require running the container (or the pod, in the case of Kube) on the host network, and in Docker for Mac that functionality will never work, based on my reading online. Hence the solution of using docker.for.mac.localhost or host.docker.internal, which Docker for Mac sets up to refer to the Mac host and not the HyperKit host.
host.docker.internal and docker.for.mac.localhost are one and the same, and the latest recommendation is host.docker.internal. That said, this address did not originally work for me because my Kafka setup was not right. It is worth reading #cricket_007's link to understand this well: http://rmoff.net/2018/08/02/kafka-listeners-explained.
Following #MaatDeamon's approach, I only did the following and it worked for me.
advertised.listeners=PLAINTEXT://:9092
And in your application config properties:
localhost:9092,host.docker.internal:9092

Docker for Windows swarm overlay networking, connecting to the swarm from outside or localhost

I cannot connect to the published port on a swarm that uses overlay networking. I am using Docker for Windows with Windows containers; both Windows and Docker are fully upgraded. I was hoping this issue would be resolved after Windows' 1709 update. I looked for information on the Internet to see if I was doing something wrong, to no avail. I would like to know if anyone has been able to get this working.
On a side note, when I publish the port on my machine with docker run -p 80:80, without using swarm, localhost does not work either. I think this is a known limitation, though. Both issues disappear when I switch to Linux containers.
Expected behavior
I am running a dotnet kestrel web server service. I should be able to connect to my service using the published port.
Actual behavior
Firefox gives me a timeout; Opera straight away returns connection refused. I cannot telnet into it either. The container IPs assigned by the overlay network do not work either.
Information
docker service ls shows the service, but no ports are listed there (the output was attached as a screenshot); is that because publish mode is host? The port information is available in the output of docker service ps.
When I change the publish mode, I can scale the service as well, and the port information then appears in docker service ls, although I still cannot connect; that run was without the publish mode=host parameter.
For more info, looking at the output of docker network ls, I wonder if I need some sort of bridge network like on Linux.
Steps to reproduce the behavior
Initialise swarm
Start the service; in my case, a simple web service built using the aspnetcore:latest image. I tried different parameters and even used a docker-stack.yml:
docker service create --name=web --publish mode=host,published=80,target=80 web:aspnetcorelatest
(In the case above, I was unable to scale it on the same machine, which is normal, I guess.)
docker service create --name=web --publish published=85,target=80 web:aspnetcorelatest
Try to connect using http://localhost or another IP. I tried connecting over VPN, from another machine, and via the Internet IP.
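To see what is actually published before testing in a browser, the following checks can help (a sketch; the service name web matches the commands above):

docker service ps web --format "{{.Name}} {{.Node}} {{.Ports}}"    # per-task ports (host-mode publishing shows up here)
docker service inspect web --format "{{json .Endpoint.Ports}}"     # ingress-mode published ports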

Docker: able to telnet to remote machines from host but not from container

We have a couple docker containers deployed on ECS. The application inside the container uses remote service, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 on CentOS 7, with docker/alpine as our base image and networkMode: host for our containers. The problem is that we can successfully run telnet 10.X.X.X 9999 from the host machine, but when we run the same command from inside the container, it just hangs and never connects.
In addition, we have net.ipv4.ip_forward enabled on the host machines (where the container runs) but disabled on the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
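For example, on an Alpine-based image you can install the standalone OpenBSD netcat rather than relying on the busybox built-ins (a sketch; the IP and port are the placeholders from the question):

apk add --no-cache netcat-openbsd
nc -vz -w2 10.X.X.X 9999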

Docker Desktop for Windows: Cannot ping google.com from windows containers

I was creating a container using the microsoft/windowsservercore image. When I tried to ping google.com from inside the container, I got this error:
Ping request could not find host www.google.com. Please check the name
and try again.
Then I switched to Linux container mode in Docker for Windows and tried the same thing in an Ubuntu container; this time it worked fine. When I then switched back to Windows container mode and tried again, it worked as well. My issue is resolved, but I still don't understand what caused it in the first place.
Docker for Windows and Docker for Linux have different default network settings. Typically, the default on Linux is bridged mode, while on Windows you get NAT. You can alter your configuration in the network settings of Docker for Windows.
See: https://docs.docker.com/docker-for-windows/#network
The first option for me is always to look at the network section when executing docker inspect *containername*. This command gives you information about the container's network settings. Another option is to check your firewall settings.
In general, I usually use ping 8.8.8.8, since www.google.com cannot be pinged even from my standard Windows machine.
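Concretely, those two checks could look like this (a sketch; mycontainer is a hypothetical container name):

docker inspect mycontainer --format "{{json .NetworkSettings.Networks}}"   # just the network section
docker exec mycontainer ping 8.8.8.8                                       # test raw connectivity without DNS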
