Xdebug 3 doesn't connect in Docker

I used this docker-compose file with the Xdebug options that worked for me in a local Xdebug installation, but it doesn't work in Docker.
version: '3'
services:
  app:
    container_name: php7.3-xdebug
    image: 'php7.3-xdebug-i'
    ports:
      - '9090:80'
    build:
      context: ./
      dockerfile: ./dockerfile
    volumes:
      - ./:/var/www/html
    environment:
      XDEBUG_CONFIG: zend_extension=xdebug.so xdebug.mode=debug xdebug.start_with_request=yes xdebug.client_host=host.docker.internal
This is the Dockerfile:
FROM php:7.3.28-apache-buster
RUN pecl install xdebug-3.0.4 \
    && docker-php-ext-enable xdebug
COPY . /var/www/html
WORKDIR /usr/src/myapp
and this launch.json in VSCode:
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for XDebug",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "pathMappings": {
        "/var/www/html": "${workspaceFolder}"
      }
    },
    {
      "name": "Launch currently open script",
      "type": "php",
      "request": "launch",
      "program": "${file}",
      "cwd": "${fileDirname}",
      "port": 9003
    }
  ]
}
I tried a lot of configurations, but there is no way to make it work.

XDEBUG_CONFIG: zend_extension=xdebug.so xdebug.mode=debug xdebug.start_with_request=yes xdebug.client_host=host.docker.internal
This line is incorrect:

- You can't load extensions through an Xdebug-specific environment variable (although the docker-php-ext-enable xdebug line in your Dockerfile should have taken care of this already).
- If you want to set settings through XDEBUG_CONFIG, you should not prefix them with xdebug., as explained in the documentation.
- The documentation also says that only a select set of settings can be set through XDEBUG_CONFIG, and start_with_request is not one of them; it needs to be set in a PHP ini file.
- To set Xdebug's mode, you need to use the XDEBUG_MODE environment variable instead, as the documentation again explains.
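Putting those points together, a corrected setup could look like this (a sketch: XDEBUG_MODE and the unprefixed client_host go through the environment, while start_with_request is appended to the ini file that docker-php-ext-enable generates in the official PHP images):

environment:
  XDEBUG_MODE: debug
  XDEBUG_CONFIG: client_host=host.docker.internal

And in the Dockerfile, after the docker-php-ext-enable line:

RUN echo "xdebug.start_with_request=yes" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini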

Related

POSTMAN in Docker GET error (connect ECONNREFUSED)

I have a UI running on 127.0.0.1:80 through a docker container. I tested this via Chrome and the Postman Client app, and both work well.
When I export the GET test case from the Postman Client app and run it through a separate container, I receive this error:
GET 127.0.0.1:80 [errored]
connect ECONNREFUSED 127.0.0.1:80
here is my docker compose file:
version: "3.7"
services:
web:
build: ui
ports:
- 80:80
depends_on:
- api
api:
build: app
environment:
- PORT=80
ports:
- 8020:80
test:
container_name: restful_booker_checks
build: test
image: postman_checks
depends_on:
- api
- web
Here is my Dockerfile for the test cases:
FROM node:10-alpine3.11
RUN apk update && apk upgrade && \
    apk --no-cache --update add gcc musl-dev ca-certificates curl && \
    rm -rf /var/cache/apk/*
RUN npm install -g newman
COPY ./ /test
WORKDIR /test
CMD ["newman", "run", "OCR_POC_v2_tests.json"]
Here is the exported JSON file from the Postman Client app:
{
  "info": {
    "_postman_id": "5e702694-fa82-4b9f-8b4e-49afd11330cc",
    "name": "OCR_POC_v2_tests",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    "_exporter_id": "21096887"
  },
  "item": [
    {
      "name": "get",
      "request": {
        "method": "GET",
        "header": [],
        "url": {
          "raw": "127.0.0.1:80",
          "host": [
            "127",
            "0",
            "0",
            "1"
          ],
          "port": "80"
        }
      },
      "response": []
    }
  ]
}
When I run the following command, it also works well:
newman run OCR_POC_v2_tests.json
So it's only in Docker that the connection cannot get through.
docker-compose creates a bridge network and connects the containers to it. Each container can be addressed by its service name on that network.
You're trying to connect to 127.0.0.1 or localhost, which in a Docker context is the test container itself.
You need to connect to web instead, so your URL becomes http://web:80/. I don't know how exactly you need to change your exported Postman file to accomplish that, but hopefully you can figure it out.
Also note that on the bridge network you connect using the container ports. I.e. in your case, that would be port 80 on both web and api. If you only need to access the containers from other containers on the docker network, you don't need to map the ports.
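For illustration, the url object in the exported collection could be edited along these lines (a sketch following the v2.1 collection format shown above):

"url": {
  "raw": "http://web:80/",
  "protocol": "http",
  "host": [
    "web"
  ],
  "port": "80"
}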

Debug jupyter notebook within docker container

I am trying to use VS Code to debug a Jupyter notebook which runs within a docker container.
I am using jupyter/scipy-notebook:python-3.9.7 as the base image and running the following command:
CMD ["python", "-m", "ptvsd", "--host", "0.0.0.0", "--port", "49155", "--wait", "--multiprocess", "-m", "jupyter", "lab", "--ip", "0.0.0.0"]
after having installed ptvsd.
The launch.json looks like the following:
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"connect": {
"host": "127.0.0.1",
"port": 49155
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}/notebooks",
"remoteRoot": "/home/jovyan"
}
]
where all the relevant code is within the notebooks folder.
After building and "upping" the following docker compose file:
version: '3'
services:
  scipy-notebook:
    ports:
      - '49154:8888'
      - '49155:49155'
    environment:
      - 'JUPYTER_RUNTIME_DIR=/tmp'
      - 'JUPYTER_ENABLE_LAB=yes'
    build:
      context: .
      dockerfile: ${dockerfile_src}
    image: ${registry}/${repositoryName}:${versionNumber}
the container correctly waits for me to attach a Visual Studio Code debug session (i.e. "Attaching to <container_name>"), but after I start the debug session, it stops immediately and the Jupyter notebook starts normally. Further attempts to attach bring the same result.
I tried to debug a simple .py file within the notebooks folder with the very same structure, and in that case everything works just fine. Here is the command:
CMD ["python", "-m", "ptvsd", "--host", "0.0.0.0", "--port", "49155", "--wait", "main.py"]

Mounting a host directory with Docker Compose in Windows

I'm using Docker for Windows on WSL2. When I do this in a Dockerfile, the contents of my directory on the host are copied correctly to the new image after building:
FROM node:latest
WORKDIR /app
COPY . /app
However, this docker compose YAML doesn't mount the same directory in the same location:
version: "2.0"
services:
node:
image: node
user: node
environment:
- NODE_ENV=production
volumes:
- ./:/home/node/app
tty: true
After accessing the container, I'm not able to see the contents of my host directory in it. I've also tried with the full host path, but it didn't work.
When I run docker inspect on my container, I see this, which contains the correct information:
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/c/Users/my-user/my-dir",
"Destination": "/home/node/app",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
What am I missing?

Mapping port between docker-compose and docker-compose.extend for development environment

I have a docker-compose file used for production which I'm hoping to incorporate with VS Code's dockerized development environment.
./docker-compose.yml
version: "3.6"
services:
django: &django-base
build:
context: .
dockerfile: backend/Dockerfile_local
restart: on-failure
volumes:
- .:/website
depends_on:
- memcached
- postgres
- redis
networks:
- main
ports:
- 8000:8000 # HTTP port
- 3000:3000 # ptvsd debugging port
expose:
- "3000"
env_file:
- variables/django_local.env
...
Note how I'm both publishing and exposing port 3000 here. This is the result of my playing around to get what I need working; I'm not sure if I need one, the other, or both.
My ./devcontainer then looks like the following:
./devcontainer/devcontainer.json
{
  "name": "Dev Container",
  "dockerComposeFile": ["../docker-compose.yml", "docker-compose.extend.yml"],
  "service": "dev",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "settings": {
    "terminal.integrated.shell.linux": null,
    "python.linting.pylintEnabled": true,
    "python.pythonPath": "/usr/local/bin/python3.8"
  },
  "extensions": [
    "ms-python.python"
  ]
}
.devcontainer/docker-compose.extended.yml
version: '3.6'
services:
  dev:
    build:
      context: .
      dockerfile: ./Dockerfile
    external_links:
      - django
    volumes:
      - .:/workspace:cached
    command: /bin/sh -c "while sleep 1000; do :; done"
The idea is that I want to be able to run VS Code attached to the dev service, and from there run the debugger attached to the django service using the following launch.json config:
{
  "name": "WP",
  "type": "python",
  "request": "attach",
  "port": 3000,
  "host": "localhost",
  "pathMappings": [
    {
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/website"
    }
  ]
},
I get an error when doing this, though: VS Code says connect ECONNREFUSED 127.0.0.1:3000. How can I get the ports mapped so this will work? Is it even possible?
Edit
Why not just attach directly to the django service?
The dev container simply contains Python and Node runtimes for linting and IntelliSense purposes while using VS Code. The idea behind creating a new service devoted specifically to debugging in the dev environment is that ./docker-compose.yml contains more than a few services that some of the devs on my team like to turn off sometimes to keep resource consumption low. Creating a container specifically for dev also makes it easier to set up .devcontainer/devcontainer.json to add things like extensions to one container without needing to add them after attaching to the running "non-dev" container. If this were to work, VS Code would be running within the dev container (see this).
I was able to solve this by changing the host in the launch.json from localhost to host.docker.internal: from inside the dev container, localhost refers to that container itself, so the django service's published debug port has to be reached through the Docker host instead. The resulting launch.json configuration then looks like this:
{
  "name": "WP",
  "type": "python",
  "request": "attach",
  "port": 3000,
  "host": "host.docker.internal",
  "pathMappings": [
    {
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/web-portal"
    }
  ]
},
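Before attaching, a quick reachability check from a terminal inside the dev container can confirm the port mapping is in place (a sketch; it assumes Python is on the container's PATH, as the devcontainer settings suggest):

python3 -c "import socket; socket.create_connection(('host.docker.internal', 3000), timeout=2); print('debugger port reachable')"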

"nginx-proxy" docker image socket volume not mounted

My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "docker-socket",
      "host": {
        "sourcePath": "/var/run/docker.sock"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx",
      "environment": [
        {
          "name": "VIRTUAL_HOST",
          "value": "demo.local"
        }
      ],
      "essential": true,
      "memory": 128
    },
    {
      "name": "nginx-proxy",
      "image": "jwilder/nginx-proxy",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "docker-socket",
          "containerPath": "/tmp/docker.sock",
          "readOnly": true
        }
      ]
    }
  ]
}
Running this locally using "eb local run" results in:
ERROR: you need to share your Docker host socket with a volume at /tmp/docker.sock
Typically you should run your jwilder/nginx-proxy with: -v /var/run/docker.sock:/tmp/docker.sock:ro
See the documentation at http://git.io/vZaGJ
If I ssh into my docker machine and run:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
It creates the container and mounts the volumes correctly.
Why is the above Dockerrun.aws.json configuration not mounting the /var/run/docker.sock:/tmp/docker.sock volume correctly?
If I run the same configuration from a docker-compose.yml, it works fine locally. However, I want to deploy this same configuration to Elastic Beanstalk using a Dockerrun.aws.json:
version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    cpu_shares: 100
    volumes:
      - /var/www/html
    environment:
      - VIRTUAL_HOST=demo.local
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    cpu_shares: 100
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
My local setup is using:

- VirtualBox 5.0.22 r108108
- docker-machine version 0.7.0, build a650a40
- EB CLI 3.7.7 (Python 2.7.1)
Your Dockerrun.aws.json file works fine in AWS EB for me (I only changed it slightly to use our own container/hostname in place of the 'nginx' container). Is it just a problem with the 'eb local run' setup, perhaps?
As you said you are on a Mac, try upgrading to the new Docker 1.12, which runs Docker natively on OS X, or at least to a newer version of docker-machine - https://docs.docker.com/machine/install-machine/#/installing-machine-directly
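If you stay on docker-machine, note that its built-in upgrade command updates the Docker daemon inside an existing machine (a sketch; 'default' is the conventional machine name, substitute your own):

docker-machine upgrade default

This is separate from upgrading the docker-machine binary itself, which the linked page covers.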
