I'm running unit tests with googletest on embedded C software, and I'm using a Docker container so that they can easily run on any platform. Now I would like to debug these unit tests from VS Code by connecting to my Docker container and running gdb inside it.
I managed to configure launch.json and tasks.json to start and run the debug session.
launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "tests debug",
"type": "cppdbg",
"request": "launch",
"program": "/project/build/tests/bin/tests",
"args": [],
"cwd": "/project",
"environment": [],
"sourceFileMap": {
"/usr/include/": "/usr/src/"
},
"preLaunchTask": "start debugger",
"postDebugTask": "stop debugger",
"pipeTransport": {
"debuggerPath": "/usr/bin/gdb",
"pipeProgram": "docker",
"pipeArgs": ["exec", "-i", "debug", "sh", "-c"],
"pipeCwd": "${workspaceRoot}"
},
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
}
]
}
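For reference, the pipeTransport above makes VS Code launch gdb through docker exec rather than locally. A rough sketch of the equivalent command (the exact gdb/MI arguments are assembled by the cppdbg adapter, so treat this as an approximation):

# roughly what the pipeTransport does: run gdb in MI mode inside the
# already-running "debug" container
docker exec -i debug sh -c "/usr/bin/gdb --interpreter=mi"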
tasks.json:
{
"version": "2.0.0",
"tasks": [
{
"label": "start debugger",
"type": "shell",
"command": "docker run --privileged -v /path/to/my/project:/project --name debug -it --rm gtest-cmock",
"isBackground": true,
"problemMatcher": {
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": ".",
"endsPattern": "."
}
}
},
{
"label": "stop debugger",
"type": "shell",
"command": "docker stop -t 0 debug",
}
]
}
When I hit the debugger restart button, the "stop debugger" task runs and the Docker container stops, but "start debugger" is not run. The debug session hangs and I have to close VS Code to be able to run another debug session.
I'm looking for a way to either run both tasks on debugger restart, or run neither (if I start my container from another terminal and deactivate both tasks, restart works with no problem).
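For context, starting the container manually from another terminal (the workaround mentioned above) could look roughly like this, reusing the same image and container name as in tasks.json:

# start the container by hand instead of via the preLaunchTask
docker run --privileged -v /path/to/my/project:/project --name debug -it --rm gtest-cmock
# in another terminal, check that gdb is reachable the way pipeTransport uses it
docker exec -i debug sh -c "/usr/bin/gdb --version"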
Related
I'm using Remote-Containers to debug a FastAPI app. The container has all the dependencies installed. When I try to debug using the VS Code debugger I get the error No module named uvicorn. But if I run uvicorn api.main:app it works.
/usr/bin/env /usr/bin/python3 /Users/juracylopes/.vscode/extensions/ms-python.python-2022.10.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 59746 -- -m uvicorn api.main:app
/Library/Developer/CommandLineTools/usr/bin/python3: No module named uvicorn
My launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: FastAPI",
"type": "python",
"request": "launch",
"module": "uvicorn",
"args": [
"api.main:app"
],
"jinja": true,
"justMyCode": true
}
]
}
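One way to narrow this down might be to compare the interpreter the debugger is invoking (the one shown in the error output) with the one used when running uvicorn manually; a rough sketch:

# the interpreter shown in the error output; check whether it can import uvicorn
/Library/Developer/CommandLineTools/usr/bin/python3 -c "import uvicorn"
# the interpreter in use when running it manually (inside the container or venv)
python3 -c "import uvicorn"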
I am setting up debugging of FastAPI running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host there is no response from the server.
However, if I start the container from command line with the following command, I can access the webpage from host.
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
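For completeness, a quick check of that manual run from the host might look like:

# request the app through the published host port
curl -i http://localhost:8001/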
Here is the tasks.json file:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--port",
"80"
],
"module": "uvicorn"
}
},
{
"type": "docker-build",
"label": "docker-build",
"platform": "python",
"dockerBuild": {
"tag": "with-batch:v2"
}
}
]
}
here is the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Flask App",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}/app",
"remoteRoot": "/app"
}
],
"projectType": "fastapi"
}
}
]
}
(The debug console output, the docker-run: debug terminal output, and the Python Debug Console terminal output were shown here but are not reproduced in this text.)
Explanation
The reason you are not able to access your container at that port is that VS Code starts the container with a random, unique localhost port mapped to it.
You can see this by running docker container inspect {container_name}, which should print out a JSON representation of the running container. In your case you would write docker container inspect withbatch-dev.
The output is a JSON array of objects, in this case just the one object, with a key "NetworkSettings", which in turn contains a key "Ports" that would look similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port 55016 would be the port you can connect to at localhost:55016
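If you only want the port mapping, Docker can also print it directly; for example, with the container name from the inspect command above:

# list the published ports for the running container
docker port withbatch-dev
# or pull just the Ports object out of the inspect output
docker container inspect --format '{{json .NetworkSettings.Ports}}' withbatch-dev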
Solution
With some tinkering and the documentation it seems "projectType": "fastapi" should be launching your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in the Docker container is only listening for internal connections. Think of Docker containers as being in their own subnetwork relative to your computer (there are exceptions to this, but that's not important here). If they want to accept outside connections (from your computer or other containers), they need to tell the container's virtual network interface to do so. In the context of a server, you would use the address 0.0.0.0 to indicate you want to listen on all IPv4 addresses on that interface.
That got a little deep, but suffice it to say, you should be able to add --host 0.0.0.0 to your run arguments and then connect. You would add this in tasks.json, in the docker-run object, where your other python args are specified:
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--host",
"0.0.0.0",
"--port",
"80"
],
"module": "uvicorn"
}
},
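After relaunching with that change, one way to confirm the container is reachable is to look up which host port was actually published and hit it from the host (the container name is whatever docker ps reports, withbatch-dev in the case above; <host-port> is a placeholder for the value printed by docker port):

# find the host port mapped to container port 80, then request the app
docker port withbatch-dev 80
curl -i http://localhost:<host-port>/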
Question
I get an error back from the Remote-Containers extension of VS Code which I don't understand:
[...] Command failed: docker run [...] -c echo Container started
trap "exit 0" 15
while sleep 1 & wait $!; do :; done
I presume something is not set up right with Docker, but from this I just don't know where to go.
I built the underlying container on macOS, where it worked fine. Now, testing it on Manjaro Linux, it doesn't.
Setup
devcontainer.json contents
{
"name": "MLSC",
"image": "ezraeisbrenner/misu-course-modelling-large-scale-circulation:2021.1",
"settings": {
"editor.formatOnSave": true,
"python.pythonPath": "/usr/local/bin/python",
"python.languageServer": "Pylance",
"python.formatting.provider": "black",
"python.linting.enabled": true,
"python.linting.pylintEnabled": true,
"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
"python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
"python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
"python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
"python.linting.pylintPath": "/usr/local/py-utils/bin/pylint",
"files.associations": {
"namelist": "FortranFreeForm",
},
"[FortranFreeForm]": {
"editor.defaultFormatter": "Blamsoft.fprettify"
},
},
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"editorconfig.editorconfig",
"krvajalm.linter-gfortran",
"blamsoft.fprettify"
],
"remoteUser": "vscode"
}
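As a first diagnostic on the Linux host, it may help to run a similar bootstrap command by hand against the image from devcontainer.json, to see whether the failure comes from Docker itself or from the extension (a sketch, not the exact command the extension generates):

# try the container bootstrap by hand with the devcontainer image
docker run --rm ezraeisbrenner/misu-course-modelling-large-scale-circulation:2021.1 sh -c 'echo Container started'
# and check that the local Docker daemon is healthy
docker info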
I have the following Packer config file:
{
"builders":[
{
"type": "docker",
"image": "ubuntu:18.04",
"commit": true
}
],
"post-processors": [
[
{
"type": "shell-local",
"inline": ["$(aws ecr get-login --no-include-email --region us-east-2)"]
},
{
"type": "docker-tag",
"repository": "localhost/my_image",
"tag": "latest"
},
{
"type": "docker-tag",
"repository": "123456789.dkr.ecr.us-east-2.amazonaws.com/my_image",
"tag": "latest"
},
"docker-push"
]
]
}
This gives me the following error:
==> docker: Running post-processor: shell-local
==> docker (shell-local): Running local shell script: /var/folders/zh/wsr6wlx11v9703__rn7f3b080000gn/T/packer-shell756682313
==> docker (shell-local): WARNING! Using --password via the CLI is insecure. Use --password-stdin.
docker (shell-local): Login Succeeded
==> docker: Running post-processor: docker-tag
Build 'docker' errored: 1 error(s) occurred:
* Post-processor failed: Unknown artifact type:
Can only tag from Docker builder artifacts.
It works if I remove the shell-local post-processor.
It also doesn't matter what kind of command I execute in the shell-local post-processor.
I tried to add "keep_input_artifact": true to the shell-local post-processor but this did not change anything.
How can I execute a shell-local post-processor before a docker-tag / docker-push post-processor?
I figured it out. I have to put the shell-local post-processor in a separate list, i.e. add another list to the list of post-processors. Each inner list is a sequence in which every post-processor receives the artifact of the previous one, and shell-local does not hand a Docker artifact on to docker-tag, which appears to be why the combined sequence fails. Like so:
"post-processors": [
[
{
"type": "shell-local",
"inline": ["$(aws ecr get-login --no-include-email --region us-east-2)"]
}
],
[
{
"type": "docker-tag",
"repository": "localhost/my_image",
"tag": "latest"
},
{
"type": "docker-tag",
"repository": "123456789.dkr.ecr.us-east-2.amazonaws.com/my_image",
"tag": "latest"
},
"docker-push"
]
]
}
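As a usage note, the reworked template can be checked before building; template.json here is just a placeholder for whatever the file is actually called:

# validate the template, then build it
packer validate template.json
packer build template.json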
How do you run systemd in a Docker managed plugin? With a normal container I can run centos/systemd and run an Apache server using their example Dockerfile
FROM centos/systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
And running it as follows
docker build --rm --no-cache -t httpd .
docker run --privileged --name httpd -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 -d httpd
However, when I try to make a managed plugin, there are some issues with the cgroups. I've tried putting the following mounts in config.json:
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"rprivate"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"rbind",
"ro",
"rprivate"
]
}
I also tried the following, which damages the host's cgroups and may require a hard reboot to recover:
{
"destination": "/sys/fs/cgroup/systemd",
"source": "/sys/fs/cgroup/systemd",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
{
"destination": "/sys/fs/cgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
It looks to be something to do with how OpenContainers and Moby interact: https://github.com/moby/moby/issues/36861
This is how I did it in my plugin: https://github.com/trajano/docker-volume-plugins/tree/master/centos-mounted-volume-plugin
The key thing to do is preserve /run/docker/plugins before systemd gets started and wipes the /run folder, and then make sure you create the socket in the new folder.
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]
then
mount --bind /run/docker/plugins /dockerplugins
fi
The other thing is that Docker managed plugins add an implicit /sys/fs/cgroup mount AFTER the mounts defined in config.json, so creating a read-only mount will not work unless it is rebound before starting up systemd:
mount --rbind /hostcgroup /sys/fs/cgroup
With the mount defined in config.json as
{
"destination": "/hostcgroup",
"source": "/sys/fs/cgroup",
"type": "bind",
"options": [
"bind",
"ro",
"private"
]
}
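Putting the pieces together, the startup ordering described above might look roughly like this in the plugin's init script (a sketch only; see the linked init.sh below for the real version, and note the final hand-off to /usr/sbin/init is assumed from the centos/systemd image's CMD):

# preserve the plugin socket directory before systemd wipes /run
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]
then
    mount --bind /run/docker/plugins /dockerplugins
fi
# rebind the host cgroup mount over /sys/fs/cgroup before systemd starts
mount --rbind /hostcgroup /sys/fs/cgroup
# finally hand control to systemd as PID 1
exec /usr/sbin/init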
Creating the socket needs to be customized, since the plugin helpers write to /run/docker/plugins:
// Create the plugin socket in the preserved /dockerplugins directory instead of
// /run/docker/plugins (sockets is assumed to be Docker's go-connections sockets package).
l, err := sockets.NewUnixSocket("/dockerplugins/osmounted.sock", 0)
if err != nil {
	log.Fatal(err)
}
h.Serve(l)
The following files show how I achieved the above in my plugin:
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/init.sh
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/config.json
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/main.go#L113
You can also run httpd in a CentOS container without systemd, at least as far as the tests with the docker-systemctl-replacement script go.