Recently I needed to debug a single-file Go binary application that runs in a Docker container under Kubernetes; I have its source code. When I package the Docker image I use
/dlv --listen=:40000 --headless=true --api-version=2 exec /singleExeFile
and expose port 40000 to the outer VM like
ports:
  - 40000:40000
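Note: dlv itself can also log what it receives from each client, which helps when a GUI attach fails without any message; a sketch of the same exec line with logging enabled, assuming the same paths and port as above:
/dlv --listen=:40000 --headless=true --api-version=2 --log --log-output=dap,rpc exec /singleExeFile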
When I connect from my dev environment to the outer VM with the dlv command, the connection seems to work, like the following:
foo@foo-vm:~$ dlv connect 110.123.123.123:40000
Type 'help' for list of commands.
(dlv)
But when I use VS Code to attach to the process, I run into two errors (the Go extension is installed in VS Code).
When using the legacy debug adapter to connect, here is my launch.json:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Connect to server",
            "type": "go",
            "debugAdapter": "legacy",
            "request": "attach",
            "mode": "remote",
            "port": 40000,
            "host": "110.123.123.123",
            "substitutePath": [
                {
                    "from": "${workspaceFolder}/cmd/maine.go",
                    "to": "/singleExeFile"
                }
            ]
        }
    ]
}
But VS Code raises an error, and I haven't found a similar error on Google: Error: Socket connection to remote was closed
When using the dlv-dap method to connect:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Delve into Docker",
            "type": "go",
            "debugAdapter": "dlv-dap",
            "request": "attach",
            "mode": "remote",
            "port": 40000,
            "host": "110.123.123.123",
            "substitutePath": [
                {
                    "from": "${workspaceFolder}/cmd/maine.go",
                    "to": "/singleExeFile"
                }
            ]
        }
    ]
}
And when I try to connect, VS Code raises no error; it just tries to connect and then stops, and I can't even tell what the error is.
With the verbose parameter there is still no output in the DEBUG CONSOLE for the dlv-dap method. For the legacy method, however, adding verbose does produce the following detailed messages in the DEBUG CONSOLE:
AttachRequest
Start remote debugging: connecting 110.123.123.123:40000
To client: {"seq":0,"type":"event","event":"initialized"}
InitializeEvent
To client: {"seq":0,"type":"response","request_seq":2,"command":"attach","success":true}
From client: configurationDone(undefined)
ConfigurationDoneRequest
Socket connection to remote was closed
To client: {"seq":16,"type":"response","request_seq":2,"command":"attach","success":false,"message":"Failed to continue: Check the debug console for details.","body":{"error":{"id":3000,"format":"Failed to continue: Check the debug console for details.","showUser":true}}}
Sending TerminatedEvent as delve is closed
To client: {"seq":0,"type":"event","event":"terminated"}
From client: disconnect({"restart":false})
DisconnectRequest
Update on 9 July
I made another attempt and created a simple Docker image using the following Dockerfile:
FROM golang:1.16.15
RUN mkdir -p /var/lib/www && mkdir -p /var/lib/temp
WORKDIR /var/lib/temp
COPY . ./
RUN go env -w GOPROXY="https://goproxy.cn,direct"
RUN go install github.com/go-delve/delve/cmd/dlv@latest
RUN go mod tidy
RUN go build
RUN mv ./webproj /var/lib/www/ && rm -rf /var/lib/temp
WORKDIR /var/lib/www
COPY ./build.sh ./
EXPOSE 8080
EXPOSE 2345
RUN chmod 777 ./webproj
RUN chmod 777 ./build.sh
ENTRYPOINT ["/bin/bash","./build.sh"]
And build.sh is:
dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./webproj
After that, it works with GoLand's debugger: GoLand can debug when I send the designed GET API request. But it still doesn't work with VS Code. When I use VS Code, it does connect to the container, but when I add a breakpoint it is shown as an unverified breakpoint and execution doesn't stop there.
Here is my launch.json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Connect to server",
            "type": "go",
            "request": "attach",
            "mode": "remote",
            "remotePath": "${fileDirname}",
            "port": 2345,
            "host": "127.0.0.1"
        }
    ]
}
So currently this is blocked; help is very much needed. Thanks.
I am setting up debugging of FastAPI running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host there is no response from the server.
However, if I start the container from the command line with the following command, I can access the webpage from the host:
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
Here is the tasks.json file:
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "type": "docker-run",
            "label": "docker-run: debug",
            "dockerRun": {
                "image": "with-batch:v2",
                "volumes": [
                    {
                        "containerPath": "/app",
                        "localPath": "${workspaceFolder}/app"
                    }
                ],
                "ports": [
                    {
                        "containerPort": 80,
                        "hostPort": 8001,
                        "protocol": "tcp"
                    }
                ]
            },
            "python": {
                "args": [
                    "main:app",
                    "--port",
                    "80"
                ],
                "module": "uvicorn"
            }
        },
        {
            "type": "docker-build",
            "label": "docker-build",
            "platform": "python",
            "dockerBuild": {
                "tag": "with-batch:v2"
            }
        }
    ]
}
Here is the launch.json file:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug Flask App",
            "type": "docker",
            "request": "launch",
            "preLaunchTask": "docker-run: debug",
            "python": {
                "pathMappings": [
                    {
                        "localRoot": "${workspaceFolder}/app",
                        "remoteRoot": "/app"
                    }
                ],
                "projectType": "fastapi"
            }
        }
    ]
}
Screenshots of the debug console output, the docker-run: debug terminal output, and the Python Debug Console terminal output were attached here (the debug console shows Uvicorn running on http://127.0.0.1:80).
Explanation
The reason you are not able to access your container at that port is that VS Code starts your container with a random, unique localhost port mapped to it.
You can see this by running docker container inspect {container_name}, which should print out a JSON representation of the running container. In your case you would run docker container inspect withbatch-dev.
The output is an array of objects, in this case just the one object, with a key "NetworkSettings" and, inside that object, a key "Ports", which would look similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port 55016 would be the port you can connect to at localhost:55016
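If you just want the mapped port without reading the whole JSON, Docker can extract it directly; two equivalent one-liners, assuming the container is still named withbatch-dev:
docker port withbatch-dev 80
# prints something like 0.0.0.0:55016
docker container inspect -f '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' withbatch-dev
# prints just the host port, e.g. 55016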
Solution
With some tinkering and documentation, it seems "projectType": "fastapi" should be launching your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in your Docker container is only listening for internal connections. Think of Docker containers as being in their own subnetwork relative to your computer (there are exceptions to this, but that's not important here). If a process wants to accept outside connections (from your computer or from other containers), it needs to tell the container's virtual network interface to listen for them. In the context of a server, you use the address 0.0.0.0 to indicate that you want to listen on all IPv4 addresses of that interface.
That got a little deep, but suffice it to say, you should be able to add --host 0.0.0.0 to your run arguments and you would be able to connect. You would add this to tasks.json, in the docker-run object, where your other python args are specified:
{
    "type": "docker-run",
    "label": "docker-run: debug",
    "dockerRun": {
        "image": "with-batch:v2",
        "volumes": [
            {
                "containerPath": "/app",
                "localPath": "${workspaceFolder}/app"
            }
        ],
        "ports": [
            {
                "containerPort": 80,
                "hostPort": 8001,
                "protocol": "tcp"
            }
        ]
    },
    "python": {
        "args": [
            "main:app",
            "--host",
            "0.0.0.0",
            "--port",
            "80"
        ],
        "module": "uvicorn"
    }
},
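Once the container is relaunched with that change, the fix is easy to verify from the host; a quick check, assuming the container is again named withbatch-dev and substituting whatever host port docker port withbatch-dev 80 reports:
docker exec withbatch-dev ss -lntp   # if ss is available in the image; uvicorn should now be listening on 0.0.0.0:80 instead of 127.0.0.1:80
curl -i http://localhost:55016/      # use the host port reported by docker port; any route the app defines should now respond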
I'm running unit tests using googletest on embedded C software, and I'm using a Docker container to be able to run them easily on any platform. Now I would like to debug these unit tests from VS Code, connecting to my Docker container and running gdb in it.
I managed to configure launch.json and tasks.json to start and run the debug session.
launch.json:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "tests debug",
            "type": "cppdbg",
            "request": "launch",
            "program": "/project/build/tests/bin/tests",
            "args": [],
            "cwd": "/project",
            "environment": [],
            "sourceFileMap": {
                "/usr/include/": "/usr/src/"
            },
            "preLaunchTask": "start debugger",
            "postDebugTask": "stop debugger",
            "pipeTransport": {
                "debuggerPath": "/usr/bin/gdb",
                "pipeProgram": "docker",
                "pipeArgs": ["exec", "-i", "debug", "sh", "-c"],
                "pipeCwd": "${workspaceRoot}"
            },
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                }
            ]
        }
    ]
}
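For reference, that pipeTransport block makes VS Code start gdb roughly like the command below (a sketch of the equivalent invocation, not the exact string the extension builds), so the same line can be run by hand to verify that the container and gdb are reachable before involving the IDE:
docker exec -i debug sh -c "/usr/bin/gdb --interpreter=mi"
# if this drops you into a gdb/MI session, the pipe itself works; leave it with -gdb-exit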
tasks.json:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "start debugger",
            "type": "shell",
            "command": "docker run --privileged -v /path/to/my/project:/project --name debug -it --rm gtest-cmock",
            "isBackground": true,
            "problemMatcher": {
                "pattern": [
                    {
                        "regexp": ".",
                        "file": 1,
                        "location": 2,
                        "message": 3
                    }
                ],
                "background": {
                    "activeOnStart": true,
                    "beginsPattern": ".",
                    "endsPattern": "."
                }
            }
        },
        {
            "label": "stop debugger",
            "type": "shell",
            "command": "docker stop -t 0 debug"
        }
    ]
}
When I hit the debugger restart button, the stop debugger task is run and the Docker container stops, but start debugger is not run. The debug session hangs and I have to close VS Code to be able to run another debug session.
I'm looking for a way to either run both tasks on debugger restart, or to run neither (if I start my container from another terminal and disable both tasks, restart works with no problem).
How do you run systemd in a Docker managed plugin? With a normal container I can run centos/systemd and start an Apache server using their example Dockerfile:
FROM centos/systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
And run it as follows:
docker build --rm --no-cache -t httpd .
docker run --privileged --name httpd -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 -d httpd
However, when I try to make a managed plugin, there are some issues with the cgroups. I've tried putting the following in config.json:
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "rprivate"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "rbind",
        "ro",
        "rprivate"
    ]
}
I also tried the following, which damages the host's cgroups and may require a hard reboot to recover:
{
    "destination": "/sys/fs/cgroup/systemd",
    "source": "/sys/fs/cgroup/systemd",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
{
    "destination": "/sys/fs/cgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
It looks to be something to do with how opencontainers and Moby interact: https://github.com/moby/moby/issues/36861
This is how I did it in my plugin: https://github.com/trajano/docker-volume-plugins/tree/master/centos-mounted-volume-plugin
The key thing to do is to preserve /run/docker/plugins before systemd gets started and wipes the /run folder, and then make sure you create the socket in the new folder:
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]
then
mount --bind /run/docker/plugins /dockerplugins
fi
The other thing is that Docker managed plugins add an implicit /sys/fs/cgroup mount AFTER the mounts defined in config.json, so creating a read-only mount there will not work unless it is rebound before starting up systemd:
mount --rbind /hostcgroup /sys/fs/cgroup
With the mount defined in config.json as
{
    "destination": "/hostcgroup",
    "source": "/sys/fs/cgroup",
    "type": "bind",
    "options": [
        "bind",
        "ro",
        "private"
    ]
}
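Putting those two steps together, the start-up script ends up looking roughly like the sketch below before it hands control to systemd (the real init.sh is linked further down; /dockerplugins and /hostcgroup match the paths used above, the rest is illustrative):
#!/bin/sh
# preserve the plugin socket directory before systemd wipes /run
mkdir -p /dockerplugins
if [ -e /run/docker/plugins ]; then
    mount --bind /run/docker/plugins /dockerplugins
fi
# rebind the read-only host cgroup mount over the implicit /sys/fs/cgroup
mount --rbind /hostcgroup /sys/fs/cgroup
# hand control over to systemd
exec /usr/sbin/init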
Creating the socket needs to be customized since the plugin helpers write to /run/docker/plugins
l, err := sockets.NewUnixSocket("/dockerplugins/osmounted.sock", 0)
if err != nil {
    log.Fatal(err)
}
h.Serve(l)
The following files show how I achieved this in my plugin:
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/init.sh
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/config.json
https://github.com/trajano/docker-volume-plugins/blob/v1.2.0/centos-mounted-volume-plugin/main.go#L113
You can run httpd in a CentOS container without systemd, at least for testing purposes, with the docker-systemctl-replacement script.
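As a rough sketch of that approach (assuming systemctl.py from the docker-systemctl-replacement project has been copied over /usr/bin/systemctl in the image and used as the CMD), the container can then be built and run without the privileged flags or cgroup mounts:
docker build --rm --no-cache -t httpd-nosysd .
docker run --name httpd-nosysd -p 80:80 -d httpd-nosysd   # note: no --privileged and no /sys/fs/cgroup bind mount needed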
I am using Chef version 11.16.4 and Packer v0.7.1 with Docker v1.3.0.
I am having trouble getting Packer's chef-solo provisioner to run chef-solo after it installs it.
I am getting the following error:
ERROR: Unable to determine node name: configure node_name or configure the system's hostname and fqdn
I poked around on the internet trying to figure out what was happening, and this error seems rare, since node_name is usually given a default value from the system's hostname or is assigned in solo.rb, which it seemed to me cannot be overridden directly in the Packer config template.
Am I doing something wrong with my Packer config, or is this an incompatibility issue between chef-solo and Docker provisioning?
I am using the following packer config:
{
    "variables": {
        "version": "",
        "base-image-version": ""
    },
    "builders": [{
        "type": "docker",
        "image": "centos:latest",
        "pull": true,
        "export_path": "zookeeper-base-{{user `version`}}.tar"
    }],
    "provisioners": [
        {
            "type": "chef-solo",
            "cookbook_paths": ["../chef-simple/cookbooks"],
            "install_command": "curl -L https://www.opscode.com/chef/install.sh | bash",
            "execute_command": "chef-solo --no-color -c {{.ConfigPath}} -j {{.JsonPath}}",
            "run_list": ["recipe[zookeeper::install]"],
            "json": {"node_name": "zookeeper-box", "env_name": "dev", "ip": "10.10.10.10"},
            "prevent_sudo": true
        }
    ],
    "post-processors": [{
        "type": "docker-import",
        "repository": "ed-sullivan/zookeeper-base",
        "tag": "{{user `version`}}"
    }]
}
I solved this by adding a hostname to the Docker builder's run_command in the JSON template:
"run_command": ["-d", "--hostname=foobar", "-i", "-t", "{{.Image}}", "/bin/bash"]
I also had to install the hostname package (I think chef uses that to look up the hostname) and the curl package.
"inline": ["yum -y update; yum -y install curl; yum -y install hostname"]
Hopefully that helps!
I ended up solving this by creating a config template that defines the node_name, and installing the Chef files using the file provisioner.
Here is the updated config
{
    "variables": {
        "version": "0.1",
        "base-image-version": "",
        "chef_dir": "/tmp/packer-chef-client",
        "chef_env": "dev"
    },
    "builders": [{
        "type": "docker",
        "image": "centos:latest",
        "pull": true,
        "export_path": "zookeeper-base-{{user `version`}}.tar"
    }],
    "provisioners": [
        { "type": "shell", "inline": [ "mkdir -p {{user `chef_dir`}}", "yum install -y tar" ] },
        { "type": "file", "source": "../chef/cookbooks", "destination": "{{user `chef_dir`}}" },
        {
            "type": "chef-solo",
            "install_command": "curl -L https://www.opscode.com/chef/install.sh | bash",
            "execute_command": "chef-solo --no-color -c {{.ConfigPath}} -j {{.JsonPath}}",
            "run_list": ["recipe[zookeeper::install]"],
            "prevent_sudo": true,
            "config_template": "./solo.rb.template"
        }
    ]
}
And the corresponding config template file:
log_level :info
log_location STDOUT
local_mode true
ssl_verify_mode verify_peer
role_path "{{user `chef_dir`}}/roles"
data_bag_path "{{user `chef_dir`}}/data_bags"
environment_path "{{user `chef_dir`}}/environments"
cookbook_path [ "{{user `chef_dir`}}/cookbooks" ]
node_name "packer-docker-build"