Remote debugging with VSCode, Go & Nakama in Docker
I have the following problem: I am trying to run a Nakama game server written in Go with Docker.
For debugging purposes I want to use Delve.
I am not really sure whether I am proceeding correctly, so maybe my problem is actually something completely different, but I think Delve does not connect to the Nakama build.
What have I done so far?
I created a new Go project and put a bit of code in main.go. After that I created a Dockerfile and a docker-compose.yml.
I think the mistake is in one of these two files.
My Dockerfile looks like this:
FROM golang
ENV GOPATH /home/marc/go_projects
ENV PATH ${GOPATH}/bin:/usr/local/go/bin:$PATH
RUN go install github.com/go-delve/delve/cmd/dlv@latest
FROM heroiclabs/nakama-pluginbuilder:3.3.0 AS go-builder
ENV GO111MODULE on
ENV CGO_ENABLED 1
WORKDIR $GOPATH/gamedev
COPY go.mod .
COPY main.go .
COPY vendor/ vendor/
RUN go build --trimpath --mod=vendor --buildmode=plugin -o ./backend.so
FROM heroiclabs/nakama:3.3.0
COPY --from=go-builder /backend/backend.so /nakama/data/modules/
COPY local.yml /nakama/data/
And my docker-compose.yml:
version: '3'
services:
  postgres:
    container_name: postgres
    image: postgres:9.6-alpine
    environment:
      - POSTGRES_DB=nakama
      - POSTGRES_PASSWORD=localdb
    volumes:
      - data:/var/lib/postgresql/data
    expose:
      - "8080"
      - "5432"
    ports:
      - "5432:5432"
      - "8080:8080"
  nakama:
    container_name: nakama
    image: heroiclabs/nakama:3.12.0
    entrypoint:
      - "/bin/sh"
      - "-ecx"
      - >
        /nakama/nakama migrate up --database.address postgres:localdb@postgres:5432/nakama &&
        exec /nakama/nakama --name nakama1 --database.address postgres:localdb@postgres:5432/nakama --logger.level DEBUG --session.token_expiry_sec 7200
    restart: always
    links:
      - "postgres:db"
    depends_on:
      - postgres
    volumes:
      - ./:/nakama/data
    expose:
      - "7349"
      - "7350"
      - "7351"
      - "2345"
    ports:
      - "2345:2345"
      - "7349:7349"
      - "7350:7350"
      - "7351:7351"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7350/"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  data:
When I build and run the Docker image, it runs with no complaints. I can open the Nakama web interface, so that part is running fine.
But when I try to connect the debugger, it looks like it establishes a connection successfully but closes it right away.
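I am not even sure whether a dlv binary ends up in the final image at all. A quick sanity check from the host would be something like this (just a sketch; nakama is the container name from my compose file):

# check whether the final image contains a dlv binary at all
docker exec nakama sh -c 'command -v dlv || echo "no dlv in this container"'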
So my launch.json config is the following:
"name": "Connect to server",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "/home/marc/go_projects/bin/dlv",
"port": 2345,
"host": "127.0.0.1",
"trace": "verbose"
This is what I get in /tmp/vs-code-debug.txt:
[07:24:05.882 UTC] From client: initialize({"clientID":"vscode","clientName":"Visual Studio Code","adapterID":"go","pathFormat":"path","linesStartAt1":true,"columnsStartAt1":true,"supportsVariableType":true,"supportsVariablePaging":true,"supportsRunInTerminalRequest":true,"locale":"de","supportsProgressReporting":true,"supportsInvalidatedEvent":true,"supportsMemoryReferences":true})
[07:24:05.882 UTC] InitializeRequest
[07:24:05.882 UTC] To client: {"seq":0,"type":"response","request_seq":1,"command":"initialize","success":true,"body":{"supportsConditionalBreakpoints":true,"supportsConfigurationDoneRequest":true,"supportsSetVariable":true}}
[07:24:05.883 UTC] InitializeResponse
[07:24:05.883 UTC] From client: attach({"name":"Connect to server","type":"go","request":"attach","mode":"remote","remotePath":"/home/marc/go_projects/bin/dlv","port":2345,"host":"127.0.0.1","trace":"verbose","__configurationTarget":5,"packagePathToGoModPathMap":{"/home/marc/go_projects/gamedev":"/home/marc/go_projects/gamedev"},"debugAdapter":"legacy","showRegisters":false,"showGlobalVariables":false,"substitutePath":[],"showLog":false,"logOutput":"debugger","dlvFlags":[],"hideSystemGoroutines":false,"dlvLoadConfig":{"followPointers":true,"maxVariableRecurse":1,"maxStringLen":64,"maxArrayValues":64,"maxStructFields":-1},"cwd":"/home/marc/go_projects/gamedev","dlvToolPath":"/home/marc/go_projects/bin/dlv","env":{"ELECTRON_RUN_AS_NODE":"1","GJS_DEBUG_TOPICS":"JS ERROR;JS LOG","USER":"marc","SSH_AGENT_PID":"1376","XDG_SESSION_TYPE":"x11","SHLVL":"0","HOME":"/home/marc","DESKTOP_SESSION":"ubuntu","GIO_LAUNCHED_DESKTOP_FILE":"/usr/share/applications/code.desktop","GTK_MODULES":"gail:atk-bridge","GNOME_SHELL_SESSION_MODE":"ubuntu","MANAGERPID":"1053","DBUS_SESSION_BUS_ADDRESS":"unix:path=/run/user/1000/bus","GIO_LAUNCHED_DESKTOP_FILE_PID":"6112","IM_CONFIG_PHASE":"1","MANDATORY_PATH":"/usr/share/gconf/ubuntu.mandatory.path","LOGNAME":"marc","_":"/usr/share/code/code","JOURNAL_STREAM":"8:44286","DEFAULTS_PATH":"/usr/share/gconf/ubuntu.default.path","XDG_SESSION_CLASS":"user","USERNAME":"marc","GNOME_DESKTOP_SESSION_ID":"this-is-deprecated","WINDOWPATH":"2","PATH":"/home/marc/.nvm/versions/node/v17.8.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/go/bin","SESSION_MANAGER":"local/mobile:#/tmp/.ICE-unix/1458,unix/mobile:/tmp/.ICE-unix/1458","INVOCATION_ID":"fe605ca56aa646859602b81e264bf01b","XDG_RUNTIME_DIR":"/run/user/1000","XDG_MENU_PREFIX":"gnome-","DISPLAY":":0","LANG":"de_DE.UTF-8","XDG_CURRENT_DESKTOP":"Unity","XAUTHORITY":"/run/user/1000/gdm/Xauthority","XDG_SESSION_DESKTOP":"ubuntu","XMODIFIERS":"#im=ibus","SSH_AUTH_SOCK":"/run/user/1000/keyring/ssh","SHELL":"/bin/bash","QT_ACCESSIBILITY":"1","GDMSESSION":"ubuntu","GPG_AGENT_INFO":"/run/user/1000/gnupg/S.gpg-agent:0:1","GJS_DEBUG_OUTPUT":"stderr","QT_IM_MODULE":"ibus","PWD":"/home/marc","XDG_DATA_DIRS":"/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop","XDG_CONFIG_DIRS":"/etc/xdg/xdg-ubuntu:/etc/xdg","_WSREP_START_POSITION":"","CHROME_DESKTOP":"code-url-handler.desktop","ORIGINAL_XDG_CURRENT_DESKTOP":"ubuntu:GNOME","VSCODE_CWD":"/home/marc","GDK_BACKEND":"x11","VSCODE_NLS_CONFIG":"{\"locale\":\"de\",\"availableLanguages\":{\"*\":\"de\"},\"_languagePackId\":\"b61d3f473b0358bc955527db7340fd23.de\",\"_translationsConfigFile\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/tcf.json\",\"_cacheRoot\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de\",\"_resolvedLanguagePackCoreLocation\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/30d9c6cd9483b2cc586687151bcbcd635f373630\",\"_corruptedFile\":\"/home/marc/.config/Code/clp/b61d3f473b0358bc955527db7340fd23.de/corrupted.info\",\"_languagePackSupport\":true}","VSCODE_CODE_CACHE_PATH":"/home/marc/.config/Code/CachedData/30d9c6cd9483b2cc586687151bcbcd635f373630","VSCODE_IPC_HOOK":"/run/user/1000/vscode-432c1660-1.68.1-main.sock","VSCODE_PID":"6112","NVM_INC":"/home/marc/.nvm/versions/node/v17.8.0/include/node","LS_COLORS":"","NVM_DIR":"/home/marc/.nvm","LESSCLOSE":"/usr/bin/lesspipe %s %s","LESSOPEN":"| /usr/bin/lesspipe 
%s","NVM_CD_FLAGS":"","NVM_BIN":"/home/marc/.nvm/versions/node/v17.8.0/bin","GOPATH":"/home/marc/go_projects","VSCODE_AMD_ENTRYPOINT":"vs/workbench/api/node/extensionHostProcess","VSCODE_PIPE_LOGGING":"true","VSCODE_VERBOSE_LOGGING":"true","VSCODE_LOG_NATIVE":"false","VSCODE_HANDLES_UNCAUGHT_ERRORS":"true","VSCODE_LOG_STACK":"false","VSCODE_IPC_HOOK_EXTHOST":"/run/user/1000/vscode-ipc-8cf508cc-d427-4616-b6b5-61d3c3e5d99f.sock","APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL":"1","GOMODCACHE":"/home/marc/go_projects/pkg/mod","GOPROXY":"https://proxy.golang.org,direct"},"__sessionId":"1893ab9e-5a19-45e7-8b39-46db079cdbe3"})
[07:24:05.883 UTC] AttachRequest
[07:24:05.884 UTC] Start remote debugging: connecting 127.0.0.1:2345
[07:24:06.191 UTC] To client: {"seq":0,"type":"event","event":"initialized"}
[07:24:06.192 UTC] InitializeEvent
[07:24:06.192 UTC] To client: {"seq":0,"type":"response","request_seq":2,"command":"attach","success":true}
[07:24:06.194 UTC] [Error] Socket connection to remote was closed
[07:24:06.194 UTC] Sending TerminatedEvent as delve is closed
[07:24:06.194 UTC] To client: {"seq":0,"type":"event","event":"terminated"}
[07:24:06.201 UTC] From client: configurationDone(undefined)
[07:24:06.201 UTC] ConfigurationDoneRequest
[07:24:06.225 UTC] From client: disconnect({"restart":false})
[07:24:06.225 UTC] DisconnectRequest
I tried changing the remote path in the launch.json multiple times, trying to match the paths in the Docker files.
Maybe I have to change how Delve is set up in Docker, but to be honest I don't really know how, and I can't find good documentation on how to do this.
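From what I have read, I guess the container would have to start Nakama under a headless Delve that listens on port 2345, roughly like this (only a sketch of what I have in mind; I have not verified the paths, or that the Nakama binary from the official image can be debugged this way):

# inside the nakama container, instead of starting /nakama/nakama directly:
dlv exec /nakama/nakama \
    --headless --listen=:2345 --api-version=2 --accept-multiclient --log \
    -- --name nakama1 --database.address postgres:localdb@postgres:5432/nakama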
I was having the same problem.
I solved it by adding the line "debugAdapter": "dlv-dap" to my launch.json:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Delve into Docker",
            "type": "go",
            "request": "attach",
            "debugAdapter": "dlv-dap",
            "mode": "remote",
            "substitutePath": [
                {
                    "from": "${workspaceFolder}/",
                    "to": "/app",
                },
            ],
            "port": 2345,
            "host": "127.0.0.1",
            "showLog": false,
            "apiVersion": 2,
            "trace": "verbose"
        }
    ]
}
Related
Selenium Selenoid File Server not running in Browser Container
Selenium is unable to download any files from the browsers due to a 502 error on my coworker's machine; none of my other coworkers are seeing the issue, just this one person. We are using Firefox. After looking at the Selenoid code a bit, I learned that the container the browser runs in uses a file server on port 8080 to allow downloading files from the container, but I discovered that this file server is not running within these containers. I verified this with this command:

docker exec -it <browser_container> curl 127.0.0.1:8080

On my machine I get a 200 response listing the downloaded files, e.g. test.xlsx. But when I run this command on his machine I get this error:

Failed to connect to 127.0.0.1 port 8080 after 8 ms: Connection refused

This indicates that the file server is not running within his browser containers. I've tried many different Firefox arguments and I've restarted Selenoid and the Docker containers, and I still can't figure out what's going on; I'm completely lost right now. If anyone knows what might be going on I would be appreciative, or even if anyone has any idea how to gain more information about what's going on. Here are the Firefox options we are using:

options = webdriver.FirefoxOptions()
options.add_argument('--width=1600')
options.add_argument('--height=900')
options.set_preference('browser.download.dir', '/home/selenium/Downloads')

And our browsers.json file:

{
  "chrome": {
    "default": "105.0",
    "versions": {
      "105.0": {
        "image": "selenoid/vnc_chrome:105.0",
        "port": "4444",
        "path": "/",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "chrome",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  },
  "firefox": {
    "default": "latest",
    "versions": {
      "latest": {
        "image": "selenoid/firefox",
        "port": "4444",
        "path": "/wd/hub",
        "env": ["TZ=America/Denver"]
      }
    },
    "caps": {
      "loggingPrefs": {"browser": "ALL"},
      "enableVNC": true,
      "browserName": "firefox",
      "timeZone": "America/Denver",
      "sessionTimeout": "1m30s"
    }
  }
}

We do have a custom docker-compose.yml file for starting the selenoid and selenoid_ui containers; here is the file just in case that setup matters, though I doubt the issue lies here:

version: "3.9"
networks:
  selenoid_net:
    name: selenoid_net
    attachable: true
    ipam:
      config:
        - subnet: 172.198.1.0/24
services:
  selenoid:
    image: aerokube/selenoid
    restart: always
    networks:
      selenoid_net:
    ports:
      - "4444:4444"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${VIDEO_OUTPUT}/video
      - TZ=America/Denver
    volumes:
      - "/etc/selenoid:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${VIDEO_OUTPUT}/video:${VIDEO_OUTPUT}/video"
      - "${VIDEO_OUTPUT}/logs:${VIDEO_OUTPUT}/logs"
      - "${PWD}:/etc/browsers"
    command: ["-conf", "/etc/browsers/browsers.json", "-video-output-dir", "${VIDEO_OUTPUT}/video", "-log-output-dir", "${VIDEO_OUTPUT}/logs", "-limit", "6", "-timeout", "1m30s", "-container-network", "selenoid_net"]
  selenoid-ui:
    image: "aerokube/selenoid-ui:latest"
    restart: always
    networks:
      selenoid_net:
    links:
      - "selenoid"
    ports:
      - "8080:8080"
    command: ["--selenoid-uri", "http://selenoid:4444"]
Caddy as reverse proxy in docker refuses to connect to other containers
I wanted to try out Caddy in a Docker environment, but it does not seem to be able to connect to other containers. I created a network "caddy" and want to run Portainer alongside it. If I look into the Caddy volume, I can see that certificates are generated, so that part seems to work. Portainer is also running and accessible via the server IP (http://65.21.139.246:1000/). But when I access it via the URL https://smallhetzi.fading-flame.com/ I get a 502, and in the Caddy log I can see this message:

{
  "level": "error",
  "ts": 1629873106.715402,
  "logger": "http.log.error",
  "msg": "dial tcp 172.20.0.2:1000: connect: connection refused",
  "request": {
    "remote_addr": "89.247.255.231:15146",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "smallhetzi.fading-flame.com",
    "uri": "/",
    "headers": {
      "Accept-Encoding": ["gzip, deflate, br"],
      "Accept-Language": ["de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7"],
      "Cache-Control": ["max-age=0"],
      "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36"],
      "Sec-Fetch-Site": ["none"],
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"],
      "Sec-Fetch-Mode": ["navigate"],
      "Sec-Fetch-User": ["?1"],
      "Sec-Fetch-Dest": ["document"],
      "Sec-Ch-Ua": ["\"Chromium\";v=\"92\", \" Not A;Brand\";v=\"99\", \"Google Chrome\";v=\"92\""],
      "Sec-Ch-Ua-Mobile": ["?0"],
      "Upgrade-Insecure-Requests": ["1"]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "smallhetzi.fading-flame.com"
    }
  },
  "duration": 0.000580828,
  "status": 502,
  "err_id": "pq78d9hen",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:857)"
}

My two compose files:

Caddy:

version: '3.9'
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - certs-volume:/data
      - caddy_config:/config
volumes:
  certs-volume:
  caddy_config:
networks:
  default:
    external:
      name: caddy

Caddyfile:

{
    email simonheiss87@gmail.com
    # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
smallhetzi.fading-flame.com {
    reverse_proxy portainer:1000
}

And my Portainer file:

version: '3.9'
services:
  portainer:
    image: portainer/portainer-ce
    container_name: portainer
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data portainer/portainer
    entrypoint: /portainer -p :80
    ports:
      - "1000:80"
volumes:
  portainer_data:
networks:
  default:
    external:
      name: caddy

What I think happens is that those two containers are somehow not in the same network, but I don't get why. What works as a workaround right now is making this change to my Caddyfile:

smallhetzi.fading-flame.com {
    reverse_proxy 65.21.139.246:1000
}

Then I get a valid certificate and the Portainer UI. But I would rather not spread IPs over my Caddyfile. Do I have to configure something else for Caddy to run in Docker?
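For reference, one way to see which containers are actually attached to the caddy network is to ask Docker directly (a sketch that just prints the container names on that network):

# list the containers attached to the external "caddy" network
docker network inspect caddy --format '{{range .Containers}}{{.Name}} {{end}}'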
I just got help from the forum, and it turns out that Caddy proxies to the port INSIDE the container, not the published one. In my case, Portainer runs on 80 internally, so changing the Caddyfile to this:

smallhetzi.fading-flame.com {
    reverse_proxy portainer:80
}

or this:

smallhetzi.fading-flame.com {
    reverse_proxy http://portainer
}

does the job. This also means that I could stop exposing Portainer directly over port 1000; now I can only access it via the proxy. Hope someone gets some help from that :)
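If in doubt, the upstream can also be tested from inside the Caddy container before touching the Caddyfile (a sketch; it assumes the busybox wget that ships with the caddy:2-alpine image):

# fetch the Portainer start page via the container-internal port
docker exec caddy wget -qO- http://portainer:80 >/dev/null && echo "portainer reachable on its internal port"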
In VSCode, 'Python: Remote Attach' fails to connect to a running Docker Container
Good evening, I have a container which is running and ready to connect. In VSCode I've tried 'Attach Visual Studio Code' to open a new Dev Container and selected the sources, hoping I could debug. I'm unable to set breakpoints and the code isn't running; nothing happens. I've also tried 'Python: Remote Attach'. Nothing happens and there are no errors.

launch.json:

{
    "name": "Python: Remote Attach",
    "type": "python",
    "request": "attach",
    "connect": {
        "host": "0.0.0.0",
        "port": 3000
    },
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "."
        },
    ]
}

docker-compose.yml:

services:
  sfunc:
    image: sfunc
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --log-to-stderr --wait-for-client --listen 127.0.0.1:3000 home/site/wwwroot/TimerTrigger/__init__.py "]
    ports:
      - 3000:3000

How could I troubleshoot this? Thank you.
Those hostnames didn't work for me. Using localhost in the launch.json and 0.0.0.0 as the host in the --listen option worked.
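In other words, the working combination is debugpy bound to all interfaces inside the container while VS Code connects to localhost. Roughly, as a sketch based on the compose command above (paths unchanged):

# container side: bind debugpy to 0.0.0.0 instead of 127.0.0.1
sh -c "pip install debugpy -t /tmp && python /tmp/debugpy --log-to-stderr --wait-for-client --listen 0.0.0.0:3000 home/site/wwwroot/TimerTrigger/__init__.py"

On the VS Code side, the "host" in the connect block of launch.json then points at localhost instead of 0.0.0.0.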
Error consume route ApiGateway with ocelot and docker service
I am creating an API gateway with Ocelot that consumes an API service in .NET Core. The API gateway and API service are deployed on Docker with Docker Compose in this way:

Docker-compose:

tresfilos.webapigateway:
  image: ${DOCKER_REGISTRY-}tresfilosapigateway
  build:
    context: .
    dockerfile: tresfilos.ApiGateway/ApiGw-Base/Dockerfile
tresfilos.users.service:
  image: ${DOCKER_REGISTRY-}tresfilosusersservice
  build:
    context: .
    dockerfile: tresfilos.Users.Service/tresfilos.Users.Service/Dockerfile

Docker-compose.override:

tresfilos.webapigateway:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity-api
  ports:
    - "7000:80"
    - "7001:443"
  volumes:
    - ./tresfilos.ApiGateway/Web.Bff:/app/configuration
tresfilos.users.service:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443;http://+:80
  ports:
    - "7002:80"
    - "7003:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro

In the Ocelot API gateway configuration .json I define:

"ReRoutes": [
  {
    "DownstreamPathTemplate": "/api/{version}/{everything}",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "tresfilos.users.service",
        "Port": 7002
      }
    ],
    "UpstreamPathTemplate": "/api/{version}/user/{everything}",
    "UpstreamHttpMethod": [ "POST", "PUT", "GET" ]
  },
],
"GlobalConfiguration": {
  "BaseUrl": "https://localhost:7001"
}

When I consume the API gateway from the URL http://localhost:7000/api/v1/user/Login/authentication I get an error in the Docker terminal. Why does this error occur and how can I fix it?
What version of Ocelot are you running? I found another thread with a similar-looking problem, and apparently from version 16.0.0 of Ocelot 'ReRoutes' was changed to 'Routes' in the Ocelot configuration file. The thread I found was: 404 trying to route the Upstream path to downstream path in Ocelot.
I fixed it this way: I changed ReRoutes to Routes, because the Ocelot version is 16.0.1, and defined the config like this:

"Routes": [
  {
    "DownstreamPathTemplate": "/api/{version}/{everything}",
    "DownstreamScheme": "http",
    "DownstreamHostAndPorts": [
      {
        "Host": "tresfilos.users.service",
        "Port": 7002
      }
    ],
    "UpstreamPathTemplate": "/api/{version}/User/{everything}"
  },
],
"GlobalConfiguration": {
  "BaseUrl": "https://localhost:7001"
}

In Postman I send the data in the body as JSON and not as parameters. (Thanks, JasonS.)
how to initial setup consul with defined key/value
I have set up my Docker config using Docker Compose. This is part of the docker-compose.yml file:

version: '3'
networks:
  pm:
services:
  consul:
    container_name: consul
    image: consul:latest
    restart: unless-stopped
    ports:
      - 8300:8300
      - 8301:8301
      - 8302:8302
      - 8400:8400
      - 8500:8500
      - 8600:8600
    environment:
      CONSUL_LOCAL_CONFIG: >-
        {
          "bootstrap": true,
          "server": true,
          "node_name": "consul1",
          "bind_addr": "0.0.0.0",
          "client_addr": "0.0.0.0",
          "bootstrap_expect": 1,
          "ui": true,
          "addresses": {
            "http": "0.0.0.0"
          },
          "ports": {
            "http": 8500
          },
          "log_level": "DEBUG",
          "connect": {
            "enabled": true
          }
        }
    volumes:
      - ./data:/consul/data
    command: agent -server -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1

Then I set the key/value via the browser. I would like to add the key/value as part of the initial setup in a new environment, so that the additional setup steps in the browser can be avoided. This is the configuration I exported using the consul kv command:

# consul kv export config/
[
  {
    "key": "config/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/",
    "flags": 0,
    "value": ""
  },
  {
    "key": "config/drug2/data",
    "flags": 0,
    "value": "e30="
  }
]
To my knowledge Docker Compose does not have a way to run a custom command/script after the containers have started. As a workaround you could write a shell script which executes docker-compose up and then either runs consul kv import or a curl command against Consul's Transaction API to add the data you're trying to load.
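For example, a small wrapper script along these lines should do it (a sketch; the container name consul and the export file kv-export.json are assumptions, and the readiness check uses Consul's standard status endpoint):

#!/bin/sh
# start the stack in the background
docker-compose up -d

# wait until the Consul HTTP API answers, i.e. a leader has been elected
until curl -fsS http://localhost:8500/v1/status/leader >/dev/null; do
  sleep 1
done

# load the previously exported key/value data into the fresh instance
# (kv-export.json: output of `consul kv export config/`)
docker exec -i consul consul kv import - < kv-export.json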