Debugging jest from docker

I have a node project running in a docker container that the Chrome debugger connects to via port 9229. When using the actual application (as opposed to running the test suite), the debugger hits breakpoints just fine.
However, when I run npm run test:e2e from within the docker container, debugger commands, breakpoints, etc. are completely ignored. The test suite runs, but it doesn't pick up any breakpoints.
Admittedly I'm new to both docker and node, but the fact that the app breakpoints work (as opposed to jest's) has me thoroughly confused. If anyone has any ideas on how to get jest breakpoints from a docker container working in the Chrome debugger (or VS Code, for that matter), I'd be really appreciative. Config details below:
docker-compose.yml
pf_debugger:
build: ./pf
image: pf_debugger
container_name: pf_debugger
working_dir: /www
ports:
- "9229:9229"
command: "npm run start:debug"
volumes:
- ./pf:/www
- node_modules:/www/node_modules
depends_on:
- "indy_pool"
- "pf"
networks:
- pf_network
package.json
# ...
"scripts":
"start:debug": "nodemon --config nodemon-debug.json",
"test:e2e": "jest --config ./test/jest-e2e.json",
jest-e2e.json
{
"moduleFileExtensions": ["js", "json", "ts"],
"rootDir": ".",
"testEnvironment": "node",
"testRegex": ".e2e-spec.ts$",
"transform": {
"^.+\\.(t|j)s$": "ts-jest"
}
}
nodemon-debug.json
{
"watch": ["src"],
"ext": "ts",
"inspect": "0.0.0.0:9229",
"exec": "node --inspect=0.0.0.0:9229 -r ts-node/register src/main.ts"
}
launch.json
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "attach",
"name": "Node: Nodemon",
"restart": true,
"sourceMaps": true,
"protocol": "inspector",
"address": "127.0.0.1",
"port": 9229,
"localRoot": "${workspaceRoot}/",
"remoteRoot": "/www/"
},
{
"type": "node",
"name": "e2e-tests",
"request": "launch",
"program": "${workspaceFolder}/node_modules/jest/bin/jest",
"cwd": "${workspaceFolder}",
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen",
"args": [
"--runInBand",
"--config=test/jest-e2e.json"
]
}
]
}

For reference, I solved my own problem: appending --runInBand to the npm test command ensures breakpoints are honored.
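For breakpoints to actually be hit from Chrome DevTools or VS Code, the Node inspector also has to be enabled when jest itself starts. A minimal sketch of such a script (the script name, flag placement, and jest path are assumptions to adapt to your own setup):

```json
"scripts": {
  "test:e2e:debug": "node --inspect-brk=0.0.0.0:9229 node_modules/.bin/jest --runInBand --config ./test/jest-e2e.json"
}
```

--inspect-brk pauses before the first line so a debugger can attach before any test runs, and --runInBand keeps all tests in the main process, which is what lets the attached inspector see them.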

Related

VS Code debugger for rails docker: Breakpoints are not working

I have installed the Ruby plugin in VS Code and the following gems in docker, and configured ports 3000 and 1234.
Gems:
gem 'debase'
gem 'ruby-debug-ide'
Command:
bundle exec rdebug-ide --debug --host 0.0.0.0 --port 1234 -- bin/rails server -p 3000 -b 0.0.0.0
Launch.json: I have tried all the configurations I found online and was able to connect the debugger with many of them, but I could never catch the breakpoints in VS Code.
{
"version": "0.2.0",
"configurations": [
{
"name": "Rails Docker Debug",
"type": "Ruby",
"request": "attach",
"cwd": "${workspaceRoot}",
"remoteHost": "localhost",
"remotePort": "1234",
"remoteWorkspaceRoot": "${workspaceRoot}"
}
]
}
You can try this
{
"version": "0.2.0",
"configurations": [
{
"name": "Rails Docker Debug",
"type": "Ruby",
"request": "attach",
"cwd": "${workspaceRoot}/foldername",
"remoteHost": "localhost",
"remotePort": "1234",
"remoteWorkspaceRoot": "${workspaceRoot}/foldername"
}
]
}

How to debug a Go app with DLV and MODD on Docker

I'm running a Go app on the Docker and want to use VSCode to debug it via DLV at the same time using MODD for app rebuild. So far I cannot figure out how to connect to the debugger.
Docker:
FROM golang:1.18 as dev
WORKDIR /root
RUN GO111MODULE=on go install github.com/cortesi/modd/cmd/modd@latest
RUN go install github.com/go-delve/delve/cmd/dlv@latest
COPY . .
CMD modd
MODD:
**/*.go !**/*_test.go {
prep: go build -o app main.go
prep: dlv exec --headless --continue --listen localhost:2345 --accept-multiclient ./app
daemon +sigterm: ./app
}
DOCKER_COMPOSE (expose port):
ports:
- "5000:5000"
- "2345:2345"
VSCode configuration:
{
"name": "Connect to Go server",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "${workspaceFolder}",
"port": 2345,
"host": "127.0.0.1"
}
Q: How to make DLV work on Docker with MODD?
Thanks!
Well, AFAIK it is not trivial to do what you want, because you have to watch files changing on your host to trigger dlv inside the container, which interferes with VS Code's ongoing debug session.
Here is a hacky way to set up VS Code to debug the app in the container and use modd to restart the debug session inside the container on file change.
(Make sure you have modd installed in your host machine)
.vscode/launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug",
"type": "go",
"request": "attach",
"mode": "remote",
"remotePath": "/root",
"port": 2345,
"host": "127.0.0.1",
"trace": "verbose",
"preLaunchTask": "setup docker debug",
"postDebugTask": "teardown docker debug"
}
]
}
.vscode/tasks.json - This file will instruct VS Code to run your container via docker compose and to run modd in the background
{
"version": "2.0.0",
"tasks": [
{
"label": "setup docker debug",
"dependsOn": [
"show app console"
]
},
{
"label": "teardown docker debug",
"dependsOrder": "sequence",
"dependsOn": [
"stop all containers"
]
},
{
"label": "show app console",
"command": "docker logs app --follow",
"type": "shell",
"isBackground": true,
"presentation": {
"reveal": "always",
"panel": "dedicated",
"clear": true,
"showReuseMessage": true
},
"problemMatcher": [
{
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": ".",
"endsPattern": "."
}
}
],
"dependsOn":[
"start all containers",
"modd"
]
},
{
"label": "start all containers",
"type": "shell",
"command": "docker-compose up --build --force-recreate --detach",
"presentation": {
"reveal": "always",
"panel": "shared",
"clear": true,
"showReuseMessage": true
},
"dependsOn":[
"stop all containers"
]
},
{
"label": "stop all containers",
"type": "shell",
"command": "docker-compose down",
"presentation": {
"panel": "shared",
"clear": true
},
},
{
"label": "modd",
"type": "shell",
"isBackground": true,
"command": "modd",
"presentation": {
"panel": "new",
"clear": true
},
"problemMatcher": [
{
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": ".",
"endsPattern": "."
}
}
]
}
]
}
docker-compose.yml
version: "3.8"
services:
app:
container_name: app
build:
context: .
restart: on-failure
ports:
- 5000:5000
- 2345:2345
security_opt:
- apparmor=unconfined
cap_add:
- SYS_PTRACE
Dockerfile
FROM golang
WORKDIR /root
RUN go install github.com/go-delve/delve/cmd/dlv@latest
COPY . .
ENTRYPOINT dlv --listen=:2345 --api-version=2 --headless --accept-multiclient debug .
dlv.txt - This file will be used to call dlv to rebuild and continue
rebuild
continue
modd.conf - modd will copy all files back to the container and issue a rebuild and continue command to the running dlv
**/*.go !**/*_test.go {
prep +onchange: docker cp ./ app:/root/
prep +onchange: timeout 1 dlv connect localhost:2345 --init dlv.txt
}
You'll be able to set breakpoints and everything, but you'll notice that sometimes you'll need to manually pause and continue to recover the debugging session.
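One pitfall in the original modd.conf is worth calling out separately: dlv was started with --listen localhost:2345, which binds only the container's loopback interface, so the published port can never reach it from the host. It was also run as a blocking prep step. A hedged sketch of a variant that binds all interfaces and runs dlv as the long-lived daemon instead:

```
**/*.go !**/*_test.go {
    prep: go build -o app main.go
    daemon +sigterm: dlv exec --headless --continue --listen 0.0.0.0:2345 --accept-multiclient ./app
}
```

This keeps everything inside the container, at the cost of dlv restarting (and dropping the attached session) on every rebuild.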

Override parent image docker container command to do something after

I want to change a user password and run a SQL script against a DB2 container image. How do I do whatever the parent image's entrypoint called for, but then run a few commands after it has completed? This needs to run via docker compose because the database will back an acceptance test. My docker-compose.yml file has a command property, but when I checked the container I did not see the result of the touch statement, so the command never ran.
My docker-compose.yml file is:
version: "3.2"
services:
ssc-file-generator-db2-test:
container_name: "ssc-file-generator-db2-test"
image: ibmcom/db2:latest
command: /bin/bash -c "touch /command-run && echo \"db2inst1:db2inst1\" | chpasswd && su db2inst1 && db2 -tvf /db2-test-scaffolding/init.sql"
hostname: db2server
privileged: true
# entrypoint: ["/bin/sh -c ]
ports:
- 50100:50000
- 55100:55000
networks:
- back-tier
restart: "no"
volumes:
- db2-test-scaffolding:/db2-test-scaffolding
env_file:
- acceptance-run.environment
# ssc-file-generator:
# container_name: "ssc-file-generator_testing"
# image: ssc-file-generator
# depends_on: ["ssc-file-generator-db2-test"]
# command:
# env_file: ["acceptance-run.environment"]
networks:
back-tier: {}
volumes:
db2-test-scaffolding:
driver: local
driver_opts:
o: bind
type: none
device: ./db2-test-scaffolding
acceptance-run.environment
BCUPLOAD_DATASOURCE_DIALECT=org.hibernate.dialect.DB2Dialect
BCUPLOAD_DATASOURCE_DRIVER=com.ibm.db2.jcc.DB2Driver
BCUPLOAD_DATASOURCE_PASSWORD=bluecost
BCUPLOAD_DATASOURCE_URL=jdbc:db2://localhost:50100/mydb:currentSchema=FILE_GENERATOR
BCUPLOAD_DATASOURCE_USERNAME=bluecost
B2INSTANCE=db2inst1
DB2INST1_PASSWORD=db2inst1
DBNAME=MYDB
DEBUG_SECRETS=true
file-generator.test.files.path=src/test/acceptance/resources/files/
# Needed for DB2 container
LICENSE=accept
The docker image is
ibmcom/db2:latest
For convenience, this is the docker inspect ibmcom/db2:latest
[
{
"Id": "sha256:e304e217603b80b31c989574081b2badf210b4466c7f74cf32087ee0a1ba6e04",
"RepoTags": [
"ibmcom/db2:latest"
],
"RepoDigests": [
"ibmcom/db2@sha256:77da4492bf18c49a1012aa6071a16aee0039dca9c0a2a492345b6b030714a54f"
],
"Parent": "",
"Comment": "",
"Created": "2021-03-29T18:54:36.94484751Z",
"Container": "e59bda8065b72a0e440d145d6d90ba77231a514e811e66651d4fa6da98a34910",
"ContainerConfig": {
"Hostname": "6125cd0dc6e6",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"ENTRYPOINT [\"/var/db2_setup/lib/setup_db2_instance.sh\"]"
],
"Image": "sha256:e65b35603167c75a86515ef4af101a539cbbdf561bcb9efd656d17b8d867c7da",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": [],
"Labels": {
"architecture": "x86_64",
"build-date": "2021-03-10T06:09:00.139818",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "338",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-338",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"DockerVersion": "19.03.6",
"Author": "db2_download_and_go",
"Config": {
"Hostname": "6125cd0dc6e6",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": null,
"Image": "sha256:e65b35603167c75a86515ef4af101a539cbbdf561bcb9efd656d17b8d867c7da",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": [],
"Labels": {
"architecture": "x86_64",
"build-date": "2021-03-10T06:09:00.139818",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "338",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-338",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 2778060115,
"VirtualSize": 2778060115,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/49d83ba2eb50cdbfc5a9e3a7b4baf907a9b4326aa0710689f602bb3cff01d820/diff:/var/lib/docker/overlay2/ba54659cc4ec10fa84edc49d5480ebe4897629f841d76ae79a4fb0c2edb791a5/diff:/var/lib/docker/overlay2/2238ae349d70686609b990b63c0066d6e51d94be59801a81c7f5b4d97da1fe02/diff:/var/lib/docker/overlay2/704708b72448f8a4750db3aabd43c12f23ad7e6d3f727aa5977bd7ac4db8e8cb/diff:/var/lib/docker/overlay2/1b47e1515517af553fd8b986c841e87d8ba813d53739344c9b7350ad36b54b0b/diff:/var/lib/docker/overlay2/0a580802a7096343aa5d8de5039cf5a011e66e481793230dced8769b024e5cd2/diff:/var/lib/docker/overlay2/4da91655770b0e94236ea8da2ea8ff503467161cf85473a32760f89b56d213ff/diff:/var/lib/docker/overlay2/401c640771a27c70f20abf5c48b0be0e2f42ed5b022f81f58ebc0810831283ea/diff:/var/lib/docker/overlay2/8985c59d1ab32b8d8eaf4c11890801cb228d47cc7437b3e9b4f585e7296e4b6a/diff:/var/lib/docker/overlay2/ec66f9872de7b5310bac2bd5fd59552574df56bb06dcd5dd61ff2b63002d77ed/diff:/var/lib/docker/overlay2/fcf40217c8477dcf4e5fafc8c83408c3c788f367ed67c78cb0bc312439674fcf/diff",
"MergedDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/merged",
"UpperDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/diff",
"WorkDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:87e96a33b6fb724886ccda863dcbf85aab1119d380dc8d60fc7eeace293fc3a8",
"sha256:7dfef4d05d0afc0383f5ebd8d9f3f7f7e17406f7e9e5744bead1a65e5ab47d0e",
"sha256:51a646f7fd864ded24db2d87aaef69767cec8cfa63117bdca1a80cc4e0a77329",
"sha256:9e2474c7feefaf8fe58cdb4d550edf725288c109f7842c819c734907406e9095",
"sha256:d4d38bb7d4b3e7ea2b17acce63dd4b9ed926c7c0bbe028393228caf8933a4482",
"sha256:4ec8c6264294fc505d796e17187c4c87099ff8f76ac8f337653e4643a9638d9e",
"sha256:84a0a1068d25a8fa7b0f3e966b0313d31bc9e7405484da2a9ebf0fe1ebaf40dc",
"sha256:956ab4664636dcce9d727ed0580f33ec510c8903ee827ce3ce72d4ba1184139b",
"sha256:55f8b1bcde6acbd521024e3d10ed4a3a3bdf567cfd029b1876bd646ff502270b",
"sha256:8c2496f1c442c3303273991e9cd5c4a5ffc0ab2ad7e2547976fe451095798390",
"sha256:583acd9a453ded660462a120737ffec2def4416a573c6ea7ed2b132e403d9c08",
"sha256:604c94797d42c86bfbc3d25e816a105b971805ae886bec8bc69bdae4ff20e1b6"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
I solved the problem by debugging the bash script and learning that the image provides a custom directory from which it runs scripts; I believe these run after the db2 service starts. While I suspect I could have specified an entrypoint, it would not necessarily play well with the ibmcom/db2 container. Posted below is my docker compose file showing the volume mounts I used for this particular container.
Note that this defines the mounts in "bind" form, as opposed to the more common named volume. This approach let me choose where the source data is stored; had I used a named volume, Docker would have persisted the data in some opaque location on my WSL system. There's probably a better approach, but I'm still learning Docker builds.
version: "3.2"
services:
ssc-file-generator-db2-test:
container_name: "ssc-file-generator-db2-test"
image: ibmcom/db2:latest
hostname: db2server
privileged: true
ports:
- 50100:50000
- 55100:55000
networks:
- back-tier
restart: "no"
volumes:
- setup-sql:/setup-sql
- db2-shell-scripts:/var/custom
- host-dirs:/host-dirs
# Uncomment below to use database outside the container
# - database:/database
env_file:
- acceptance-run.environment
networks:
back-tier: {}
volumes:
setup-sql:
driver: local
driver_opts:
o: bind
type: none
device: ./setup-sql
db2-shell-scripts:
driver: local
driver_opts:
o: bind
type: none
device: ./db2-shell-scripts
host-dirs:
driver: local
driver_opts:
o: bind
type: none
device: ./host-dirs
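As a sketch of what one of those post-start scripts might look like (the file name and SQL path here are assumptions, not taken from the original setup), a script placed in the mounted ./db2-shell-scripts directory, which the image picks up from /var/custom after the instance is created, could apply the init SQL as the instance user:

```sh
#!/bin/bash
# Hypothetical /var/custom script; the db2 setup entrypoint runs these
# after instance creation. It runs them as root, so switch to the
# instance user before invoking the db2 CLI.
su - db2inst1 -c "db2 -tvf /setup-sql/init.sql"
```

The script needs to be executable on the host side of the bind mount for the container to run it.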

In VSCode, 'Python: Remote Attach' fails to connect to a running Docker Container

Good evening,
I have a container which is running and ready to connect. In VS Code I've tried 'Attach Visual Studio Code' to open a new Dev Container and select the sources, hoping I could debug.
I'm unable to set breakpoints and the code isn't running.
Nothing happens.
I've also tried 'Python: Remote Attach'.
Nothing happens, and there are no errors.
Launch.json:
{
"name": "Python: Remote Attach",
"type": "python",
"request": "attach",
"connect": {
"host": "0.0.0.0",
"port": 3000
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "."
},
]
}
Docker Compose.yml
services:
sfunc:
image: sfunc
build:
context: .
dockerfile: ./Dockerfile
command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --log-to-stderr --wait-for-client --listen 127.0.0.1:3000 home/site/wwwroot/TimerTrigger/__init__.py "]
ports:
- 3000:3000
How could I troubleshoot this?
Thank you
Those hostnames didn't work for me. Using localhost as the host in launch.json and 0.0.0.0 as the host in the --listen option worked.
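Concretely, that means two changes: in launch.json the connect host becomes localhost (the port is published, so the client attaches through the host side), and in the compose command debugpy binds all interfaces so it is reachable from outside the container. A sketch, keeping the question's paths unchanged:

```yaml
command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --log-to-stderr --wait-for-client --listen 0.0.0.0:3000 home/site/wwwroot/TimerTrigger/__init__.py"]
```

With --wait-for-client the script will not start until the debugger attaches, which is a quick way to confirm the connection is actually being made.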

RabbitMQ with Docker and Rancher produces "TypeError: Cannot read property 'name' of undefined"-errors in the management UI

As the title suggests. When I deploy my RabbitMQ image via Rancher I get the following errors in the management interface. I have no clue what is causing this problem (even after extensively searching the internet).
Dockerfile
FROM rabbitmq:3.7.7-management-alpine
COPY definitions.json /etc/rabbitmq/
COPY rabbitmq.config /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
rabbitmq.config
[
{rabbit, [
{loopback_users, []}
]},
{rabbitmq_management, [
{load_definitions, "/etc/rabbitmq/definitions.json"}
]}
].
definitions.json
{
"bindings": [],
"exchanges": [],
"global_parameters": [],
"parameters": [],
"policies": [],
"queues": [],
"rabbit_version": "3.7.7",
"topic_permissions": [],
"users": [{
"hashing_algorithm": "rabbit_password_hashing_sha256",
"name": "username1",
"password_hash": "hash1",
"tags": "administrator"
}, {
"hashing_algorithm": "rabbit_password_hashing_sha256",
"name": "username2",
"password_hash": "hash2",
"tags": "administrator"
}
],
"vhosts": [],
"permissions": []
}
docker-compose.yml
version: "2"
services:
rabbitmq:
image: myname/imagename
hostname: rabbitmq
ports:
- 15672:15672
- 5672:5672
Ensure that the current user has enough rights to access the Exchanges and Queues tabs.
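One hedged guess as to why, given the question's definitions.json: it declares users but an empty permissions array, so the imported users end up with no permissions on any vhost, and the management UI can fail in odd ways for such users. A sketch of the entries that would grant full rights (the "/" vhost name is an assumption):

```json
"vhosts": [{ "name": "/" }],
"permissions": [
  { "user": "username1", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" },
  { "user": "username2", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" }
]
```

The three patterns are regular expressions matched against resource names; ".*" grants configure, write, and read on everything in that vhost.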
