I was previously able to run Remote Containers inside VS Code without any errors. For some unknown reason, today it stopped working when I try to reopen in a container. The error is: Docker Desktop WSL 2 backend required.
In Windows PowerShell, I can see that Docker is indeed installed.
PS C:\Users\Me> wsl docker --version
Docker version 20.10.14, build a224086
I tried to reinstall and follow the instructions in Docker Desktop WSL 2 backend setup.
Use WSL 2 based engine is checked
Integration with the Ubuntu distro is enabled
Inside the Ubuntu command line, I can also run docker ps with no problem at all.
It's only inside VS Code that I'm having issues. Here's the trace log to help out more:
[1491047 ms] Host server: Error: spawn /usr/local/bin/docker ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
at onErrorNT (node:internal/child_process:477:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
[1491049 ms] Start: Run in Host: /usr/local/bin/docker version --format {{.Server.APIVersion}}
[1491049 ms] Host server: (node:3238) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
(Use `node --trace-warnings ...` to show where the warning was created)
[1491051 ms] Host server: Error: spawn /usr/local/bin/docker ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
at onErrorNT (node:internal/child_process:477:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
[1491052 ms] Stop (3 ms): Run in Host: /usr/local/bin/docker version --format {{.Server.APIVersion}}
[1491052 ms] spawn /usr/local/bin/docker ENOENT
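(One thing worth checking here, a sketch rather than a definitive fix, assuming the setup above where Docker Desktop's WSL integration is expected to provide the docker CLI at /usr/local/bin/docker inside the distro:)

# Inside the Ubuntu distro: does the Docker Desktop integration symlink exist?
ls -l /usr/local/bin/docker
which docker && docker version
# From PowerShell: confirm the distro is actually running under WSL 2
wsl -l -v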
I'm trying to set up an Airflow debugging environment in VS Code using the Remote - Containers plugin. What I've done so far:
Ran docker-compose up for the image apache/airflow:2.2.4 with Docker Desktop community 2.4 on macOS 10.13.6
Attached to the running airflow-scheduler container with the Remote - Containers plugin
Composed a launch.json file with:
"version": "0.2.0",
"configurations": [
{
"name": "Airflow Test",
"type": "python",
"request": "launch",
// $ which airflow
"program": "/home/airflow/.local/bin/airflow",
"console": "integratedTerminal",
"args": [
"dags",
"test",
"task_of_middleware",
"2022-04-08"
],
"justMyCode": true
}
]
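(To confirm the "program" path, one could check it inside the running scheduler container; the container name below is illustrative and depends on what docker-compose assigned:)

docker exec -it airflow-scheduler which airflow   # should print /home/airflow/.local/bin/airflow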
VS Code pops up the error below when I start debugging:
The terminal process failed to launch: Path to shell executable "/sbin/nologin" does not exist.
The attached running container shows error:
[1382 ms] Start: Run in container: for pid in `cd /proc && ls -d [0-9]*`; do { echo $pid ; readlink /proc/$pid/cwd ; readlink /proc/$pid/ns/mnt ; cat /proc/$pid/stat | tr "
[1518 ms] Container server: Error: spawn /sbin/nologin ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
at onErrorNT (node:internal/child_process:477:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
[1525 ms] Error: spawn /sbin/nologin ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
at onErrorNT (node:internal/child_process:477:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
[1530 ms] Container server: (node:11888) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 1)
(Use `node --trace-warnings ...` to show where the warning was created)
I think the container runs normally, since I'm able to connect to the container shell via Docker Desktop and run the Airflow test command. Any advice would be appreciated.
I'm not sure if this is best practice, but I found a way to work around the error and make the debugger run normally. Put the following config into the attached container configuration file by selecting Remote-Containers: Open Container Configuration File from the Command Palette after attaching:
{
  // Container user VS Code should use when connecting
  "remoteUser": "root"
}
Reference here.
I'm using Windows Subsystem for Linux (Debian stretch). Following the instructions on the Docker website, I installed docker-ce, but it cannot start. Here is the info:
$ sudo service docker start
grep: /etc/fstab: No such file or directory
[ ok ] Starting Docker: docker.
$ sudo service docker status
[FAIL] Docker is not running ... failed!
What should I do about /etc/fstab not being found?
To fix the fstab error:
touch /etc/fstab
If you then run dockerd, it will give you this failure message:
INFO[2022-01-27T17:55:14.100489400+07:00] Loading containers: start.
WARN[2022-01-27T17:55:14.191666800+07:00] Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING`, error: exit status 4
INFO[2022-01-27T17:55:14.493716300+07:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
INFO[2022-01-27T17:55:14.494906600+07:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2022-01-27T17:55:14.495048400+07:00] stopping healthcheck following graceful shutdown module=libcontainerd
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING
(exit status 4)
That is a Debian NAT/iptables issue (nftables-based iptables under WSL); fix it by switching to the legacy backend:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
Now you can start the service again.
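For example, a quick check (the hello-world run is optional and may need sudo or your user in the docker group):

sudo service docker start
sudo service docker status          # should now report that Docker is running
sudo docker run --rm hello-world    # optional end-to-end check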
You can follow this answer to make it start automatically on startup: https://askubuntu.com/a/1356147/138352
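One approach from that thread, sketched here assuming a recent WSL build that supports the [boot] section of /etc/wsl.conf:

# /etc/wsl.conf inside the distro -- the command runs once each time the distro starts
[boot]
command = service docker start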
Edited:
If the iptables issue still persists, try setting the WSL version to 2 by running this command from a Windows shell:
wsl --set-version <distribution name> 2
The distribution list can be found with the command wsl -l.
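For example, assuming the distro shows up as Debian:

wsl -l -v                     # list distributions and the WSL version each one uses
wsl --set-version Debian 2    # convert the distro named "Debian" to WSL 2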
I was getting the same error. Apparently on my install of WSL with Debian, I didn't have an /etc/fstab file. Surprisingly, just creating the file via touch worked:
sudo touch /etc/fstab
Perhaps a good signal, from the WSL release notes: https://learn.microsoft.com/en-us/windows/wsl/release-notes#build-17093
WSL now processes the /etc/fstab file during instance start [GH 2636].
For anybody stumbling across this years later like me: the Docker daemon doesn't run inside WSL 1.
But you can use Docker Desktop for Windows with WSL 2 to run native containers inside your Linux distro, and the install and config are quite painless: https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers
I am using Docker version 17.06.0-ce on Red Hat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason, and I get the following error message:
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and be restarted by Docker. However, the container never exits. If I try to remove it manually I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All of them result in "device is busy". The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem and the solution was a real surprise.
So here is the error on docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically going through all processes and looking for the docker mount in their mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This got me the PID of the offending process keeping it busy: 20416 (the item after /proc/).
So I ran ps -p and, to my surprise, found:
[devops#dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. So I paired the problem with Google and found this: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!
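A small sketch generalizing that diagnostic (the mount ID below is the one from this answer; substitute the ID from your own "device or resource busy" message, and once the offending process is identified, restarting that service should release the mount):

MNT_ID=958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
for f in $(grep -l "$MNT_ID" /proc/*/mountinfo 2>/dev/null); do
    pid=${f#/proc/}; pid=${pid%/mountinfo}   # extract the PID from /proc/<pid>/mountinfo
    ps -o pid,comm -p "$pid"                 # show which process is holding the mount
done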
I've tried to live-migrate a WildFly container to another host as described here. The example with the np container works well. But when I replace the example with a simple jboss/wildfly container, I receive this error when CRIU tries to restore the container on the other host:
Error response from daemon: Cannot restore container <CONTAINER-ID>: criu failed: type NOTIFY errno 0
Error: failed to restore one or more containers
Because I didn't find a solution to this error, I compiled the Linux kernel as described on the CRIU website and here.
After that, sudo criu check prints:
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:711): The current kernel doesn't support packet_diag
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:721): The current kernel doesn't support netlink_diag
Info prctl: PR_SET_MM_MAP_SIZE is not supported
Looks good.
criu --version
Version: 2.11
docker --version
Docker version 1.6.2, build 7c8fca2
Checkpoint/restore for an example shell script worked very well. But when I want to checkpoint a container
docker run -d --name looper busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
with
criu dump -t $PID --images-dir /tmp/looper
I receive this output
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/mount.c:701): mnt: 87:./etc/hosts doesn't have a proper root mount
Error (criu/cr-dump.c:1641): Dumping FAILED.
I can't find any solutions for these errors. Is there a known way to live-migrate a WildFly container?
Thanks in advance
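(One check worth running before retrying the dump, sketched on the assumption that the custom-built kernel is simply missing the socket-diag options the warnings above point at:)

# Are the diag options CRIU complains about enabled in the running kernel?
grep -E 'CONFIG_(PACKET_DIAG|NETLINK_DIAG|UNIX_DIAG|INET_DIAG)' /boot/config-$(uname -r)
# or, if the kernel exposes its config via /proc:
zgrep -E 'CONFIG_(PACKET_DIAG|NETLINK_DIAG|UNIX_DIAG|INET_DIAG)' /proc/config.gz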
I'm trying to use Packer to build a Docker image of the webapp I'm working on. Whenever I run packer build and it gets to the step that runs the runit recipe, I get: Build 'docker' errored: Error executing Chef: Non-zero exit status: 137
I looked into 137, and found out this is the exit code commonly associated with a kill -9. In most cases this would imply that the system is running critically low on memory, and the system is attempting to compensate.
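(For reference, 137 is 128 + 9, i.e. the shell's way of reporting a process terminated by SIGKILL; a quick way to see this:)

bash -c 'kill -9 $$'; echo $?    # prints 137 = 128 + SIGKILL(9)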
I tried to find the smallest possible reproduction, and I came up with this packer configuration:
{
"builders":[{
"type": "docker",
"pull": false,
"image": "silkstart/basic_server",
"export_path": "image.tar",
"run_command":[
"-d",
"-i",
"-t",
"--memory-reservation",
"1G",
"{{.Image}}",
"/bin/bash"
]
}],
"provisioners":[
{
"type": "chef-solo",
"cookbook_paths": ["cookbooks", "vendor/cookbooks"],
"data_bags_path": "data_bags",
"roles_path": "roles",
"environments_path": "environments",
"run_list": [
"recipe[runit]"
]
}
],
"post-processors": [
{
"type": "docker-import",
"repository": "silkstart/docker_test",
"tag": "0.1"
}
]
}
When I run packer build on this configuration, this is my output:
TMPDIR=/opt/shared packer build packer_files/docker_test.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Starting docker container...
docker: Run command: docker run -v /opt/shared/packer-docker484290992:/packer-files -d -i -t --memory-reservation 1G silkstart/basic_server /bin/bash
docker: Container ID: 1f87b0cf1fe71f07b580ae6b18415a79c23a1a32a40f5f0366be90f160977a50
==> docker: Provisioning with chef-solo
docker: Installing Chef...
docker: % Total % Received % Xferd Average Speed Time Time Time Current
docker: Dload Upload Total Spent Left Speed
docker: 100 20022 100 20022 0 0 45092 0 --:--:-- --:--:-- --:--:-- 45196
docker: Getting information for chef stable for ubuntu...
docker: downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=&p=ubuntu&pv=14.04&m=x86_64
docker: to file /tmp/install.sh.23/metadata.txt
docker: trying curl...
docker: url https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/14.04/x86_64/chef_12.6.0-1_amd64.deb
docker: md5 5cfc19d5a036b3f7860716bc9795a85e
docker: sha256 e0b42748daf55b5dab815a8ace1de06385db98e29a27ca916cb44f375ef65453
docker: version 12.6.0downloaded metadata file looks valid...
docker: downloading https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/14.04/x86_64/chef_12.6.0-1_amd64.deb
docker: to file /tmp/install.sh.23/chef_12.6.0-1_amd64.deb
docker: trying curl...
docker: Comparing checksum with sha256sum...
docker:
docker: WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
docker:
docker: You are installing an omnibus package without a version pin. If you are installing
docker: on production servers via an automated process this is DANGEROUS and you will
docker: be upgraded without warning on new releases, even to new major releases.
docker: Letting the version float is only appropriate in desktop, test, development or
docker: CI/CD environments.
docker:
docker: WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
docker:
docker: Installing chef
docker: installing with dpkg...
docker: Selecting previously unselected package chef.
docker: (Reading database ... 17195 files and directories currently installed.)
docker: Preparing to unpack .../chef_12.6.0-1_amd64.deb ...
docker: Unpacking chef (12.6.0-1) ...
docker: Setting up chef (12.6.0-1) ...
docker: Thank you for installing Chef!
docker: Creating directory: /tmp/packer-chef-solo
docker: Creating directory: /tmp/packer-chef-solo/cookbooks-0
docker: Creating directory: /tmp/packer-chef-solo/cookbooks-1
docker: Creating directory: /tmp/packer-chef-solo/roles
docker: Creating directory: /tmp/packer-chef-solo/data_bags
docker: Creating directory: /tmp/packer-chef-solo/environments
docker: Creating configuration file 'solo.rb'
docker: Creating JSON attribute file
docker: Executing Chef: sudo chef-solo --no-color -c /tmp/packer-chef-solo/solo.rb -j /tmp/packer-chef-solo/node.json
docker: [2016-01-29T06:42:48+00:00] INFO: Forking chef instance to converge...
docker: [2016-01-29T06:42:48+00:00] INFO: *** Chef 12.6.0 ***
docker: [2016-01-29T06:42:48+00:00] INFO: Chef-client pid: 207
docker: [2016-01-29T06:42:50+00:00] INFO: Setting the run_list to ["recipe[runit]"] from CLI options
docker: [2016-01-29T06:42:50+00:00] INFO: Run List is [recipe[runit]]
docker: [2016-01-29T06:42:50+00:00] INFO: Run List expands to [runit]
docker: [2016-01-29T06:42:50+00:00] INFO: Starting Chef Run for 1f87b0cf1fe7
docker: [2016-01-29T06:42:50+00:00] INFO: Running start handlers
docker: [2016-01-29T06:42:50+00:00] INFO: Start handlers complete.
docker: [2016-01-29T06:42:52+00:00] INFO: Processing service[runit] action nothing (runit::default line 20)
docker: [2016-01-29T06:42:52+00:00] INFO: Processing execute[start-runsvdir] action nothing (runit::default line 24)
docker: [2016-01-29T06:42:52+00:00] INFO: Processing execute[runit-hup-init] action nothing (runit::default line 33)
docker: [2016-01-29T06:42:52+00:00] INFO: Processing apt_package[runit] action install (runit::default line 64)
docker: [2016-01-29T06:42:55+00:00] INFO: Processing cookbook_file[/var/chef/cache/preseed/runit/runit-2.1.1-6.2ubuntu3.seed] action create (dynamically defined)
docker: [2016-01-29T06:42:55+00:00] INFO: cookbook_file[/var/chef/cache/preseed/runit/runit-2.1.1-6.2ubuntu3.seed] created file /var/chef/cache/preseed/runit/runit-2.1.1-6.2ubuntu3.seed
docker: [2016-01-29T06:42:55+00:00] INFO: cookbook_file[/var/chef/cache/preseed/runit/runit-2.1.1-6.2ubuntu3.seed] updated file contents /var/chef/cache/preseed/runit/runit-2.1.1-6.2ubuntu3.seed
docker: [2016-01-29T06:42:55+00:00] INFO: apt_package[runit] pre-seeding package installation instructions
==> docker: Killing the container: 1f87b0cf1fe71f07b580ae6b18415a79c23a1a32a40f5f0366be90f160977a50
Build 'docker' errored: Error executing Chef: Non-zero exit status: 137
I'm not entirely sure what is causing the code 137, and any help would be appreciated.
Update 1
I'm including a gist of the full debug output from Chef. It's much more verbose, mainly due, it would seem, to all of the attempts Ohai makes to gather information.
https://gist.github.com/jrstarke/4c5f3b432aaee70c7f77
Nothing in it seems to suggest an out-of-memory error, at least on the Docker host.
After much digging, I found the problem. The underlying problem and the solution were both found in an issue on cloudfoundry-incubator/garden-linux.
Apparently, as part of the setup process, one of the post-install scripts for runit executes a kill -s HUP 1. Why, I'm not entirely sure, but as they noted there, doing a trap '' HUP right before my apt-get install runit and a trap HUP afterwards totally solved my problem.
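A minimal sketch of that workaround as plain shell (the commands a wrapper recipe or execute resource could run; trap - HUP is the usual way to restore the default handler, which the answer above writes as trap HUP):

trap '' HUP                # ignore the HUP that runit's post-install script sends to PID 1
apt-get install -y runit
trap - HUP                 # restore default SIGHUP handling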
Check the OOM log on the host machine. You can also use the execute_command configuration value to turn the Chef log level up to debug.
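For the first part, a quick check on the Docker host (assuming the kernel log is readable there):

sudo dmesg | grep -iE 'killed process|out of memory'    # OOM-killer entries, if any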
This answer seemed to work for me: https://stackoverflow.com/a/42398166/2878244
I had to increase the memory resources assigned to Docker by going to the Docker tab > Preferences > Advanced.