I'm using Manjaro Linux with kernel 5.10.13.
I'm not sure what happened, maybe something was updated, but Docker stopped working for me.
When I try to do docker run hello-world, I see the following message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367:
starting container process caused: process_linux.go:495: container init caused: apply apparmor
profile: apparmor failed to apply profile: write /proc/self/attr/exec: invalid argument: unknown.
ERRO[0000] error waiting for container: context canceled
If I switch to kernel 5.9.16, it seems to be fine. Am I missing something here?
You may need to enable AppArmor in your kernel parameters (apparmor=1 lsm=lockdown,yama,apparmor,bpf).
See https://www.reddit.com/r/archlinux/comments/ldhx0v/cant_start_docker_containers_on_latest_kernel/
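For reference, a minimal sketch of how those parameters could be added on a GRUB-based Manjaro setup (the file path and the existing "quiet" value are assumptions; keep whatever parameters you already have):

# /etc/default/grub: append the AppArmor parameters to the existing command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet apparmor=1 lsm=lockdown,yama,apparmor,bpf"

# Regenerate the GRUB configuration and reboot so the new parameters take effect
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot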
I'm not sure what happened there, but the next morning (around 7 hours after I posted this) there was an update on my system, which seems to have resolved the issue.
I attempted to use sysbox-runc as the runtime for Docker on Ubuntu. sysbox-runc is operational. Nevertheless, an error occurred when I tried to create a container using Docker.
The command I was using: docker run --runtime=sysbox-runc nginx
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: container_linux.go:425: starting container process caused: process_linux.go:607: container init caused: process_linux.go:578: handleReqOp caused: rootfs_init_linux.go:366: failed to mkdirall /var/lib/sysbox/shiftfs/2e6d4302-28cd-4d9d-827e-6088b8b34e89/var/lib/kubelet: mkdir /var/lib/sysbox/shiftfs/2e6d4302-28cd-4d9d-827e-6088b8b34e89/var/lib/kubelet: value too large for defined data type caused: mkdir /var/lib/sysbox/shiftfs/2e6d4302-28cd-4d9d-827e-6088b8b34e89/var/lib/kubelet: value too large for defined data type: unknown.
ERRO[0000] error waiting for container: context canceled
Notes:
The same command works fine with the default runc runtime.
I am running docker and sysbox-runc as root.
Has anyone come across this before?
Is it Ubuntu 22.04? Are you using kernel 5.15.(>=48)? Please take a look at the following:
Unfortunately there isn't much we can do with Ubuntu kernels 5.15.(>=48), as they are apparently missing an Ubuntu patch on overlayfs, which breaks the interaction with shiftfs.
If you can, please upgrade to newer kernels (e.g., 5.19, 6.0, etc.).
If you must use kernel 5.15, try using 5.15.47 or earlier.
If you must use kernel 5.15.(>=48), you can work around the problem by either:
Removing the shiftfs module from the kernel (e.g., with rmmod), or
Configuring Sysbox not to use shiftfs. You do this by editing the systemd service unit for sysbox-mgr and passing the --disable-shiftfs flag to Sysbox; see the link and sketch below for more.
https://github.com/nestybox/sysbox/issues/596#issuecomment-1291235140
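A sketch of both workarounds; the unit names and the /usr/bin/sysbox-mgr path are assumptions based on a standard Sysbox install and may differ on yours:

# Workaround 1: unload the shiftfs kernel module
sudo rmmod shiftfs

# Workaround 2: pass --disable-shiftfs to sysbox-mgr via a systemd drop-in
sudo systemctl edit sysbox-mgr
# In the drop-in editor, override ExecStart:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/sysbox-mgr --disable-shiftfs
sudo systemctl daemon-reload
sudo systemctl restart sysbox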
I installed Docker on Arch Linux and tried to run the daemon, but it failed. When I ran it manually using dockerd, I got this explanation:
failed to start daemon: error initializing graphdriver: loopback attach failed
I haven't found much that is relevant so far, only this topic, which didn't get the answers I was looking for:
Docker fails to start after install with "loopback attach failed"
https://github.com/moby/moby/issues/15243 was the answer. The cause was that I had updated my kernel without restarting.
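For anyone hitting the same thing, a quick sanity check on Arch (assuming the stock linux package; adjust if you use linux-lts or similar):

uname -r          # kernel you are currently running
pacman -Q linux   # kernel version installed on disk
# If the two versions differ, the running kernel can no longer load its modules
# (including the loop module dockerd needs), so reboot:
sudo reboot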
Let me try to describe my situation here (doing my best to capture whatever information I have).
We have a production-level service which consists of many Docker containers, hosting multiple services, running in a cloud (Azure) VM.
Now, if we keep it running for a long time (>= 5 days) as part of longevity testing, we sometimes see (i.e., not always exactly after 5 days) that services start failing, resulting in a denial of service to our clients.
ERROR: for health-checker Cannot start service health-checker: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"failed to write 1 to memory.kmem.limit_in_bytes: write /sys/fs/cgroup/memory/docker/ad4926b8e5b583ce3ae30d4e3d1f1379ee89fc2735d83a87b127ef4e1e7089db/memory.kmem.limit_in_bytes: cannot allocate memory\"": unknown {}
ERROR: for credentials Cannot start service credentials: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"failed to write 1 to memory.kmem.limit_in_bytes: write /sys/fs/cgroup/memory/docker/5b2cef0997776af7265fcc41bd640059a29fc723375e43acde63514f58ec6055/memory.kmem.limit_in_bytes: cannot allocate memory\"": unknown {}
ERROR: for occm Cannot start service occm: runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"failed to write 1 to memory.kmem.limit_in_bytes: write /sys/fs/cgroup/memory/docker/9d5912c7459a514c6f9bdaa3a170b1bf0ba4fa3189b482b72c2013a85cf5b8ba/memory.kmem.limit_in_bytes: cannot allocate memory\"": unknown {}
failed to perform container upgrade task. java.lang.RuntimeException: Failed to deploy containers {akkaAddress=akka://some-manager, akkaSource=akka://some-manager/user/service-deployer, sourceActorSystem=some-manager}
So as a consequence, none of our services are accessible and all HTTPS calls are denied:
Name does not resolve {}\n","stream":"stdout","time":"2021-07-02T03:38:29.720361925Z"}
Name does not resolve {}\n","stream":"stdout","time":"2021-07-02T03:38:29.744298675Z"}
I have done a lot of Googling, trying to find something actionable and meaningful to start from.
Any pointer / insight / clue will be highly appreciated.
(I understand I may not be very detailed or precise in pinpointing the issue; honestly, I am a bit clueless, since it only fails sometimes after about 5 days of running.)
Seeking guidance.
Pradip
Rebuild your docker & containerd after upgrading the kernel.
This happened to me after upgrading 5.4.6 -> 5.18.5 in one go. Rebuilding docker & containerd packages solved it.
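The answer above doesn't say which distro or package manager was used, so purely as a hedged sketch on an Arch-style system, reinstalling the packages and restarting the daemons would look like this (substitute your own package manager or build steps):

sudo pacman -S docker containerd          # reinstall the packages
sudo systemctl restart containerd docker  # restart the daemons against the new kernel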
Hi all, I am using the Hyperledger Cello framework to create blockchain containers.
I have a problem when I create a chain; I get this error message:
ERROR: for explorer Cannot start service explorer: oci runtime error:
container_linux.go:247: starting container process caused
"process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting
\\\"/opt/cello/fabric-1.0/local/explorer-artifacts/config.json\\\"
to rootfs
\\\"/var/lib/docker/overlay2/c0942a0b749ad436d6f4480fb43623dbd44575fd17f0adfcdbea9390df2c4d8c/merged\\\"
at \\\"/var/lib/docker/overlay2/c0942a0b749ad436d6f4480fb43623dbd44575fd17f0adfcdbea9390df2c4d8c/merged/blockchain-explorer/config.json\\\"
caused \\\"not a directory\\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I have already given Docker the necessary permissions.
I solved the problem.
In my case, the issue was that the setup under /opt/cello, created by the command
make setup-worker
had created config.json as a folder rather than a file, so running the command again solved the problem for me.
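In other words, this is the classic case of Docker bind-mounting a host path that was created as a directory when a file was expected. A hedged way to check and recover (the path comes from the error message above; whether make setup-worker regenerates the file is an assumption about the Cello setup):

ls -ld /opt/cello/fabric-1.0/local/explorer-artifacts/config.json   # a leading 'd' means it is a directory
sudo rm -r /opt/cello/fabric-1.0/local/explorer-artifacts/config.json
make setup-worker   # re-run the setup so config.json is recreated as a regular file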
Can anyone help me make sense of the error below and others like it? I've Googled around, but nothing makes sense for my context. I download my Docker image, but the container refuses to start. The namespace referenced is not always 26; it could be anything from 20-29. I am launching my Docker container onto an EC2 instance and pulling the image from AWS ECR. The error persists whether I re-launch the instance completely or restart Docker.
docker: Error response from daemon: oci runtime error:
container_linux.go:247: starting container process caused
"process_linux.go:334: running prestart hook 0 caused \"error running
hook: exit status 1, stdout: , stderr: time=\\\"2017-05-
11T21:00:18Z\\\" level=fatal msg=\\\"failed to create a netlink handle:
failed to set into network namespace 26 while creating netlink socket:
invalid argument\\\" \\n\"".
Update from my GitHub issue: https://github.com/moby/moby/issues/33656
It seems that the Deep Security agent (ds_agent) running alongside Docker can reliably cause this issue. A number of other users reported this problem, which prompted me to investigate. I had previously installed ds_agent on these boxes before replacing it with other software as a business decision, which is when the problem went away. If you are having this problem, it might be worthwhile to check whether you are running the ds_agent process, or other similar services that could be causing a conflict, using 'htop' as the user in the issue above did.
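A quick way to check for that (ds_agent is the process name mentioned above; any similar security agent would show up the same way):

ps aux | grep -i ds_agent   # is the Deep Security agent running?
htop                        # or filter for it interactively, as in the linked issue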
Did you try running it with the --privileged option?
If it still doesn't run, try adding --security-opt seccomp=unconfined together with either --security-opt apparmor=unconfined or --security-opt label=disable, depending on whether you're running Ubuntu (AppArmor) or a distribution with SELinux enabled, respectively.
If it works, try substituting the --privileged option with --cap-add=NET_ADMIN instead, as running containers in privileged mode is discouraged for security reasons.
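Putting that progression together as a sketch (my-image is just a placeholder for your ECR image):

docker run --privileged my-image
docker run --security-opt seccomp=unconfined --security-opt apparmor=unconfined my-image
# If the unconfined run works, narrow it back down to only the capability you need:
docker run --cap-add=NET_ADMIN my-image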