Ubuntu container failed to start - lxc

I have an Ubuntu Xenial container with an amd64 architecture set up on my Arch Linux machine. The container works properly. However, when I tried to start the container a second time I got this error:
The container failed to start.
To get more details, run the container in foreground mode.
Additional information can be obtained by setting the --logfile and --logpriority options.
What could have caused that?
This is what I got after running with the -F, --logfile, and --logpriority options:
lxc-start: ubuntu: network.c: lxc_ovs_attach_bridge: 1893 Failed to attach "virbr0" to openvswitch bridge "veth3PI00B": lxc-start: ubuntu: utils.c: run_command: 2280 failed to exec command
lxc-start: ubuntu: network.c: instantiate_veth: 198 Failed to attach "veth3PI00B" to bridge "virbr0": Operation not permitted
lxc-start: ubuntu: network.c: lxc_create_network_priv: 2452 Failed to create network device
lxc-start: ubuntu: start.c: lxc_spawn: 1579 Failed to create the network
lxc-start: ubuntu: start.c: __lxc_start: 1887 Failed to spawn container "ubuntu"
This is what I got after running it without foreground mode:
lxc-start: ubuntu: lxccontainer.c: wait_on_daemonized_start: 834 Received container state "STOPPING" instead of "RUNNING"

I faced a similar issue, and it was resolved by creating the bridge. The following command is used to create a bridge:
sudo brctl addbr brs1s2
Where "brs1s2" is the bridge-name in my case.

Related

Minikube start stuck in waiting for pods and timeout

I am trying to run a sample application in my Ubuntu 18 VM.
I have installed the Docker client and server, version 18.06.1-ce. I already have VirtualBox running.
I used the link below to install kubectl 1.14 as well: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
I also have Minikube v1.0.1 installed. But the minikube start command gets stuck at "Waiting for pods: apiserver" and times out.
harshana@harshana-Virtual-Machine:~$ sudo minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
⚠️ Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the none driver.
⚠️ To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️ Alternatively, you may delete the existing VM using `minikube delete -p minikube`
🔄 Restarting existing none VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is xxx.xxx.x.xxx
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
sudo minikube logs:
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.825465 10572 kubelet.go:2244] node "minikube" not found
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.895848 10572 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I got the same behaviour because I had first created the VM using KVM. I followed the instructions and deleted the VM, then ran the following:
1- minikube delete -p minikube
2- minikube start
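Alternatively, if you actually want the VirtualBox driver that the warning mentions, recreating the cluster with an explicit driver should also work; a minimal sketch, using the --vm-driver flag shown in the output above:
minikube delete
minikube start --vm-driver=virtualbox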

docker - start failed because /etc/fstab not found

I'm using Windows Subsystem for Linux (Debian stretch). Following the instructions on the Docker website, I installed docker-ce, but it cannot start. Here is the info:
$ sudo service docker start
grep: /etc/fstab: No such file or directory
[ ok ] Starting Docker: docker.
$ sudo service docker status
[FAIL] Docker is not running ... failed!
What should I do with /etc/fstab not found?
To fix the missing fstab:
touch /etc/fstab
If you then run dockerd, it will give you this failure message:
INFO[2022-01-27T17:55:14.100489400+07:00] Loading containers: start.
WARN[2022-01-27T17:55:14.191666800+07:00] Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING`, error: exit status 4
INFO[2022-01-27T17:55:14.493716300+07:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
INFO[2022-01-27T17:55:14.494906600+07:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2022-01-27T17:55:14.495048400+07:00] stopping healthcheck following graceful shutdown module=libcontainerd
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING
(exit status 4)
That is a Debian iptables/nftables issue; fix it by switching to the legacy iptables backend:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
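As a quick sanity check (not part of the original answer), the iptables version string should now report the legacy backend:
$ iptables --version
iptables v1.8.2 (legacy)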
Now you can start the service again.
You can follow this to make it start on boot: https://askubuntu.com/a/1356147/138352
Edit:
If the iptables issue still persists, try setting the WSL version to 2 by running this command from a Windows shell:
wsl --set-version <distribution name> 2
The distribution list can be found with the command wsl -l.
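For example, assuming the distribution is simply named "Debian" (check the exact name on your machine first):
wsl -l -v
wsl --set-version Debian 2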
I was getting the same error. Apparently on my install of WSL with Debian, I didn't have an /etc/fstab file. Surprisingly, just creating the file via touch worked:
sudo touch /etc/fstab
Perhaps a good signal: https://learn.microsoft.com/en-us/windows/wsl/release-notes#build-17093
WSL now processes the /etc/fstab file during instance start [GH 2636].
For anybody stumbling across this years later like me: Docker doesn't work inside WSL 1.
But you can use Docker Desktop for Windows with WSL 2 to run native containers inside your Linux distro, and the install and config are quite painless: https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers

How to register a mesos-slave with docker containerizer option?

I have a mesos master running at IP
192.168.99.100:5050
I would like to register my mesos-slave.
However, when I run the following command from my mesos-slave machine
./mesos-slave.sh --master=192.168.99.100:5050
I get this error:
I1017 21:47:20.751700 594 main.cpp:190] Build: 2015-10-16 08:02:34 by
I1017 21:47:20.756986 594 main.cpp:192] Version: 0.26.0
I1017 21:47:20.757683 594 main.cpp:199] Git SHA: 6d90b3b926f3eabbec4f9e2ff627a3eeae368d84
I1017 21:47:20.878522 594 containerizer.cpp:143] Using isolation: posix/cpu,posix/mem,filesystem/posix
Failed to create a containerizer: Could not create MesosContainerizer: Failed to create launcher: Failed to create Linux launcher: Failed to create root cgroup /sys/fs/cgroup/freezer/mesos: Failed to create directory '/sys/fs/cgroup/freezer/mesos': Read-only file system
Could I start my mesos-slave with docker as the containerizer?
How could I do it?
NOTE - Make sure docker is installed on the mesos-slave
Yes, you can start your mesos-slave with Docker as the containerizer.
Here is how to do it:
./mesos-slave.sh --master=192.168.99.100:5050 --containerizers=docker
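If you want to keep the default Mesos containerizer available as well, the flag takes a comma-separated list; a sketch based on the standard Mesos agent flags (the longer executor registration timeout is commonly recommended to allow time for Docker image pulls):
./mesos-slave.sh --master=192.168.99.100:5050 --containerizers=docker,mesos --executor_registration_timeout=5mins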

How can I share a network interface with docker without setns error?

I want to fire up two Docker containers on the same network interface, so I tried the following from the Docker docs:
First container:
bash-4.1$ docker run -ti --name=target ubuntu /bin/bash
root@45edefd42404:/#
Second container:
bash-4.1$ docker run -ti --rm --net=container:target ubuntu /bin/bash
setup networking failed to setns current network namespace: invalid argumentFATA[0002] Error response from daemon: Cannot start container ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: setup networking failed to setns current network namespace: invalid argument
I've googled for failures related to setns and can't find anything relevant. Is there anywhere else I can look to debug this?
My Docker daemon log contains this related to the failure (full log: https://gist.github.com/paulweb515/990a1a9edeef1e73b752):
time="2015-04-23T09:17:59-04:00" level="error" msg="Warning: error unmounting device ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: UnmountDevice: device not-mounted id ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461\n"
time="2015-04-23T09:17:59-04:00" level="info" msg="+job log(die, ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461, ubuntu:14.04)"
time="2015-04-23T09:17:59-04:00" level="info" msg="-job log(die, ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461, ubuntu:14.04) = OK (0)"
Cannot start container ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: setup networking failed to setns current network namespace: invalid argument

Docker on RHEL 6 Cgroup mounting failing

I'm trying to get my head around something that's been working on CentOS+Vagrant, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts OK enough, but images cannot run; a mounting-failed error is shown when I try. The startup log also gives a warning or two. Regarding the kernel warning, CentOS gives the same and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Has anyone had any success overcoming this problem, or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) Check the cgconfig status:
# /etc/init.d/cgconfig status
If it is stopped, restart it:
# /etc/init.d/cgconfig restart
and check that cgconfig is running.
(2) Check whether cgconfig is enabled at boot:
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If cgconfig is off, turn it on (see the chkconfig command after this answer).
(3) If it still does not work, some cgroup modules may be missing. Run make menuconfig against the kernel .config, add those modules to the kernel, then recompile and reboot.
After that, it should be OK.
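For step (2), turning the service on would presumably be done with chkconfig (standard RHEL 6 tooling, not spelled out in the original answer):
# chkconfig cgconfig on
# /etc/init.d/cgconfig start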
I ended up asking the same question on Google Groups and in the end found a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same. Later there was a problem with network connectivity for the containers; it took too much time to solve and I had to give up.
So I spent the whole day trying to get Docker to work on my VPS. I was running into this same error. Basically, what it came down to was that OpenVZ didn't support Docker containers until a couple of months ago; specifically, this OpenVZ kernel update for RHEL 6:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to
/cgroup/cpu_and_mem: Device or resource busy /usr/sbin/cgconfigparser;
error loading /etc/cgconfig.conf: Cgroup mounting failed Failed to
parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the cgroups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it:
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
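For reference, the awk one-liner simply turns each mount entry in /etc/cgconfig.conf (lines such as cpu = /cgroup/cpu;) into a mount command, so the generated cgroup_mount.sh would look roughly like this (the exact paths depend on your cgconfig.conf):
mount -t cgroup -o cpu cpu /cgroup/cpu
mount -t cgroup -o memory memory /cgroup/memory
mount -t cgroup -o blkio blkio /cgroup/blkio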
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
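A quick way to check whether that boot option is active (a generic check, not from the original answer):
# grep cgroup_disable /proc/cmdline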
The cgconfig service startup uses mount and umount, which requires an extra privilege bump from Docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image
Tested on CentOS 6 and CentOS 6.5.
