Docker container auto healing: is Kubernetes suitable for one instance?

I have one Docker container that runs pyppeteer.
It has a memory leak, so it gets stopped within 24 hours.
I need some auto-healing system; I think Kubernetes can do that. No load balancing, just one instance, one container. Is it suitable?
Update: In the end I selected docker-py and manage the container using containers.run and containers.prune.
It is working for me.
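For reference, a minimal sketch of that docker-py approach (the image name, the limits, and the supervision loop are assumptions, not the asker's actual code):
import time
import docker

client = docker.from_env()

while True:
    # Start a fresh container; "my-pyppeteer-image" is a placeholder name.
    container = client.containers.run(
        "my-pyppeteer-image",
        detach=True,
        shm_size="1g",    # see the /dev/shm note further below
        mem_limit="1g",   # cap the leak instead of letting it eat the host
    )
    # Block until the container exits (here: when the leak kills it within ~24h).
    container.wait()
    # Remove stopped containers before starting the next one.
    client.containers.prune()
    time.sleep(5)
Such a script can itself be kept alive by systemd or invoked from cron, as the answers below suggest.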

If your container has no state, and you know it is going to run out of memory every 24 hours, I would say a cron job is the best option.
You can do what you want on k8s, but that's overkill. An entire k8s cluster for one container doesn't sound right to me.
It would be another thing if you had more apps or containers, as k8s can run lots of services independently of one another, so you would not be wasting resources.

There are several options for your use case; one of them is running Kubernetes. But you should consider the resource overhead and maintenance burden of running Kubernetes just for a single container.
I suggest you explore having systemd restart your container in case it crashes, or simply use Docker itself: with the --restart=always parameter, the Docker daemon ensures the container keeps running. Note: with that setting Docker will also restart the container after a system reboot, so --restart=on-failure might be a better option.
See this page for more information: https://docs.docker.com/config/containers/start-containers-automatically/#use-a-restart-policy
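If you end up using docker-py, as the asker did, the same restart policy can be set when the container is created (a sketch; the image name is a placeholder):
import docker

client = docker.from_env()
client.containers.run(
    "my-pyppeteer-image",   # placeholder image name
    detach=True,
    # Equivalent of "docker run --restart=on-failure:10 ..."
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 10},
)
With {"Name": "always"} the daemon would also bring the container back after a daemon or host restart.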

I haven't worked with Puppeteer, but after a short search I found this:
By default, Docker runs a container with a /dev/shm shared memory space 64MB. This is typically too small for Chrome and will cause Chrome to crash when rendering large pages. To fix, run the container with docker run --shm-size=1gb to increase the size of /dev/shm. Since Chrome 65, this is no longer necessary. Instead, launch the browser with the --disable-dev-shm-usage flag:
const browser = await puppeteer.launch({
  args: ['--disable-dev-shm-usage']
});
This will write shared memory files into /tmp instead of /dev/shm.
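Since the question is about pyppeteer rather than Puppeteer, the equivalent in Python would be along these lines (a sketch; assumes pyppeteer's launch accepts Chromium args via the args keyword, and example.com is just a placeholder URL):
import asyncio
import pyppeteer

async def main():
    # Write shared memory files to /tmp instead of /dev/shm.
    browser = await pyppeteer.launch(args=['--disable-dev-shm-usage'])
    page = await browser.newPage()
    await page.goto('https://example.com')
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())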
Hope this helps.

It is possible to use the Kubernetes auto-healing feature without creating a full-scale Kubernetes cluster. It's only required to install compatible versions of the docker and kubelet packages. It could also be helpful to install the kubeadm package.
Kubelet is the Kubernetes node agent that takes care of keeping Pods in a healthy condition. It runs as a systemd service and creates static pods from YAML manifest files in /etc/kubernetes/manifests (the location is configurable).
All other application troubleshooting can be done using regular docker commands:
docker ps ...
docker inspect
docker logs ...
docker exec ...
docker attach ...
docker cp ...
A good example of this approach from the official documentation is running external etcd cluster instances. (Note: the kubelet configuration part may not work as expected with recent kubelet versions; I've put more details on that below.)
Kubelet can also take care of pod resource usage by applying the limits part of a pod spec, so you can set a memory limit, and when the container reaches that limit, kubelet will restart it.
Kubelet can health-check the application in the pod if a liveness probe section is included in the Pod spec. If you can create a command that checks your application's condition more precisely, kubelet can restart the container when the command returns a non-zero exit code several times in a row (configurable); see the sketch after this paragraph.
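For instance, a liveness command for the pyppeteer workload could be a small Python script baked into the image and wired up as an exec probe (a sketch; psutil and the 900 MiB threshold are assumptions, not part of the original answer):
import sys
import psutil  # assumed to be installed in the container image

# Hypothetical threshold: fail the probe before the leak reaches the pod's memory limit.
LIMIT_BYTES = 900 * 1024 * 1024

def main():
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = proc.info.get("name") or ""
        mem = proc.info.get("memory_info")
        if "chrome" in name.lower() and mem and mem.rss > LIMIT_BYTES:
            sys.exit(1)  # non-zero exit -> kubelet counts a probe failure
    sys.exit(0)  # zero exit -> container is considered healthy

if __name__ == "__main__":
    main()
After the configured number of consecutive failures (failureThreshold), kubelet restarts the container.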
If kubelet refuses to start, you can check kubelet logs using the following command:
journalctl -e -u kubelet
Kubelet can refuse to start mostly because of:
absence of the kubelet initial config. It can be generated using a kubeadm command: kubeadm init phase kubelet-start. (You may also need to generate the CA certificate /etc/kubernetes/pki/ca.crt mentioned in the kubelet config; this can be done with kubeadm: kubeadm init phase certs ca.)
different cgroup driver settings for docker and kubelet. Kubelet works fine with both the cgroupfs and systemd drivers. Docker's default driver is cgroupfs. Kubeadm also generates a kubelet config with the cgroupfs driver, so just ensure that they are the same. The Docker cgroup driver can be specified in the service definition file, e.g. /lib/systemd/system/docker.service or /usr/lib/systemd/system/docker.service:
# add the cgroup driver option to ExecStart:
ExecStart=/usr/bin/dockerd \
  --exec-opt native.cgroupdriver=systemd   # or cgroupfs
To change the cgroup driver for a recent kubelet version, you have to specify a kubelet config file for the service, because such command-line options are deprecated now:
sed -i 's/ExecStart=\/usr\/bin\/kubelet/ExecStart=\/usr\/bin\/kubelet --config=\/var\/lib\/kubelet\/config.yaml/' /lib/systemd/system/kubelet.service
Then change the cgroupDriver line in the kubelet config. A couple more options also require changes. Here is the kubelet config that I've used for the same purpose:
address: 127.0.0.1 # changed, was 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: false # changed, was true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt # kubeadm init phase certs ca
authorization:
  mode: AlwaysAllow # changed, was Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs # could be changed to systemd or left as is, as docker default driver is cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Restart docker/kubelet services:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

Related

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

I want to run k8s inside a docker container.
When I run kubeadm join ... or kubeadm init commands, I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
	possibePaths := []string{
		"/proc/config.gz",
		"/boot/config-" + k.kernelRelease,
		"/usr/src/linux-" + k.kernelRelease + "/.config",
		"/usr/src/linux/.config",
	}
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently passes this kernel-info check.
I note that when running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same thing on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019: I see now that k8s DOES run okay in a container. The real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container, then deploy that image to k8s, either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses a Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL message and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side effect of the nasty-looking FATAL error, e.g. "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else is puzzled like I was.

Failed to get kubelets cgroup

I am trying to set up Kubernetes on a CentOS machine; starting kubelet gives me this error:
Failed to get kubelets cgroup: cpu and memory cgroup hierarchy not
unified. Cpu:/, memory: /system.slice/kubelet.service.
The cgroup driver I specified is systemd for both Docker and Kubernetes.
Docker version 1.13.1
Kubernetes version 1.15.2
Can anyone suggest a solution?
This issue is fixed in a commit that has not been merged yet; see this.
You may try this workaround:
sudo vim /etc/sysconfig/kubelet
add at the end of DAEMON_ARGS string:
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
restart:
sudo systemctl restart kubelet
or:
add a file at /etc/systemd/system/kubelet.service.d/11-cgroups.conf which contains:
[Service]
CPUAccounting=true
MemoryAccounting=true
then reload and restart
systemctl daemon-reload && systemctl restart kubelet

Delete kubernetes cluster on docker-for-desktop OSX?

What is the equivalent of the minikube delete command in docker-for-desktop on OSX?
As I understand it, minikube creates a VM to host its Kubernetes cluster, but I do not understand how docker-for-desktop manages this on OSX.
Tearing down Kubernetes in Docker for OS X is quite an easy task.
Go to Preferences, open the Reset tab, and click Reset Kubernetes cluster.
All objects that have been created with kubectl before that will be deleted.
You can also reset the docker VM image (Reset disk image) and all settings (Reset to factory defaults), or even uninstall Docker.
In recent Docker Edge versions for Mac (2.1.7) the Preferences design has changed. Now you can reset the Kubernetes cluster and other Docker aspects by switching to the bug (Troubleshoot) pane in the top right of the Preferences window:
Note: You are able to reset the Kubernetes cluster only if it's enabled. If you uncheck the "Enable Kubernetes" checkbox, the "Reset Kubernetes cluster" button becomes inactive.
For convenience, "Reset Kubernetes cluster" is also present on the Kubernetes tab in the main Preferences pane:
To reset the Docker Desktop Kubernetes cluster using the command line, put the following content into a file (dd-reset.sh) and mark it executable (chmod a+x dd-reset.sh):
#!/bin/bash
dr='docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i'
${dr} sh -c 'export PATH=$PATH:/containers/services/docker/rootfs/usr/bin:/containers/services/docker/rootfs/usr/local/bin:/var/lib/kube-binary-cache/ && \
if [ ! -e /var/run/docker.sock ] ; then ln -s /containers/services/docker/rootfs/var/run/docker.sock /var/run/docker.sock ; fi && \
kube-reset.sh'
sleep 3
echo "cluster resetted. restarting docker-desktop..."
osascript -e 'quit app "Docker"'
open --background -a Docker
echo "docker-desktop started. Wait 3-5 mins for kubernetes to start."
Explanation:
This method uses internal scripts from the Docker Desktop VM. To make it work, some preparation of the user environment is required.
I wasn't able to start the Kubernetes cluster using the kube-start.sh script from inside the VM, so I've used macOS commands to restart the Docker application instead.
This method works even if your Kubernetes cluster is not enabled in Docker preferences at the moment, but it's required to enable Kubernetes at least once to use the script.
It was tested on Docker Edge for MacOS v2.2.2.0 (43066)
There is no guarantee that it will be compatible with earlier or later versions.
This version of Docker uses kubeadm to initialize Kubernetes cluster. Scripts are located in the folder /containers/services/docker/rootfs/usr/bin:
kube-pull.sh (brings kubernetes binaries to VM)
kube-reset.sh (runs kube-stop.sh and does kubeadm reset plus some rm cleanup)
kube-restart.sh (runs kube-stop.sh and kube-start.sh)
kube-start.sh (runs kube-pull.sh and kubelet.sh)
kube-stop.sh (kills kubelet and kube-apiserver processes, and all k8s containers)
kubeadm-init.sh (initializes Kubernetes cluster)
kubelet.sh (runs kubeadm-init.sh and starts kubelet binary)
Cluster configuration is located in the file /containers/services/docker/lower/etc/kubeadm/kubeadm.yaml
Resources used:
Restart Docker from command line
Use nsenter in privileged container
It's all really under the hood in the code. Docker for Mac uses these components: HyperKit, VPNKit and DataKit.
Kubernetes runs in the same Hyperkit VM created for docker and the kube-apiserver is exposed.
You can connect to the VM with this:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Then you can see all the Kubernetes processes in the VM:
linuxkit-025000000001:~# ps -Af | grep kube
1251 root 0:00 /usr/bin/logwrite -n kubelet /usr/bin/kubelet.sh
1288 root 0:51 kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroups-per-qos=false --enforce-node-allocatable= --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cadvisor-port=0 --kube-reserved-cgroup=podruntime --system-reserved-cgroup=systemreserved --cgroup-root=kubepods --hostname-override=docker-for-desktop --fail-swap-on=false
3564 root 0:26 kube-scheduler --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf
3616 root 1:45 kube-controller-manager --cluster-signing-key-file=/run/config/pki/ca.key --address=127.0.0.1 --root-ca-file=/run/config/pki/ca.crt --service-account-private-key-file=/run/config/pki/sa.key --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-cert-file=/run/config/pki/ca.crt --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner
3644 root 1:59 kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --service-account-key-file=/run/config/pki/sa.pub --secure-port=6443 --insecure-port=8080 --insecure-bind-address=0.0.0.0 --requestheader-client-ca-file=/run/config/pki/front-proxy-ca.crt --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=192.168.65.3 --service-cluster-ip-range=10.96.0.0/12 --tls-private-key-file=/run/config/pki/apiserver.key --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --tls-cert-file=/run/config/pki/apiserver.crt --proxy-client-key-file=/run/config/pki/front-proxy-client.key --proxy-client-cert-file=/run/config/pki/front-proxy-client.crt --allow-privileged=true --client-ca-file=/run/config/pki/ca.crt --kubelet-client-certificate=/run/config/pki/apiserver-kubelet-client.crt --kubelet-client-key=/run/config/pki/apiserver-kubelet-client.key --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/run/config/pki/etcd/ca.crt --etcd-certfile=/run/config/pki/apiserver-etcd-client.crt --etcd-keyfile=/run/config/pki/apiserver-etcd-client.key
3966 root 0:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
4190 root 0:05 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
4216 65534 0:03 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
4606 root 0:00 /compose-controller --kubeconfig --reconciliation-interval 30s
4905 root 0:01 /api-server --kubeconfig --authentication-kubeconfig --authorization-kubeconfig --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/etc/docker-compose/etcd/ca.crt --etcd-certfile=/etc/docker-compose/etcd/client.crt --etcd-keyfile=/etc/docker-compose/etcd/client.key --secure-port=9443 --tls-ca-file=/etc/docker-compose/tls/ca.crt --tls-cert-file=/etc/docker-compose/tls/server.crt --tls-private-key-file=/etc/docker-compose/tls/server.key
So if you uncheck the "Enable Kubernetes" box (it is unclear from the docs what command it uses):
You can see that the processes are removed:
linuxkit-025000000001:~# [ 6616.856404] cni0: port 2(veth5f6c8b28) entered disabled state
[ 6616.860520] device veth5f6c8b28 left promiscuous mode
[ 6616.861125] cni0: port 2(veth5f6c8b28) entered disabled state
linuxkit-025000000001:~#
linuxkit-025000000001:~# [ 6626.816763] cni0: port 1(veth87e77142) entered disabled state
[ 6626.822748] device veth87e77142 left promiscuous mode
[ 6626.823329] cni0: port 1(veth87e77142) entered disabled state
linuxkit-025000000001:~# ps -Af | grep kube
linuxkit-025000000001:~#
On docker desktop version 3.5.2 (engine version 20.10.7), the reset button has been moved inside the docker preferences.
You can get there by following the below steps:
Click on the docker icon in the menu bar and choose 'Preferences'.
Go to the Kubernetes tab.
Click on the Reset Kubernetes Cluster button. This is the red button.
This will delete all pods and reset Kubernetes. You can execute the docker ps command in a terminal to verify that there are no containers running.
Just delete the VM that holds the Kubernetes resources:
$ minikube delete

minikube 0.30.0 DNS not working on CentOS 7 with Docker 18.06.1-ce and vm-driver=none

I am experimenting with minikube for learning purposes, on a CentOS 7 Linux machine with Docker 18.06.1-ce installed.
I installed minikube using
minikube start --vm-driver=none
I deployed a few applications, only to discover they couldn't talk to each other using their hostnames.
I deleted minikube using
minikube delete
I re-installed minikube using
minikube start --vm-driver=none
I then followed the instructions under "Debugging DNS Resolution"
(https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/)
but only to find out that the DNS system was not functional
More precisely, I run:
1.
kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
2.
# kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
3.
# kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local contabo.host
options ndots:5
4.
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-c4cffd6dc-dqtbt 1/1 Running 1 4m
kube-dns-86f4d74b45-tr8vc 2/3 Running 5 4m
Surprisingly, both kube-dns and coredns are running.
Should this be a concern?
I have looked for a solution everywhere without success.
Step 2 always returns an error.
I simply cannot accept that something so simple has become such a huge problem for me.
Please assist.
Mine is working with coredns enabled and kube-dns disabled.
C02W84XMHTD5:ucp iahmad$ minikube addons list
- addon-manager: enabled
- coredns: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: disabled
- ingress: disabled
- kube-dns: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
you may disable the kube-dns:
minikube addons disable kube-dns
Please note the output of the kube-dns pod below; it has only 2 of 3 containers running.
kube-dns-86f4d74b45-tr8vc 2/3 Running 5 4m
The last time I encountered this was when Docker's default FORWARD policy was DROP. Changing it to ACCEPT using the command below fixed the problem for me.
iptables -P FORWARD ACCEPT
It might be other things too; please check the pod logs.
After deleting /etc/kubernetes, /var/lib/kubelet and /var/lib/kubeadm.yaml and restarting minikube, I can now successfully reproduce the DNS resolution debugging steps (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/).
I bet some stale settings had persisted across minikube start/stop iterations, leading to inconsistent configuration.
It is also worth mentioning that DNS resolution was lost after restarting iptables.
I suspect this is related to iptables rules: some rule is put in place by minikube, and as it gets lost during the iptables restart, the problem re-appears.
I managed to resolve the problem by re-installing minikube after deleting all state files under /etc and /var/lib, but forgot to update this question.
This can now be closed.

Docker on RHEL 6 Cgroup mounting failing

I'm trying to get my head around something that's been working on CentOS + Vagrant, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting failed error is shown when I try, and the startup log also gives a warning or two. Regarding the kernel warning, CentOS gives the same and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Has anyone had any success overcoming this problem, or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) check cgconfig status
# /etc/init.d/cgconfig status
if it is stopped, restart it
# /etc/init.d/cgconfig restart
check cgconfig is running
(2) check cgconfig is on
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
if cgconfig is off, turn it on
(3) If it still does not work, maybe some cgroup modules are missing. Add those modules to the kernel config via make menuconfig, then recompile and reboot.
after that, it should be OK
I ended up asking the same question on Google Groups and in the end found a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same. Later there was a problem with network connectivity for the containers; it took too much time to solve and I had to give up.
So I spent the whole day trying to get Docker to work on my VPS. I was running into this same error. Basically, what it came down to was the fact that OpenVZ didn't support Docker containers until a couple of months ago. Specifically, this RHEL update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to
/cgroup/cpu_and_mem: Device or resource busy /usr/sbin/cgconfigparser;
error loading /etc/cgconfig.conf: Cgroup mounting failed Failed to
parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the cgroups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it!
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
The cgconfig service startup uses mount and umount which requires an extra privilege bump from docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image.
Tested in Centos6, Centos6.5.
