I am installing libguestfs0, and one of its dependencies, lvm2, needs to be configured.
When I run systemctl status lvm2, it shows the service is masked (/dev/null; bad). So I tried systemctl unmask lvm2 followed by systemctl restart lvm2, but it still fails to start.
Please assist if you have any idea, thanks!
I am running this in a VM with Ubuntu 18 loaded.
I believe it's normal for the lvm2 service to be masked: it has been replaced by a number of more granular services:
$ systemctl list-units lvm2*
UNIT LOAD ACTIVE SUB DESCRIPTION
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lvm2-lvmetad.socket loaded active running LVM2 metadata daemon socket
lvm2-lvmpolld.socket loaded active listening LVM2 poll daemon socket
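If the package configuration is waiting on lvm2, you can check the replacement units directly instead (unit names taken from the listing above):
$ systemctl status lvm2-lvmetad.service lvm2-monitor.service lvm2-lvmetad.socket lvm2-lvmpolld.socket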
I'm running into an iptables-related error: minikube start fails while I'm trying to have Docker set iptables to false.
Below are my logs:
minikube v1.20.0 on Centos 7.6.1810 (amd64)
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
stderr:
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
The error you included states that you are missing bridge-nf-call-iptables.
bridge-nf-call-iptables is exported by br_netfilter.
What you need to do is issue the command
sudo modprobe br_netfilter
and then ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
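To make the br_netfilter module itself load again after a reboot, one common approach (assuming a systemd-based distro that reads /etc/modules-load.d) is:
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf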
This should fix your problem.
I'm trying to start minikube on Windows 10 (minikube version v1.10.1) using the command below:
minikube start --vm-driver=virtualbox --no-vtx-check
But I'm getting the error below:
Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
* Unable to load cached images: loading cached images: Docker load /var/lib/minikube/images/pause_3.2: loadimage docker.: docker load -i /var/lib/minikube/images/pause_3.2: Process exited with status 1
stdout:
stderr:
Error processing tar file(exit status 1): archive/tar: invalid tar header
*
* [OOM_KILL_SCP] Failed to update cluster updating node: downloading binaries: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:2506->127.0.0.1:2427: wsarecv: An existing connection was forcibly closed by the remote host.
* Suggestion: Disable dynamic memory in your VM manager, or pass in a larger --memory value
* Related issue: https://github.com/kubernetes/minikube/issues/1766
So I thought of downgrading the minikube version: I tried v1.7.2 and then v1.3.0, but in both cases I got the same error mentioned above. Kindly suggest a fix.
Regards
It worked. Below are the steps I followed to get minikube working on Windows 10 Home edition, where Hyper-V is not supported.
Step 1: Enable virtualization and install VirtualBox
Step 2: Add kubectl and the minikube installer
Step 3:
Run the command below
minikube start --vm-driver=virtualbox --memory 4096
If it fails, then
run minikube delete and delete the .minikube and .kube folders
Enable WSL 2
Open PowerShell as Administrator and run:
Enable WSL1
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
Enable WSL2
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
Restart the system
Install Linux Distribution Package
Disable hypervisorlaunchtype
Open CMD
Run bcdedit to check hypervisor status
bcdedit
If hypervisorlaunchtype is set to auto then disable it:
bcdedit /set hypervisorlaunchtype off
Reboot
Run minikube again:
minikube start --vm-driver=virtualbox --memory 4096
I am trying to set up Kubernetes on a CentOS machine, and starting the kubelet gives me this error:
Failed to get kubelets cgroup: cpu and memory cgroup hierarchy not
unified. Cpu:/, memory: /system.slice/kubelet.service.
The cgroup driver I configured is systemd for both Docker and Kubernetes.
Docker version 1.13.1
Kubernetes version 1.15.2
Can anyone suggest a solution?
This issue is fixed in a commit, but it is still not merged; see this.
You may try this workaround:
sudo vim /etc/sysconfig/kubelet
Add the following at the end of the DAEMON_ARGS string:
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
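For illustration, the edited variable in /etc/sysconfig/kubelet might end up looking like this (variable name as described above; keep whatever flags were already in the string):
DAEMON_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"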
restart:
sudo systemctl restart kubelet
Or:
add a file at /etc/systemd/system/kubelet.service.d/11-cgroups.conf
which contains:
[Service]
CPUAccounting=true
MemoryAccounting=true
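If you prefer to create that drop-in non-interactively, something like this works (run with root privileges):
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/11-cgroups.conf <<EOF
[Service]
CPUAccounting=true
MemoryAccounting=true
EOF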
then reload and restart
systemctl daemon-reload && systemctl restart kubelet
I'm trying to create a docker container with systemd enabled and install auditd on it.
I'm using the standard centos/systemd image provided in dockerhub.
But when I try to start auditd, it fails.
Here is the list of commands that I have done to create and get into the docker container:
docker run -d --rm --privileged --name systemd -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd
docker exec -it systemd bash
Now, inside the docker container:
yum install audit
systemctl start auditd
I'm receiving the following error:
Job for auditd.service failed because the control process exited with error code. See "systemctl status auditd.service" and "journalctl -xe" for details.
Then I run:
systemctl status auditd.service
And I'm getting this info:
auditd[182]: Error sending status request (Operation not permitted)
auditd[182]: Error sending enable request (Operation not permitted)
auditd[182]: Unable to set initial audit startup state to 'enable', exiting
auditd[182]: The audit daemon is exiting.
auditd[181]: Cannot daemonize (Success)
auditd[181]: The audit daemon is exiting.
systemd[1]: auditd.service: control process exited, code=exited status=1
systemd[1]: Failed to start Security Auditing Service.
systemd[1]: Unit auditd.service entered failed state.
systemd[1]: auditd.service failed.
Do you guys have any ideas on why this is happening?
Thank you.
See this discussion:
At the moment, auditd can be used inside a container only for aggregating
logs from other systems. It cannot be used to get events relevant to the
container or the host OS. If you want to aggregate only, then set
local_events=no in auditd.conf.
Container support is still under development.
Also see this:
local_events
This yes/no keyword specifies whether or not to include local events. Normally you want local events so the default value is yes. Cases where you would set this to no is when you want to aggregate events only from the network. At the moment, this is useful if the audit daemon is running in a container. This option can only be set once at daemon start up. Reloading the config file has no effect.
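For illustration, the relevant line in /etc/audit/auditd.conf would simply be (a sketch; as noted above, this can only be set at daemon start-up):
local_events = no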
So, at least as of Thu, 19 Jul 2018 14:53:32 -0400, this feature was not supported; you have to wait.
I'm trying to get my head around something that's been working on a CentOS+Vagrant setup, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting-failed error is shown when I try, and the startup log also gives a warning or two. Regarding the kernel warning, CentOS gives the same one and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Has anyone had any success overcoming this problem, or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) Check cgconfig status:
# /etc/init.d/cgconfig status
If it is stopped, restart it:
# /etc/init.d/cgconfig restart
and check that cgconfig is running.
(2) Check whether cgconfig is enabled:
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If cgconfig is off, turn it on.
(3) If it still does not work, maybe some cgroup modules are missing: check the kernel .config via make menuconfig, add those modules to the kernel, then recompile and reboot.
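As a quick sanity check before recompiling, you can see which cgroup controllers the running kernel already supports:
# cat /proc/cgroups
# grep CGROUP /boot/config-$(uname -r)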
after that, it should be OK
I ended up asking the same question at Google Groups and in the end finding a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same. Later there was a problem with network connectivity for the containers; it took too much time to solve and we had to give up.
So I spent the whole day trying to get Docker to work on my VPS. I was running into this same error. Basically, it came down to the fact that OpenVZ didn't support Docker containers until a couple of months ago, specifically this RHEL kernel update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
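You can check which kernel your VPS is actually running (and so whether the host has already applied that update) with:
uname -r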
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to
/cgroup/cpu_and_mem: Device or resource busy /usr/sbin/cgconfigparser;
error loading /etc/cgconfig.conf: Cgroup mounting failed Failed to
parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the cgroups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it!
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
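As a quick check, you can confirm whether the running kernel was booted with that option and whether cgconfig.conf still asks for the memory controller:
grep -o 'cgroup_disable=[^ ]*' /proc/cmdline
grep memory /etc/cgconfig.conf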
The cgconfig service startup uses mount and umount which requires an extra privilege bump from docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image
Tested on CentOS 6 and CentOS 6.5.