I installed an LXC container via lxc-create:
sudo lxc-create -t download -n dos1
I chose Debian Buster arm64 and ran it:
sudo lxc-start -n dos1 -d
It outputs an error:
lxc-start: dos1: tools/lxc_start.c: main: 290 No container config specified
What is the problem? Am I doing something wrong?
PS: the configs are in place. /etc/lxc/default.conf:
lxc.net.0.type = veth
lxc.net.0.link = virbr0
lxc.net.0.flags = up
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
~/.config/lxc/default.conf:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
UPDATE:
The problem is solved. I had to specify the path to the configuration file directly, for example:
sudo lxc-start -n dos1 -f /var/lib/lxc/dos1/config -d
All lxc-* commands for this container must then be executed with sudo. I got this error because I didn't specify sudo: without root permissions, lxc-start couldn't find and read the container config to start it.
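For context on why sudo matters here: containers created with sudo are privileged and their config lives under /var/lib/lxc, while unprivileged containers created as a normal user live under your home directory. A quick way to check (these are the default paths; adjust if your lxc.lxcpath is set differently):
# privileged container created with sudo — config is here:
sudo ls /var/lib/lxc/dos1/config
# an unprivileged container created without sudo would be here instead:
ls ~/.local/share/lxc/dos1/config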
I would like to use MicroK8s with a private registry, but pulling an image does not work (I'm using a self-signed cert):
root@master-1:/var/snap/microk8s/common/var/lib/containerd# microk8s.ctr --debug images pull priv.repo:5000/busybox/hellomicrok8s:latest
DEBU[0000] fetching image="priv.repo:5000/busybox/hellomicrok8s:latest"
DEBU[0000] resolving host="priv.repo:5000"
DEBU[0000] do request host="priv.repo:5000" request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/v1.3.4 request.method=HEAD url="https://priv.repo:5000/v2/busybox/hellomicrok8s/manifests/latest"
ctr: failed to resolve reference "priv.repo:5000/busybox/hellomicrok8s:latest": failed to do request: Head "https://priv.repo:5000/v2/busybox/hellomicrok8s/manifests/latest": x509: certificate signed by unknown authority
Here is my containerd-template.toml:
root@master-1:/var/snap/microk8s/common/var/lib/containerd# cat /var/snap/microk8s/current/args/containerd-template.toml
version = 2
oom_score = 0
[grpc]
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
[debug]
  address = ""
  uid = 0
  gid = 0
[metrics]
  address = "127.0.0.1:1338"
  grpc_histogram = false
[cgroup]
  path = ""
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_selinux = false
  sandbox_image = "k8s.gcr.io/pause:3.1"
  stats_collect_period = 10
  enable_tls_streaming = false
  max_container_log_line_size = 16384
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "${SNAPSHOTTER}"
    no_pivot = false
    default_runtime_name = "${RUNTIME}"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v1"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime]
      runtime_type = "io.containerd.runc.v1"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia-container-runtime.options]
        BinaryName = "nvidia-container-runtime"
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "${SNAP}/opt/cni/bin"
    conf_dir = "${SNAP_DATA}/args/cni-network"
  [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://registry-1.docker.io", ]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."priv.repo:5000"]
        endpoint = ["https://priv.repo:5000"]
I restarted microk8s via systemctl restart snap.microk8s.daemon-containerd.service && microk8s.stop && microk8s.start.
The command docker login https://priv.repo:5000 works, and I can pull that image via docker pull priv.repo:5000/busybox/hellomicrok8s:latest. Do you know why it is not working with MicroK8s?
Thanks in advance!
EDIT:
This is also set:
root@master-1:/var/snap/microk8s/common/var/lib/containerd# cat /etc/docker/daemon.json
{
  "insecure-registries" : ["priv.repo:5000"]
}
EDIT1:
This works: microk8s.ctr --debug images pull -u ???:??? --skip-verify priv.repo:5000/busybox/hellomicrok8s:latest. How should I set --skip-verify permanently? When I create a pod via microk8s kubectl apply -f ... I still get x509: certificate signed by unknown authority.
I added my .crt file to /etc/ssl/certs (on the master node) and it started working. BTW, the newly added rows in the containerd-template.toml file turned out not to be needed in my case.
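If you really do want the equivalent of --skip-verify for pods instead of trusting the CA, containerd's CRI plugin has a per-registry TLS section you can add to containerd-template.toml. A minimal sketch, assuming a containerd release whose CRI plugin supports the registry.configs TLS options (check your version), and reusing the priv.repo:5000 host from the question:
[plugins."io.containerd.grpc.v1.cri".registry.configs."priv.repo:5000".tls]
  # either point containerd at your CA file (this path/name is hypothetical)...
  # ca_file = "/etc/ssl/certs/priv-repo-ca.crt"
  # ...or skip TLS verification entirely (not recommended outside testing)
  insecure_skip_verify = true
Remember to restart containerd afterwards (systemctl restart snap.microk8s.daemon-containerd.service, as in the question) so the change is picked up.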
If you are using cert-manager with MicroK8s on Ubuntu, you can fetch the certificate and install it like this:
Find the correct certificate's name (you could have multiple):
microk8s kubectl get secrets -n cert-manager --field-selector type=kubernetes.io/tls
If the correct name is e.g. dev-ca:
microk8s kubectl -n cert-manager get secrets dev-ca -o jsonpath='{.data.ca\.crt}' | base64 -d > cert-manager-ca.crt
sudo cp cert-manager-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
Afterwards, you can check with curl whether the certificate is installed correctly (see the example after the restart commands below).
Then restart MicroK8s:
microk8s stop && microk8s start
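For example, a quick TLS check against the registry from the question (substitute your own registry host); getting an HTTP 200 or 401 back without a certificate error means the CA is now trusted:
curl -v https://priv.repo:5000/v2/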
I had the same issue, and the commands below may fix it for others:
openssl s_client -showcerts -connect <IP>:<PORT> < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ca.crt
cp ca.crt /etc/ssl/certs
update-ca-certificates
I installed Docker on NixOS using:
nix-env -i docker
After that, dockerd was not running, so I started the daemon manually with:
dockerd
In the logs, I see:
WARN[2019-06-26T01:02:31.784701442Z] could not change group /var/run/docker.sock to docker: group docker not found
Should I care about this warning?
When installing docker on NixOS, it's best to enable it in the NixOS configuration. Doing so will install docker as a system service.
Snippet for /etc/nixos/configuration.nix:
virtualisation.docker.enable = true;
# ...
users.users.YOU = { # merge this with your unix user definition, "YOU" is for illustration
  isNormalUser = true;
  # ...
  extraGroups = [
    # ...
    "docker"
  ];
};
Enabling it this way also creates the docker group; Docker needs that group to start as a service, which is why the manually started dockerd warned that group docker was not found.
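After editing /etc/nixos/configuration.nix, rebuild and log back in so the new group membership takes effect, for example:
sudo nixos-rebuild switch
# log out and back in (or reboot), then verify:
groups | grep docker
docker info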
What is necessary to enable docker-runc to work inside a container?
The following command does list the plugins, but with PID = 0 and status = stopped. Are additional volumes needed?
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /run/docker/plugins/runtime-root/plugins.moby:/run/docker/plugins/runtime-root/plugins.moby docker docker-runc --root /run/docker/plugins/runtime-root/plugins.moby list
Result:
ID PID STATUS BUNDLE CREATED OWNER
abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200 0 stopped /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/plugins.moby/abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200 2018-11-01T16:26:26.605625462Z root
I used this guide to install Kubernetes on a Vagrant cluster:
https://kubernetes.io/docs/getting-started-guides/kubeadm/
At (2/4) Initializing your master, I got some errors:
[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
I checked the content of /proc/sys/net/bridge/bridge-nf-call-iptables; it contains only a single 0.
At (3/4) Installing a pod network, I downloaded the kube-flannel file:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and ran kubectl apply -f kube-flannel.yml, which gave this error:
[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
At this point, I don't know how to go on.
My Vagrantfile:
# Master Server
config.vm.define "master", primary: true do |master|
  master.vm.network :private_network, ip: "192.168.33.200"
  master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add: [1]
net.bridge.bridge-nf-call-iptables = 1
Then execute
sudo sysctl -p
And the changes will be applied. With this the pre-flight check should pass.
[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Update #2019/09/02
Sometimes modprobe br_netfilter is unreliable and you may need to redo it after re-login, so on a systemd system use the following instead:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
systemctl restart systemd-modules-load.service
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
Yes, the accepted answer is right, but I ran into:
cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
So I did
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
sudo sysctl -p
That solved it.
On Ubuntu 16.04 I just had to:
modprobe br_netfilter
The default value in /proc/sys/net/bridge/bridge-nf-call-iptables is then already 1.
Then I added br_netfilter to /etc/modules to load the module automatically on next boot.
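For example (the file is simply one module name per line):
echo br_netfilter | sudo tee -a /etc/modules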
As mentioned in the K8s docs, Installing kubeadm, under the Letting iptables see bridged traffic section:
Make sure that the br_netfilter module is loaded. This can be done
by running lsmod | grep br_netfilter. To load it explicitly call
sudo modprobe br_netfilter.
As a requirement for your Linux Node's iptables to correctly see
bridged traffic, you should ensure
net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl
config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Regarding the preflight errors, you can see in Kubeadm Implementation details under preflight-checks:
Kubeadm executes a set of preflight checks before starting the init,
with the aim to verify preconditions and avoid common cluster startup
problems.
The following missing configurations will produce errors:
[...]
if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain 1
if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does not exist/does not contain 1.
if swap is on
[...]
The one-liner way:
sysctl net.bridge.bridge-nf-call-iptables=1
I would like to run a docker container that requires a lot of memory on a machine that doesn't have much RAM. I have been trying to increase the swap space available for the container to no avail. Here is the last command I tried:
docker run -d -m 1000M --memory-swap=10000M --name=my_container my_image
Following these tips on how to check memory metrics I found the following:
$ boot2docker ssh
docker#boot2docker:~$ cat /sys/fs/cgroup/memory/docker/35af5a072751c7af80ce7a255a01ab3c14b3ee0e3f15341f7bb22a777091c67b/memory.stat
cache 454656
rss 65015808
rss_huge 29360128
mapped_file 208896
writeback 0
swap 0
pgpgin 31532
pgpgout 22702
pgfault 49372
pgmajfault 0
inactive_anon 28672
active_anon 65183744
inactive_file 241664
active_file 16384
unevictable 0
hierarchical_memory_limit 1048576000
hierarchical_memsw_limit 10485760000
total_cache 454656
total_rss 65015808
total_rss_huge 29360128
total_mapped_file 208896
total_writeback 0
total_swap 0
total_pgpgin 31532
total_pgpgout 22702
total_pgfault 49372
total_pgmajfault 0
total_inactive_anon 28672
total_active_anon 65183744
total_inactive_file 241664
total_active_file 16384
total_unevictable 0
Is it possible to run a container that requires 5G of memory on a machine that only has 4G of physical memory?
This GitHub issue was very helpful in figuring out how to increase the swap space available in the boot2docker VM. Adapting it to my situation, I used the following commands to SSH into the boot2docker VM and set up a new swapfile:
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
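Afterwards you can check that the swap is active, and optionally re-enable it automatically when the boot2docker VM reboots; bootlocal.sh is boot2docker's startup hook, but treat the exact path as an assumption for your version:
boot2docker ssh
free -m   # the Swap line should now show roughly 4096 MB
# the swapfile itself survives on /mnt/sda1, so only swapon needs re-running at boot:
echo "swapon /mnt/sda1/swapfile" | sudo tee -a /var/lib/boot2docker/bootlocal.sh
sudo chmod +x /var/lib/boot2docker/bootlocal.sh
exit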