Why does a flake.nix file prevent nixos-rebuild from working? - nix

I'm trying to follow this tutorial.
I've added these lines to /etc/nixos/configuration.nix:
services.nginx.enable = true;
services.nginx.virtualHosts."test.local.cetacean.club" = {
  root = "/srv/http/test.local.cetacean.club";
};
I've run these commands:
sudo mkdir -p /srv/http/test.local.cetacean.club
sudo chown nixos:nginx /srv/http/test.local.cetacean.club
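Once the rebuild works, I expect to be able to test the vhost with something like this (assuming an index.html exists under the root):
curl -H "Host: test.local.cetacean.club" http://127.0.0.1/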
I've defined the file /etc/nixos/flake.nix like this:
{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, ... }: {
    nixosConfigurations.nixos = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        # add things here
      ];
    };
  };
}
The following command works, which suggests the file itself is OK:
sudo nix flake check /etc/nixos
But this command doesn't work:
sudo nixos-rebuild switch
I don't understand the problem. This command worked before I created /etc/nixos/flake.nix, and I don't understand why the flake prevents rebuilding from /etc/nixos/configuration.nix.
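From what I understand, once /etc/nixos/flake.nix exists, nixos-rebuild runs in flake mode and builds nixosConfigurations.<hostname> from the flake instead of reading configuration.nix directly, so the plain command above should be equivalent to:
sudo nixos-rebuild switch --flake /etc/nixos#nixos
(where "nixos" has to match my hostname).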
Here is the error message:
× systemd-sysctl.service - Apply Kernel Variables
Loaded: loaded (/etc/systemd/system/systemd-sysctl.service; enabled; preset: enabled)
Drop-In: /nix/store/9mnkvlaxvwlp3iw50mf2p91rm3simizr-system-units/systemd-sysctl.service.d
└─overrides.conf
Active: failed (Result: exit-code) since Sun 2022-12-25 08:26:30 UTC; 64ms ago
Duration: 29min 7.255s
Docs: man:systemd-sysctl.service(8)
man:sysctl.d(5)
Process: 10842 ExecStart=/nix/store/9rjdvhq7hnzwwhib8na2gmllsrh671xg-systemd-252.1/lib/systemd/systemd-sysctl (code=exited, status=243/CREDENTIALS)
Main PID: 10842 (code=exited, status=243/CREDENTIALS)
IP: 0B in, 0B out
Dec 25 08:26:30 nixos systemd[1]: Starting Apply Kernel Variables...
Dec 25 08:26:30 nixos systemd[10842]: systemd-sysctl.service: Failed to set up credentials: Protocol error
Dec 25 08:26:30 nixos systemd[10842]: systemd-sysctl.service: Failed at step CREDENTIALS spawning /nix/store/9rjdvhq7hnzwwhib8na2gmllsrh671xg-systemd-252.1/lib/systemd/systemd-sysctl: Protocol error
Dec 25 08:26:30 nixos systemd[1]: systemd-sysctl.service: Main process exited, code=exited, status=243/CREDENTIALS
Dec 25 08:26:30 nixos systemd[1]: systemd-sysctl.service: Failed with result 'exit-code'.
Dec 25 08:26:30 nixos systemd[1]: Failed to start Apply Kernel Variables.
× systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev
Loaded: loaded (/etc/systemd/system/systemd-tmpfiles-setup-dev.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-12-25 08:26:30 UTC; 68ms ago
Duration: 29min 7.256s
Docs: man:tmpfiles.d(5)
man:systemd-tmpfiles(8)
Process: 10844 ExecStart=systemd-tmpfiles --prefix=/dev --create --boot (code=exited, status=243/CREDENTIALS)
Main PID: 10844 (code=exited, status=243/CREDENTIALS)
IP: 0B in, 0B out
Dec 25 08:26:30 nixos systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 25 08:26:30 nixos systemd[10844]: systemd-tmpfiles-setup-dev.service: Failed to set up credentials: Protocol error
Dec 25 08:26:30 nixos systemd[10844]: systemd-tmpfiles-setup-dev.service: Failed at step CREDENTIALS spawning systemd-tmpfiles: Protocol error
Dec 25 08:26:30 nixos systemd[1]: systemd-tmpfiles-setup-dev.service: Main process exited, code=exited, status=243/CREDENTIALS
Dec 25 08:26:30 nixos systemd[1]: systemd-tmpfiles-setup-dev.service: Failed with result 'exit-code'.
Dec 25 08:26:30 nixos systemd[1]: Failed to start Create Static Device Nodes in /dev.
warning: error(s) occurred while switching to the new configuration

Related

docker: docker start fails after creating daemon.json

I am trying to set up awslogs for Docker.
The docs say to add this to daemon.json:
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-central-1"
  }
}
When I create /etc/docker/daemon.json on Ubuntu with the content above, Docker won't start again.
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fr 2018-07-20 10:59:53 CEST; 11s ago
Docs: https://docs.docker.com
Process: 647 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 647 (code=exited, status=1/FAILURE)
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Unit entered failed state.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Stopped Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Start request repeated too quickly.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Can anybody explain this behaviour?
I've never used it myself.
But from this: https://docs.docker.com/config/containers/logging/plugins/, it seems you need to install a plugin for any new log driver; check what is installed with docker plugin ls.
It may also be that the driver is only available in an Amazon cloud environment and not on a local PC, in case you did not notice that.
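To dig further, the daemon's own log usually shows the exact reason it refused to start; a quick sketch (paths are the Ubuntu defaults):
sudo journalctl -u docker.service --no-pager -n 50
sudo dockerd --config-file /etc/docker/daemon.json
The second command runs the daemon in the foreground, so configuration errors are printed directly to the terminal.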

Docker could not start after install on CentOS 7

I installed Docker on CentOS 7 (Linux version 3.10.0-327.el7.x86_64) with the command yum install -y docker, but when I try to start Docker with systemctl start docker, it fails to start. Below is the error message:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-15 16:38:37 CST; 10s ago
Docs: http://docs.docker.com
Process: 5166 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 5166 (code=exited, status=1/FAILURE)
Mar 15 16:38:36 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
Mar 15 16:38:36 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:36.570661801+08:00" level=info msg="libcontainerd... 5171"
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:37.585565695+08:00" level=warning msg="overlay2: the ba...
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: Error starting daemon: SELinux is not supported with the overlay2 graph ...false)
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 16:38:37 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 16:38:37 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
How can I solve this issue?
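The log names the suspect directly: SELinux is not supported with the overlay2 storage driver on this kernel. One commonly suggested direction (not from this thread; it assumes the CentOS docker package's /etc/sysconfig/docker with OPTIONS='--selinux-enabled ...') is to turn off the daemon's SELinux support and restart:
sudo sed -i "s/--selinux-enabled/--selinux-enabled=false/" /etc/sysconfig/docker
sudo systemctl restart docker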

How do I clear a thinpool device for docker

I am running Docker on a Red Hat system with devicemapper and a thinpool device, just as recommended for production systems. Now, when I want to reinstall Docker, I need two steps:
1) remove the Docker directory (in my case /area51/docker)
2) clear the thinpool device
The Docker documentation states that when using devicemapper with the dm.metadatadev and dm.datadev options, the easiest way of cleaning devicemapper would be:
If setting up a new metadata pool it is required to be valid.
This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
Unfortunately, according to the documentation, dm.metadatadev is deprecated; it says to use dm.thinpooldev instead.
My thinpool was created along the lines of this Docker instruction.
So my setup now looks like this:
cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
    "dm.basesize=18G"
  ]
}
Under the devicemapper directory I see the following thinpool devices:
ls -l /dev/mapper/thinpool_VG_38401-thinpool*
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
So, after running Docker successfully, I tried to reinstall as described above and clear the thinpool by writing 4K of zeroes into the tmeta device, then restarting Docker:
dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
systemctl start docker
And ended up with:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
Docs: https://docs.docker.com
Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 1566 (code=exited, status=1/FAILURE)
Memory: 236.0K
CGroup: /system.slice/docker.service
Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.
I assumed I could get around 'unable to take ownership of thin-pool' by doing a reboot. But after the reboot, trying to start Docker again gave the following error:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
Docs: https://docs.docker.com
Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 3180 (code=exited, status=1/FAILURE)
Memory: 37.9M
CGroup: /system.slice/docker.service
Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.
So obviously writing zeroes into the thinpool _tmeta device is not the right thing to do; it seems to destroy my thinpool device.
Can anyone here tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
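One direction that matches the LVM route in the Docker devicemapper guide (a sketch, not verified on this setup; the VG/LV names are taken from the question, and no reboot should be needed) is to delete and recreate the thin pool with LVM instead of zeroing the _tmeta device:
sudo systemctl stop docker
sudo rm -rf /area51/docker
sudo lvremove -y thinpool_VG_38401/thinpool
sudo lvcreate --wipesignatures y -n thinpool thinpool_VG_38401 -l 95%VG
sudo lvcreate --wipesignatures y -n thinpoolmeta thinpool_VG_38401 -l 1%VG
sudo lvconvert -y --zero n -c 512K --thinpool thinpool_VG_38401/thinpool --poolmetadata thinpool_VG_38401/thinpoolmeta
sudo systemctl start docker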

kube-addons.service failed on CoreOS-libvirt installation

I have the following issue installing and provisioning my Kubernetes CoreOS-libvirt-based cluster.
When I log in to the master node, I see the following:
ssh core@192.168.10.1
Last login: Thu Dec 10 17:19:21 2015 from 192.168.10.254
CoreOS alpha (884.0.0)
Update Strategy: No Reboots
Failed Units: 1
kube-addons.service
Trying to debug it, I run the following and receive:
core@kubernetes-master ~ $ systemctl status kube-addons.service
● kube-addons.service - Kubernetes addons
Loaded: loaded (/etc/systemd/system/kube-addons.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2015-12-10 16:41:06 UTC; 41min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 801 ExecStart=/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons (code=exited, status=1/FAILURE)
Process: 797 ExecStartPre=/bin/sleep 10 (code=exited, status=0/SUCCESS)
Process: 748 ExecStartPre=/bin/bash -c while [[ "$(curl -s http://127.0.0.1:8080/healthz)" != "ok" ]]; do sleep 1; done (code=exited, status=0/SUCCESS)
Main PID: 801 (code=exited, status=1/FAILURE)
Dec 10 16:40:53 kubernetes-master systemd[1]: Starting Kubernetes addons...
Dec 10 16:41:06 kubernetes-master kubectl[801]: replicationcontroller "skydns" created
Dec 10 16:41:06 kubernetes-master kubectl[801]: error validating "/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 16:41:06 kubernetes-master systemd[1]: Failed to start Kubernetes addons.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Unit entered failed state.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Failed with result 'exit-code'.
My etcd version is:
etcd --version
etcd version 0.4.9
But I also have etcd2:
etcd2 --version
etcd Version: 2.2.2
Git SHA: b4bddf6
Go Version: go1.4.3
Go OS/Arch: linux/amd64
And at the moment, it is the second one that is running:
ps aux | grep etcd
etcd 731 0.5 8.4 329788 42436 ? Ssl 16:40 0:16 /usr/bin/etcd2
root 874 0.4 7.4 59876 37804 ? Ssl 17:19 0:02 /opt/kubernetes/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd-servers=http://127.0.0.1:2379 --kubelet-port=10250 --service-cluster-ip-range=10.11.0.0/16
core 953 0.0 0.1 6740 876 pts/0 S+ 17:27 0:00 grep --colour=auto etcd
What causes the issue and how can I solve it?
Thank you.
The relevant log line is:
/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
You should figure out what's invalid about that field, or turn validation off with --validate=false to ignore the error.
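If the manifest simply predates the v1 API, the fix may be a one-line rename, since portalIP became clusterIP in v1.ServiceSpec (a sketch; verify the field name against your Kubernetes version first):
sudo sed -i 's/portalIP:/clusterIP:/' /opt/kubernetes/addons/skydns-svc.yaml
sudo systemctl restart kube-addons.service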

After installing Docker on CentOS 7, Docker fails to start: "Job for docker.service failed."

After executing yum install docker on CentOS 7, I tried to start Docker by executing service docker start, and I see this error:
Redirecting to /bin/systemctl start docker.service
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Then I executed systemctl status docker.service -l, and the error is:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since Sun 2015-03-15 03:49:49 EDT; 12min ago
Docs: http://docs.docker.com
Process: 11444 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 11444 (code=exited, status=1/FAILURE)
Mar 15 03:49:48 localhost.localdomain docker[11444]: 2015/03/15 03:49:48 docker daemon: 1.3.2 39fa2fa/1.3.2; execdriver: native; graphdriver:
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job serveapi(fd://)
Mar 15 03:49:48 localhost.localdomain docker[11444]: [info] Listening for HTTP on fd ()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job init_networkdriver()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] -job init_networkdriver() = OK (0)
Mar 15 03:49:49 localhost.localdomain docker[11444]: 2015/03/15 03:49:49 write /var/lib/docker/init/dockerinit-1.3.2: no space left on device
Mar 15 03:49:49 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 03:49:49 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 03:49:49 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
I really have no idea; I'm looking forward to your responses and will be very appreciative!
This error usually occurs because the device-mapper-event-libs package is missing:
# yum install device-mapper-event-libs
Thanks for Ben Whaley's advice. When I checked my disk space, it was indeed not enough. I extended my disk space and that solved the problem. It's the first time I've asked a question here, and it really helped. Thanks again.
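For anyone hitting the same thing, the space check is quick:
df -h /var/lib/docker
du -sh /var/lib/docker
The first shows free space on the filesystem Docker writes to; the second shows how much Docker itself is using.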
I upgraded the CentOS 7 kernel from 3 to 4.
NOTE: I upgraded the kernel for other reasons as well; first try without upgrading the kernel.
1) Delete the docker folder under /var/lib.
2) Go to /etc/sysconfig with cd /etc/sysconfig.
3) Open docker in vi (before editing, make a copy: cp docker docker.org).
4) Find the line OPTIONS='--selinux-disabled --log-driver=journald'.
5) Remove --selinux-disabled so it reads OPTIONS='--log-driver=journald'.
6) Un-comment # setsebool -P docker_transition_unconfined 1 so it reads setsebool -P docker_transition_unconfined 1.
7) Reboot the machine, or just try docker start to check; the condensed commands are sketched below. For me it works :)
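The same steps condensed into a shell session (a sketch; paths assume the CentOS docker package):
sudo rm -rf /var/lib/docker
cd /etc/sysconfig
sudo cp docker docker.org
sudo sed -i "s/--selinux-disabled //" docker
sudo setsebool -P docker_transition_unconfined 1
sudo systemctl start docker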
