"A Stop Job is running for session 1 of user SeAL80s " in fedora 23 shutdown prosess - shutdown

My system is Fedora 23 x86_64 running Xfce4.
When the system was installed, I did:
su # to root, to be able to use sudo
sudo vi to disable SELinux
sudo mkdir /usr/share/fonts/ to add extra fonts
sudo chmod 755 on the fonts folder
sudo chmod 644 on the font files
sudo mv to add mirror repos
and removed some things I don't like with:
sudo apt-get remove plymouth-core-libs tigervnc-server-minimal tigervnc-license libpinyin libpinyin-data libreport ibus-* firewall-config yum system-config-printer-libs xfce4-appfinder catfish xfce4-clipman-plugin seahorse xfburn orage geany evince-libs claws-mail claws-mail-* midori pidgin transmission-common asunder parole pragha libabiword gnumeric xfce4-dict gparted abrt-* gnome-abrt xfdashboard fedora-release-notes xfce4-about plymouth-core-lib xfce4-diskperf-plugin xfce4-verve-plugin xfce4-whiskermenu-plugin xfce4-weather-plugin xfce4-mailwatch-plugin xfce4-genmon-plugin xfce4-battery-plugin xfce4-cpugraph-plugin xfce4-eyes-plugin xfce4-systemload-plugin xfce4-screenshooter-plugin xfce4-xkb-plugin xfce4-fsguard-plugin xfce4-netload-plugin xfce4-time-out-plugin xfce4-mount-plugin
sudo apt-get update
and shut down (I don't recall whether the bug appeared then), but after I installed mariadb (or mysql), postgresql, and other things, every shutdown shows
"A Stop Job is running for session 1 of user SeAL80s "
Forums say it's a systemd bug, but the systemctl command:
systemctl status mariadb.service -l
shows the following info:
mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─migrated-from-my.cnf-settings.conf
Active: active (running) since Wed 2015-12-16 08:13:16 CST; 5min ago
Main PID: 1107 (mysqld)
Status: "Taking your SQL requests now..."
CGroup: /system.slice/mariadb.service
└─1107 /usr/sbin/mysqld
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] InnoDB: Highest supported file format is Barracuda.
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] InnoDB: 128 rollback segment(s) are active.
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] InnoDB: Waiting for purge to start
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.26-74.0 started; log sequence number 1617005
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621044315904 [Note] InnoDB: Dumping buffer pool(s) not yet started
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] Plugin 'FEEDBACK' is disabled.
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] Server socket created on IP: '::'.
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: 2015-12-16 8:13:16 139621793179904 [Note] /usr/sbin/mysqld: ready for connections.
Dec 16 08:13:16 localhost.localdomain mysqld[1107]: Version: '10.1.9-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
Dec 16 08:13:16 localhost.localdomain systemd[1]: Started MariaDB database server.
systemctl status postgresql -l shows
postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
Active: inactive (dead)
I don't think it's a mariadb or postgresql problem.
On shutdown, it only shows
"A Stop job is running..."
without saying which job.
How can I debug this nasty bug?
What tools or information do i need?

It seems like a bug. It affects me too. The first time I saw this bug was on Fedora 20 or 21. It was patched, but returned in 23. It's somehow related to systemd, and it's not only on Fedora - I've seen posts about it e.g. on the Arch Linux forums.
You can help by describing your problem in these bug reports on Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1088619 or https://bugzilla.redhat.com/show_bug.cgi?id=1272635
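Until it's properly fixed, you can at least identify which unit the hanging stop job belongs to. A minimal debugging sketch - nothing here is specific to this bug; debug-shell.service ships with systemd, and journalctl -b -1 needs a persistent journal (i.e. /var/log/journal must exist). It assumes your systemd reads drop-ins from /etc/systemd/system.conf.d (recent versions do):
# log verbose shutdown messages to the kernel buffer so they end up in the journal
sudo mkdir -p /etc/systemd/system.conf.d
printf '[Manager]\nLogLevel=debug\nLogTarget=kmsg\n' | sudo tee /etc/systemd/system.conf.d/debug.conf
# optional: a root shell on tty9 (Ctrl+Alt+F9) that stays up during shutdown;
# from there you can run: systemctl list-jobs
sudo systemctl enable debug-shell.service
# after the next hang and a reboot, inspect the previous boot's shutdown
journalctl -b -1 -e
As a stopgap you can also shorten the 90-second default with DefaultTimeoutStopSec= in /etc/systemd/system.conf, though that hides the problem rather than finding it.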

Related

Why does a flake.nix file prevent nixos-rebuild?

I'm trying to follow this tutorial.
I've added these lines to /etc/nixos/configuration.nix:
services.nginx.enable = true;
services.nginx.virtualHosts."test.local.cetacean.club" = {
  root = "/srv/http/test.local.cetacean.club";
};
I've run these commands:
sudo mkdir -p /srv/http/test.local.cetacean.club
sudo chown nixos:nginx /srv/http/test.local.cetacean.club
I've defined the file /etc/nixos/flake.nix like this:
{
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";
  };
  outputs = { self, nixpkgs, ... }: {
    nixosConfigurations.nixos = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        # add things here
      ];
    };
  };
}
The following command works, which suggests the file itself is OK:
sudo nix flake check /etc/nixos
but this command doesn't work:
sudo nixos-rebuild switch
I don't understand the problem. This command worked before I defined /etc/nixos/flake.nix, and I don't understand why it now prevents rebuilding from /etc/nixos/configuration.nix.
Here is the error message:
× systemd-sysctl.service - Apply Kernel Variables
Loaded: loaded (/etc/systemd/system/systemd-sysctl.service; enabled; preset: enabled)
Drop-In: /nix/store/9mnkvlaxvwlp3iw50mf2p91rm3simizr-system-units/systemd-sysctl.service.d
└─overrides.conf
Active: failed (Result: exit-code) since Sun 2022-12-25 08:26:30 UTC; 64ms ago
Duration: 29min 7.255s
Docs: man:systemd-sysctl.service(8)
man:sysctl.d(5)
Process: 10842 ExecStart=/nix/store/9rjdvhq7hnzwwhib8na2gmllsrh671xg-systemd-252.1/lib/systemd/systemd-sysctl (code=exited, status=243/CREDENTIALS)
Main PID: 10842 (code=exited, status=243/CREDENTIALS)
IP: 0B in, 0B out
Dec 25 08:26:30 nixos systemd[1]: Starting Apply Kernel Variables...
Dec 25 08:26:30 nixos systemd[10842]: systemd-sysctl.service: Failed to set up credentials: Protocol error
Dec 25 08:26:30 nixos systemd[10842]: systemd-sysctl.service: Failed at step CREDENTIALS spawning /nix/store/9rjdvhq7hnzwwhib8na2gmllsrh671xg-systemd-252.1/lib/systemd/systemd-sysctl: Protocol error
Dec 25 08:26:30 nixos systemd[1]: systemd-sysctl.service: Main process exited, code=exited, status=243/CREDENTIALS
Dec 25 08:26:30 nixos systemd[1]: systemd-sysctl.service: Failed with result 'exit-code'.
Dec 25 08:26:30 nixos systemd[1]: Failed to start Apply Kernel Variables.
× systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev
Loaded: loaded (/etc/systemd/system/systemd-tmpfiles-setup-dev.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-12-25 08:26:30 UTC; 68ms ago
Duration: 29min 7.256s
Docs: man:tmpfiles.d(5)
man:systemd-tmpfiles(8)
Process: 10844 ExecStart=systemd-tmpfiles --prefix=/dev --create --boot (code=exited, status=243/CREDENTIALS)
Main PID: 10844 (code=exited, status=243/CREDENTIALS)
IP: 0B in, 0B out
Dec 25 08:26:30 nixos systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 25 08:26:30 nixos systemd[10844]: systemd-tmpfiles-setup-dev.service: Failed to set up credentials: Protocol error
Dec 25 08:26:30 nixos systemd[10844]: systemd-tmpfiles-setup-dev.service: Failed at step CREDENTIALS spawning systemd-tmpfiles: Protocol error
Dec 25 08:26:30 nixos systemd[1]: systemd-tmpfiles-setup-dev.service: Main process exited, code=exited, status=243/CREDENTIALS
Dec 25 08:26:30 nixos systemd[1]: systemd-tmpfiles-setup-dev.service: Failed with result 'exit-code'.
Dec 25 08:26:30 nixos systemd[1]: Failed to start Create Static Device Nodes in /dev.
warning: error(s) occurred while switching to the new configuration
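One thing worth knowing, independent of the CREDENTIALS failure above: as soon as /etc/nixos/flake.nix exists, nixos-rebuild runs in flake mode and builds nixosConfigurations.<hostname> from the flake instead of evaluating configuration.nix directly, so the attribute name has to match your hostname. A sketch of selecting the output explicitly (assuming the attribute name nixos defined in the flake above):
sudo nixos-rebuild switch --flake /etc/nixos#nixos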

Couldn't run Docker when mounting a volume

I am new to docker and I tried to run a mariadb daemon in docker.
I could run mariadb as a daemon with the command
PS D:\docker> docker run --name mariadb2 -e MYSQL_ROOT_PASSWORD=password -d mariadb
but couldn't run it with this command:
PS D:\docker> docker run --name mariadb2 -v /d/docker/volume/db:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -d mariadb
The folder had some files created in it, so it seems the mount itself worked.
Here are the log and system info:
PS D:\docker> docker logs mariadb2
Initializing database
2016-08-15 8:50:13 139894536869824 [Note] /usr/sbin/mysqld (mysqld 10.1.16-MariaDB-1~jessie) starting as process 51 ...
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: The InnoDB memory heap is disabled
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Memory barrier is not used
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Compressed tables use zlib 1.2.8
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Using Linux native AIO
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Using SSE crc32 instructions
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Initializing buffer pool, size = 256.0M
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Completed initialization of buffer pool
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: The first specified data file ./ibdata1 did not exist: a new database to be created!
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Setting file ./ibdata1 size to 12 MB
2016-08-15 8:50:13 139894536869824 [Note] InnoDB: Database physically writes the file full: wait...
2016-08-15 08:50:13 7f3bbc2d17c0 InnoDB: Error: Write to file ./ibdata1 failed at offset 0.
InnoDB: 1048576 bytes should have been written, only 0 were written.
InnoDB: Operating system error number 22.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 22 means 'Invalid argument'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
2016-08-15 8:50:13 139894536869824 [ERROR] InnoDB: Error in creating ./ibdata1: probably out of disk space
2016-08-15 8:50:13 139894536869824 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to
ailed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in th
rote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your prec
2016-08-15 8:50:13 139894536869824 [ERROR] Plugin 'InnoDB' init function returned error.
2016-08-15 8:50:13 139894536869824 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-08-15 8:50:14 139894536869824 [ERROR] Unknown/unsupported storage engine: InnoDB
2016-08-15 8:50:14 139894536869824 [ERROR] Aborting
PS D:\docker> docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 27
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 77
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host bridge null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.15-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.95 GiB
Name: moby
ID: ZCNG:T3MP:ORJH:E5V4:SBTR:PFKC:DMBM:IG2C:QHQE:FICI:ALHP:JKNO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
PS D:\docker> docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:04:48 2016
OS/Arch: windows/amd64
Experimental: true
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:04:48 2016
OS/Arch: linux/amd64
Experimental: true
Thanks in advance
I had the same issue before.
Apparently, when using Docker on Windows (as in my case), it has problems with the file path. The solution in that case was to create a named volume in Docker with the following command:
docker volume create --name=your_volume
Then in Docker Compose you would have something like this (the volume is declared as external because it was created outside Compose):
version: "2"
services:
  db:
    image: mariadb
    volumes:
      - your_volume:/var/lib/mysql
volumes:
  your_volume:
    external: true
Hope this helps.
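The same named volume also works without Compose; a sketch reusing the names from this answer:
docker volume create --name=your_volume
docker run --name mariadb2 -v your_volume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -d mariadb
With a named volume, Docker manages the storage inside its Linux VM instead of bind-mounting the Windows path, which sidesteps the InnoDB write errors above.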

kube-addons.service failed on CoreOS-libvirt installation

I have the following issue installing and provisioning my Kubernetes CoreOS-libvirt-based cluster.
When I log in on the master node, I see the following:
ssh core@192.168.10.1
Last login: Thu Dec 10 17:19:21 2015 from 192.168.10.254
CoreOS alpha (884.0.0)
Update Strategy: No Reboots
Failed Units: 1
kube-addons.service
Trying to debug it, I ran the following and received:
core@kubernetes-master ~ $ systemctl status kube-addons.service
● kube-addons.service - Kubernetes addons
Loaded: loaded (/etc/systemd/system/kube-addons.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2015-12-10 16:41:06 UTC; 41min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 801 ExecStart=/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons (code=exited, status=1/FAILURE)
Process: 797 ExecStartPre=/bin/sleep 10 (code=exited, status=0/SUCCESS)
Process: 748 ExecStartPre=/bin/bash -c while [[ "$(curl -s http://127.0.0.1:8080/healthz)" != "ok" ]]; do sleep 1; done (code=exited, status=0/SUCCESS)
Main PID: 801 (code=exited, status=1/FAILURE)
Dec 10 16:40:53 kubernetes-master systemd[1]: Starting Kubernetes addons...
Dec 10 16:41:06 kubernetes-master kubectl[801]: replicationcontroller "skydns" created
Dec 10 16:41:06 kubernetes-master kubectl[801]: error validating "/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 16:41:06 kubernetes-master systemd[1]: Failed to start Kubernetes addons.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Unit entered failed state.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Failed with result 'exit-code'.
My etcd version is:
etcd --version
etcd version 0.4.9
But I have etcd2 as well:
etcd2 --version
etcd Version: 2.2.2
Git SHA: b4bddf6
Go Version: go1.4.3
Go OS/Arch: linux/amd64
And at the moment, the second one is the one running:
ps aux | grep etcd
etcd 731 0.5 8.4 329788 42436 ? Ssl 16:40 0:16 /usr/bin/etcd2
root 874 0.4 7.4 59876 37804 ? Ssl 17:19 0:02 /opt/kubernetes/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd-servers=http://127.0.0.1:2379 --kubelet-port=10250 --service-cluster-ip-range=10.11.0.0/16
core 953 0.0 0.1 6740 876 pts/0 S+ 17:27 0:00 grep --colour=auto etcd
What causes the issue and how can I solve it?
Thank you.
The relevant log line is:
/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
You should figure out why that field is invalid for your cluster's API version, or turn validation off with the flag.
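For background: portalIP was the pre-1.0 name of this field, renamed to clusterIP in v1.ServiceSpec, so renaming it in the manifest should satisfy validation. A hedged sketch, assuming the stock addon path from the log above:
sudo sed -i 's/portalIP:/clusterIP:/' /opt/kubernetes/addons/skydns-svc.yaml
sudo systemctl restart kube-addons.service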

Docker, Vagrant, CentOS 7, cannot start

The /var/log/messages log is below
Aug 3 04:27:54 localhost lvm[3169]: Thin centos-docker--pool is now 100% full.
Aug 3 04:27:54 localhost lvm[3169]: Insufficient free space: 1 extents needed, but only 0 available
Aug 3 04:27:54 localhost lvm[3169]: Failed to extend thin centos-docker--pool.
And the systemctl status log is below:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: failed (Result: exit-code) since Mon 2015-08-03 04:22:52 UTC; 6min ago
Docs: http://docs.docker.com
Process: 7577 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 7577 (code=exited, status=1/FAILURE)
Aug 03 04:22:52 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
Aug 03 04:22:52 localhost.localdomain docker[7577]: time="2015-08-03T04:22:52Z" level=info msg="+job serveapi(unix:///var/run/docker.sock)"
Aug 03 04:22:52 localhost.localdomain docker[7577]: time="2015-08-03T04:22:52Z" level=info msg="Listening for HTTP on unix (/var/run/docker.sock)"
Aug 03 04:22:52 localhost.localdomain docker[7577]: time="2015-08-03T04:22:52Z" level=fatal msg="Shutting down daemon due to errors: error intializing graphdriver: Unable to take ownership of... data blocks"
Aug 03 04:22:52 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Aug 03 04:22:52 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Aug 03 04:22:52 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
And here is the lsblk result:
[root@localhost vagrant]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 9.9G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 9.4G 0 part
├─centos-root 253:0 0 8.4G 0 lvm /
├─centos-swap 253:1 0 1016M 0 lvm [SWAP]
├─centos-docker--pool_tmeta 253:2 0 12M 0 lvm
│ └─centos-docker--pool 253:4 0 16M 0 lvm
└─centos-docker--pool_tdata 253:3 0 16M 0 lvm
└─centos-docker--pool 253:4 0 16M 0 lvm
sr0 11:0 1 1024M 0 rom
sr1 11:1 1 1024M 0 rom
The problem is probably that there is not enough LVM space to use: the thin pool was allocated only 16 MB.
The solution is to resize the logical volume centos-docker--pool.
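A sketch of the resize, assuming extents can be made available in the centos volume group first (the log above shows 0 free, so the disk or a new physical volume has to be added before extending):
vgs centos                          # check free extents in the volume group
sudo lvextend -L +2G centos/docker-pool  # the device centos-docker--pool is LV docker-pool in VG centos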

After installing Docker on CentOS 7, failed to start Docker: "Job for docker.service failed."

After executing yum install docker on CentOS 7, I wanted to start Docker by executing service docker start, but I got this error:
Redirecting to /bin/systemctl start docker.service
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Then I executed systemctl status docker.service -l, and the error is:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since Sun 2015-03-15 03:49:49 EDT; 12min ago
Docs: http://docs.docker.com
Process: 11444 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 11444 (code=exited, status=1/FAILURE)
Mar 15 03:49:48 localhost.localdomain docker[11444]: 2015/03/15 03:49:48 docker daemon: 1.3.2 39fa2fa/1.3.2; execdriver: native; graphdriver:
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job serveapi(fd://)
Mar 15 03:49:48 localhost.localdomain docker[11444]: [info] Listening for HTTP on fd ()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job init_networkdriver()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] -job init_networkdriver() = OK (0)
Mar 15 03:49:49 localhost.localdomain docker[11444]: 2015/03/15 03:49:49 write /var/lib/docker/init/dockerinit-1.3.2: no space left on device
Mar 15 03:49:49 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 03:49:49 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 03:49:49 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
I really have no idea; looking forward to your response. I would be very appreciative!
This error usually occurs because of a missing device-mapper-event-libs package:
# yum install device-mapper-event-libs
Thanks for Ben Whaley's advice. When I checked my disk space, it was indeed not enough. I extended my disk space and that solved the problem. It's the first time I've asked a question here, and it really helped. Thanks again.
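For reference, "no space left on device" points at the filesystem backing /var/lib/docker; a quick check before extending anything:
df -h /var/lib/docker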
I upgraded the CentOS 7 kernel from 3 to 4.
NOTE: I upgraded the kernel for other reasons as well; first try without upgrading the kernel.
Delete the docker folder under /var/lib.
Go to /etc/sysconfig (cd /etc/sysconfig).
Edit docker with vi (before editing, make a backup: cp docker docker.org).
Find the line OPTIONS='--selinux-disabled --log-driver=journald'.
Remove --selinux-disabled so it looks like OPTIONS='--log-driver=journald'.
Un-comment # setsebool -P docker_transition_unconfined 1 so it reads setsebool -P docker_transition_unconfined 1.
Reboot the machine, or just try docker start to check. For me it works :)
