Ubuntu 20.04 LTS
I just installed the OSS version of JFrog Artifactory.
To run Artifactory I used systemctl start artifactory.service but am getting this error:
Job for artifactory.service failed because the control process exited with error code.
See "systemctl status artifactory.service" and "journalctl -xe" for details.
If I run systemctl status artifactory.service, this is what I get:
● artifactory.service - Artifactory service
Loaded: loaded (/lib/systemd/system/artifactory.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Jun 01 00:25:42 siddharth-HP-Notebook systemd[1]: Stopped Artifactory service.
Jun 01 00:25:42 siddharth-HP-Notebook systemd[1]: Starting Artifactory service...
Jun 01 00:25:43 siddharth-HP-Notebook artifactoryManage.sh[17274]: 2020-05-31T18:55:43.286Z [shell] [INFO ] [] [artifac>
Jun 01 00:25:43 siddharth-HP-Notebook systemd[1]: artifactory.service: Can't open PID file /run/artifactory.pid (yet?) >
Jun 01 00:25:43 siddharth-HP-Notebook systemd[1]: artifactory.service: Failed with result 'protocol'.
Jun 01 00:25:43 siddharth-HP-Notebook systemd[1]: Failed to start Artifactory service.
Jun 01 00:25:48 siddharth-HP-Notebook systemd[1]: Stopped Artifactory service.
Jun 01 00:25:48 siddharth-HP-Notebook systemd[1]: /lib/systemd/system/artifactory.service:10: PIDFile= references a pat>
Jun 01 00:31:37 siddharth-HP-Notebook systemd[1]: /lib/systemd/system/artifactory.service:10: PIDFile= references a pat>
Jun 01 00:31:38 siddharth-HP-Notebook systemd[1]: /lib/systemd/system/artifactory.service:10: PIDFile= references a pat>
Also, at the end of the installation I got this error, which may be helpful:
Triggering migration script, this will migrate if needed ...
chown: invalid user: ‘artifactory:artifactory’
[WARN] Could not set owner of [/opt/jfrog/artifactory/var/etc] to [artifactory:artifactory]
Processing triggers for systemd (245.4-4ubuntu3.1) ...
Be sure that the PID file is there:
Jun 01 00:25:43 siddharth-HP-Notebook systemd[1]: artifactory.service: Can't open PID file /run/artifactory.pid (yet?) >
If it is there, you need to check its permissions, and check your service file to see which PID file path it points to.
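A minimal way to check this, assuming the stock unit path from the OSS package (your paths may differ):
grep PIDFile /lib/systemd/system/artifactory.service   # which PID file does the unit expect?
ls -l /run/artifactory.pid                              # does the file exist, and who owns it?
id artifactory                                          # the chown error above hints this user may not exist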
After installing Java and Jenkins on my CentOS 7 server, I tried to start Jenkins and I am getting the error message below.
Job for jenkins.service failed. See "systemctl status jenkins.service"
and "journalctl -xe" for details.
When I run "systemctl status jenkins.service" to see what the issue is, I get the below output
● jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Thu 2022-08-18 14:23:02 UTC; 20s ago
Process: 8847 ExecStart=/usr/bin/jenkins (code=exited, status=0/SUCCESS)
Main PID: 8847 (code=exited, status=0/SUCCESS)
Aug 18 14:23:02 localhost.localdomain systemd[1]: Failed to start Jenkins Continuous Integration Server.
Aug 18 14:23:02 localhost.localdomain systemd[1]: Unit jenkins.service entered failed state.
Aug 18 14:23:02 localhost.localdomain systemd[1]: jenkins.service failed.
Aug 18 14:23:02 localhost.localdomain systemd[1]: jenkins.service holdoff time over, scheduling restart.
Aug 18 14:23:02 localhost.localdomain systemd[1]: Stopped Jenkins Continuous Integration Server.
Aug 18 14:23:02 localhost.localdomain systemd[1]: start request repeated too quickly for jenkins.service
Aug 18 14:23:02 localhost.localdomain systemd[1]: Failed to start Jenkins Continuous Integration Server.
Aug 18 14:23:02 localhost.localdomain systemd[1]: Unit jenkins.service entered failed state.
Aug 18 14:23:02 localhost.localdomain systemd[1]: jenkins.service failed.
Not quite sure how to fix this. Anybody with a solution? Thanks
Can you please use journalctl -xe for more detailed logs?
Also, can you run Jenkins in interactive mode to see why it's failing to start, like:
java -jar jenkins.war
You can get the command details in the /usr/bin/jenkins file.
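A hedged sketch of that interactive run, assuming the default layout of the CentOS Jenkins package (the war is usually at /usr/lib/jenkins/jenkins.war; check /usr/bin/jenkins for the exact path and Java options on your system):
java -version                                            # confirm a compatible JDK is installed
sudo -u jenkins java -jar /usr/lib/jenkins/jenkins.war   # run in the foreground and watch for startup errors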
I noticed that when launching some Jenkins builds, sometimes the node hosting Jenkins gets stuck forever. It means the whole node is not reachable, and all its pods are down (not ready in the dashboard).
To bring things back up I need to remove it from the cluster and add it again (I'm on GCE, so I need to remove it from the instance group to be able to delete it).
Note: for hours I'm not able to connect to the node through SSH; it is clearly out of service ^^
From my understanding, running out of memory can crash a node, but maxing out the CPU should just slow the server down, not cause something as drastic as what I'm experiencing. In the worst case the kubelet should be unavailable until the CPU load gets better.
Is someone able to help me determine the origin of this issue? What could cause such a problem?
(Screenshots: Node metrics 1, Node metrics 2, Jenkins slave metrics, Node metrics from GCE.)
On the other hand, after waiting for hours I've been able to access the node through SSH, and I ran sudo journalctl -u kubelet to see what's going on. I don't see anything specific around 7pm, but I can see a recurring error like:
Apr 04 19:00:58 nodes-s2-2g5v systemd[43508]: kubelet.service: Failed at step EXEC spawning /home/kubernetes/bin/kubelet: Permission denied
Apr 04 19:00:58 nodes-s2-2g5v systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 04 19:00:58 nodes-s2-2g5v systemd[1]: kubelet.service: Unit entered failed state.
Apr 04 19:00:58 nodes-s2-2g5v systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: Stopped Kubernetes Kubelet Server.
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: Started Kubernetes Kubelet Server.
Apr 04 19:01:00 nodes-s2-2g5v systemd[43511]: kubelet.service: Failed at step EXEC spawning /home/kubernetes/bin/kubelet: Permission denied
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: kubelet.service: Unit entered failed state.
Apr 04 19:01:00 nodes-s2-2g5v systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 04 19:01:02 nodes-s2-2g5v systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Apr 04 19:01:02 nodes-s2-2g5v systemd[1]: Stopped Kubernetes Kubelet Server.
Apr 04 19:01:02 nodes-s2-2g5v systemd[1]: Started Kubernetes Kubelet Server.
Going back to older logs, I found that this kind of message started around 5:30pm:
Apr 04 17:26:50 nodes-s2-2g5v kubelet[1841]: I0404 17:25:05.168402 1841 prober.go:111] Readiness probe for "...
Apr 04 17:26:50 nodes-s2-2g5v kubelet[1841]: I0404 17:25:04.021125 1841 prober.go:111] Readiness probe for "...
-- Reboot --
Apr 04 17:31:31 nodes-s2-2g5v systemd[1]: Started Kubernetes Kubelet Server.
Apr 04 17:31:31 nodes-s2-2g5v systemd[1699]: kubelet.service: Failed at step EXEC spawning /home/kubernetes/bin/kubelet: Permission denied
Apr 04 17:31:31 nodes-s2-2g5v systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 04 17:31:31 nodes-s2-2g5v systemd[1]: kubelet.service: Unit entered failed state.
Apr 04 17:31:31 nodes-s2-2g5v systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 04 17:31:33 nodes-s2-2g5v systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Apr 04 17:31:33 nodes-s2-2g5v systemd[1]: Stopped Kubernetes Kubelet Server.
Apr 04 17:31:33 nodes-s2-2g5v systemd[1]: Started Kubernetes Kubelet Server.
At this time the node's kubelet restarts, and it corresponds to a Jenkins build. There is the same pattern with high CPU usage. I don't know why earlier it just rebooted, yet around 7pm the node just got stuck :/
I'm really sorry, it's a lot of information, but I'm totally lost; it's not the first time this has happened to me ^^
Thank you,
As mentioned by #Brandon, it was related to resource limits applied to my Jenkins slaves.
In my case, even though they were specified in my Helm chart YAML file, the values were not set. I had to go deeper into the UI to set them manually.
With this modification, everything is now stable! :)
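For reference, a hedged sketch of setting those limits from the chart values instead (the release name, repo, and agent.resources keys below are placeholders following the common Jenkins Helm chart layout; they may differ in your chart version):
helm upgrade jenkins jenkins/jenkins \
  --set agent.resources.requests.cpu=500m \
  --set agent.resources.requests.memory=512Mi \
  --set agent.resources.limits.cpu=1 \
  --set agent.resources.limits.memory=1Gi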
I'm trying to run Docker on Ubuntu 16.04 after a system reboot. I created a service for it, /etc/systemd/system/openvpnBOX.service:
[Unit]
Description=Openvpn Docker
[Service]
User=root
ExecStart=/etc/init/openvpn.conf
[Install]
WantedBy=multi-user.target
Alias=openvpnBOX.service
openvpn.conf:
#!/bin/bash
exec docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
When I run this service with sudo service openvpnBOX start, I see that the service runs, but when I reboot my system, after the reboot I see that the service can't start:
"sudo service openvpnBOX status"
● openvpnBOX.service - Openvpn Docker
Loaded: loaded (/etc/systemd/system/openvpnBOX.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2017-10-01 21:35:48 SST; 2min 51s ago
Process: 1771 ExecStart=/etc/init/openvpn.conf (code=exited, status=1/FAILURE)
Main PID: 1771 (code=exited, status=1/FAILURE)
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: Started Openvpn Docker.
Oct 01 21:35:48 openvpn.conf[1771]: Error response from daemon: 404 page not found
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Start request repeated too quickly.
Oct 01 21:35:48 systemd[1]: Failed to start Openvpn Docker.
I can use "sudo docker run --restart=always --volumes-from ovpn-data -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn" but it doesn't solve my problem, because i woud like understand why my service doesn't work after reboot.
Any idea?
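One common cause in setups like this (not confirmed for this particular case) is that the unit declares no dependency on the Docker daemon, so at boot it starts before Docker is ready. A sketch of the ordering directives that would express that dependency in the [Unit] section:
[Unit]
Description=Openvpn Docker
Requires=docker.service
After=docker.service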
I am trying to deploy my new Rails app to an Ubuntu 16.04 DigitalOcean server, where Unicorn is managed via systemd. This is my /etc/systemd/system/unicorn.service file:
[Unit]
Description=Skreem Application
Before=nginx.service
Requires=network.target
[Service]
Type=simple
User=rails
Group=rails
RuntimeDirectory=DigitalOceanOneClick
SyslogIdentifier=DigitalOceanRailsOneClick
# Go paranoid
PrivateTmp=true
PrivateDevices=true
ProtectSystem=full
ProtectKernelTunables=true
NoNewPrivileges=true
WorkingDirectory=/home/rails/skreem-ror
ExecStart=/bin/bash /home/rails/skreem-ror/.unicorn.sh
TimeoutSec=60s
RestartSec=10s
Restart=always
[Install]
WantedBy=multi-user.target
When I try to restart the unicorn service, I get the following error:
Failed to restart unicorn.service: Unit unicorn.service is not loaded properly: Invalid argument.
See system logs and 'systemctl status unicorn.service' for details.
Then I tried systemctl status unicorn.service and got:
Jul 03 10:05:06 skreem-dev2 systemd[1]: unicorn.service: Main process exited, code=exited, status=1/FAILURE
Jul 03 10:05:06 skreem-dev2 systemd[1]: unicorn.service: Unit entered failed state.
Jul 03 10:05:06 skreem-dev2 systemd[1]: unicorn.service: Failed with result 'exit-code'.
Jul 03 10:05:07 skreem-dev2 systemd[1]: [/etc/systemd/system/unicorn.service:18] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
Jul 03 10:05:07 skreem-dev2 systemd[1]: [/etc/systemd/system/unicorn.service:32] Missing '='.
Jul 03 10:05:16 skreem-dev2 systemd[1]: unicorn.service: Service hold-off time over, scheduling restart.
Jul 03 10:05:16 skreem-dev2 systemd[1]: unicorn.service: Failed to schedule restart job: Unit unicorn.service is not loaded properly: Invalid a
Jul 03 10:05:16 skreem-dev2 systemd[1]: unicorn.service: Unit entered failed state.
Jul 03 10:05:16 skreem-dev2 systemd[1]: unicorn.service: Failed with result 'resources'.
Jul 03 11:33:51 skreem-dev2 systemd[1]: Stopped DigitalOcean Rails One-Click Application.
It's not coming from my updated unicorn.service file. Is it because my changes are not loading properly? Please help me solve this issue.
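A hedged way to narrow this down (the "Unknown lvalue 'ProtectKernelTunables'" line suggests the running systemd, 229 on Ubuntu 16.04, predates that directive, which was introduced in systemd 232):
systemd-analyze verify /etc/systemd/system/unicorn.service   # reports syntax problems such as the missing '='
sudo systemctl daemon-reload                                  # make systemd re-read the edited unit
systemctl cat unicorn.service                                 # show the unit file systemd actually loaded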
So I installed docker engine on RHEL 7
Now when I do a
service docker start
I get the following error:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
And when I run "systemctl status docker.service" and "journalctl -xe" I get:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf
Active: failed (Result: exit-code) since Thu 2016-09-08 22:15:53 EDT; 10s ago
Docs: https://docs.docker.com
Process: 13504 ExecStart=/usr/bin/docker daemon -H fd:// --mtu 1400 --exec-opt native.cgroupdriver=systemd (code=exited, status=1/FAILURE)
Main PID: 13504 (code=exited, status=1/FAILURE)
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Starting Docker Application Container Engine...
Sep 08 22:15:53 app-linux2.app-netapp.lab.com docker[13504]: time="2016-09-08T22:15:53.227074798-04:00" level=fatal msg="no sockets found via socket activation: make sure the service ...by systemd"
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Failed to start Docker Application Container Engine.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Unit docker.service entered failed state.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service failed.
And
--
-- The start-up result is done.
Sep 08 22:10:01 app-linux2.app-netapp.lab.com CROND[12753]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Sep 08 22:10:01 app-linux2.app-netapp.lab.com systemd[1]: Starting Session 58 of user root.
-- Subject: Unit session-58.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-58.scope has begun starting up.
Sep 08 22:10:53 app-linux2.app-netapp.lab.com polkitd[766]: Registered Authentication Agent for unix-process:12878:2674931 (system bus name :1.173 [/usr/bin/pkttyagent --notify-fd 5 --fallback], ob
Sep 08 22:10:53 app-linux2.app-netapp.lab.com systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Sep 08 22:10:53 app-linux2.app-netapp.lab.com docker[12895]: time="2016-09-08T22:10:53.413304246-04:00" level=fatal msg="no sockets found via socket activation: make sure the service was started by
Sep 08 22:10:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Sep 08 22:10:53 app-linux2.app-netapp.lab.com systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Sep 08 22:10:53 app-linux2.app-netapp.lab.com systemd[1]: Unit docker.service entered failed state.
Sep 08 22:10:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service failed.
Sep 08 22:10:53 app-linux2.app-netapp.lab.com polkitd[766]: Unregistered Authentication Agent for unix-process:12878:2674931 (system bus name :1.173, object path /org/freedesktop/PolicyKit1/Authent
Sep 08 22:13:36 app-linux2.app-netapp.lab.com polkitd[766]: Registered Authentication Agent for unix-process:13214:2691210 (system bus name :1.174 [/usr/bin/pkttyagent --notify-fd 5 --fallback], ob
Sep 08 22:13:36 app-linux2.app-netapp.lab.com polkitd[766]: Unregistered Authentication Agent for unix-process:13214:2691210 (system bus name :1.174, object path /org/freedesktop/PolicyKit1/Authent
Sep 08 22:15:53 app-linux2.app-netapp.lab.com polkitd[766]: Registered Authentication Agent for unix-process:13489:2704913 (system bus name :1.175 [/usr/bin/pkttyagent --notify-fd 5 --fallback], ob
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com docker[13504]: time="2016-09-08T22:15:53.227074798-04:00" level=fatal msg="no sockets found via socket activation: make sure the service was started by
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: Unit docker.service entered failed state.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com systemd[1]: docker.service failed.
Sep 08 22:15:53 app-linux2.app-netapp.lab.com polkitd[766]: Unregistered Authentication Agent for unix-process:13489:2704913 (system bus name :1.175, object path /org/freedesktop/PolicyKit1/Authent
I tried to search for a solution to this but could not find any.
Just remove the Docker lib directory (note: this deletes all existing images, containers, and volumes) and restart it again with:
sudo rm -rf /var/lib/docker
then
sudo systemctl enable docker
sudo systemctl start docker
Check your OS log files for warning or error messages.
You have probably made a mistake in Docker's config files, and the service hits an error when it starts.
The log's location depends on your OS.
On Linux, system logs are often in:
/var/log/daemon.log
/var/log/docker
/var/log/messages
/var/log/syslog
/var/log/upstart/docker.log
Some useful Linux console commands to inspect Docker logs:
sudo systemctl status docker.service
sudo journalctl -fu docker.service
cat /var/log/daemon.log | grep docker
cat /var/log/messages | grep docker
What version of Docker do you use? If you are not locked to an older one, consider using the most recent version (currently 1.12). Here are my startup options (Debian 8, /etc/systemd/system/docker.service):
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --dns=10.240.116.7 --dns 8.8.8.8 --bip=172.17.42.1/24
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
Also try to start Docker in debug mode (-D) without systemd, just as if it were a regular program. This will help find out why the daemon doesn't start.
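For example (the exact binary depends on the Docker version: releases up to 1.12 use docker daemon, while 1.12 and later also ship dockerd):
sudo docker daemon -D    # older releases
sudo dockerd -D          # 1.12 and later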
To fix the "Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details." problem, this is what worked for me:
create daemon.json in /etc/docker/
put this in it:
{
"exec-root": "/path/to/docker/run",
"storage-driver": "overlay",
"graph": "/path/to/docker/lib"
}
then try: docker daemon
reboot
docker run hello-world should succeed now
There are many reasons for the Docker service failing to run. One that I encountered is using single quotes instead of double quotes for the key-value pairs in the JSON file.
This fails:
sudo tee /etc/docker/daemon.json > /dev/null << '_EOF'
{
'registry-mirrors': ['https://docker.io']
}
_EOF
This works:
sudo tee /etc/docker/daemon.json > /dev/null << '_EOF'
{
"registry-mirrors": ["https://docker.io"]
}
_EOF
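A quick way to catch this kind of mistake is to check that the file parses as JSON before restarting Docker; any JSON parser will do, for example (assuming python3 is installed):
python3 -m json.tool /etc/docker/daemon.json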
I came across the same issue in my Linux VM (virtual machine).
System details: Ubuntu 18.04
I just had to delete my daemon.json and then do a service docker start; this worked for me.
Note: I had put an insecure registry in my daemon.json file and I didn't want that anyway, hence I deleted it. I don't know its usage though.
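For context, a hedged illustration of what such an entry looks like (the registry host below is a made-up placeholder): the insecure-registries key tells the daemon to accept plain-HTTP or self-signed registries, so it should only list registries you trust.
{
  "insecure-registries": ["myregistry.local:5000"]
}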