Not able to change HHVM user from hhvm to nginx on CentOS 7

I can run HHVM with the default hhvm user, but when I try to change the user to nginx, I get an error when checking its status.
$ vim /usr/lib/systemd/system/hhvm.service
I added the following:
[Unit]
Description=HipHop Virtual Machine (FCGI)
[Service]
ExecStart=/usr/local/bin/hhvm -c /etc/hhvm/server.ini -c /etc/hhvm/php.ini --user nginx --mode daemon
[Install]
WantedBy=multi-user.target
In the config file
$ vim /etc/hhvm/server.ini
this is what I have:
pid = /var/run/hhvm/pid
hhvm.server.ip = 127.0.0.1
hhvm.server.port = 9000
;hhvm.server.file_socket = /var/run/hhvm/hhvm.sock
hhvm.server.user = nginx
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.source_root = /srv/www/public_html
hhvm.server.always_use_relative_path = false
hhvm.server.thread_count = 32
hhvm.resource_limit.max_socket = 65536
hhvm.jit = true
hhvm.jit_a_size = 67108864
hhvm.jit_a_stubs_size = 22554432
hhvm.jit_global_data_size = 22554432
hhvm.jit_profile_interp_requests = 3
; mysql
hhvm.mysql.socket = /var/lib/mysql/mysql.sock
hhvm.mysql.typed_results = true
; logging
hhvm.log.use_syslog = false
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
After this I run the following commands:
$ systemctl daemon-reload
$ systemctl start hhvm.service
$ systemctl status hhvm.service
When I check the status I get the following message:
hhvm.service - HipHop Virtual Machine (FCGI)
Loaded: loaded (/usr/lib/systemd/system/hhvm.service; enabled)
Active: failed (Result: exit-code) since Fri 2015-04-10 23:43:00 CEST; 2s ago
Process: 2545 ExecStart=/usr/local/bin/hhvm -c /etc/hhvm/server.ini -c /etc/hhvm/php.ini --user nginx --mode daemon (code=exited, status=1/FAILURE)
Main PID: 2545 (code=exited, status=1/FAILURE)
Apr 10 23:43:00 centos7.local systemd[1]: Starting HipHop Virtual Machine (FCGI)...
Apr 10 23:43:00 centos7.local systemd[1]: Started HipHop Virtual Machine (FCGI).
Apr 10 23:43:00 centos7.local systemd[1]: hhvm.service: main process exited, code=exited, status=1/FAILURE
Apr 10 23:43:00 centos7.local systemd[1]: Unit hhvm.service entered failed state.
Also, I have to create this folder manually every time I reboot:
$ mkdir /var/run/hhvm
Why does it get deleted automatically?

Can you start it in daemon mode normally? Just execute the command you see in the service file, the ExecStart one.
As for the missing hhvm folder: you could create the directory before the service starts using ExecStartPre, or you could change the location of the pid file to sit directly in /var/run. Note that /var/run is a tmpfs on CentOS 7, so anything created there is gone after a reboot; that is why the directory keeps disappearing.
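For example, a minimal sketch of the [Service] section with ExecStartPre added (the chown line is an assumption, only needed if the nginx user must write the pid file); on CentOS 7's systemd you could alternatively use RuntimeDirectory=hhvm:
[Service]
# /var/run is a tmpfs, so recreate the runtime dir on every start
ExecStartPre=/usr/bin/mkdir -p /var/run/hhvm
ExecStartPre=/usr/bin/chown nginx:nginx /var/run/hhvm
ExecStart=/usr/local/bin/hhvm -c /etc/hhvm/server.ini -c /etc/hhvm/php.ini --user nginx --mode daemon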
@limilaw I think you can change the user in the service file, but be careful as you can mess things up.

Related

Cannot start docker with systemctl

I've just experienced this problem: I cannot start or restart docker (docker.service) with the systemctl command. I've just configured TCP socket listening in the /etc/docker/daemon.json file, and everything works when I run the docker daemon manually, but every time I try to manage the service with systemctl I get the error below.
Ubuntu 22.04 LTS
Docker version 20.10.18, build b40c2f6
systemd 249 (249.11-0ubuntu3.4)
/etc/docker/daemon.json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://localhost:2375"],
  "dns": ["8.8.4.4", "8.8.8.8"]
}
# does not work
sudo systemctl restart docker
> Job for docker.service failed because the control process exited with error code.
> systemd[1]: docker.service: Start request repeated too quickly.
> systemd[1]: docker.service: Failed with result 'exit-code'.
> systemd[1]: Failed to start Docker Application Container Engine.
unable to configure the Docker daemon with file /etc/docker/daemon.json: the >
set 22 17:38:26 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
set 22 17:38:26 systemd[1]: docker.service: Failed with result 'exit-code'.
set 22 17:38:26 systemd[1]: Failed to start Docker Application Container Engine.
set 22 17:38:28 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
set 22 17:38:28 systemd[1]: Stopped Docker Application Container Engine.
set 22 17:38:28 systemd[1]: Starting Docker Application Container Engine...
# works
sudo dockerd
sudo dockerd --debug
The issue could be that in /usr/lib/systemd/system/docker.service there is this line (note the fd://):
ExecStart=/usr/bin/dockerd -H fd://
error:
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [fd://], from file: [unix:///var/run/docker.sock tcp://localhost:2375])
[RESOLVED] Exactly: if you configure the /etc/docker/daemon.json configuration file, follow the docs:
https://docs.docker.com/engine/reference/commandline/dockerd/
If you configure the hosts field there, change the ExecStart line in the docker.service file (/usr/lib/systemd/system/docker.service):
From:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
To:
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
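A less invasive alternative (a sketch; the override.conf name is just a convention, not from the original answer) is a systemd drop-in, so the vendor unit file is left untouched and survives package upgrades:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
# The empty line clears the vendor ExecStart; the second starts dockerd without -H fd://
ExecStart=
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker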

docker.socket: Failed with result 'service-start-limit-hit' after protecting docker daemon socket

I followed the steps provided in the documentation here to add TLS security for the Docker API. Certificates are located in the ~/.docker/ and /etc/docker/ssl/ folders. I added override.conf to /etc/systemd/system/docker.service.d/ with this content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem
Then I ran daemon-reload and started docker:
$ systemctl daemon-reload
$ service docker start
The errors in journalctl -xe are:
-- Unit docker.socket has finished starting up.
--
-- The start-up result is RESULT.
Jan 15 21:43:24 cynicalplyaground systemd[1]: docker.service: Start request repeated too quickly.
Jan 15 21:43:24 cynicalplyaground systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 15 21:43:24 cynicalplyaground systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit docker.service has failed.
--
-- The result is RESULT.
Jan 15 21:43:24 cynicalplyaground systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
Jan 15 21:45:01 cynicalplyaground CRON[12768]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 15 21:45:01 cynicalplyaground CRON[12769]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jan 15 21:45:01 cynicalplyaground CRON[12768]: pam_unix(cron:session): session closed for user root
How can I sort out this issue?
In my case the same error occurred after the latest Manjaro update (2020-01-20).
I tried changing the systemd docker service, as advised in other cases, but I reverted those changes, and finally this was solved with a reboot of the system
(as advised here: https://www.reddit.com/r/archlinux/comments/7ya4ug/installing_docker_on_arch_linux/).
Getting to the root of the problem:
systemctl status docker.service
has this:
/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Trying to run that command, it complains about
unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF
ls -l /etc/docker/daemon.json
-rw-r--r-- 1 root root 0 Jul 30 10:32 /etc/docker/daemon.json
NOTE that the JSON file is empty. Delete it.
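A sketch of the cleanup, assuming nothing else is supposed to live in that file:
sudo rm /etc/docker/daemon.json
sudo systemctl restart docker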
For me it was because the Docker installer uses iptables for NAT, while Debian now uses nftables by default. You can convert the entries over to nftables or just set Debian up to use the legacy iptables:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
dockerd should start fine after switching to iptables-legacy.
I had the same issue; I just changed "/usr/bin/dockerd" to "/usr/sbin/dockerd" and then it worked.
You can check the actual dockerd path first.
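For example (the resulting path varies by distro; /usr/sbin/dockerd was this answer's case):
$ command -v dockerd
/usr/sbin/dockerd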
In my case the host was part of a Docker swarm, but its IPv6 address was no longer reachable or automatically assigned to the host.
I manually added the old IPv6 address:
ip -6 address add 28xx:xxxx:x:x:xx:ebff:fe14:xxx dev ens3x
journalctl -u docker.service mentioned:
level=fatal msg="Error starting cluster component: could not find local IP address: dial udp [2xxx:xxx:xxxx:xxx]:2377: connect: network is unreachable"
After manually adding the IPv6 address I was able to start docker, so with docker running I left the swarm and rebooted:
docker swarm leave --force
After the reboot the docker services ran as usual.
For me it was missing disk space. A reboot also helped, but I was still not able to build any container.
After pruning some outdated stuff from the docker volumes I was able to continue.
I faced a similar issue on Ubuntu because I had added the hosts option to the /etc/docker/daemon.json file. That's OK, but on systems that use systemd it may conflict with the arguments passed to dockerd on start.
The solution was to delete the hosts entry from /etc/docker/daemon.json and set this config in the file /etc/systemd/system/docker.service.d/options.conf.
$ cat /etc/systemd/system/docker.service.d/options.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix://
After that, restart the service.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
You can check that your changes have been applied by running docker info. Also, note in the docker service status that the Drop-In field points to the options.conf you created, and that dockerd was executed with the specified host list.
$ systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset>
Drop-In: /etc/systemd/system/docker.service.d
└─options.conf
Active: active (running) since Fri 2022-11-18 01:02:18 EST; 1h 50min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 1111 (dockerd)
Tasks: 18
Memory: 58.5M
CPU: 1.294s
CGroup: /system.slice/docker.service
└─1111 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix://
References:
Daemon configuration file
Control Docker with systemd
I had a similar issue on NixOS installed on a btrfs filesystem.
For me the solution was to add virtualisation.docker.storageDriver = "btrfs"; to my /etc/nixos/configuration.nix.
According to the Docker docs, this should equate to adding the following to /etc/docker/daemon.json on most other distros:
{
  "storage-driver": "btrfs"
}
I was able to solve the problem by disabling firewalld:
systemctl disable firewalld
systemctl stop firewalld

Start Docker daemon as another user?

We installed Docker 17.x on RHEL and are getting the exception below.
-bash-4.2$ docker version
Client:
Version: 17.09.1-ce
API version: 1.32
Go version: go1.8.3
Git commit: 19e2cf6
Built: Thu Dec 7 22:23:40 2017
OS/Arch: linux/amd64
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/version: dial unix /var/run/docker.sock: connect: permission denied
-bash-4.2$
To solve this, we introduced another user group (docker-user) and added all the users to this group. After that we ran these commands and were able to run docker:
sudo systemctl stop docker
sudo systemctl start docker
cd /var/run
sudo chown :docker-user docker.sock
But we are facing another issue: whenever the VM is restarted, Docker is not running. So we decided to set Docker up to run as a daemon process, following the steps below.
1. Create a docker.conf file under the /etc/systemd/system/docker.service.d folder.
2. Add this entry to the docker.conf file:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecStartPost=/bin/chown :docker-user /var/run/docker.sock
After adding this entry we ran:
1. sudo systemctl daemon-reload
2. sudo systemctl stop docker
3. sudo systemctl start docker
We are getting the exception below:
-bash-4.2$ sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf
Active: failed (Result: start-limit) since Wed 2018-03-28 09:10:50 PDT; 12s ago
Docs: https://docs.docker.com
Process: 23395 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 23395 (code=exited, status=1/FAILURE)
Mar 28 09:10:50 hostname systemd[1]: Failed to start Docker Application Container Engine.
Mar 28 09:10:50 hostname systemd[1]: Unit docker.service entered failed state.
Mar 28 09:10:50 hostname systemd[1]: docker.service failed.
Mar 28 09:10:50 hostname systemd[1]: docker.service holdoff time over, scheduling restart.
Mar 28 09:10:50 hostname systemd[1]: start request repeated too quickly for docker.service
Mar 28 09:10:50 hostname systemd[1]: Failed to start Docker Application Container Engine.
Mar 28 09:10:50 hostname systemd[1]: Unit docker.service entered failed state.
Mar 28 09:10:50 hostname systemd[1]: docker.service failed.
Please guide me on how to set up Docker as a daemon process.
So, you have already dug in pretty deep.
However, this behavior is built into Docker. For user groups, the docker daemon will allow users in the docker group to access the server (important to remember: this is exactly the same as giving root access to any user in that group!). If you want to specify a different group, you can start the daemon with the -G option.
Installing docker also installs a systemd service unit to run the daemon. The correct way to enable it (so that it starts automatically at boot) is:
sudo systemctl enable docker
At this point, you didn't include enough of the journalctl logs to actually say why the daemon is not starting for you. If it is an option, I would try starting over, since I don't know everything you have changed; if that is not an option, the journalctl logs will likely explain the problem (probably that the user 'docker' no longer has access to the socket after you chowned it, but that is just a guess).
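For example, a minimal sketch of the drop-in rewritten to use -G instead of the ExecStartPost chown (dockerd is the daemon binary in 17.x; the host flags are carried over from the question):
[Service]
ExecStart=
# -G makes dockerd create /var/run/docker.sock owned by group docker-user,
# so the ExecStartPost chown is no longer needed
ExecStart=/usr/bin/dockerd -G docker-user -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
Then run sudo systemctl daemon-reload and sudo systemctl restart docker as before.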

Failed to start Docker Application Container Engine

I am new to Docker, so I don't have much of an idea about it.
I tried restarting the Docker service using the command
service docker restart
As the command was taking too much time, I hit Ctrl+C.
Now I am not able to start the Docker daemon.
Any docker command gives the following output:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
I tried starting the Docker daemon using
systemctl start docker
But it outputs:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Output of the command systemctl status docker.service:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf, http-proxy.conf, https-proxy.conf
Active: failed (Result: exit-code) since Mon 2018-03-05 17:17:54 IST; 2min 23s ago
Docs: https://docs.docker.com
Process: 11331 ExecStart=/usr/bin/dockerd --graph=/app/dockerRT (code=exited, status=1/FAILURE)
Main PID: 11331 (code=exited, status=1/FAILURE)
Memory: 76.9M
CGroup: /system.slice/docker.service
└─4593 docker-containerd-shim 3bda33eac892d14adda9f3b1fc8dc52173e26ce60ca949075227d903399c7517 /var/run/docker/libcontainerd/3bda33eac892d14adda9f3b1fc8dc52173e26c...
Mar 05 17:17:05 hj-fsbfsd9761.persistent.co.in systemd[1]: Starting Docker Application Container Engine...
Mar 05 17:17:05 hj-fsbfsd9761.persistent.co.in dockerd[11331]: time="2018-03-05T17:17:05.126009059+05:30" level=info msg="libcontainerd: new containerd process, pid: 11337"
Mar 05 17:17:06 hj-fsbfsd9761.persistent.co.in dockerd[11331]: time="2018-03-05T17:17:06.346599571+05:30" level=warning msg="devmapper: Usage of loopback devices is ...section."
Mar 05 17:17:10 hj-fsbfsd9761.persistent.co.in dockerd[11331]: time="2018-03-05T17:17:10.889378989+05:30" level=warning msg="devmapper: Base device already exists an...ignored."
Mar 05 17:17:10 hj-fsbfsd9761.persistent.co.in dockerd[11331]: time="2018-03-05T17:17:10.976695025+05:30" level=info msg="[graphdriver] using prior storage driver \"...mapper\""
Mar 05 17:17:54 hj-fsbfsd9761.persistent.co.in dockerd[11331]: time="2018-03-05T17:17:54.312812069+05:30" level=fatal msg="Error starting daemon: timeout"
Mar 05 17:17:54 hj-fsbfsd9761.persistent.co.in systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 05 17:17:54 hj-fsbfsd9761.persistent.co.in systemd[1]: Failed to start Docker Application Container Engine.
Mar 05 17:17:54 hj-fsbfsd9761.persistent.co.in systemd[1]: Unit docker.service entered failed state.
Mar 05 17:17:54 hj-fsbfsd9761.persistent.co.in systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
journalctl -xe
loop: Write error at byte offset 63585648640, length 4096.
How would I be able to start Docker without losing any containers and using previous configurations?
I had the same issue (Fedora 30 x86_64, kernel 5.2.9) and it turned out that being connected to a VPN was the problem. Apparently a changed gateway address causes an "Error initializing network controller" error, which I was able to see when I tried starting docker via sudo dockerd instead of sudo systemctl start docker.
I found the note here about the VPN being a possible problem; disconnecting immediately allowed me to start docker with systemctl start docker.
"Failed to start Docker Application Container Engine" is a general error message.
You should inspect journal for more details:
journalctl -eu docker
In my case it was: "error initializing graphdriver: /var/lib/docker contains several valid graphdrivers: devicemapper, overlay2"
Changing graphdriver to overlay2, fixed it:
$ sudo systemctl stop docker
$ vi /etc/docker/daemon.json # Create the file if it does not exist, and add:
{
  "storage-driver": "overlay2"
}
$ sudo systemctl start docker
$ systemctl status docker.service # Hopefully it's running now
I removed the file /etc/docker/daemon.json, started docker with sudo systemctl start docker, and it worked!
For the googlers:
For me what worked was doing a killall dockerd and a sudo rm /var/run/docker.pid.
I got the error message recommending that from running dockerd directly.
I ran into this too, while on a VPN. All I did after receiving the error was run sudo systemctl start docker again, then sudo systemctl status docker, and that took care of it.
You may have an issue with the current docker installation.
If you don't have much set up already, you may want to reinstall using the install script provided by Docker: https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-convenience-script. This will also help you investigate the errors.
I faced the same problem; in my case the disk was full. I removed docker volumes from /var/lib/docker/volumes/ and started docker with sudo systemctl start docker.

Docker containers shut down after systemd start

For some reason, when using systemd unit files, my docker containers start but get shut down instantly. I have tried finding logs but cannot see any indication of why this is happening. Does someone know how to solve this, or how to find the logs that show what is happening?
Note: when starting the containers manually after boot with docker start containername, it works (also when using systemctl start nginx).
After some more digging I found this error: could not find udev device: No such device. Could it have something to do with this?
Unit Service file:
[Unit]
Description=nginx-container
Requires=docker.service
After=docker.service
[Service]
Restart=always
RestartSec=2
StartLimitInterval=3600
StartLimitBurst=5
TimeoutStartSec=5
ExecStartPre=-/usr/bin/docker kill nginx
ExecStartPre=-/usr/bin/docker rm nginx
ExecStart=/usr/bin/docker run -i -d -t --restart=no --name nginx -p 80:80 -v /projects/frontend/data/nginx/:/var/www -v /projects/frontend:/nginx nginx
ExecStop=/usr/bin/docker stop -t 2 nginx
[Install]
WantedBy=multi-user.target
Journalctl output:
May 28 11:18:15 frontend dockerd[462]: time="2015-05-28T11:18:15Z" level=info msg="-job start(d757f83d4a13f876140ae008da943e8c5c3a0765c1fe5bc4a4e2599b70c30626) = OK (0)"
May 28 11:18:15 frontend dockerd[462]: time="2015-05-28T11:18:15Z" level=info msg="POST /v1.18/containers/nginx/stop?t=2"
May 28 11:18:15 frontend dockerd[462]: time="2015-05-28T11:18:15Z" level=info msg="+job stop(nginx)"
Docker logs: empty (docker logs nginx)
Systemctl output: (systemctl status nginx, nginx.service)
● nginx.service - nginx-container
Loaded: loaded (/etc/systemd/system/multi-user.target.wants/nginx.service)
Active: failed (Result: start-limit) since Thu 2015-05-28 11:18:20 UTC; 12min ago
Process: 3378 ExecStop=/usr/bin/docker stop -t 2 nginx (code=exited, status=0/SUCCESS)
Process: 3281 ExecStart=/usr/bin/docker run -i -d -t --restart=no --name nginx -p 80:80 -v /projects/frontend/data/nginx/:/var/www -v /projects/frontend:/nginx (code=exited, status=0/SUCCESS)
Process: 3258 ExecStartPre=/usr/bin/docker rm nginx (code=exited, status=0/SUCCESS)
Process: 3246 ExecStartPre=/usr/bin/docker kill nginx (code=exited, status=0/SUCCESS)
Main PID: 3281 (code=exited, status=0/SUCCESS)
May 28 11:18:20 frontend systemd[1]: nginx.service holdoff time over, scheduling restart.
May 28 11:18:20 frontend systemd[1]: start request repeated too quickly for nginx.service
May 28 11:18:20 frontend systemd[1]: Failed to start nginx-container.
May 28 11:18:20 frontend systemd[1]: Unit nginx.service entered failed state.
May 28 11:18:20 frontend systemd[1]: nginx.service failed.
Because you have not specified a Type in your systemd unit file, systemd is using the default, simple. From systemd.service:
If set to simple (the default if neither Type= nor BusName=, but
ExecStart= are specified), it is expected that the process
configured with ExecStart= is the main process of the service.
This means that if the process started by ExecStart exits, systemd
will assume your service has exited and will clean everything up.
Because you are running the docker client with -d, it exits
immediately...thus, systemd cleans up the service.
Typically, when starting containers with systemd, you would not use
the -d flag. This means that the client will continue running, and
will allow systemd to collect any output produced by your application.
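For example, a minimal sketch of the corrected ExecStart (the rest of the unit stays as in the question; only -d and -t are dropped so the docker client stays in the foreground and systemd can track it):
ExecStart=/usr/bin/docker run -i --name nginx -p 80:80 -v /projects/frontend/data/nginx/:/var/www -v /projects/frontend:/nginx nginx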
That said, there are fundamental problems in starting Docker containers with systemd. Because of the way Docker operates, there really is no way for systemd to monitor the status of your container. All it can really do is track the status of the docker client, which is not the same thing (the client can exit/crash/etc without impacting your container). This isn't just relevant to systemd; any sort of process supervisor (upstart, runit, supervisor, etc) will have the same problem.
