Cannot start memcached - ruby-on-rails

I cannot get memcached to run on my server.
This is what I tried so far:
% sudo systemctl start memcached # no output
% sudo systemctl status memcached.service
● memcached.service - memcached daemon
Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-02-16 17:45:09 CET; 4s ago
Process: 22725 ExecStart=/usr/share/memcached/scripts/systemd-memcached-wrapper /etc/memcached.conf (code=exited, status=71)
Main PID: 22725 (code=exited, status=71)
systemd[1]: Started memcached daemon.
systemd-memcached-wrapper[22725]: bind(): Cannot assign requested address
systemd-memcached-wrapper[22725]: failed to listen on TCP port 11211: Cannot assign requested address
systemd[1]: memcached.service: Main process exited, code=exited, status=71/n/a
systemd[1]: memcached.service: Unit entered failed state.
systemd[1]: memcached.service: Failed with result 'exit-code'.
I am running Ubuntu 16.04.6 LTS
How can I start my memcached service?

Have a look at /etc/memcached.conf; there may be a line like
-l xxx.xx.xx.xx
The bind() error means memcached is trying to listen on an address that is not assigned to any interface on this machine.
If you are trying to connect via localhost, just comment out that line.
If you are trying to connect from somewhere else, check that the IP address is correct.
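For example, assuming the stock Ubuntu config (the address below is only an illustration), the relevant part of /etc/memcached.conf would look like this after the change:
# -l 203.0.113.10    <- previous listen address, commented out (or corrected)
-l 127.0.0.1
Then restart the service and check that it is running:
% sudo systemctl restart memcached
% sudo systemctl status memcached    # should now report "active (running)"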

Related

Docker - error after moving storage to a second disk and using overlay2

I just moved the Docker default storage location to a second disk by setting up /etc/docker/daemon.json as described in the documentation; so far so good.
The problem is that now I keep getting a bunch of volumes being continuously (re)mounted, and obviously it is really annoying.
So I tried to set up overlay2 in /etc/docker/daemon.json as well, but now Docker doesn't even start:
# sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
# systemctl status docker.service
× docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-15 11:06:36 CET; 10s ago
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 17614 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status>
Main PID: 17614 (code=exited, status=1/FAILURE)
CPU: 54ms
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Stopped Docker Application Container Engine.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Start request repeated too quickly.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Failed with result 'exit-code'.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Failed to start Docker Application Container Engine.
So, for now I have given up on using overlay2, since having all the Docker images and containers on the second disk is more important than getting rid of a bunch of volumes being mounted continuously. But can anyone tell me where the problem is and whether there is a solution?
Update #1: strange permission behaviour
I have a simple docker-compose.yml with a WordPress service (official WP image) and a database service. When the Docker storage location is on the second disk instead of the default one, the database (volume, maybe?) seems inaccessible:
WordPress keeps reporting a database connection error
running mysql interactively from the db service fails to log in as the root user
Obviously this is related to the Docker storage location, but I cannot find out why, since the new location is created by Docker itself when it starts.
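For context, a /etc/docker/daemon.json that combines a custom storage location with the overlay2 driver typically looks like this (the data-root path here is an assumption for illustration, not necessarily the poster's actual path):
{
  "data-root": "/mnt/second-disk/docker",
  "storage-driver": "overlay2"
}
Both keys are documented daemon options; the daemon must be restarted (sudo systemctl restart docker) for changes to take effect.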

How to get Docker to "Active: running" on Ubuntu when the status is "Active: failed (Result: exit-code)"

After migrating the cloud server between two data centers, my Docker doesn't work correctly. I can't see my containers and images, and I receive the error below:
ubuntu@ubuntu-servername-server:~$ sudo docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
When I checked the status of Docker with "systemctl status docker", I received an "Active: failed" error.
ubuntu@ubuntu-gardooon-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-09-10 16:29:10 UTC; 2 days ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 35714 (code=exited, status=1/FAILURE)
Sep 10 16:29:08 ubuntu-gardooon-server systemd[1]: docker.service: Main process exited, code=exited, statu>
Sep 10 16:29:08 ubuntu-gardooon-server systemd[1]: docker.service: Failed with result 'exit-code'.
Sep 10 16:29:08 ubuntu-gardooon-server systemd[1]: Failed to start Docker Application Container Engine.
Sep 10 16:29:10 ubuntu-gardooon-server systemd[1]: docker.service: Scheduled restart job, restart counter >
Sep 10 16:29:10 ubuntu-gardooon-server systemd[1]: Stopped Docker Application Container Engine.
Sep 10 16:29:10 ubuntu-gardooon-server systemd[1]: docker.service: Start request repeated too quickly.
Sep 10 16:29:10 ubuntu-gardooon-server systemd[1]: docker.service: Failed with result 'exit-code'.
Sep 10 16:29:10 ubuntu-gardooon-server systemd[1]: Failed to start Docker Application Container Engine.
Docker and docker-compose are installed on my server; their versions are:
ubuntu@ubuntu-gardooon-server:~$ docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2
ubuntu@ubuntu-gardooon-server:~$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c
After I saw these errors I tried to look at the docker folder in /var/lib/ on Ubuntu 20.04 and I couldn't open it, so after a few tries I deleted the folder by mistake.
Now, please help me find out how I can get Docker running again and, if possible, recover my containers and images. If that is not possible, please let me know how I can rebuild my Docker setup.
Thank you
.........................
I tried to reinstall Docker with the command apt --reinstall install docker, but I received the message below:
ubuntu@ubuntu-gardooon-server:~$ sudo apt install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version (1.5-2).
The following packages were automatically installed and are no longer required:
fontconfig-config fonts-dejavu-core libfontconfig1 libgd3 libjbig0
libjpeg-turbo8 libjpeg8 libtiff5 libwebp6 libxpm4
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 122 not upgraded.
After that I checked the Docker status again, but it is still failed:
ubuntu@ubuntu-gardooon-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset>
Active: failed (Result: exit-code) since Sat 2022-09-10 16:29:10 UTC; 4 da>
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 35714 (code=exited, status=1/FAILURE)
Eventually I reinstalled Docker completely and the problem was solved, but all images and containers were removed.
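For reference, the package called docker in the Ubuntu archive (version 1.5-2 above) is an unrelated system-tray utility; Docker Engine is shipped as docker.io (or as docker-ce from Docker's own repository). A clean reinstall of the distribution package on Ubuntu 20.04 would look roughly like this (a sketch, not the exact commands used above; the rm -rf wipes any remaining images, containers and volumes):
$ sudo apt purge docker.io
$ sudo rm -rf /var/lib/docker /var/lib/containerd
$ sudo apt update
$ sudo apt install docker.io
$ sudo systemctl enable --now docker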

How to fix docker storage-driver=overlay2 problem

I need to change the underlying storage for a Proxmox LXC Debian Buster container from RAW to ZFS. For this I restored a snapshot to ZFS storage. This is normally transparent for the OS in the container, but in this case docker no longer starts.
The initial problem was that Docker wasn't starting, and after some digging around I found this:
# dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
INFO[2021-08-03T09:24:40.909844803Z] Starting up
...
ERRO[2021-08-03T09:24:56.914420548Z] failed to mount overlay: invalid argument storage-driver=overlay2
ERRO[2021-08-03T09:24:56.914439880Z] [graphdriver] prior storage driver overlay2 failed: driver not supported
failed to start daemon: error initializing graphdriver: driver not supported
How can I fix this?
EDIT:
I tried the suggested fix, but still no cigar:
root@mail:/var/log# systemctl status docker.service
* docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-10-09 10:05:49 UTC; 1min 23s ago
Docs: https://docs.docker.com
Process: 236 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 236 (code=exited, status=1/FAILURE)
Oct 09 10:05:49 mail systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Oct 09 10:05:49 mail systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 09 10:05:49 mail systemd[1]: Stopped Docker Application Container Engine.
Oct 09 10:05:49 mail systemd[1]: docker.service: Start request repeated too quickly.
Oct 09 10:05:49 mail systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 09 10:05:49 mail systemd[1]: Failed to start Docker Application Container Engine.
The linked page suggests creating a new zpool inside the container. It seems a bit of overkill for that to be necessary, no?
Configure Docker to use zfs. Edit /etc/docker/daemon.json and set the storage-driver to zfs. If the file was empty before, it should now look like this:
{
"storage-driver": "zfs"
}
more details: https://docs.docker.com/storage/storagedriver/zfs-driver/
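After editing the file, restart the daemon and confirm the active driver (a minimal check, assuming the daemon now starts cleanly):
# systemctl restart docker
# docker info | grep "Storage Driver"    # should print: Storage Driver: zfs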

Does dockerd support WatchdogSec sd_notify health checks?

We've been having issues where the Docker daemon will occasionally stop responding on one of our Kubernetes systems, but Systemd still thinks the service is running:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-04-15 20:40:57 UTC; 3 months 22 days ago
Docs: https://docs.docker.com
Main PID: 1281 (dockerd)
Tasks: 1409
Memory: 31.0G
CPU: 5d 17h 3min 4.758s
CGroup: /system.slice/docker.service
├─ 1281 /usr/bin/dockerd -H fd://
...
There isn't anything in the journalctl -u docker or syslog files to indicate what the issue is, but the Docker daemon no longer responds to requests (docker ps just hangs). We are currently using the 17.03.2~ce-0~ubuntu-xenial package for Ubuntu 16.04, which has the following service unit:
cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I noticed that even though it is a Type=notify service, there isn't a WatchdogSec= defined in the service unit.
Does the Docker daemon support setting a watchdog timeout for sd_notify based health checks?
No, currently the components/engine/cmd/dockerd/daemon_linux.go file only implements systemdDaemon.SdNotifyReady to notify Systemd when the process has started. For watchdog support it would have to use something like SdWatchdogEnabled to continually send SdNotifyWatchdog = "WATCHDOG=1" notifications.
If you try and set WatchdogSec=60s on the docker.service file it will kill and restart the service because the daemon doesn't send the required notifications.
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-08 02:09:52 UTC; 50s ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: deactivating (stop-sigabrt) (Result: watchdog) since Thu 2019-08-08 02:10:02 UTC; 45ms ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: activating (start) since Thu 2019-08-08 02:10:04 UTC; 777ms ago
# Log entries:
Aug 08 02:09:14 kam1 systemd[1]: Starting Docker Application Container Engine...
Aug 08 02:09:15 kam1 systemd[1]: Started Docker Application Container Engine.
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Watchdog timeout (limit 60s)!
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Killing process 12383 (dockerd) with signal SIGABRT.
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Failed with result 'watchdog'.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 08 02:10:18 kam1 systemd[1]: Stopped Docker Application Container Engine.
Aug 08 02:10:18 kam1 systemd[1]: Starting Docker Application Container Engine...
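For reference, the WatchdogSec= experiment above can be reproduced with a systemd drop-in instead of editing the packaged unit file (a sketch, not part of the original test):
systemctl edit docker.service
# in the override file that opens, add:
[Service]
WatchdogSec=60s
# systemctl edit reloads unit files automatically when the editor exits; then restart to arm the watchdog:
systemctl restart docker.service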

Warning: Stopping docker.service, but it can still be activated by: docker.socket

I've reinstalled Docker. When I try to start Docker, everything is fine:
# /etc/init.d/docker start
[ ok ] Starting docker (via systemctl): docker.service.
until I stop the Docker service and restart it several times:
# /etc/init.d/docker stop
[....] Stopping docker (via systemctl): docker.serviceWarning: Stopping docker.service, but it can still be activated by:
docker.socket
. ok
Finally, I get this error:
# /etc/init.d/docker start
[....] Starting docker (via systemctl): docker.serviceJob for docker.service failed.
See "systemctl status docker.service" and "journalctl -xe" for details.
failed!
# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Sat 2017-11-25 20:04:20 CET; 2min 4s ago
Docs: https://docs.docker.com
Process: 12845 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=0/SUCCESS)
Main PID: 12845 (code=exited, status=0/SUCCESS)
CPU: 326ms
Nov 25 20:04:18 example.com systemd[1]: Started Docker Application Container Engine.
Nov 25 20:04:18 example.com dockerd[12845]: time="2017-11-25T20:04:18.191949863+01:00" level=inf
Nov 25 20:04:19 example.com systemd[1]: Stopping Docker Application Container Engine...
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.368990531+01:00" level=inf
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.37953454+01:00" level=info
Nov 25 20:04:20 example.com systemd[1]: Stopped Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Start request repeated too quickly.
Nov 25 20:04:21 example.com systemd[1]: Failed to start Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Unit entered failed state.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Failed with result 'start-limit-hit'.
I've installed Docker on Debian 9 Stretch.
Can anyone help me get rid of this warning and resolve the error "Failed with result 'start-limit-hit'"?
Simply stop the socket as well, since Docker is triggered by the socket:
sudo systemctl stop docker.socket
This is because, in addition to the docker.service unit file, there is a docker.socket unit file used for socket activation. The warning means that if you try to connect to the Docker socket while the Docker service is not running, systemd will automatically start Docker for you.
You can get rid of this by removing /lib/systemd/system/docker.socket... you may also need to remove -H fd:// from the docker.service unit file.
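For example, to stop Docker without the socket re-activating it, and to clear the start-limit state before starting it again, the standard systemd commands would be (a sketch, not part of the original answer):
sudo systemctl stop docker.socket docker.service
sudo systemctl reset-failed docker.service
sudo systemctl start docker.service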
