Trouble starting the redis server - ruby-on-rails

I am using Rails and want to run Sidekiq, and running Sidekiq requires a Redis server to be installed. I installed Redis on my KDE Neon machine by following a DigitalOcean tutorial. Here is the error that is displayed when I run sudo systemctl status redis:
redis.service - Redis In-Memory Data Store
Loaded: loaded (/etc/systemd/system/redis.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-03-24 17:24:12 IST; 6s ago
Process: 47334 ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf (code=exited, status=203/EXEC)
Main PID: 47334 (code=exited, status=203/EXEC)
Mar 24 17:24:12 maxagno3 systemd[1]: redis.service: Scheduled restart job, restart counter is at 5.
Mar 24 17:24:12 maxagno3 systemd[1]: Stopped Redis In-Memory Data Store.
Mar 24 17:24:12 maxagno3 systemd[1]: redis.service: Start request repeated too quickly.
Mar 24 17:24:12 maxagno3 systemd[1]: redis.service: Failed with result 'exit-code'.
Mar 24 17:24:12 maxagno3 systemd[1]: Failed to start Redis In-Memory Data Store.
Using redis-cli works fine. I assume that when I first ran sudo systemctl disable redis it deleted the redis.service file. Since then I have uninstalled and reinstalled Redis, but the error persists.
Quick help would be greatly appreciated.

The error you're seeing is hiding the original error. Redis is stuck in a restart loop, which is what the "Start request repeated too quickly" message alludes to. What you need to do is disable the restart functionality to get at the underlying problem.
This can be done by doing the following:
Edit the /etc/systemd/system/redis.service
Edit the restart line to Restart=no. The possible options are no, on-success, on-failure, on-abnormal, on-watchdog, on-abort or always.
Edit the start limit interval to StartLimitInterval=0. This limit normally exists to stop a constantly restarting service from spiking the load; setting it to 0 disables the rate limiting.
Lastly, reload your services for the changes to take effect by running systemctl daemon-reload.
Once the service stops looping, you can start it manually to get the actual error. If the output is too long, you can grep your OS's general log for Redis, or run journalctl: journalctl -u redis.service
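For reference, a minimal sketch of what the edited [Service] section of /etc/systemd/system/redis.service might look like while debugging (the ExecStart line is taken from the status output above; other directives will vary with your install, and on newer systemd versions the rate limit is spelled StartLimitIntervalSec= and lives in the [Unit] section):
[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
Restart=no
StartLimitInterval=0
Then reload and start the service by hand to surface the real error:
sudo systemctl daemon-reload
sudo systemctl start redis
journalctl -u redis.service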
Hopefully this helps!

If you want a clean and repeatable approach, I suggest you always use Docker, especially for a dev environment.
Starting Redis with Docker is as simple as:
docker run -d -p 6379:6379 --name my-redis redis
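To sanity-check that the container is reachable, you can ping it through the containerized CLI (so no host-side redis-cli is needed):
docker exec -it my-redis redis-cli ping
A healthy instance replies with PONG, and Sidekiq can then be pointed at localhost:6379.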

Related

Docker - error after moving storage to a second disk and using overlay2

I just moved Docker's default storage location to a second disk by setting up /etc/docker/daemon.json as described in the documentation; so far so good.
The problem is that now I keep getting a bunch of volumes being continuously (re)mounted, and obviously it is really annoying.
So I tried to set up overlay2 in /etc/docker/daemon.json, but now Docker doesn't even start:
# sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
# systemctl status docker.service
× docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-15 11:06:36 CET; 10s ago
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 17614 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status>
Main PID: 17614 (code=exited, status=1/FAILURE)
CPU: 54ms
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Stopped Docker Application Container Engine.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Start request repeated too quickly.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Failed with result 'exit-code'.
dic 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Failed to start Docker Application Container Engine.
So, for now I am giving up on overlay2, since having all the Docker images and containers on the second disk is more important than getting rid of the volumes being continuously mounted. But can anyone tell me where the problem is and whether there is a solution?
Update #1: strange permissions behaviour
I have a simple docker-compose.yml with a WordPress service (the official WP image) and a database service. When the Docker storage location is on the second disk instead of the default one, the database (volume, maybe?) seems inaccessible:
WordPress keeps giving an error on the DB connection
running mysql interactively from the db service results in a login error for the root user
Obviously this is related to the Docker storage location, but I cannot find out why, since the new location is created by Docker itself when it starts.
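For anyone comparing notes, a daemon.json that both relocates the storage directory and selects overlay2 typically looks like the sketch below (the mount point is illustrative; data-root is the modern replacement for the older graph key):
{
  "data-root": "/mnt/second-disk/docker",
  "storage-driver": "overlay2"
}
After editing it, Docker needs a restart, and the target filesystem must support overlay2 (e.g. ext4, or xfs with ftype=1).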

Docker daemon cannot be started for some (hidden) reason

I am trying to push a Docker image and noticed that my Docker daemon is probably not actually running.
If for example I run:
docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If I try to restart the daemon using:
systemctl start docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
Continuing running:
systemctl status docker.service
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2021-05-12 14:45:09 EEST; 43s ago
Docs: https://docs.docker.com
Process: 4810 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 4810 (code=exited, status=1/FAILURE)
May 12 14:45:07 iti-554 systemd[1]: docker.service: Unit entered failed state.
May 12 14:45:07 iti-554 systemd[1]: docker.service: Failed with result 'exit-code'.
May 12 14:45:09 iti-554 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
May 12 14:45:09 iti-554 systemd[1]: Stopped Docker Application Container Engine.
May 12 14:45:09 iti-554 systemd[1]: docker.service: Start request repeated too quickly.
May 12 14:45:09 iti-554 systemd[1]: Failed to start Docker Application Container Engine.
May 12 14:45:09 iti-554 systemd[1]: docker.service: Unit entered failed state.
May 12 14:45:09 iti-554 systemd[1]: docker.service: Failed with result 'start-limit-hit'.
which, as I understand it, means the Docker daemon is not loaded (it is in a failed state) and the last recorded reason is that the start limit was hit. That in turn suggests another, underlying reason for the failures.
So, how do I find out the actual reason my Docker daemon refuses to start?
If I reset the failed-attempts counter with:
systemctl reset-failed docker.service
it returns without error, so I assume it succeeds. And indeed, when I check the status it has become:
Active: inactive (dead) since Wed 2021-05-12 14:45:09 EEST; 14min ago
Of course, if I start the Docker daemon again, it fails.
Can someone suggest a workaround for this issue? I even tried invoking the commands after restarting (that didn't help).
Edit
Well, in my case the problem was a rather stupid one. I had added a daemon.json file with minimal content in it. Just this:
cat /etc/docker/daemon.json
{
  "insecure-registries": [
    "docker-server.com:10022",
    "docker-server.com:10023"
  ],
}
The problem was that the trailing comma before the closing } made Docker look for another parameter. The relevant message shown by journalctl -u docker was:
unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character '}' looking for beginning of object key string
This message is quite obvious, but the earlier ones did not help much.
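For reference, the corrected file with the trailing comma removed:
{
  "insecure-registries": [
    "docker-server.com:10022",
    "docker-server.com:10023"
  ]
}
A quick way to catch this class of mistake before restarting the daemon is to run the file through a JSON validator, e.g. python3 -m json.tool /etc/docker/daemon.json.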
journalctl -u docker gives you the Docker daemon logs. Maybe you can find something there.
The unix:///var/run/docker.sock socket requires the correct permissions to work. This is a security feature of Docker.
Try sudo chmod 755 /var/run/docker.sock and re-run the Docker command.
Note that the permission bits given here may not be suitable for everyone.
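A commonly recommended alternative to loosening the socket's mode is to add your user to the docker group, which owns the socket on standard installs; this requires logging out and back in (or newgrp) to take effect:
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world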

Cannot start memcached

I cannot get memcached to run on my server.
This is what I tried so far:
% sudo systemctl start memcached # no output
% sudo systemctl status memcached.service
● memcached.service - memcached daemon
Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-02-16 17:45:09 CET; 4s ago
Process: 22725 ExecStart=/usr/share/memcached/scripts/systemd-memcached-wrapper /etc/memcached.conf (code=exited, status=71)
Main PID: 22725 (code=exited, status=71)
systemd[1]: Started memcached daemon.
systemd-memcached-wrapper[22725]: bind(): Cannot assign requested address
systemd-memcached-wrapper[22725]: failed to listen on TCP port 11211: Cannot assign requested address
systemd[1]: memcached.service: Main process exited, code=exited, status=71/n/a
systemd[1]: memcached.service: Unit entered failed state.
systemd[1]: memcached.service: Failed with result 'exit-code'.
I am running Ubuntu 16.04.6 LTS
How can I start my memcached service?
Have a look at /etc/memcached.conf; there might be a line like
-l xxx.xx.xx.xx
If you are trying to connect via localhost, just comment out that line.
If you are trying to connect from somewhere else, check the IP address for correctness.
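As a sketch, the relevant line in /etc/memcached.conf might change like this ("bind(): Cannot assign requested address" means the configured IP, illustrated here, is not assigned to any interface on the host):
# -l 192.168.1.50
-l 127.0.0.1
Then restart and re-check the service:
sudo systemctl restart memcached
sudo systemctl status memcached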

docker not responding when using different data directory

I want to change the image directory in Docker. I tried the first two methods mentioned here. Both methods work and change the directory for Docker images. But the problem is that the containers stop responding. I can run the hello-world example, but if I run the ubuntu container or the whalesay container, Docker stops responding and I can't run it again.
docker run -it ubuntu bash
docker run docker/whalesay cowsay boo
When I run the above commands, the images get downloaded and then nothing happens. When I enter the command again, the system stops responding. I used Ctrl + C to terminate it, but after that I cannot open any other terminal. The system doesn't power off either; it gets stuck at a black screen. After force-restarting the system, Docker fails to run, giving the following log:
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2017-04-14 20:12:14 EDT; 10min ago
Docs: https://docs.docker.com
Process: 1160 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 1160 (code=exited, status=1/FAILURE)
Apr 14 20:12:14 abmittal-linux systemd[1]: Starting Docker Application Container Engine...
Apr 14 20:12:14 abmittal-linux dockerd[1160]: unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF
Apr 14 20:12:14 abmittal-linux systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 20:12:14 abmittal-linux systemd[1]: Failed to start Docker Application Container Engine.
Apr 14 20:12:14 abmittal-linux systemd[1]: docker.service: Unit entered failed state.
Apr 14 20:12:14 abmittal-linux systemd[1]: docker.service: Failed with result 'exit-code'
Removing and reinstalling Docker also doesn't work if the directory is the same as before (even if the directory has been deleted and recreated). I have to change the directory in the configuration to get it running again, but then it stops responding again.
The following is my daemon.json file:
{
  "graph": "/mnt/other/docker_images"
}
EDIT: I think I may have found the cause. The partition /mnt/other uses the NTFS file system (and is on a different disk). Can someone please confirm whether this might be the source of the error?
This is a known bug: Link
I tried creating a custom directory on an ext4 partition and it worked.
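To keep the images on the second disk, a hedged sketch of the working setup is a custom directory on an ext4 partition (the path is illustrative, and on modern Docker versions data-root is preferred over the deprecated graph key):
{
  "data-root": "/mnt/ext4_disk/docker_images"
}
NTFS lacks the Linux permission and xattr semantics that Docker's storage drivers rely on, which is consistent with the symptoms described above.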

CoreOS Fleet could not get container

I have 3 containers running on 3 machines. One is called graphite, one is called back, and one is called front. The front container needs both of the others to run, so I link them separately like this:
[Unit]
Description=front hystrix
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill front
ExecStartPre=-/usr/bin/docker rm -v front
ExecStartPre=/usr/bin/docker pull blurio/hystrixfront
ExecStart=/usr/bin/docker run --name front --link graphite:graphite --link back:back -p 8080:8080 blurio/hystrixfront
ExecStop=/usr/bin/docker stop front
I start both the other containers, wait till they're up and running, then start this one with fleetctl and it just instantly fails with this message:
fleetctl status front.service
● front.service - front hystrix
Loaded: loaded (/run/fleet/units/front.service; linked-runtime; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2015-05-12 13:46:08 UTC; 24s ago
Process: 922 ExecStop=/usr/bin/docker stop front (code=exited, status=0/SUCCESS)
Process: 912 ExecStart=/usr/bin/docker run --name front --link graphite:graphite --link back:back -p 8080:8080 blurio/hystrixfront (code=exited, status=1/FAILURE)
Process: 902 ExecStartPre=/usr/bin/docker pull blurio/hystrixfront (code=exited, status=0/SUCCESS)
Process: 892 ExecStartPre=/usr/bin/docker rm -v front (code=exited, status=1/FAILURE)
Process: 885 ExecStartPre=/usr/bin/docker kill front (code=exited, status=1/FAILURE)
Main PID: 912 (code=exited, status=1/FAILURE)
May 12 13:46:08 core-04 docker[902]: 8b9853c10955: Download complete
May 12 13:46:08 core-04 docker[902]: 0dc7a355f916: Download complete
May 12 13:46:08 core-04 docker[902]: 0dc7a355f916: Download complete
May 12 13:46:08 core-04 docker[902]: Status: Image is up to date for blurio/hystrixfront:latest
May 12 13:46:08 core-04 systemd[1]: Started front hystrix.
May 12 13:46:08 core-04 docker[912]: time="2015-05-12T13:46:08Z" level="fatal" msg="Error response from daemon: Could not get container for graphite"
May 12 13:46:08 core-04 systemd[1]: front.service: main process exited, code=exited, status=1/FAILURE
May 12 13:46:08 core-04 docker[922]: front
May 12 13:46:08 core-04 systemd[1]: Unit front.service entered failed state.
May 12 13:46:08 core-04 systemd[1]: front.service failed.
I also want to include the fleetctl list-units output, where you can see that the other two are running without problems.
fleetctl list-units
UNIT              MACHINE                   ACTIVE  SUB
back.service      0ff08b11.../172.17.8.103  active  running
front.service     69ab2600.../172.17.8.104  failed  failed
graphite.service  2886cedd.../172.17.8.101  active  running
There are a couple of issues here. First, you can't use the --link argument here: it is a Docker-specific instruction for linking one container to another on the same Docker engine. In your example you have multiple engines, so this technique won't work. If you want to use it anyway, you will need to employ the ambassador pattern: coreos ambassador. Alternatively, you can use the X-Fleet directive MachineOf= to make all of the Docker containers run on the same machine; however, I think that would defeat your goals.
Often with cloud services one service needs another, as in your case. If the other service is not running (yet), then the services that depend on it should be well behaved and either exit or wait for the needed service to become ready. So the needed service must be discoverable. There are many techniques for the discovery phase and the waiting phase. For example, you can write a 'wrapper' script in each of your containers that handles these duties. In your case, you could have a script in back.service and graphite.service which writes information to the etcd database, like:
ExecStartPre=/usr/bin/env etcdctl set /graphite/status ready
Then in the startup script in front you can do an etcdctl get /graphite/status to see when the container becomes ready, and not continue until it is; a sketch follows below. If you like, you can also store the IP address and port in the graphite script so that the front script can pick up the place to connect to.
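As an illustration of the waiting phase, a hypothetical wrapper script baked into the front image could poll etcd before launching the app (this assumes etcdctl inside the container can reach the cluster; the final exec line is a placeholder for the real entrypoint):
#!/bin/bash
# Block until graphite has announced itself in etcd (see the ExecStartPre above).
until [ "$(etcdctl get /graphite/status 2>/dev/null)" = "ready" ]; do
    echo "waiting for graphite..."
    sleep 2
done
exec /usr/local/bin/start-front   # placeholder entrypoint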
Another technique for discovery is to use registrator. This is a super handy Docker container that updates a directory structure in etcd every time a container comes and goes. This makes it easier to use a discovery technique like the one above without each container having to announce itself; it becomes automatic. You still need the front container to have a startup script that waits for the service to appear in the etcd database. I usually start registrator at CoreOS boot. In fact, I start two copies: one for discovering internal addresses (flannel ones) and one for external addresses (services that are available outside my containers). Here is an example of the database registrator manages on my machines:
core#fo1 ~/prs $ etcdctl ls --recursive /skydns
/skydns/net
/skydns/net/tacodata
/skydns/net/tacodata/services
/skydns/net/tacodata/services/cadvisor-4194
/skydns/net/tacodata/services/cadvisor-4194/fo2:cadvisor:4194
/skydns/net/tacodata/services/cadvisor-4194/fo1:cadvisor:4194
/skydns/net/tacodata/services/cadvisor-4194/fo3:cadvisor:4194
/skydns/net/tacodata/services/internal
/skydns/net/tacodata/services/internal/cadvisor-4194
/skydns/net/tacodata/services/internal/cadvisor-4194/fo2:cadvisor:4194
/skydns/net/tacodata/services/internal/cadvisor-4194/fo1:cadvisor:4194
/skydns/net/tacodata/services/internal/cadvisor-4194/fo3:cadvisor:4194
/skydns/net/tacodata/services/internal/cadvisor-8080
/skydns/net/tacodata/services/internal/cadvisor-8080/fo2:cadvisor:8080
/skydns/net/tacodata/services/internal/cadvisor-8080/fo1:cadvisor:8080
/skydns/net/tacodata/services/internal/cadvisor-8080/fo3:cadvisor:8080
You can see the internal and external available ports for cadvisor. If I get one of the records:
etcdctl get /skydns/net/tacodata/services/internal/cadvisor-4194/fo2:cadvisor:4194
{"host":"10.1.88.3","port":4194}
you get everything you need to connect to that container internally. This technique really starts to shine when coupled with SkyDNS. SkyDNS presents a DNS service using the information published by registrator. So, long story short, I can simply make my application use the hostname (the hostname defaults to the name of the Docker image, but it can be changed). In this example my application can connect to cadvisor-8080, and DNS will give it one of the 3 IP addresses it has (it is on 3 machines). The DNS also supports SRV records, so if you aren't using a well-known port, the SRV record can give you the port number.
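For example, since SkyDNS maps the reversed etcd path to a DNS name, the internal cadvisor records above should be resolvable with an SRV query along these lines (assuming SkyDNS serves the tacodata.net zone and is your configured resolver; the exact name depends on your SkyDNS domain setup):
dig SRV cadvisor-8080.internal.services.tacodata.net
The answer section would then list a host and port for each of the three machines.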
With CoreOS and fleet it is difficult not to get the containers themselves involved in the publish/discover/wait game. At least, that's been my experience.
-g
