GitLab always exits automatically - Docker

I am running GitLab with Docker, but it always exits after a period of time:
==> /var/log/gitlab/redis/current <==
2019-06-21_18:00:08.72435 459:signal-handler (1561140008) Received SIGTERM scheduling shutdown...
2019-06-21_18:00:08.81864 459:M 21 Jun 18:00:08.817 # User requested shutdown...
2019-06-21_18:00:08.81866 459:M 21 Jun 18:00:08.817 * Saving the final RDB snapshot before exiting.
2019-06-21_18:00:08.83736 459:M 21 Jun 18:00:08.837 * DB saved on disk
2019-06-21_18:00:08.83741 459:M 21 Jun 18:00:08.837 * Removing the pid file.
2019-06-21_18:00:08.83817 459:M 21 Jun 18:00:08.838 * Removing the unix socket file.
2019-06-21_18:00:08.83935 459:M 21 Jun 18:00:08.839 # Redis is now ready to exit, bye bye...
ok: down: redis-exporter: 0s, normally up
==> /var/log/gitlab/gitlab-rails/sidekiq.log <==
2019-06-21_18:00:09.57615 2019-06-21T18:00:09.576Z 807 TID-oviw2sgmf INFO: Shutting down
2019-06-21_18:00:09.57625 2019-06-21T18:00:09.576Z 807 TID-ovivo05i7 INFO: Scheduler exiting...
2019-06-21_18:00:09.57655 2019-06-21T18:00:09.576Z 807 TID-oviw2sgmf INFO: Terminating quiet workers

This was reported in gitlab-org/omnibus-gitlab issue 4137 ("runsv send SIGTERM to redis in docker version"): runsv sends SIGTERM to redis every 60 seconds.
gitlab-org/omnibus-gitlab issue 1611 suggests trying a docker restart first, but the general issue is not conclusively resolved yet.
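A reasonable first diagnostic, per issue 1611, is to restart the container and watch whether runsv keeps signalling redis. A minimal sketch, assuming the container is named gitlab (substitute your own container name):

docker restart gitlab    # the restart suggested in issue 1611
docker logs -f gitlab    # watch for recurring "Received SIGTERM" lines from redis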

Related

Redis JSON (redislabs/rejson:latest) on Heroku

I cannot figure this out and hope I can get some help.
I have a hobby-tier Heroku app running Django, to which I would like to attach a Redis service. However, I would like to use the recent rejson Docker image (redislabs/rejson:latest) instead of redistogo or heroku-redis, because it has JSON support. This works great in my local environment. I was able to push the Docker image into the container registry and actually start the Redis server:
2021-07-23T00:14:48.576294+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.576 # Current maximum open files is 10000. maxclients has been reduced to 9968 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
2021-07-23T00:14:48.577054+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.577 * Running mode=standalone, port=6379.
2021-07-23T00:14:48.577124+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.577 # Server initialized
2021-07-23T00:14:48.577184+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.577 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2021-07-23T00:14:48.577671+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.577 # <ReJSON> JSON data type for Redis v1.0.7 [encver 0]
2021-07-23T00:14:48.577789+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.577 * Module 'ReJSON' loaded from /usr/lib/redis/modules/rejson.so
2021-07-23T00:14:48.578232+00:00 app[worker.1]: 3:M 23 Jul 2021 00:14:48.578 * Ready to accept connections
Unfortunately, Django is unable to connect to it:
ConnectionError: Error -2 connecting to redis://localhost:6379. Name or service not known
There are no ENV variables exposed, so I am unable to set any (I mean I can set them, but I doubt they will be relevant).
I did experiment with installing the redistogo add-on and was able to connect to it (this required setting the connection based on the REDIS_URL env variable that gets exposed when redistogo is added).
I'm at my wits' end... any help appreciated. I guess the question really boils down to:
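For what it's worth, you can at least confirm which config vars Heroku exposes to the app. A minimal sketch using the Heroku CLI (the app name is a placeholder):

heroku config -a your-app-name   # redistogo adds REDIS_URL; a rejson worker dyno adds nothing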

Systemd and Puma in an infinite startup loop

I'm banging my head against the wall with Puma and systemd. I used foreman to generate my systemd files, but can't get Puma out of its restart loop. This is on Ubuntu 16.
Jun 19 02:48:12 ip-172-31-28-225 systemd[1]: Stopped rajlogviewer-web.service.
Jun 19 02:48:12 ip-172-31-28-225 systemd[1]: Started rajlogviewer-web.service.
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: APP_DIR = /home/ubuntu/rajlogviewer, SHARED_DIR /home/ubuntu/rajlogviewer/shared
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] Puma starting in cluster mode...
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Version 3.6.0 (ruby 2.3.3-p222), codename: Sleepy Sunday Serenity
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Min threads: 1, max threads: 6
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Environment: production
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Process workers: 2
Jun 19 02:48:12 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Preloading application
Jun 19 02:48:14 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Listening on tcp://0.0.0.0:3000
Jun 19 02:48:14 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Listening on unix:///home/ubuntu/rajlogviewer/shared/tmp/sockets/puma.sock
Jun 19 02:48:14 ip-172-31-28-225 rajlogviewer-web.service[8954]: [8954] * Daemonizing...
Jun 19 02:48:24 ip-172-31-28-225 systemd[1]: rajlogviewer-web.service: Service hold-off time over, scheduling restart.
Jun 19 02:48:24 ip-172-31-28-225 systemd[1]: Stopped rajlogviewer-web.service.
Jun 19 02:48:24 ip-172-31-28-225 systemd[1]: Started rajlogviewer-web.service.
It just keeps restarting indefinitely. Here is my systemd unit file:
/etc/systemd/system/rajlogviewer-web.service
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/rajlogviewer/current
#Environment=PATH=/home/ubuntu/.rvm/gems/ruby-2.3.3@rajlogsViewer/bin:$PATH
ExecStart=/bin/bash -lc 'PATH=/home/ubuntu/.rvm/gems/ruby-2.3.3@rajlogsViewer/bin:$PATH exec /home/ubuntu/.rvm/bin/rvm ruby-2.3.3 do bundle exec puma -C /home/ubuntu/rajlogviewer/shared/config/puma.rb --daemon'
Restart=no
RestartSec=10
StandardInput=null
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=%n
KillMode=mixed
TimeoutStopSec=5
The app usually boots in about 5 seconds when started with 'cap production puma:start', which works, so RestartSec=10 ought to be enough; raising it to 20 seconds makes no difference.
Nothing interesting shows up in puma.stderr.log or puma.stdout.log.
Any ideas?
You should remove the --daemon option from the systemd unit. systemd's Type= setting configures the process start-up type, which affects the functionality of ExecStart and related options. It is one of:

simple – the default. The process started with ExecStart is the main process of the service.
forking – the process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when start-up is complete.
oneshot – similar to simple, but the process exits before consequent units are started.
dbus – similar to simple, but consequent units are started only after the main process gains a D-Bus name.
notify – similar to simple, but consequent units are started only after a notification message is sent via the sd_notify() function.
idle – similar to simple, but actual execution of the service binary is delayed until all jobs are finished, which avoids mixing status output with shell output of services.

The default value is simple, but the --daemon option you pass to puma makes the process fork, which contradicts that configuration: systemd loses track of the daemonized process, considers the service dead, and keeps restarting it.
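A minimal sketch of the corrected unit, keeping the paths from the question and the default Type=simple (shown explicitly for clarity). The substantive change is dropping --daemon so puma stays in the foreground; Restart=always is an assumption on my part (the unit in the question says Restart=no, which doesn't match the restart loop in the logs):

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/rajlogviewer/current
# no --daemon: the foreground puma process is the service's main PID
ExecStart=/bin/bash -lc 'exec /home/ubuntu/.rvm/bin/rvm ruby-2.3.3 do bundle exec puma -C /home/ubuntu/rajlogviewer/shared/config/puma.rb'
Restart=always
RestartSec=10

After editing, run systemctl daemon-reload and then systemctl restart rajlogviewer-web.service for the change to take effect.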

Strange behavior of neo4j-service on Debian 8.1

I have installed neo4j on Debian 8.1 following these instructions: http://debian.neo4j.org/
Now if, as root, I start neo4j with neo4j-service like this:
service neo4j-service start
it sometimes works correctly, but most of the time the start will time out. The interesting fact is that neo4j is indeed started: I can go to the browser and run queries. But neo4j-service tells me that it failed:
root@ns***:~# service neo4j-service start
Job for neo4j-service.service failed. See 'systemctl status neo4j-service.service' and 'journalctl -xn' for details.
root@ns***:~# systemctl status neo4j-service.service
● neo4j-service.service - LSB: Neo4j Graph Database server
Loaded: loaded (/etc/init.d/neo4j-service)
Active: failed (Result: timeout) since Fri 2015-10-16 19:03:08 CEST; 6min ago
Process: 24556 ExecStop=/etc/init.d/neo4j-service stop (code=exited, status=0/SUCCESS)
Process: 29730 ExecStart=/etc/init.d/neo4j-service start (code=killed, signal=TERM)
Oct 16 18:58:08 ns***.ip-91-***-***.eu neo4j-service[29730]: WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Oct 16 18:58:08 ns***.ip-91-***-***.eu neo4j-service[29730]: Starting Neo4j Server...WARNING: not changing user
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: neo4j-service.service start operation timed out. Terminating.
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: Failed to start LSB: Neo4j Graph Database server.
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: Unit neo4j-service.service entered failed state.
And sometimes it tells me that the service started correctly, but then it cannot stop it. Most of the time I have to kill the process myself to reset everything.
Do you know why this is happening? Are you aware of any issues with neo4j-service on Debian 8.1?
This approach to running Neo4j is deprecated; you should use the neo4j command instead. Alternatively, you can write your own service wrapper; for that I suggest supervisord (http://supervisord.org/).
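If it helps, a minimal sketch of both suggestions, assuming the Debian package put the neo4j launcher on your PATH:

neo4j console   # run in the foreground - this is what a supervisord wrapper would invoke
neo4j start     # or start it in the background directly
neo4j status
neo4j stop
# under supervisord, a program entry might look like:
# [program:neo4j]
# command=/usr/bin/neo4j console
# autorestart=true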

Passenger/mod_rails fails to initialize in Fedora 12 when starting Apache

I am in the process of setting up a server to run a Ruby on Rails application on Fedora 12, using Passenger.
I am at the stage where I've installed Passenger, set it up as prescribed, but get the following errors when I restart Apache:
[Wed Jan 13 15:41:38 2010] [notice] caught SIGTERM, shutting down
[Wed Jan 13 15:41:40 2010] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Wed Jan 13 15:41:40 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
[Wed Jan 13 15:41:40 2010] [notice] Digest: generating secret for digest authentication ...
[Wed Jan 13 15:41:40 2010] [notice] Digest: done
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
[Wed Jan 13 15:41:40 2010] [error] python_init: Python version mismatch, expected '2.6', found '2.6.2'.
[Wed Jan 13 15:41:40 2010] [error] python_init: Python executable found '/usr/bin/python'.
[Wed Jan 13 15:41:40 2010] [error] python_init: Python path being used '/usr/lib/python26.zip:/usr/lib/python2.6/:/usr/lib/python2.6/plat-linux2:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload'.
[Wed Jan 13 15:41:40 2010] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
[Wed Jan 13 15:41:40 2010] [notice] mod_python: using mutex_directory /tmp
[Wed Jan 13 15:41:40 2010] [notice] Apache/2.2.14 (Unix) DAV/2 Phusion_Passenger/2.2.9 PHP/5.3.0 mod_python/3.3.1 Python/2.6.2 mod_ssl/2.2.14 OpenSSL/1.0.0-fips-beta3 mod_perl/2.0.4 Perl/v5.10.0 configured -- resuming normal operations
As you can see, there is a permissions problem when Passenger tries to initialize:
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
When Apache starts, it does create a directory in /tmp:
d-ws--x--x. 2 root root 4096 2010-01-13 16:04 passenger.26117
If instead I run the app by firing up mongrel directly with mongrel_rails start -e production, I see the following:
ActiveRecord::StatementInvalid (Mysql::Error: Can't create/write to file '/tmp/#sql_5d3_0.MYI' (Errcode: 13): SHOW FIELDS FROM `users`):
Again the error points to permission issues with the /tmp directory.
I am at a loss as to the solution. I'm not sure whether it comes down to simple directory permissions or to Fedora's SELinux security.
Any help would be appreciated. Thanks.
I did the same as Fred, except that instead of doing it one error at a time:
Go into permissive mode by running setenforce 0
Restart Apache, then hit your site and use it for a while as normal
Run grep httpd /var/log/audit/audit.log | audit2allow -M passenger
Run semodule -i passenger.pp
Go back to enforcing mode by running setenforce 1
Restart Apache and test your site - hopefully it should all be working as before!
Note that this is basically a specific example of the procedure in the CentOS SELinux help - check it out.
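For reference, the whole procedure from this answer as one shell session (the module name passenger is arbitrary; rerun the grep/semodule pair if new denials show up):

setenforce 0                  # permissive mode: denials are logged but not enforced
apachectl restart             # then exercise the site for a while as normal
grep httpd /var/log/audit/audit.log | audit2allow -M passenger
semodule -i passenger.pp      # install the generated policy module
setenforce 1                  # back to enforcing mode
apachectl restart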
I'm having the same issue on CentOS 5.4: SELinux getting in the way of Passenger.
Setting PassengerTempDir to /var/run/passenger simply gives you the same permission errors in the new directory instead of /tmp:
[Mon Feb 22 11:42:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create directory '/var/run/passenger/passenger.3686'
I can then change the security context of /var/run/passenger to get past this error:
chcon -R -h -t httpd_sys_content_t /var/run/passenger/
...and that lets Passenger create the temp directory, but not files within that directory:
[Mon Feb 22 12:07:06 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /var/run/passenger/passenger.3686/.guard: Permission denied (13)
Oddly, re-running the recursive chcon doesn't get past this error; it keeps dying at this point, and this is where my SELinux knowledge gets murky.
The Phusion Passenger guide sections 6.3.5 and 6.3.7 have some useful thoughts, but they don't seem to completely resolve the problem.
You need more than just the httpd_sys_content_t permission. I use the following technique to get things started:
Start a tail on the audit log: tail -f /var/log/audit/audit.log
Reload Apache: apachectl restart
Go to the /tmp directory: cd /tmp
If just one line was added, use the command: tail -1 /var/log/audit/audit.log | audit2allow -M httpdfifo
Note that the name 'httpdfifo' is just a name chosen to reflect the kind of error that has been observed.
This will create a file named 'httpdfifo.pp'. To allow Apache to create a FIFO from here on, issue the command: semodule -i httpdfifo.pp
Continue to do this until all audit errors have been resolved (it took 4 different kinds of permission on my system running CentOS 5.4).
Running setenforce 0 before starting will let you test if it's SELinux. Don't forget to run setenforce 1 afterwards.
I tried what Dan Sketcher and Fred Appleman suggested, i.e. repeated the following:
yum install setroubleshoot
echo > /var/log/audit/audit.log # clear irrelevant errors
cd ~
service httpd restart # try booting passenger -- audit.log now shows the relevant permission errors
tail -f /var/log/httpd/error_log # check that passenger is still failing due to permission errors
sealert -a /var/log/audit/audit.log > selinux-diag.txt # translate the permission errors
# read and check that you are happy with selinux-diag.txt
# and either follow its specific advice, or if it just wants you to grep into audit2allow, then:
cat /var/log/audit/audit.log | audit2allow -M mypol # grant everything just denied
semodule -i mypol.pp # commit the new permissions
But after doing this 5 or 6 times, I kept coming up against new errors, and some of the same errors came up even after I had tried to permit them with "audit2allow".
In the end I just turned off SELinux, with:
echo 0 >/selinux/enforce
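Note that echo 0 >/selinux/enforce only lasts until the next reboot. To make it persistent you would edit /etc/selinux/config and set, e.g.:

SELINUX=permissive   # still logs denials to audit.log, but stops blocking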

Apache shutting down unexpectedly

I have a mongrel server running behind Apache. It works fine; however, every now and then the Apache server shuts down, seemingly by itself. I'm not sure if there is a configuration issue or if it's an attack. Here is the Apache error log:
[Thu Apr 30 02:15:07 2009] [notice] SIGHUP received. Attempting to restart
[Thu Apr 30 02:15:07 2009] [warn] NameVirtualHost *:0 has no VirtualHosts
[Thu Apr 30 02:15:07 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Thu Apr 30 02:17:13 2009] [error] [client 61.139.105.163] File does not exist: /var/www/fastenv
[Thu Apr 30 02:24:06 2009] [error] [client 61.139.105.163] File does not exist: /var/www/fastenv
[Thu Apr 30 10:49:18 2009] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Thu Apr 30 10:49:18 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Thu Apr 30 12:53:08 2009] [notice] SIGHUP received. Attempting to restart
[Thu Apr 30 12:53:08 2009] [warn] NameVirtualHost *:0 has no VirtualHosts
[Thu Apr 30 12:53:08 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Thu Apr 30 12:59:15 2009] [notice] SIGHUP received. Attempting to restart
[Thu Apr 30 12:59:15 2009] [warn] NameVirtualHost *:0 has no VirtualHosts
[Thu Apr 30 12:59:15 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Thu Apr 30 13:58:49 2009] [notice] SIGHUP received. Attempting to restart
[Thu Apr 30 13:58:49 2009] [warn] NameVirtualHost *:0 has no VirtualHosts
[Thu Apr 30 13:58:49 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Fri May 01 10:59:07 2009] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Fri May 01 10:59:07 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
[Fri May 01 17:51:15 2009] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Fri May 01 17:51:15 2009] [notice] Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 configured -- resuming normal operations
Not quite sure what /var/www/fastenv is, but I don't think anything in my application calls it. Also, the website is still in beta with few users, and I don't think any of them have the IP address 61.139.105.163, though it's possible.
Any ideas? It would be good if you could give me hints on where to look or how to go about analysing this problem.
I have the exact same log entries from the same IP. Looking it up shows it belongs to the Chinese government. It appears to be a scan using server-side includes to find out as much as they can about your server. I banned the IP.
Not sure this is entirely programming-related, but anyway... none of those look like serious errors to me. The accesses to /var/www/fastenv just mean that the computer at IP address 61.139.105.163 sent a request for http://www.example.com/fastenv or something like that (it depends on exactly how you've configured your virtual hosts); I'd look at the access log for more information, to see what other requests have been coming from that IP address. It's probably not anything to worry about.
The line about NameVirtualHost *:0 means that somewhere in your configuration file you have an incorrect NameVirtualHost directive, maybe with no arguments. You should probably look for that and remove it, but if the server is running fine anyway, it's not a big deal.
The reason your server is terminating (restarting, actually) appears to be a SIGHUP - that is, something on the system is sending Apache a signal telling it to restart. It's basically the same thing that happens if you run apache2 restart, I think. Without knowing what's sending that signal, there's not more I can say.
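If you want to identify the sender, one option (an assumption on my part, not something from this thread: it requires auditd to be installed) is to audit kill() calls that deliver SIGHUP:

auditctl -a always,exit -F arch=b64 -S kill -F a1=1 -k apache-sighup   # a1 is the signal argument; SIGHUP = 1
ausearch -k apache-sighup   # after the next restart, shows the sending process

Periodic restarts are also commonly triggered by cron jobs such as log rotation, so the scripts under /etc/logrotate.d are worth a look.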
61.139.105.163 is known for doing all kinds of hacking-type things; just google the IP address. You should definitely ban it.
Click on Apache Config --> Apache (httpd.conf).
Search for #Listen 12.34.56.78:80 and replace it with #Listen 12.34.56.78:8081.
Search for Listen 80 and replace it with Listen 8081.
Now you can start Apache and reach it at this URL: localhost:8081/xampp/
