I want to start the thin web server upon restart of my Raspberry Pi.
I have the required config file in /etc/thin/myapp.yml
---
chdir: "/home/pi/web-interface/current"
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: "/home/pi/web-interface/shared/tmp/sockets/log/thin.log"
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
threadpool_size: 20
servers: 1
daemonize: true
I did this to install thin as a runlevel command:
thin install
sudo /usr/sbin/update-rc.d -f thin defaults
The second command produces the following log output:
update-rc.d: using dependency based boot sequencing
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match thin Default-Stop values (S 0 1 6)
insserv: warning: current stop runlevel(s) (0 1 6) of script `thin' overrides LSB defaults (0 1 6 S).
When I run /etc/init.d/thin start, the server starts without trouble, so something seems to go wrong only when the device boots.
This is /etc/init.d/thin:
#!/bin/sh
### BEGIN INIT INFO
# Provides: thin
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: thin initscript
# Description: thin
### END INIT INFO
# Original author: Forrest Robertson
# Do NOT "set -e"
# DAEMON=/home/pi/.rvm/gems/ruby-2.1.0/bin/thin
DAEMON=/home/pi/.rvm/wrappers/raspberrypi/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
echo "Starting thin"
$DAEMON start --all $CONFIG_PATH
;;
stop)
echo "Stopping thin"
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
:
Now my server does not start up properly, even though I have the following entry in my boot log:
Sat Mar 1 08:19:45 2014: [start] /etc/thin/myapp.yml ...
Sat Mar 1 08:19:52 2014: [....] Starting NTP server: ntpd [ ok ].
Sat Mar 1 08:19:54 2014: [....] Starting OpenBSD Secure Shell server: sshd [ ok ].
Sat Mar 1 08:19:56 2014: Starting server on 0.0.0.0:3000 ...
Sat Mar 1 08:19:56 2014:
Try removing the S from this line:
# Default-Stop: S 0 1 6
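That is, the LSB header would become the following (only the Default-Stop line changes):

```
### BEGIN INIT INFO
# Provides:          thin
# Required-Start:    $local_fs $remote_fs
# Required-Stop:     $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: thin initscript
# Description:       thin
### END INIT INFO
```

Then re-register the script with sudo update-rc.d -f thin remove followed by sudo update-rc.d thin defaults, so the symlinks are regenerated and the warning about mismatched stop runlevels goes away.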
There is also crontab. It may help you start the server when your Raspberry Pi boots.
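Concretely, cron's @reboot schedule can run the init script once at boot. This is a crontab fragment (added via crontab -e as root), shown as a sketch rather than a tested recipe:

```
@reboot /etc/init.d/thin start
```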
Related
I'm trying to perform a user operation (changing the admin user) after the Neo4j container boots up, but my background script doesn't wait for Neo4j to come up and dies before Neo4j is online.
entrypoint.sh is something like
if [ some_condition ]; then   # condition elided in the original
my_function &
fi
if [ "${cmd}" == "neo4j" ]; then
exec neo4j console
fi
helper_file.sh has my_function
function my_function {
echo "Checking to see if Neo4j has started at http://${DB_HOST}:${DB_PORT}..."
curl --retry-connrefused --retry 5 --retry-max-time 300 "http://${DB_HOST}:${DB_PORT}"
rc=$?
if [ $rc -ne 0 ]; then
echo "Curl failed with error $rc. Exiting.."
return 1
fi
migrate_users # <--- another function
}
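A side note on the "Curl failed with error 0" line in the log: once `if [ $? -ne 0 ]` has run, `$?` holds the exit status of the `[` test itself, not of curl, so the real error code is lost. A tiny standalone sketch of the pitfall, using `false` as a stand-in for the failing curl call:

```shell
#!/bin/sh
# Demonstrates why the log said "error 0": after `if [ $? -ne 0 ]`,
# $? inside the branch is the exit status of `[`, which succeeded.
false                  # stand-in for the failing curl (exit code 1)
if [ $? -ne 0 ]; then
  naive="$?"           # exit status of `[` -- always 0 inside this branch
fi
false
rc=$?                  # capture immediately instead
if [ "$rc" -ne 0 ]; then
  captured="$rc"       # the real exit status: 1
fi
echo "naive=$naive captured=$captured"
```

Running it prints `naive=0 captured=1`, matching the misleading message in the question's log.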
The problem I'm facing is that Neo4j doesn't boot up while curl is still doing its retries.
Tue Sep 20 12:46:35 UTC 2022 Checking to see if Neo4j has started at http://localhost:7474...
Tue Sep 20 12:46:35 UTC 2022 % Total % Received % Xferd Average Speed Time Time Time Current
Tue Sep 20 12:46:35 UTC 2022 Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Tue Sep 20 12:46:35 UTC 2022 curl: (7) Failed to connect to localhost port 7474: Connection refused
Tue Sep 20 12:46:35 UTC 2022 Curl failed with error 0. Exiting..
user: vmanage; command: neo4j
Directories in use:
How can I ensure that migrate_users function gets called after Neo4j has come up online completely?
Edit:
thank you for providing the suggestion.
If I go with the background-process approach, I'm seeing that Neo4j doesn't boot up until the curl queries have finished:
Tue Sep 20 18:57:34 UTC 2022 Checking to see if Neo4j has started
at http://localhost:7474...
Tue Sep 20 18:57:34 UTC 2022 Neo4j not ready
Tue Sep 20 18:57:34 UTC 2022 Connection refused
Tue Sep 20 18:57:34 UTC 2022 config-db is not up, try to setup password again
user: vmanage; command: neo4j
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /log
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j
data: /data
certificates: /var/lib/neo4j/certificates
licenses: /var/lib/neo4j/licenses
run: /var/lib/neo4j/run
Starting Neo4j.
Going to try this : https://github.com/neo4j/docker-neo4j/issues/166#issuecomment-486890785
You can add a loop inside your script to check the health of the neo4j container. Proceed further in your script only once the health check passes; otherwise keep looping until it does.
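That health-check loop can be sketched generically. The helper below polls an arbitrary probe command; in the real entrypoint the probe would be something like `curl -sf http://localhost:7474` or a `docker inspect` of the container's health status (both assumptions about your setup). The stub probe here succeeds on its third call so the demo is self-contained:

```shell
#!/bin/sh
# Generic poll-until-ready loop (sketch). Replace `stub_check` with the
# real probe, e.g. `curl -sf http://localhost:7474 >/dev/null`.
wait_for() {
  max=$1
  check=$2
  retries=0
  while [ "$retries" -lt "$max" ]; do
    if "$check"; then
      return 0
    fi
    retries=$((retries + 1))
    # sleep 5   # real delay omitted so the demo runs instantly
  done
  return 1
}

# Stub probe: succeeds on the 3rd call, standing in for the curl check.
count_file=$(mktemp)
echo 0 > "$count_file"
stub_check() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

if wait_for 10 stub_check; then
  echo "service ready after $(cat "$count_file") checks"
fi
```

This prints "service ready after 3 checks"; in the entrypoint you would call migrate_users only when wait_for returns 0.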
You can use docker-compose with the depends_on + condition to do that.
Even the docker-compose documentation recommends implementing some kind of script to wait until the service is up. Take a look at the following links: docker-compose and stackoverflow.
But it could be something like:
version: "2"
services:
neo4j-admin:
build: .
depends_on:
- "neo4j"
command: ["./wait-for-it.sh", "neo4j:7474", "--", "sh", "change_admin_passwd.sh"]
neo4j:
image: neo4j
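Alternatively, compose file format 2.1 (and the modern Compose spec) lets depends_on wait on a health check directly. The wget probe below is an assumption about what's available inside the neo4j image, so treat this as a sketch:

```yaml
version: "2.1"
services:
  neo4j-admin:
    build: .
    depends_on:
      neo4j:
        condition: service_healthy
    command: ["sh", "change_admin_passwd.sh"]
  neo4j:
    image: neo4j
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:7474 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 10
```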
Your function named my_function could use until to keep waiting for neo4j to start, for example:
function my_function {
local RETRIES=0
local SUCCESS=0
until [[ $SUCCESS -eq 1 ]] || [[ $RETRIES -eq 50 ]]; do
echo "Checking to see if Neo4j has started at http://${DB_HOST}:${DB_PORT}..."
STATUS_CODE=$(curl -w '%{http_code}' -o /dev/null -s "http://${DB_HOST}:${DB_PORT}")
if [[ $STATUS_CODE -eq 200 ]]; then
echo "Neo4j is up and running" && SUCCESS=1
else
echo "Neo4j not ready" && RETRIES=$((RETRIES + 1)) && sleep 10
fi
done
# only migrate if the loop ended in success, not because retries ran out
[[ $SUCCESS -eq 1 ]] || return 1
migrate_users
}
I configured Monit to watch Unicorn and restart it when memory usage or CPU rises above a certain limit.
However, when that happens, Monit doesn't restart Unicorn. Here are the entries I found in the Monit log file:
[UTC Aug 11 20:15:41] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:16:11] error : 'unicorn_myapp' process is not running
[UTC Aug 11 20:16:11] info : 'unicorn_myapp' trying to restart
[UTC Aug 11 20:16:11] info : 'unicorn_myapp' restart: '/etc/init.d/unicorn_myapp restart'
[UTC Aug 11 20:16:42] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:17:12] error : 'unicorn_myapp' process is not running
[UTC Aug 11 20:17:12] info : 'unicorn_myapp' trying to restart
[UTC Aug 11 20:17:12] info : 'unicorn_myapp' restart: '/etc/init.d/unicorn_myapp restart'
[UTC Aug 11 20:17:42] error : 'unicorn_myapp' failed to restart (exit status 127) -- '/etc/init.d/unicorn_myapp restart': /etc/init.d/unicorn_myapp: 27: kill: No such process
Couldn't reload, starting 'cd /home/ubuntu_user/apps/myapp/current; bundle exec unicorn -D -c /home/ubuntu_user/apps/myapp/shared/config/unicorn.rb -E pr
[UTC Aug 11 20:18:12] error : 'unicorn_myapp' process is not running
Here is my monit configuration under /etc/monit/conf.d/
check process unicorn_myapp
with pidfile /home/ubuntu_user/apps/myapp/current/tmp/pids/unicorn.pid
start program = "/etc/init.d/unicorn_myapp start"
stop program = "/etc/init.d/unicorn_myapp stop"
restart program = "/etc/init.d/unicorn_myapp restart"
if not exist then restart
if mem is greater than 300.0 MB for 2 cycles then restart # eating up memory?
if cpu is greater than 50% for 4 cycles then restart # send an email to admin
if cpu is greater than 80% for 30 cycles then restart # hung process?
group unicorn
It should restart Unicorn when an error like the following, which breaks the app, appears in the unicorn.log file:
ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
When I run /etc/init.d/unicorn_myapp restart from the terminal, it works.
Monit mostly uses the restart program to start a program. I do not know why, but I have observed this behavior as well.
Try commenting out the "restart" line. This forces Monit to run the start script, which will not try to kill an existing process.
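With the restart line removed, the check from the question would look like the sketch below; Monit then falls back to running the stop program followed by the start program:

```
check process unicorn_myapp
  with pidfile /home/ubuntu_user/apps/myapp/current/tmp/pids/unicorn.pid
  start program = "/etc/init.d/unicorn_myapp start"
  stop program = "/etc/init.d/unicorn_myapp stop"
  # no "restart program" line: monit restarts via stop, then start
  if not exist then restart
  if mem is greater than 300.0 MB for 2 cycles then restart
  if cpu is greater than 50% for 4 cycles then restart
  if cpu is greater than 80% for 30 cycles then restart
  group unicorn
```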
You might also want to watch your log-file like
CHECK FILE unicorn_log PATH log__file__path___change_me_or_the_world_will_flatten
start program = "/etc/init.d/unicorn_myapp start"
stop program = "/etc/init.d/unicorn_myapp stop"
# No log File entry for one hour?
if timestamp is older than 1 hour then restart
# Allocate memory error?
if content = "Cannot allocate memory" then restart
This is my rookie first question to the community.
Background:
I am trying to deploy Sidekiq on my own Debian Jessie server for a Rails 5.0.6 app that runs under Phusion Passenger as the user "deploy". I have Redis 3.2.6 installed and tested OK. I've opted for a systemd daemon to start Sidekiq as a system service.
Here is the configuration :
[Unit]
Description=sidekiq
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/var/www/my_app/code
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
User=deploy
Group=deploy
UMask=0002
# if we crash, restart
RestartSec=4
#Restart=on-failure
Restart=always
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
Here is sidekiq.yml
---
:verbose: true
:concurrency: 4
:pidfile: tmp/pids/sidekiq.pid
:queues:
- [critical, 2]
- default
- low
production:
:concurrency: 15
And finally #config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end
Sidekiq.configure_client do |config|
config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end
How it fails
I've been trying to solve the following error found in /var/log/syslog:
Dec 18 00:13:39 jjflo systemd[1]: Started sidekiq.
Dec 18 00:13:48 jjflo sidekiq[8159]: Cannot load `Rails.application.database_configuration`:
Dec 18 00:13:48 jjflo sidekiq[8159]: key not found: "MY_APP_DATABASE_PASSWORD"
which ends up in a repeating sequence of Sidekiq failures and retries...
Yet another try
I have tried the following and this works :
cd /var/www/my_app/code
su - deploy
/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
Could someone help me connect the dots, please ?
Environment variables were obviously the problem. Since I was using
ExecStart=/bin/bash -lc 'bundle...
where -l makes bash behave as a login shell, I had to edit the deploy user's .bashrc and move the export lines to the top of the file (they used to be at the bottom), or at least above this line of .bashrc:
case $- in
This post helped me a lot.
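An alternative that avoids depending on .bashrc ordering at all: let systemd supply the variables itself via EnvironmentFile. The file paths and values below are hypothetical placeholders:

```
# /etc/systemd/system/sidekiq.service.d/env.conf (hypothetical drop-in)
[Service]
EnvironmentFile=/etc/sidekiq.env

# /etc/sidekiq.env (hypothetical; replace the placeholder values)
#   SITE=localhost
#   REDIS_PWD=change_me
#   MY_APP_DATABASE_PASSWORD=change_me
```

After creating both files, run systemctl daemon-reload and restart the sidekiq service.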
I am using a Debian-flavor Linux system. I use the thin web server to get live call status in my application. The process starts when I run /etc/init.d/thin start. I used update-rc.d -f thin defaults to make the thin process start at system boot. After adding the entry, I rebooted the system, but the thin process did not get started. I checked apache2 and it starts properly at system boot. My thin script in init.d is as follows:
DAEMON=/usr/local/lib/ruby/gems/1.9.1/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
stop)
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
My configuration file in /etc/thin is as follows.
user_status.yml
---
chdir: /FMS/src/FMS-Frontend
environment: production
address: localhost
port: 5000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
rackup: user_status.ru
threaded: true
daemonize: false
You need a wrapper for 'thin'.
See https://rvm.io/integration/init-d.
The wrapper path then needs substituting for DAEMON in the init.d script.
I keep forgetting this and it has cost a good few hours!
Now that I've checked it out: as root, enter these two commands:
rvm wrapper current bootup thin
which bootup_thin
The first creates the wrapper, and the second gives the path to it.
Edit the DAEMON line in /etc/init.d/thin to use this path, and finish off with
systemctl daemon-reload
service thin restart
I have assumed a multi-user installation of rvm. Also, you have to become root with
su -
to get the rvm environment right.
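On a multi-user rvm install the wrapper typically ends up under /usr/local/rvm/bin, but trust the output of `which bootup_thin` over this guess. The edited line in /etc/init.d/thin would then read something like:

```
DAEMON=/usr/local/rvm/bin/bootup_thin
```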
/etc/lighttpd.conf:
...
server.modules = (
"mod_fastcgi"
)
server.username = "_lighttpd"
server.groupname = "_lighttpd"
fastcgi.server = (
".fcgi" =>
((
"socket" => "/tmp/a.out.socket",
"bin-path" => "/tmp/a.out"
))
)
...
I run as root:
spawn-fcgi -s /tmp/a.out.socket -n -u _lighttpd -g _lighttpd -U _lighttpd -G _lighttpd -- /tmp/a.out
ps aux:
...
_lighttpd 28973 0.0 0.2 448 596 p1 I+ 2:33PM 0:00.01 /tmp/a.out
...
ls -ltr /tmp
-rwxr-xr-x 1 _lighttpd _lighttpd 6992 Jul 16 13:38 a.out
srwxr-xr-x 1 _lighttpd _lighttpd 0 Jul 16 14:33 a.out.socket
Now I try to start lighttpd as root:
/usr/local/sbin/lighttpd -f /etc/lighttpd.conf
The logfile contains the following error:
2011-07-16 14:39:23: (log.c.166) server started
2011-07-16 14:39:23: (mod_fastcgi.c.1367) --- fastcgi spawning local
proc: /tmp/a.out
port: 0
socket /tmp/a.out.socket
max-procs: 4
2011-07-16 14:39:23: (mod_fastcgi.c.1391) --- fastcgi spawning
port: 0
socket /tmp/a.out.socket
current: 0 / 4
2011-07-16 14:39:23: (mod_fastcgi.c.978) bind failed for: unix:/tmp/a.out.socket-0 No such file or directory
2011-07-16 14:39:23: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
2011-07-16 14:39:23: (server.c.938) Configuration of plugins failed. Going down.
What is wrong with my configuration? I run OpenBSD 4.9.
Many thanks in advance
Toru
Lighttpd is chrooted. To solve the problem described above, comment out "server.chroot" in /etc/lighttpd.conf.
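In other words, the bind fails because, inside the chroot, /tmp/a.out.socket names a path that does not exist. Two hedged ways out (the chroot path shown is an assumption; use whatever server.chroot is set to in your config):

```
# Option 1: disable the chroot entirely
# server.chroot = "/var/lighttpd"

# Option 2: keep the chroot and make the socket path valid inside it,
# e.g. create /var/lighttpd/tmp and keep
#   "socket" => "/tmp/a.out.socket"
# which then resolves to /var/lighttpd/tmp/a.out.socket on the real filesystem
```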