Starting lighttpd with fastcgi and a local unix-domain socket fails

/etc/lighttpd.conf:
...
server.modules = (
    "mod_fastcgi"
)
server.username = "_lighttpd"
server.groupname = "_lighttpd"
fastcgi.server = (
    ".fcgi" =>
        ((
            "socket" => "/tmp/a.out.socket",
            "bin-path" => "/tmp/a.out"
        ))
)
...
I run as root:
spawn-fcgi -s /tmp/a.out.socket -n -u _lighttpd -g _lighttpd -U _lighttpd -G _lighttpd -- /tmp/a.out
ps aux:
...
_lighttpd 28973 0.0 0.2 448 596 p1 I+ 2:33PM 0:00.01 /tmp/a.out
...
ls -ltr /tmp
-rwxr-xr-x 1 _lighttpd _lighttpd 6992 Jul 16 13:38 a.out
srwxr-xr-x 1 _lighttpd _lighttpd 0 Jul 16 14:33 a.out.socket
Now I try to start lighttpd as root:
/usr/local/sbin/lighttpd -f /etc/lighttpd.conf
The logfile contains the following error:
2011-07-16 14:39:23: (log.c.166) server started
2011-07-16 14:39:23: (mod_fastcgi.c.1367) --- fastcgi spawning local
proc: /tmp/a.out
port: 0
socket /tmp/a.out.socket
max-procs: 4
2011-07-16 14:39:23: (mod_fastcgi.c.1391) --- fastcgi spawning
port: 0
socket /tmp/a.out.socket
current: 0 / 4
2011-07-16 14:39:23: (mod_fastcgi.c.978) bind failed for: unix:/tmp/a.out.socket-0 No such file or directory
2011-07-16 14:39:23: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
2011-07-16 14:39:23: (server.c.938) Configuration of plugins failed. Going down.
What is wrong with my configuration? I run OpenBSD 4.9.
Many thanks in advance
Toru

Lighttpd is chrooted. To solve the problem described above, comment out "server.chroot" in /etc/lighttpd.conf. With the chroot active, lighttpd looks for /tmp/a.out.socket inside the chroot, where it does not exist, hence the "No such file or directory" error.
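As a sketch, the relevant line looks like this (the chroot path is whatever your configuration sets; "/var/www" here is only an example):
# server.chroot = "/var/www"    # commented out so lighttpd can reach /tmp/a.out.socket
Alternatively, keep the chroot and make sure the socket path exists inside the chroot directory, since a chrooted lighttpd resolves it relative to the chroot.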

[HTCONDOR][kubernetes / k8s] : Unable to start minicondor image within k8s - condor_master not working

POST EDIT
The issue is due to the PSP (Pod Security Policy): by default, privilege escalation is not permitted for my condor user. That is why it is not working: supervisord runs as the root user and tries to write logs and start the condor collector as root rather than as another user (i.e. condor).
Description
The mini-condor base image is not starting as expected in a Kubernetes (Rancher) pod.
I am using:
this image: https://hub.docker.com/r/htcondor/mini, in a custom namespace in Rancher (k8s)
PS: the image was working perfectly on:
a local environment
a default minikube installation
I am running it as a simple deployment.
When the pod starts, the Kubernetes default log shows:
2021-09-15 09:26:36,908 INFO supervisord started with pid 1
2021-09-15 09:26:37,911 INFO spawned: 'condor_master' with pid 20
2021-09-15 09:26:37,912 INFO spawned: 'condor_restd' with pid 21
2021-09-15 09:26:37,917 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:37,924 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:38,926 INFO spawned: 'condor_master' with pid 22
2021-09-15 09:26:38,928 INFO spawned: 'condor_restd' with pid 23
2021-09-15 09:26:38,932 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:38,936 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:40,939 INFO spawned: 'condor_master' with pid 24
2021-09-15 09:26:40,943 INFO spawned: 'condor_restd' with pid 25
2021-09-15 09:26:40,947 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:40,948 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:43,953 INFO spawned: 'condor_master' with pid 26
2021-09-15 09:26:43,955 INFO spawned: 'condor_restd' with pid 27
2021-09-15 09:26:43,959 INFO exited: condor_restd (exit status 127; not expected)
2021-09-15 09:26:43,968 INFO gave up: condor_restd entered FATAL state, too many start retries too quickly
2021-09-15 09:26:43,969 INFO exited: condor_master (exit status 4; not expected)
2021-09-15 09:26:44,970 INFO gave up: condor_master entered FATAL state, too many start retries too quickly
Here is a brief summary of the commands and their output:
CMD: condor_status
output: CEDAR:6001:Failed to connect to <127.0.0.1:9618>
CMD: condor_master
output: ERROR "Cannot open log file '/var/log/condor/MasterLog'" at line 174 in file /var/lib/condor/execute/slot1/dir_17406/userdir/.tmpruBd6F/BUILD/condor-9.0.5/src/condor_utils/dprintf_setup.cpp
1) First attempt to fix the issue
I decided to customize the image, but the error is the same.
The Docker image used to try to fix the permission issue:
FROM htcondor/mini:9.2-el7
RUN condor_master
RUN chown condor:root /var/
RUN chown condor:root /var/log
RUN chown -R condor:root /var/log/
RUN chown -R condor:condor /var/log/condor
RUN chown condor:condor /var/log/condor/ProcLog
RUN chown condor:condor /var/log/condor/MasterLog
RUN chmod 775 -R /var/
Kubernetes - rancher
yaml file :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: htcondor-mini--all-in-one
  namespace: grafana-exporter
spec:
  containers:
    - image: <custom_image>
      imagePullPolicy: Always
      name: htcondor-mini--all-in-one
      resources: {}
      securityContext:
        capabilities: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
  dnsConfig: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
Here is a brief summary of the commands and their output:
CMD: condor_status
output: CEDAR:6001:Failed to connect to <127.0.0.1:9618>
CMD: condor_master
output: ERROR "Cannot open log file '/var/log/condor/MasterLog'" at line 174 in file /var/lib/condor/execute/slot1/dir_17406/userdir/.tmpruBd6F/BUILD/condor-9.0.5/src/condor_utils/dprintf_setup.cpp
ls -ld /var/
drwxrwxr-x 1 condor root 17 Nov 13 2020 /var/
ls -ld /var/log/
drwxrwxr-x 1 condor root 65 Oct 7 11:54 /var/log/
ls -ld /var/log/condor
drwxrwxr-x 1 condor condor 240 Oct 7 11:23 /var/log/condor
ls -ld /var/log/condor/MasterLog
-rwxrwxr-x 1 condor condor 3243 Oct 7 11:23 /var/log/condor/MasterLog
MasterLog content:
10/07/21 11:23:21 ******************************************************
10/07/21 11:23:21 ** condor_master (CONDOR_MASTER) STARTING UP
10/07/21 11:23:21 ** /usr/sbin/condor_master
10/07/21 11:23:21 ** SubsystemInfo: name=MASTER type=MASTER(2) class=DAEMON(1)
10/07/21 11:23:21 ** Configuration: subsystem:MASTER local:<NONE> class:DAEMON
10/07/21 11:23:21 ** $CondorVersion: 9.2.0 Sep 23 2021 BuildID: 557262 PackageID: 9.2.0-1 $
10/07/21 11:23:21 ** $CondorPlatform: x86_64_CentOS7 $
10/07/21 11:23:21 ** PID = 7
10/07/21 11:23:21 ** Log last touched time unavailable (No such file or directory)
10/07/21 11:23:21 ******************************************************
10/07/21 11:23:21 Using config source: /etc/condor/condor_config
10/07/21 11:23:21 Using local config sources:
10/07/21 11:23:21 /etc/condor/config.d/00-htcondor-9.0.config
10/07/21 11:23:21 /etc/condor/config.d/00-minicondor
10/07/21 11:23:21 /etc/condor/config.d/01-misc.conf
10/07/21 11:23:21 /etc/condor/condor_config.local
10/07/21 11:23:21 config Macros = 73, Sorted = 73, StringBytes = 1848, TablesBytes = 2692
10/07/21 11:23:21 CLASSAD_CACHING is OFF
10/07/21 11:23:21 Daemon Log is logging: D_ALWAYS D_ERROR
10/07/21 11:23:21 SharedPortEndpoint: waiting for connections to named socket master_7_43af
10/07/21 11:23:21 SharedPortEndpoint: failed to open /var/lock/condor/shared_port_ad: No such file or directory
10/07/21 11:23:21 SharedPortEndpoint: did not successfully find SharedPortServer address. Will retry in 60s.
10/07/21 11:23:21 Permission denied error during DISCARD_SESSION_KEYRING_ON_STARTUP, continuing anyway
10/07/21 11:23:21 Adding SHARED_PORT to DAEMON_LIST, because USE_SHARED_PORT=true (to disable this, set AUTO_INCLUDE_SHARED_PORT_IN_DAEMON_LIST=False)
10/07/21 11:23:21 SHARED_PORT is in front of a COLLECTOR, so it will use the configured collector port
10/07/21 11:23:21 Master restart (GRACEFUL) is watching /usr/sbin/condor_master (mtime:1632433213)
10/07/21 11:23:21 Cannot remove wait-for-startup file /var/lock/condor/shared_port_ad
10/07/21 11:23:21 WARNING: forward resolution of ip6-localhost doesn't match 127.0.0.1!
10/07/21 11:23:21 WARNING: forward resolution of ip6-loopback doesn't match 127.0.0.1!
10/07/21 11:23:22 Started DaemonCore process "/usr/libexec/condor/condor_shared_port", pid and pgroup = 9
10/07/21 11:23:22 Waiting for /var/lock/condor/shared_port_ad to appear.
10/07/21 11:23:22 Found /var/lock/condor/shared_port_ad.
10/07/21 11:23:22 Cannot remove wait-for-startup file /var/log/condor/.collector_address
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_collector", pid and pgroup = 10
10/07/21 11:23:23 Waiting for /var/log/condor/.collector_address to appear.
10/07/21 11:23:23 Found /var/log/condor/.collector_address.
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_negotiator", pid and pgroup = 11
10/07/21 11:23:23 Started DaemonCore process "/usr/sbin/condor_schedd", pid and pgroup = 12
10/07/21 11:23:24 Started DaemonCore process "/usr/sbin/condor_startd", pid and pgroup = 15
10/07/21 11:23:24 Daemons::StartAllDaemons all daemons were started
A huge thanks for reading. Hope it will help many other people.
Cause of the issue
The issue is due to the PSP (Pod Security Policy): by default, privilege escalation is not permitted for my condor user.
SOLUTION
The best solution I have found so far is to run everything as the condor user and grant the necessary permissions to that user. To do so you need to:
In supervisord.conf: run supervisord as the condor user
In supervisord.conf: put the log and socket files in /tmp
In the Dockerfile: change the owner of most of the folders to condor
In deployment.yaml: set the user ID to 64 (the condor user)
Dockerfile
FROM htcondor/mini:9.2-el7
# SET WORKDIR
WORKDIR /home/condor/
RUN chown condor:condor /home/condor
# COPY SUPERVISOR
COPY supervisord.conf /etc/supervisord.conf
# Need to run the cmd to create all dir
RUN condor_master
# FIX PERMISSION ISSUES FOR RANCHER
RUN chown -R condor:condor /var/log/ /tmp &&\
chown -R restd:restd /home/restd &&\
chmod 755 -R /home/restd
supervisord.conf:
[supervisord]
user=condor
nodaemon=true
logfile = /tmp/supervisord.log
directory = /tmp
pidfile = /tmp/supervisord.pid
childlogdir = /tmp
# the next 3 sections are for managing the daemons with supervisorctl
[unix_http_server]
file=/tmp/supervisord.sock
chown=condor:condor
chmod=0777
user=condor
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisord.sock
[program:condor_master]
user=condor
command=/usr/sbin/condor_master -f
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /var/log/condor_master.log
stderr_logfile = /var/log/condor_master.error.log
deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  containers:
    - image: <condor-image>
      imagePullPolicy: Always
      name: htcondor-exporter
      ports:
        - containerPort: 8080
          name: myport
          protocol: TCP
      resources: {}
      securityContext:
        capabilities: {}
        runAsNonRoot: false
        runAsUser: 64
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
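For completeness, a rough sketch of building the customized image and applying the deployment (the image name and registry are placeholders; the namespace is the one from the first manifest above):
docker build -t <registry>/htcondor-mini-custom:latest .
docker push <registry>/htcondor-mini-custom:latest
kubectl -n grafana-exporter apply -f deployment.yaml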

Unable to inspect docker-to-postgres traffic on the same host

There's a docker web application installed on a linux host, and it works fine. I know this docker instance connects to a postgres database on port 5432, and I'm trying to understand the traffic between docker and the postgres database by logging into the linux host.
When I ran docker ps I only saw the web application; I didn't see any docker instance for postgres.
I saw that there are postgres processes running with ps -ef | grep postgres, but I'm unable to see port 5432 listening with netstat -an | grep 5432.
The interesting part is that the /usr/lib/postgresql folder doesn't even exist, per the output below.
$ ps -ef | grep postgres
ubuntu 2622 2555 0 16:09 pts/0 00:00:00 /usr/lib/postgresql/10/bin/postgres -D /var/lib/postgresql/10/main -c config_file=/etc/postgresql/10/main/postgresql.conf
ubuntu 3249 2622 0 16:09 ? 00:00:00 postgres: 10/main: checkpointer process
ubuntu 3250 2622 0 16:09 ? 00:00:00 postgres: 10/main: writer process
ubuntu 3251 2622 0 16:09 ? 00:00:00 postgres: 10/main: wal writer process
ubuntu 3252 2622 0 16:09 ? 00:00:00 postgres: 10/main: autovacuum launcher process
ubuntu 3253 2622 0 16:09 ? 00:00:00 postgres: 10/main: stats collector process
ubuntu 3254 2622 0 16:09 ? 00:00:00 postgres: 10/main: bgworker: logical replication launcher
ubuntu 3821 2622 0 16:10 ? 00:00:00 postgres: 10/main: openvino workbench 127.0.0.1(35548) idle
I ran sudo tcpdump -i docker0 'port 5432' but I'm unable to see any traffic.
Tracing PPID of postgres PID:
PID CMD
28691 /usr/lib/postgresql/10/bin/postgres -D /var/lib/postgresql/10/main -c config_file=/etc/postgresql/10/main/postgresql.conf
PID CMD
28639 bash /opt/intel/openvino/deployment_tools/tools/workbench/docker-entrypoint.sh
PID CMD
28619 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/a129c1514c181b85e9d75d8b20d909b1540f94c39792e370ab82918a681cbcb7 -address /run/contain
PID CMD
5198 /usr/bin/containerd
PID CMD
1 /lib/systemd/systemd --system --deserialize 38
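For reference, a rough sketch of how such a parent chain can be walked with plain shell on the host (the starting PID here is the postgres PID from the ps output above):
pid=2622                                    # postgres PID from the ps output above
while [ "$pid" -gt 1 ]; do
  ps -o pid=,cmd= -p "$pid"                 # print the current process
  pid=$(ps -o ppid= -p "$pid" | tr -d ' ')  # move on to its parent
done
ps -o pid=,cmd= -p 1                        # finally, PID 1 (systemd)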
Any ideas? Please advise.
Thanks!!

Processes spawned by sidekiq are stopped when sidekiq stops

I'm doing some processing inside a job which ends up executing an external shell command. The command runs a script that takes hours to finish.
The problem is that after I start the script using spawn and detach it, the script stops executing if I shut down the sidekiq process with a kill -15 signal. This behaviour only occurs when the spawn command is fired from sidekiq, not when I do it in irb and close the console. So somehow it still seems to be bound to sidekiq, but why, and how can I avoid it?
test.sh
#!/bin/bash
for a in `seq 1000` ; do
echo "$a "
sleep 1
done
spawn_test_job.rb
module WorkerJobs
class SpawnTestJob < CountrySpecificWorker
sidekiq_options :queue => :my_jobs, :retry => false
def perform version
logfile = "/home/deployer/test_#{version}.log"
pid = spawn(
"cd /home/deployer &&
./test.sh
",
[:out, :err] => logfile
)
Process.detach(pid)
end
end
end
I enqueue the job with WorkerJobs::SpawnTestJob.perform_async(1), and if I tail test_1.log I can see my counter going up. However, when I send sidekiq the kill -15, the counter stops and the script's pid disappears.
After hours of debugging I ended up finding that systemd was causing this. The process started from inside sidekiq ends up in sidekiq's cgroup, and when systemd stops a service the default KillMode is control-group, which kills every process in that cgroup.
deployer#srv-14:~$ ps -efj | grep test.sh
UID PID PPID PGID SID C STIME TTY TIME CMD
deployer 16679 8455 16678 8455 0 12:59 pts/0 00:00:00 grep --color=auto test.sh
deployer 24904 30861 24904 30861 0 12:52 ? 00:00:00 sh -c cd /home/deployer && ./test.sh
deployer 24906 24904 24904 30861 0 12:52 ? 00:00:00 /bin/bash ./test.sh
deployer 6382 1 6382 6382 38 12:53 ? 00:02:14 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 7787 1 7787 7787 30 12:46 ? 00:04:07 sidekiq 4.2.10 my_proj [6 of 8 busy]
deployer 13680 1 13680 13680 29 12:49 ? 00:03:08 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 14372 1 14372 14372 38 12:49 ? 00:03:48 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 16719 8455 16718 8455 0 12:59 pts/0 00:00:00 grep --color=auto sidekiq
deployer 17678 1 17678 17678 38 12:50 ? 00:03:22 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 18023 1 18023 18023 32 12:50 ? 00:02:49 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 18349 1 18349 18349 34 12:43 ? 00:05:32 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 18909 1 18909 18909 34 12:51 ? 00:02:53 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 22956 1 22956 22956 39 12:01 ? 00:22:42 sidekiq 4.2.10 my_proj [8 of 8 busy]
deployer 30861 1 30861 30861 46 12:00 ? 00:27:23 sidekiq 4.2.10 my_proj [8 of 8 busy]
and
cat /proc/24904/cgroup
11:perf_event:/
10:blkio:/
9:pids:/system.slice
8:devices:/system.slice/system-my_proj\x2dsidekiq.slice
7:cpuset:/
6:freezer:/
5:memory:/
4:cpu,cpuacct:/
3:net_cls,net_prio:/
2:hugetlb:/
1:name=systemd:/system.slice/system-my_proj\x2dsidekiq.slice/my_proj-sidekiq#9.service
I fixed the problem by setting KillMode=process in my sidekiq service unit.
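As a rough sketch (the unit and instance names here are assumptions based on the cgroup path above; adjust them to your actual sidekiq units):
# /etc/systemd/system/my_proj-sidekiq@.service.d/override.conf   (path assumed)
[Service]
KillMode=process

# then reload systemd and restart the workers
sudo systemctl daemon-reload
sudo systemctl restart my_proj-sidekiq@9.service
With KillMode=process, systemd only kills the unit's main process on stop, so detached children such as test.sh keep running.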
References:
https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/
https://www.freedesktop.org/software/systemd/man/systemd.kill.html

How can I access docker data volumes on a Windows machine?

I have docker-compose.yml like this:
version: '3'
services:
  mysql:
    image: mysql
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$ROOT_PASSWORD
volumes:
  data:
And my mount point looks like /var/lib/docker/volumes/some_app/_data; I want to access the data at that mount point, but I'm not sure how to do it on a Windows machine. Maybe I can create some additional container which can pass the data from the docker virtual machine to my directory?
When I mount a folder like this:
volumes:
  - ./data:/var/lib/mysql
to use my local directory, I had no success because of a permissions issue, and I read that the "right way" is to use docker volumes.
UPD: the MySQL container is just an example. I want to use this behaviour for my codebase and use docker for local development.
For Linux containers on Windows, docker actually runs inside a Linux virtual machine, so your named volume is a mapping of a directory in that VM to a directory in the container.
So what you got as /var/lib/docker/volumes/some_app/_data is a directory inside that VM. To inspect it you can run:
docker run --rm -it -v /:/vm-root alpine:edge ls -l /vm-root/var/lib/docker/volumes/some_app/_data
total 188476
-rw-r----- 1 999 ping 56 Jun 4 04:49 auto.cnf
-rw------- 1 999 ping 1675 Jun 4 04:49 ca-key.pem
-rw-r--r-- 1 999 ping 1074 Jun 4 04:49 ca.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 client-cert.pem
-rw------- 1 999 ping 1679 Jun 4 04:49 client-key.pem
-rw-r----- 1 999 ping 1321 Jun 4 04:50 ib_buffer_pool
-rw-r----- 1 999 ping 50331648 Jun 4 04:50 ib_logfile0
-rw-r----- 1 999 ping 50331648 Jun 4 04:49 ib_logfile1
-rw-r----- 1 999 ping 79691776 Jun 4 04:50 ibdata1
-rw-r----- 1 999 ping 12582912 Jun 4 04:50 ibtmp1
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 mysql
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 performance_schema
-rw------- 1 999 ping 1679 Jun 4 04:49 private_key.pem
-rw-r--r-- 1 999 ping 451 Jun 4 04:49 public_key.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 server-cert.pem
-rw------- 1 999 ping 1675 Jun 4 04:49 server-key.pem
drwxr-x--- 2 999 ping 12288 Jun 4 04:49 sys
That runs an auxiliary container which mounts the whole root filesystem of that VM (/) into the container directory /vm-root.
To get a file out, run the container with some command that keeps it alive in the background (tail -f /dev/null in my case); then you can use docker cp:
docker run --name volume-holder -d -it -v /:/vm-root alpine:edge tail -f /dev/null
docker cp volume-holder:/vm-root/var/lib/docker/volumes/volumes_data/_data/public_key.pem .
If you want transparent SSH access to that VM, it seems that is not supported yet, as of Jun-2017; a Docker staff member said as much here.
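Along the same lines, a quick sketch for browsing that VM interactively with the same volume-mount trick (no SSH involved; everything under /vm-root is the VM's filesystem):
docker run --rm -it -v /:/vm-root alpine:edge sh
# then, inside the container:
ls -l /vm-root/var/lib/docker/volumes/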

Starting thin server on Raspberry PI on startup

I want to start the thin web server upon restart of my Raspberry Pi.
I have the required config file in /etc/thin/myapp.yml
---
chdir: "/home/pi/web-interface/current"
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: "/home/pi/web-interface/shared/tmp/sockets/log/thin.log"
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
threadpool_size: 20
servers: 1
daemonize: true
I did this to install thin as a runlevel command:
thin install
sudo /usr/sbin/update-rc.d -f thin defaults
From the second command I get the following log output:
update-rc.d: using dependency based boot sequencing
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match thin Default-Stop values (S 0 1 6)
insserv: warning: current stop runlevel(s) (0 1 6) of script `thin' overrides LSB defaults (0 1 6 S).
When I run /etc/init.d/thin start, the server starts without trouble, so there seems to be something wrong when the device boots.
This is /etc/init.d/thin:
#!/bin/sh
### BEGIN INIT INFO
# Provides: thin
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: thin initscript
# Description: thin
### END INIT INFO
# Original author: Forrest Robertson
# Do NOT "set -e"
# DAEMON=/home/pi/.rvm/gems/ruby-2.1.0/bin/thin
DAEMON=/home/pi/.rvm/wrappers/raspberrypi/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
say "Starting thin"
$DAEMON start --all $CONFIG_PATH
;;
stop)
say "Stopping thin"
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
:
Now my server does not start up properly, even though I have the following entry in my boot log:
Sat Mar 1 08:19:45 2014: [start] /etc/thin/myapp.yml ...
Sat Mar 1 08:19:52 2014: [....] Starting NTP server: ntpd^[[?25l^[[?1c^[7^[[1G[^[[32m ok ^[[39;49m^[8^[[?25h^[[?0c.
Sat Mar 1 08:19:54 2014: [....] Starting OpenBSD Secure Shell server: sshd^[[?25l^[[?1c^[7^[[1G[^[[32m ok ^[[39;49m^[8^[[?25h^[[?0c.
Sat Mar 1 08:19:56 2014: Starting server on 0.0.0.0:3000 ...
Sat Mar 1 08:19:56 2014:
Try removing the S from this line:
# Default-Stop: S 0 1 6
There is also something called crontab. Maybe it can help you start thin when your Raspberry Pi boots.
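As a rough sketch of the crontab approach (the thin wrapper path and config directory are taken from the init script above; adjust them to your setup):
# edit the pi user's crontab
crontab -e
# add an @reboot entry such as:
@reboot /home/pi/.rvm/wrappers/raspberrypi/thin start --all /etc/thin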
