Supervisord as Windows Service on Cygwin

I am attempting to run Celery as a Windows service using Supervisord. I followed the configuration laid out on the Celery site and [here][1]. I have set up a virtual environment to run supervisord through Cygwin. I have highlighted the lines I think are most important (with **). It appears supervisord and RabbitMQ are working; the problem is with Celery.
I set up the service with the commands:
$ cygrunsrv --install supervisord --path /usr/bin/python --args "/usr/bin/supervisord -n -c /usr/etc/supervisord.conf"
$ supervisord
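As a side note, the registered service can also be controlled through cygrunsrv itself rather than by launching supervisord by hand; a minimal sketch, assuming the service name used above:
$ cygrunsrv --start supervisord
$ cygrunsrv --query supervisord
The --query call reports the current state of the service, which helps to tell a failure of the cygrunsrv wrapper apart from a failure of supervisord itself.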
UPDATED: I now have the following in my supervisord.log file:
2014-08-07 12:46:40,676 INFO exited: celery (exit status 1; not expected)
2014-08-07 12:47:07,187 INFO Increased RLIMIT_NOFILE limit to 1024
2014-08-07 12:47:07,238 INFO RPC interface 'supervisor' initialized
2014-08-07 12:47:07,251 INFO daemonizing the supervisord process
2014-08-07 12:47:07,253 INFO supervisord started with pid 7508
2014-08-07 12:47:08,272 INFO spawned: 'celery' with pid 8056
**2014-08-07 12:47:08,833 INFO success: celery entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)**
The config file is:
[inet_http_server] ; inet (TCP) server disabled by default
port=127.0.0.1:8072 ; (ip_address:port specifier, *:port for all iface)
username = user
password = 123
[supervisord]
logfile= /home/HBA/venv/logFiles/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;user=HBA ; (default is current user, required if root)
childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=http://127.0.0.1:8072 ; use an http:// url to specify an inet socket
[program:celery]
command= celery worker -A runLogProject --loglevel=INFO ; the program (relative uses PATH, can take args)
directory= /home/HBA/venv/runLogProject
environment=PATH="/home/HBA/venv/;/home/HBA/venv/Scripts/"
numprocs=1
stdout_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stdout log path, NONE for none; default AUTO
stderr_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stderr log path, NONE for none; default AUTO
autostart=true ; start at supervisord start (default: true)
autorestart=true ; whether/when to restart (default: unexpected)
startsecs=0
stopwaitsecs=1000
killasgroup=true
My celery log file gives me:
**[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-4' pid:12284 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-3' pid:4432 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-2' pid:9120 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-1' pid:6280 exited with 'signal -1'**
C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2014-08-07 19:47:08,822: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
**[2014-08-07 19:47:08,944: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2014-08-07 19:47:08,954: INFO/MainProcess] mingle: searching for neighbors
[2014-08-07 19:47:09,963: INFO/MainProcess] mingle: all alone**
C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] celery@CORONADO ready.

I solved my issue using the following command: /home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
My biggest issue was an unfamiliarity with virtual environments. I needed to make sure the files were in the correct folders within the venv.
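For anyone hitting the same thing, a minimal sketch of what the corrected [program:celery] section might look like, assuming the same paths as above (the key change is pointing command at the celery executable inside the venv instead of relying on PATH):
[program:celery]
command=/home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
directory=/home/HBA/venv/runLogProject
stdout_logfile=/home/HBA/venv/logFiles/%(program_name)s/worker.log
stderr_logfile=/home/HBA/venv/logFiles/%(program_name)s/worker.log
autostart=true
autorestart=true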

Related

Yocto in Docker yields pseudo inode errors

We are currently building an embedded Linux OS using Yocto inside a Docker container. All persistent directories are mounted as volumes.
This is accomplished by generating a conf/site.conf that sets those directories:
DL_DIR="/artifacts/downloads"
TMPDIR="/artifacts/tmp"
SSTATE_DIR="/artifacts/sstate_cache"
PERSISTENT_DIR="/artifacts/persistent"
DEPLOY_DIR_IMAGE="/images"
DEPLOY_DIR_IPK="/ipk"
The image is therefore run with
docker run --rm \
-v /opt/yocto/projectname:/artifacts \
-v /opt/deploy/projectname/ipk:/ipk \
-v /opt/deploy/projectname/images:/images \
-it <container>
All of this works fine: the output is deployed as expected and everything works great.
However, upon rebuilding various recipes due to updates, we frequently see Yocto's pseudo build environment abort()ing. Most of the time it is an rm or a tar command being killed.
Almost all errors are inode path mismatches like
path mismatch [1 link]: ino 23204894 db '/ipk/aarch64/glibc-charmap-jis-c6229-1984-a_2.35-r0_aarch64.ipk' req '/ipk/aarch64/locale-base-is-is_2.35-r0.1_aarch64.ipk'.
dir err : 107508467 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-nb-no.iso-8859-1/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-sgs-lt/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
Child process exit status 4: lock_held
Couldn't obtain lock: Resource temporarily unavailable.
lock already held by existing pid 3365057.
(I have appended the error logs of this example at the end of this post.)
or just plain
path mismatch [1 link]: ino 23200106 db '/ipk/aarch64/libcap-ng-doc_0.8.2-r0_aarch64.ipk' req '/ipk/aarch64/libcap-ng-doc_0.8.2-r0.1_aarch64.ipk'.
Setup complete, sending SIGUSR1 to pid 2167709.
When we were building the same project natively, without Docker, we never saw errors like this, so we assume there are compatibility issues between Docker and pseudo. We have already tried Docker's devicemapper and overlay2 storage drivers.
The current workaround is to delete the affected files manually, but this mostly leads to other problems down the line.
We are out of ideas about where to look to solve the problem; no Yocto resources regarding pseudo errors were of any help.
Is there any hint for debugging these errors in a meaningful way, or do we have to refactor the Docker builds somehow to prevent these pseudo errors?
Logs
Bitbake output
DEBUG: Hardlink test failed with [Errno 18] Invalid cross-device link: '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/deploy-ipks/aarch64/glibc-localedata-tk-tm_2.35-r0.1_aarch64.ipk' -> '/ipk/testfile'
ERROR: Error executing a python function in exec_func_python() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_func_python() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:sstate_task_postfunc(d)
0003:
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 822, function: sstate_task_postfunc
0818:
0819: sstateinst = d.getVar("SSTATE_INSTDIR")
0820: d.setVar('SSTATE_FIXMEDIR', shared_state['fixmedir'])
0821:
*** 0822: sstate_installpkgdir(shared_state, d)
0823:
0824: bb.utils.remove(d.getVar("SSTATE_BUILDDIR"), recurse=True)
0825:}
0826:sstate_task_postfunc[dirs] = "${WORKDIR}"
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 418, function: sstate_installpkgdir
0414:
0415: for state in ss['dirs']:
0416: prepdir(state[1])
0417: bb.utils.rename(sstateinst + state[0], state[1])
*** 0418: sstate_install(ss, d)
0419:
0420: for plain in ss['plaindirs']:
0421: workdir = d.getVar('WORKDIR')
0422: sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
File: '/yoctoagl/external/poky/meta/classes/sstate.bbclass', lineno: 343, function: sstate_install
0339:
0340: # Run the actual file install
0341: for state in ss['dirs']:
0342: if os.path.exists(state[1]):
*** 0343: oe.path.copyhardlinktree(state[1], state[2])
0344:
0345: for postinst in (d.getVar('SSTATEPOSTINSTFUNCS') or '').split():
0346: # All hooks should run in the SSTATE_INSTDIR
0347: bb.build.exec_func(postinst, d, (sstateinst,))
File: '/yoctoagl/external/poky/meta/lib/oe/path.py', lineno: 134, function: copyhardlinktree
0130: s_dir = os.getcwd()
0131: cmd = 'cp -afl --preserve=xattr %s %s' % (source, os.path.realpath(dst))
0132: subprocess.check_output(cmd, shell=True, cwd=s_dir, stderr=subprocess.STDOUT)
0133: else:
*** 0134: copytree(src, dst)
0135:
0136:def copyhardlink(src, dst):
0137: """Make a hard link when possible, otherwise copy."""
0138:
File: '/yoctoagl/external/poky/meta/lib/oe/path.py', lineno: 94, function: copytree
0090: # This way we also preserve hardlinks between files in the tree.
0091:
0092: bb.utils.mkdirhier(dst)
0093: cmd = "tar --xattrs --xattrs-include='*' -cf - -S -C %s -p . | tar --xattrs --xattrs-include='*' -xf - -C %s" % (src, dst)
*** 0094: subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
0095:
0096:def copyhardlinktree(src, dst):
0097: """Make a tree of hard links when possible, otherwise copy."""
0098: bb.utils.mkdirhier(dst)
File: '/usr/lib/python3.9/subprocess.py', lineno: 424, function: check_output
0420: else:
0421: empty = b''
0422: kwargs['input'] = empty
0423:
*** 0424: return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
0425: **kwargs).stdout
0426:
0427:
0428:class CompletedProcess(object):
File: '/usr/lib/python3.9/subprocess.py', lineno: 528, function: run
0524: # We don't call process.wait() as .__exit__ does that for us.
0525: raise
0526: retcode = process.poll()
0527: if check and retcode:
*** 0528: raise CalledProcessError(retcode, process.args,
0529: output=stdout, stderr=stderr)
0530: return CompletedProcess(process.args, retcode, stdout, stderr)
0531:
0532:
Exception: subprocess.CalledProcessError: Command 'tar --xattrs --xattrs-include='*' -cf - -S -C /artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/deploy-ipks -p . | tar --xattrs --xattrs-include='*' -xf - -C /ipk' returned non-zero exit status 134.
Subprocess output:
abort()ing pseudo client by server request. See https://wiki.yoctoproject.org/wiki/Pseudo_Abort for more details on this.
Check logfile: /artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/pseudo//pseudo.log
Aborted (core dumped)
DEBUG: Python function sstate_task_postfunc finished
pseudo.log
debug_logfile: fd 2
pid 1234878 [parent 1234771], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 1234771.
db cleanup for server shutdown, 16:26:33.548
memory-to-file backup complete, 16:26:33.548.
db cleanup finished, 16:26:33.548
debug_logfile: fd 2
pid 997357 [parent 997320], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 997320.
db cleanup for server shutdown, 16:52:05.287
memory-to-file backup complete, 16:52:05.288.
db cleanup finished, 16:52:05.288
debug_logfile: fd 2
pid 30407 [parent 30405], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 30405.
dir err : 20362030 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/locale-base-kab-dz/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/locale-base-fr-ca.iso-8859-1/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
debug_logfile: fd 2
pid 1901634 [parent 1901625], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 1901625.
db cleanup for server shutdown, 10:40:12.401
memory-to-file backup complete, 10:40:12.402.
db cleanup finished, 10:40:12.402
debug_logfile: fd 2
pid 3365057 [parent 3364988], doing new pid setup and server start
Setup complete, sending SIGUSR1 to pid 3364988.
debug_logfile: fd 2
pid 3365111 [parent 3365055], doing new pid setup and server start
lock already held by existing pid 3365057.
Couldn't obtain lock: Resource temporarily unavailable.
Child process exit status 4: lock_held
dir err : 107508467 ['/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-nb-no.iso-8859-1/CONTROL'] (db '/artifacts/tmp/work/aarch64-agl-linux/glibc-locale/2.35-r0/packages-split/glibc-binary-localedata-sgs-lt/CONTROL/control') db mode 0100644, header mode 040755 (unlinking db)
path mismatch [1 link]: ino 23204894 db '/ipk/aarch64/glibc-charmap-jis-c6229-1984-a_2.35-r0_aarch64.ipk' req '/ipk/aarch64/locale-base-is-is_2.35-r0.1_aarch64.ipk'.
db cleanup for server shutdown, 10:41:32.164
memory-to-file backup complete, 10:41:32.164.
db cleanup finished, 10:41:32.164

uwsgi upgrade to python3.7 to fix ImportError: No module named 'encodings'

Honestly, I have no idea what I am doing, so be gentle with me. I am trying to use uwsgi to run my Django application on an AWS Ubuntu instance. I have a virtual environment running Python 3.7, but when I try to run uwsgi, I get this in the logs:
*** Starting uWSGI 2.0.14 (64bit) on [Sun Jan 5 14:51:32 2020] ***
compiled with version: 5.4.0 20160609 on 20 October 2016 05:56:34
os: Linux-4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018
nodename: ip-172-31-41-139
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
chdir() to /home/ubuntu/web/graff
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 3804
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /home/ubuntu/web/graffuwsgi.sock fd 3
Python version: 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609]
Set PythonHome to /home/ubuntu/.virtualenvs/graff
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ImportError: No module named 'encodings'
Here is my uwsgi.conf
# file: /etc/init/graffuwsgi.conf
description "uWSGI server for graff"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/local/bin/uwsgi --home /home/ubuntu/web/graff/ --socket /home/ubuntu/web/graffuwsgi.sock --chmod-socket=666 --module=graff.wsgi --pythonpath /home/ubuntu/web -H /home/ubuntu/.virtualenvs/graff --logto /home/ubuntu/web/logs/graffuwsgi.log --chdir=/home/ubuntu/web/graff --chmod-socket=666
It seems like python3.5 just doesn't work anymore. I feel like I've had to replace python3.5 with 3.7 in several places lately to fix various bugs, and I have it in my head that if I can get uwsgi to run python3.7 instead of 3.5 then that will solve this error too. Anyway, any help is much appreciated.
It looks like your uwsgi was compiled with a different Python version; make sure you compile it with Python 3.5:
PYTHON=python3.5 uwsgi --build-plugin "/usr/src/uwsgi/plugins/python python35"
mv python35_plugin.so /usr/lib/uwsgi/plugins/python35_plugin.so
chmod 644 /usr/lib/uwsgi/plugins/python35_plugin.so
you can follow this guide:
https://www.paulox.net/2017/04/04/how-to-use-uwsgi-with-python3-6-in-ubuntu/
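Once the plugin is built and copied into place, it still has to be loaded explicitly when uwsgi starts; a minimal sketch of the relevant ini lines, assuming the plugin name and directory from the build steps above:
[uwsgi]
plugins-dir = /usr/lib/uwsgi/plugins
plugin = python35
home = /home/ubuntu/.virtualenvs/graff
The same can be done on the command line with --plugins-dir and --plugin.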
The source of the error is the PythonHome (pyhome, venv, home) setting.
See the official Python docs on environment variables.
Is your path /home/ubuntu/.virtualenvs/graff suitable for Python's requirements?
In short:
Use pythonpath (pp) and let the system find the modules.
You can repeat the option several times in your uwsgi config to customize the module search, e.g.:
pythonpath = /opt/web2py/
pythonpath = /opt/anaconda/lib/python3.9/site-packages/
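Applied to the setup from the question, this might look like the following; the site-packages path inside the venv is an assumption based on the Python 3.7 virtualenv mentioned above:
pythonpath = /home/ubuntu/web
pythonpath = /home/ubuntu/.virtualenvs/graff/lib/python3.7/site-packages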
I had the same error, and it was fixed by editing the uwsgi vassal config.
I think the core of the problem is that virtual envs are created with symlinks or have some inner relative paths, so an isolated process inside uwsgi could not find the modules.

Supervisord keeps restarting a process that exits with status zero

I am working on having supervisord send a curl request on start. However, despite my best efforts (I set autorestart to false), it keeps re-running the script.
[program:slack-client]
priority=99
autorestart=false
command=bash -c 'SOME BASH COMMAND'
The error that keeps repeating is
INFO exited: slack-client (exit status 0; not expected)
I faced similar behavior with rsyslog. This workaround helped me (but I'm not sure if that's exactly your case):
[program:rsyslog]
command = rsyslogd -n -c3
startsecs = 5
stopwaitsecs = 5
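For the slack-client program in the question, the same knobs should apply: the script exits almost immediately, so unless startsecs allows an instant exit, supervisord treats the exit as a failed start and retries it regardless of autorestart. A minimal sketch, assuming the command really is meant to run once and exit:
[program:slack-client]
priority=99
command=bash -c 'SOME BASH COMMAND'
startsecs=0 ; an immediate clean exit still counts as a successful start
autorestart=false ; do not re-run the one-shot after it completes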

Query regarding ZooKeeper Windows start/stop API, using ZK as a Windows service (using prunsrv)

I am using ZooKeeper (3.3.3) in my product.
While working with ZooKeeper on Windows, I am creating a service (using prunsrv).
I have a few queries and issues; I have listed them all below.
Issues:
1) zkServer.cmd didn't start on a Windows Server 2008 machine or Windows 7 Enterprise (both 64-bit); I had to replace the following line,
java "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
to
java "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%"
and it worked. Could it be fixed in some other way?
2) In zoo.cfg I specified the dataDir, but it still creates some other directory (bin/zookeeper-3.4.5zookeeper-3.4.5data/version-2/snapshot) and stores the snapshots there.
Queries:
1) There is no start/stop option with zkServer.cmd as there is with zkServer.sh, so basically ZooKeeper is started with zkServer.cmd, but to stop it I do a Ctrl+C/Z.
In other words, if I start the process it is a foreground process and gets killed when I do a Ctrl+C.
2) I have to create a ZooKeeper service, and I am using prunsrv to do that. I figured out the following two ways to do so.
a)
prunsrv //IS//Zookeeper --DisplayName=" ZOOKEEPER Service" --Description=" ZOOKEEPER Service" --Startup=auto --StartMode=exe --StartPath=%ZOOKEEPER_HOME% --StartImage=%ZOOKEEPER_HOME%\bin\zkServer.cmd --StopTimeout=5 --LogPath=%LOGS_DIR% --LogPrefix=zookeeper --LogLevel=Info --PidFile=zookeeper.pid --StdOutput=auto --StdError=auto
b)
cd %ZOOKEEPER_HOME%\bin\
call "%~dp0zkEnv.cmd"
set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
prunsrv //IS//Zookeeper --DisplayName=" ZOOKEEPER Service" --Description=" ZOOKEEPER Service" --Jvm="%JVM_DLL%" --JvmOptions=!JAVA_OPTS! --Environment=zookeeper.log.dir=%ZOO_LOG_DIR%;zookeeper.root.logger=%ZOO_LOG4J_PROP%; --Startup=auto --LibraryPath=%LIB_DIR% --StartMode=jvm --Classpath=%CLASSPATH% %ZOOMAIN% %ZOOCFG% --StartClass=org.apache.zookeeper.server.quorum.QuorumPeerMain --StartMethod=start --StopMode=jvm --StopClass=org.apache.zookeeper.server.quorum.QuorumPeerMain --StopMethod=stop --StopTimeout=10 --LogPath=%LOGS_DIR% --LogPrefix=zookeeper --LogLevel=Info --PidFile=zookeeper.pid --StdOutput=auto --StdError=auto
Basically, in the second approach I am doing myself all the tasks that zkServer.cmd does.
My query is about the second approach (2b): to stop the service, there should be a stop method exposed so that it is called when I stop the service.
Right now, if I create the service and start it, ZK runs fine, but stopping it hangs indefinitely, so I have to go and kill the process.
Is there some stop() for this? I see a shutdown(), but there is no description for it.
I went through the class org.apache.zookeeper.server.quorum.QuorumPeerMain; here main() is the start method (if my understanding is correct), and there should be some method to shut down the process.
I just found the following link:
https://issues.apache.org/jira/browse/ZOOKEEPER-1122, which exposes a start/stop, but the stop has some issues;
it throws the following error:
E:\zookeeper-3.4.5\zookeeper-3.4.5\bin>zkServer.cmd stop
"JMX enabled by default"
"Using config: E:\zookeeper-3.4.5\zookeeper-3.4.5\bin\..\conf\zoo.cfg"
"Stopping zookeeper ... "
ERROR: The process with PID 452 (child process of PID 4) could not be terminated.
Reason: This is critical system process. Taskkill cannot end this process.
ERROR: The process with PID 4 (child process of PID 0) could not be terminated.
Reason: Access is denied.
ERROR: The process with PID 0 (child process of PID 0) could not be terminated.
Reason: This is critical system process. Taskkill cannot end this process.
STOPED
I am running this stop command in an Administrator console.
E:\zookeeper-3.4.5\zookeeper-3.4.5\bin>tasklist | findstr "java"
java.exe 10324 Console 1 36,036 K.
Any help would be highly appreciated
So what you want to do is run ZooKeeper as a Windows service. I had the same requirement; here is the solution I chose:
Prereqs: Zookeeper & prunsrv
Set the ZOOKEEPER_SERVICE environment variable to the name of the Windows service to create, and ZOOKEEPER_HOME to the path of the ZooKeeper home folder. Then:
prunsrv.exe "//IS//%ZOOKEEPER_SERVICE%" ^
--DisplayName="Zookeeper (%ZOOKEEPER_SERVICE%)" ^
--Description="Zookeeper (%ZOOKEEPER_SERVICE%)" ^
--Startup=auto --StartMode=exe ^
--StartPath=%ZOOKEEPER_HOME% ^
--StartImage=%ZOOKEEPER_HOME%\bin\zkServer.cmd ^
--StopPath=%ZOOKEEPER_HOME%\ ^
--StopImage=%ZOOKEEPER_HOME%\bin\zkServerStop.cmd ^
--StopMode=exe --StopTimeout=5 ^
--LogPath=%ZOOKEEPER_HOME% --LogPrefix=zookeeper-wrapper ^
--PidFile=zookeeper.pid --LogLevel=Info --StdOutput=auto --StdError=auto
Add a zkServerStop.cmd file in the ZooKeeper bin folder with the following content:
@echo off
setlocal
TASKLIST /svc | findstr /c:"%ZOOKEEPER_SERVICE%" > %ZOOKEEPER_HOME%\zookeeper_svc.pid
FOR /F "tokens=2 delims= " %%G IN (%ZOOKEEPER_HOME%\zookeeper_svc.pid) DO (
set zkPID=%%G
)
taskkill /PID %zkPID% /T /F
del %ZOOKEEPER_HOME%\zookeeper_svc.pid
endlocal
(Of course it needs the two environment variables ZOOKEEPER_HOME & ZOOKEEPER_SERVICE to be set)
Hope it helps,
Guillaume.

How to determine which script is being executed in PHP-FPM process

I am running nginx + php-fpm. Is there any way I can find out what each of the PHP processes is doing? Something like the extended mod_status in Apache, where I can see that the Apache process with PID x is processing URL y. I'm not sure if the PHP process knows the URL, but getting the script path and name would be sufficient.
After some hours of googling and browsing the PHP.net bug tracking system, I found the solution. It has been available since PHP 5.3.8 or 5.3.9, but doesn't seem to be documented. Based on feature request #54577, the status page supports the option full, which displays the status of each worker separately. So, for example, the URL would be http://server.com/php-status?full and sample output looks like:
pid: 22816
state: Idle
start time: 22/Feb/2013:15:03:42 +0100
start since: 10933
requests: 28352
request duration: 1392
request method: GET
request URI: /ad.php?zID=597
content length: 0
user: -
script: /home/web/server.com/ad/ad.php
last request cpu: 718.39
last request memory: 1310720
PHP-FPM has a built-in status monitor, though it's not as detailed as mod_status. From the php-fpm config file /etc/php-fpm.d/www.conf (on CentOS 6):
; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. By default, the status page shows the following
; information:
; accepted conn - the number of request accepted by the pool;
; pool - the name of the pool;
; process manager - static or dynamic;
; idle processes - the number of idle processes;
; active processes - the number of active processes;
; total processes - the number of idle + active processes.
; The values of 'idle processes', 'active processes' and 'total processes' are
; updated each second. The value of 'accepted conn' is updated in real time.
; Example output:
; accepted conn: 12073
; pool: www
; process manager: static
; idle processes: 35
; active processes: 65
; total processes: 100
; By default the status page output is formatted as text/plain. Passing either
; 'html' or 'json' as a query string will return the corresponding output
; syntax. Example:
; http://www.foo.bar/status
; http://www.foo.bar/status?json
; http://www.foo.bar/status?html
; Note: The value must start with a leading slash (/). The value can be
; anything, but it may not be a good idea to use the .php extension or it
; may conflict with a real PHP file.
; Default Value: not set
;pm.status_path = /status
If you enable this, you can then pass the path from nginx to your socket/port for PHP-FPM and you can view the status page.
nginx.conf:
location /status {
include fastcgi_params;
fastcgi_pass unix:/var/lib/php/php-fpm.sock;
}
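With the status path enabled and that location in place, the per-worker view described in the first answer can then be requested by appending full to the query string; for example (the hostname is just a placeholder):
curl "http://server.com/status?full"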
The cgi command line is more convenient:
SCRIPT_NAME=/status \
SCRIPT_FILENAME=/status \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect 127.0.0.1:9000
You can use strace to show the scripts being run - and many other things - in real time. It's pretty verbose, but it can give you a good overall picture of what's going on:
# switch php-fpm7.0 for process you're using
sudo strace -f $(pidof php-fpm7.0 | sed 's/\([0-9]*\)/\-p \1/g')
The above will attach to the forked processes of php-fpm. Use -p to attach to a particular PID.
The above will get the script path. To get the URLs, you would look at your nginx/apache access logs.
As a side note, to see the syscalls and which ones are taking longest:
sudo strace -c -f $(pidof php-fpm7.0 | sed 's/\([0-9]*\)/\-p \1/g')
Wait a while, then hit Ctrl-C.
