Start multiple instances of a Windows service from WiX - windows-services

I'm currently working on an MSI and am facing some problems with starting services from the WiX project. This is the XML that copies the service executable and installs the service:
<Component Id='MatcherService' Guid='{81EC2888-DFA6-49BA-829A-5A8354D89310}' Directory='MATCHERDIR'>
  <File Id='MatchingServer.exe' Source='$(var.Matcher.TargetDir)\MatchingServer.exe'/>
  <ServiceInstall
    Id="ServiceInstaller1"
    Type="ownProcess"
    Name="Matcher1"
    DisplayName="Matching Service 1"
    Description="test"
    Start="auto"
    Account="NT AUTHORITY\NETWORK SERVICE"
    Interactive="no"
    ErrorControl="normal"
    Vital="yes">
    <util:PermissionEx
      User="Everyone"
      GenericAll="yes"
      ServiceChangeConfig="yes"
      ServiceEnumerateDependents="yes"
      ChangePermission="yes"
      ServiceInterrogate="yes"
      ServicePauseContinue="yes"
      ServiceQueryConfig="yes"
      ServiceQueryStatus="yes"
      ServiceStart="yes"
      ServiceStop="yes" />
  </ServiceInstall>
  <ServiceControl Id="StartService1" Stop="both" Remove="uninstall" Name="Matcher1" Wait="yes"/>
</Component>
This only installs the service, and when I open Services I can start it properly.
The problem I'm facing is that I want several instances of MatchingServer.exe running as services: 30 instances in total.
I tried to do it this way:
<Component Id='MatcherService' Guid='{81EC2888-DFA6-49BA-829A-5A8354D89310}' Directory='MATCHERDIR'>
  <File Id='MatchingServer.exe' Source='$(var.Matcher.TargetDir)\MatchingServer.exe'/>
  <ServiceInstall
    Id="ServiceInstaller1"
    Type="ownProcess"
    Name="Matcher1"
    DisplayName="Matching Service 1"
    Description="test"
    Start="auto"
    Account="NT AUTHORITY\NETWORK SERVICE"
    Interactive="no"
    ErrorControl="normal"
    Vital="yes">
    <util:PermissionEx
      User="Everyone"
      GenericAll="yes"
      ServiceChangeConfig="yes"
      ServiceEnumerateDependents="yes"
      ChangePermission="yes"
      ServiceInterrogate="yes"
      ServicePauseContinue="yes"
      ServiceQueryConfig="yes"
      ServiceQueryStatus="yes"
      ServiceStart="yes"
      ServiceStop="yes" />
  </ServiceInstall>
  <ServiceInstall
    Id="ServiceInstaller2"
    Type="ownProcess"
    Name="Matcher2"
    DisplayName="Matching Service 2"
    Description="test"
    Start="auto"
    Account="NT AUTHORITY\NETWORK SERVICE"
    Interactive="no"
    ErrorControl="normal"
    Vital="yes">
    <util:PermissionEx
      User="Everyone"
      GenericAll="yes"
      ServiceChangeConfig="yes"
      ServiceEnumerateDependents="yes"
      ChangePermission="yes"
      ServiceInterrogate="yes"
      ServicePauseContinue="yes"
      ServiceQueryConfig="yes"
      ServiceQueryStatus="yes"
      ServiceStart="yes"
      ServiceStop="yes" />
  </ServiceInstall>
  <ServiceControl Id="StartService1" Stop="both" Remove="uninstall" Name="Matcher1" Wait="yes"/>
  <ServiceControl Id="StartService2" Stop="both" Remove="uninstall" Name="Matcher2" Wait="yes"/>
</Component>
This gives compile errors. I did succeed in doing it from a batch file like this:
MatchingServer.exe -i 1 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 2 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 3 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 4 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 5 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 6 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 7 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 8 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 9 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 10 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 11 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 12 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 13 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 14 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 15 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 16 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 17 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 18 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 19 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 20 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 21 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 22 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 23 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 24 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 25 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 26 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 27 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 28 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 29 -l "NT AUTHORITY\NETWORKSERVICE"
MatchingServer.exe -i 30 -l "NT AUTHORITY\NETWORKSERVICE"
And I made a custom action to execute this batch file. But I want to do this "inside" the WiX project.
How can I spawn 30 service instances of the same .exe file, each with a different name, from WiX, without going through a batch file?

Windows services aren't designed to do this. If you need 30 instances of the same server, you need to create 30 unique services inside Windows.
I would recommend, though, that if each service is an identical copy, you instead launch multiple threads within a single service. If you have static classes you might need some fancy footwork with app domains, or you could spawn 30 worker executables managed by a single master executable (your service).
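If you do go the 30-unique-services route, one way to avoid hand-writing each entry is the WiX preprocessor's foreach loop. This is an untested sketch that follows the naming pattern from the question; extend the value list up to 30, and verify that your WiX version accepts multiple ServiceInstall elements in one component:

```xml
<Component Id='MatcherService' Guid='{81EC2888-DFA6-49BA-829A-5A8354D89310}' Directory='MATCHERDIR'>
  <File Id='MatchingServer.exe' Source='$(var.Matcher.TargetDir)\MatchingServer.exe'/>
  <?foreach N in 1;2;3 ?>
  <ServiceInstall
    Id="ServiceInstaller$(var.N)"
    Type="ownProcess"
    Name="Matcher$(var.N)"
    DisplayName="Matching Service $(var.N)"
    Description="test"
    Start="auto"
    Account="NT AUTHORITY\NETWORK SERVICE"
    Interactive="no"
    ErrorControl="normal"
    Vital="yes" />
  <ServiceControl Id="StartService$(var.N)" Stop="both" Remove="uninstall" Name="Matcher$(var.N)" Wait="yes"/>
  <?endforeach ?>
</Component>
```

The preprocessor runs before compilation, so the compiler sees the same expanded XML you would have written by hand; any Id collisions it reports then point at real duplicates.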

Related

exit-status: -1 when killing a process through Jenkins

I use Jenkins to build and deploy my artifacts to a server. After deploying the files I stop the service using the kill command:
kill -9 `pgrep -f service-name`
Note that the service is killed, but the Jenkins job fails with exit status -1, although this command works fine when I run it in a shell on the Linux server without Jenkins.
Why do I get a -1 exit status, and how can I kill the process on the Linux server through a Jenkins job without the build failing?
Edit: below are the logs that appear after adding /bin/bash -x to my script:
#/bin/bash -x
pid=$(pgrep -f service-name); echo "killing $pid"; kill -9 $pid;
[SSH] executing...
killing 13664
16924
16932
[SSH] completed
[SSH] exit-status: -1
Build step 'Execute shell script on remote host using ssh' marked build as failure
Email was triggered for: Failure - Any
Edit : the output of command ps -ef | grep service-name is :
ps -ef | grep service-name
[SSH] executing...
channel stopped
user 11786 11782 0 15:28 ? 00:00:00 bash -c ps -ef | grep service-name
user 11799 11786 0 15:28 ? 00:00:00 grep service-name
root 19981 11991 0 Aug15 pts/1 00:02:53 java -jar /root/service-name /spring.config.location=/root/service-name/application.properties
[SSH] completed
--- the output of a trial script:
#/bin/bash -x
ps -ef | grep service-name
pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do
ps -ef | grep $pid
kill -9 $pid
echo "kill command returns $?"
done
[SSH] executing...
channel stopped
root 56980 11991 37 11:03 pts/1 00:00:33 java -jar /root/service-name --spring.config.location=/root/service-name/application.properties
root 57070 57062 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57079 57070 0 11:05 ? 00:00:00 grep service-name
root 56980 11991 37 11:03 pts/1 00:00:33 java -jar /root/service-name --spring.config.location=/root/service-name/application.properties
root 57083 57081 0 11:05 ? 00:00:00 grep 56980
kill command returns 0
root 57070 57062 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57081 57070 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57085 57081 0 11:05 ? 00:00:00 grep 57070
kill command returns 0
root 57081 1 0 11:05 ? 00:00:00 bash -c #/bin/bash -x ps -ef | grep service-name pgrep -f "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do ps -ef | grep $pid kill -9 $pid echo "kill command returns $?" done
root 57086 57081 0 11:05 ? 00:00:00 ps -ef
root 57087 57081 0 11:05 ? 00:00:00 grep 57081
[SSH] completed
[SSH] exit-status: -1
If you want to kill any process whose full command line matches service-name, you should change your script:
#!/bin/bash
pgrep -f service-name | while read pid; do
  ps -ef | grep $pid   # so you can see what you are going to kill
  kill -9 $pid
done
The pgrep command returns a list of processes, one per line.
To get the PID list separated by spaces and call kill only once:
#!/bin/bash
kill -9 $(pgrep -f service-name -d " ")
To see which processes pgrep selects, use:
pgrep -a -f service-name
or
ps -ef | grep service-name
Use man pgrep to see all the options.
In your case the job is killed because pgrep matches the job script itself, so you should use a more specific pattern together with the -x parameter:
#!/bin/bash
pgrep -xf "java -jar /root/service-name --spring.config.location=/root/service-name/application.properties" | while read pid; do
  kill -9 $pid
done
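The -x pattern fix above can also be combined with an explicit guard so the loop can never kill the job's own shell, whatever pattern is used. A minimal sketch, where kill_matching and the pattern passed to it are illustrative names:

```shell
#!/bin/bash
# Kill every process whose full command line matches a pattern,
# but skip this script's own shell (which pgrep -f also matches
# when the pattern appears in the job's command line).
kill_matching() {
    pattern="$1"
    for pid in $(pgrep -f "$pattern"); do
        [ "$pid" = "$$" ] && continue   # never kill ourselves
        kill -9 "$pid" 2>/dev/null
    done
}
```

Called as kill_matching "service-name", this leaves the Jenkins step's shell alive, so the step can exit 0 instead of -1.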

docker private registry creation fails with following log msg: level=fatal msg="open /home/vignesh/certs/localregistry.crt: no such file or directory"

I am trying to create a private Docker registry.
The following is the certificate directory and its contents:
vignesh@vignesh-ThinkPad-E470:~/certs$ pwd
/home/vignesh/certs
vignesh@vignesh-ThinkPad-E470:~/certs$ ls -l
total 16
drwxr-xr-x 2 vignesh vignesh 4096 Jul 21 08:41 certs
-rwxrwxr-x 1 vignesh vignesh 920 Jul 21 08:41 localregistry.crt
-rwxrwxr-x 1 vignesh vignesh 712 Jul 21 08:41 localregistry.csr
-rwxrwxr-x 1 vignesh vignesh 963 Jul 21 08:41 localregistry.key
When I create the container, it is killed soon after creation and its status goes from "Up" to "Restarting":
docker run -d \
--restart=always \
--name registry3 \
-v /home/vignesh/certs:/certs \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/home/vignesh/certs/localregistry.crt \
-e REGISTRY_HTTP_TLS_KEY=/home/vignesh/certs/localregistry.key \
-p 443:443 \
registry:2
vignesh@vignesh-ThinkPad-E470:~/certs$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e165c1c3b08 registry:2 "/entrypoint.sh /etc…" 18 seconds ago Up 1 second 0.0.0.0:443->443/tcp, :::443->443/tcp, 5000/tcp registry3
vignesh@vignesh-ThinkPad-E470:~/certs$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e165c1c3b08 registry:2 "/entrypoint.sh /etc…" 19 seconds ago Restarting (1) 2 seconds ago registry3
vignesh@vignesh-ThinkPad-E470:~/certs$
On checking the logs, I see the following fatal error saying the .crt file was not found (several other non-fatal messages are also seen), but I am able to find the .crt file at the path shown in the message:
vignesh@vignesh-ThinkPad-E470:~/certs$ docker logs 5e165c1c3b08
time="2021-07-21T03:33:03.10806134Z" level=fatal msg="open /home/vignesh/certs/localregistry.crt: no such file or directory"
vignesh@vignesh-ThinkPad-E470:~/certs$ ls -l /home/vignesh/certs/localregistry.crt
-rwxrwxr-x 1 vignesh vignesh 920 Jul 21 08:41 /home/vignesh/certs/localregistry.crt
Could you please help me here?
Thanks,
Vignesh
The process inside the container sees files in the container's mount namespace, not your host. Since you mapped the directory to a different name in the container, you need to use that path:
docker run -d \
--restart=always \
--name registry3 \
-v /home/vignesh/certs:/certs \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/localregistry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/localregistry.key \
-p 443:443 \
registry:2

reload containerized fluentd configuration

Supposing my container is named fluentd, I'd expect this command to reload the config:
sudo docker kill -s HUP fluentd
Instead it kills the container.
It seems the entrypoint spawns a few processes:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/pl
5 root 0:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
13 fluent 0:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
14 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
16 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
I tried HUPping PID 13 from inside the container and it seems to work.
Docker is sending the signal to the entrypoint. If I inspect State.Pid, I see 4450. Here's the host ps:
root 4450 4432 0 18:30 ? 00:00:00 /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
root 4467 4450 0 18:30 ? 00:00:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4475 4467 0 18:30 ? 00:00:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4476 4475 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
ubuntu 4478 4476 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
Any ideas how to reload the config without a custom script to find the correct process to HUP?
I believe this command should work:
sudo docker exec fluentd pkill -1 -x fluentd
I tested it on a sleep command inside the fluentd container and it works.
In my case fluentd is running as a pod on Kubernetes.
The command that works for me is:
kubectl -n=elastic-system exec -it fluentd-pch5b -- kill --signal SIGHUP 7
where the number 7 is the process ID of fluentd inside the container, as you can see below:
root@fluentd-pch5b:/home/fluent# ps -elf
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 80 0 - 2075 - 14:42 ? 00:00:00 tini -- /fluentd/entrypoint.sh
4 S root 7 1 0 80 0 - 56225 - 14:42 ? 00:00:02 ruby /fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd -c /fluentd/etc/fluent.co
4 S root 19 7 0 80 0 - 102930 - 14:42 ? 00:00:06 /usr/local/bin/ruby -Eascii-8bit:ascii-8bit /fluentd/vendor/bundle/ruby/2.6.
4 S root 70 0 0 80 0 - 2439 - 14:52 pts/0 00:00:00 bash
0 R root 82 70 0 80 0 - 3314 - 14:54 pts/0 00:00:00 ps -elf
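To avoid hard-coding PID 7 (it can change across restarts), you can let pgrep pick the process inside the container by name. A sketch; the assumption that the supervisor is the oldest process named ruby matches the ps output above, but check it in your own image:

```shell
#!/bin/bash
# Send SIGHUP to the oldest process with the given exact name.
# With fluentd, the oldest ruby process is the supervisor, which
# re-reads its configuration on HUP.
hup_oldest() {
    pid=$(pgrep -o -x "$1") && kill -HUP "$pid"
}

# Illustrative usage against the pod from the answer:
# kubectl -n elastic-system exec fluentd-pch5b -- \
#   sh -c 'pid=$(pgrep -o -x ruby) && kill -HUP "$pid"'
```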

Setting up the docker api - modification of docker.conf

I found a video about setting up the Docker remote API by Packt Publishing.
In the video we are told to change the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=, then restart Docker for the changes to take effect.
However, after I do all that, I still can't curl localhost on that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker; if somebody could help me out here I'd be very grateful.
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
    # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
    if grep -v '^#' /etc/fstab | grep -q cgroup \
        || [ ! -e /proc/cgroups ] \
        || [ ! -d /sys/fs/cgroup ]; then
        exit 0
    fi
    if ! mountpoint -q /sys/fs/cgroup; then
        mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
    fi
    (
        cd /sys/fs/cgroup
        for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
            mkdir -p $sys
            if ! mountpoint -q $sys; then
                if ! mount -n -t cgroup -o $sys cgroup $sys; then
                    rmdir $sys || true
                fi
            fi
        done
    )
end script
script
    # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
    DOCKER=/usr/bin/$UPSTART_JOB
    DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
    if [ -f /etc/default/$UPSTART_JOB ]; then
        . /etc/default/$UPSTART_JOB
    fi
    exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
Edit 2: output of ps aux | grep docker:
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root 858 0.2 4.2 401836 21504 ? Ssl 06:12 0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant 1694 0.0 0.1 10460 936 pts/0 S+ 06:15 0:00 grep --color=auto docker
The problem
According to the output of ps aux | grep docker, the options the daemon was started with do not match the ones in your docker.conf file, so another file is being used to start the Docker daemon service.
Solution
To solve this, track down the file that contains the option "--insecure-registry 11.22.33.44:5000"; it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else entirely. Modify it accordingly with the needed options.
Then restart the daemon and you're good to go.
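The hunt for that file can be scripted. A minimal sketch; find_docker_opts is an illustrative name and the candidate paths in the usage comment are the usual suspects, which vary per distribution:

```shell
#!/bin/bash
# Print which of the given files mentions the daemon flag we are
# looking for; files that do not exist are silently skipped.
find_docker_opts() {
    grep -l -- "--insecure-registry" "$@" 2>/dev/null
}

# Typical candidates:
# find_docker_opts /etc/default/docker /etc/init/docker.conf /etc/systemd/system/docker.service
```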

rhel docker usage - start process at container start

So I have the following scenario to create my Docker image and container. The question is: how can I have my process running at container startup?
1. Create the image:
cat /var/tmp/mod_sm.tar | docker import - mod_sm_39
2. See the images:
[root@sl1cdi151 etc]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mod_sm_39 latest 1573470bfa06 2 hours ago 271 MB
mod_site_m0305 latest c029826a2253 4 days ago 53.8 MB
<none> <none> ee67b9aec2d3 4 days ago 163.4 MB
mod_site_soft latest 0933a386d56c 6 days ago 53.8 MB
mod_site_vm151 latest 4461c32e4772 6 days ago 53.8 MB
3. Create the container:
docker run -it --net=host -v /root:/root -v /usr/share/Modules:/usr/share/Modules -v /usr/libexec:/usr/libexec -v /var:/var -v /tmp:/tmp -v /bin:/bin -v /cgroup:/cgroup -v /dev:/dev -v /etc:/etc -v /home:/home -v /lib:/lib -v /lib64:/lib64 -v /sbin:/sbin -v /usr/lib64:/usr/lib64 -v /usr/bin:/usr/bin --name="mod_sm_39_c2" -d mod_sm_39 /bin/bash
4. Now, in the container, I go to my application and start the following:
[root@sl1cdi151 ven]# ./service.sh sm_start
5. Check if it's up:
[root@sl1cdi151 etc]# ps -ef | grep http
root 33 1 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 34 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 36 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 37 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 39 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 41 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 43 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 45 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 47 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
root 80 1 0 13:32 pts/2 00:00:00 grep http
So I need "./service.sh sm_start" to run when the container is started. How can I implement that? Thank you in advance.
Either specify the command as part of the docker run command:
docker run [options] mod_sm_39 /path/to/service.sh sm_start
or specify the command as part of the image's Dockerfile so that it will be run whenever the container is started without an explicit command:
# In the mod_sm_39 Dockerfile:
CMD ["/path/to/service.sh", "sm_start"]
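Since the image here was created with docker import rather than built from a Dockerfile, a third option is to bake the default command into the image metadata at import time using the --change flag. A sketch reusing the tar and image name from the question; the script path is illustrative:

```shell
# Re-import the tar with a default command stored in the image metadata,
# so every container started from it runs the service script.
cat /var/tmp/mod_sm.tar | docker import \
    --change 'CMD ["/path/to/service.sh", "sm_start"]' \
    - mod_sm_39
```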
