Installing a Tcl application as a Windows Service error

I'm trying to install a Tcl program as a service on my Windows machine using the TclDevKit's TclServiceManager. I'm following the guide here step by step and yet I am experiencing a lot of issues.
If I try to use my raw .tcl file to create the service, I get the following error:
Error 1053: The service did not respond to the start or control request in a timely fashion.
I've followed a solution for this issue here, giving the program more time to start up before the Service Control Manager terminates it, but to no avail.
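(For reference, that solution amounts to raising the Service Control Manager startup timeout in the registry and rebooting; roughly the following, where the 180000 ms value is just an example:)
rem Requires an elevated prompt; a reboot is needed for it to take effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 180000 /f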
Then I decided to try and wrap the program using TclApp and see if that worked. Like the guide says, I used the base-tclsvc-win32-ix86.exe prefix file located in my TclDevKit bin directory. Installing the service that way, and then trying to run it resulted in the following error:
Windows could not start the <service name> service on Local Computer.
Error 1067: The process terminated unexpectedly.
There wasn't much information at all that I could find by googling this error. The only Stack Overflow post on it is this one. So I tried installing the service manually through the command prompt using <TheProgram>.exe <Service Name> -install and tried running it - still the same error.
Then I tried to see if I could get any useful information by running <TheProgram>.exe <Service Name> -debug and interestingly enough I got the following output:
Debugging <Service Name>.
InitTypes: failed to find the DictUpdateInfo AuxData type
abnormal program termination
Googling InitTypes: failed to find the DictUpdateInfo AuxData type leads me nowhere; however, it seems to be something Tcl-related.
Finally, if it means anything, the source code for the program I was trying to install as a service is some simple web server code:
proc Serve {chan addr port} {
    fconfigure $chan -translation auto -buffering line
    # Read the request line, e.g. "GET /index.html HTTP/1.0"
    set line [gets $chan]
    set path [file join . [string trimleft [lindex $line 1] /]]
    if {$path eq "."} {set path ./index.html}
    if {[catch {
        set f1 [open $path]
    } err]} {
        puts $chan "HTTP/1.0 404 Not Found"
    } else {
        puts $chan "HTTP/1.0 200 OK"
        puts $chan "Content-Type: text/html"
        puts $chan ""
        puts $chan [read $f1]
        close $f1
    }
    close $chan
}
# On first load, start listening; when re-sourced with reload set, skip.
if {![info exists reload]} {
    set sk [socket -server Serve 3000]
    puts "Server listening on port 3000"
    vwait forever
} else {
    unset reload
}
To check whether the source code was the problem, I tried another, simpler example that simply creates a file in a particular directory:
set filePath "C:/some/path/here"
set fileName "Test.txt"
set file [open [file join $filePath $fileName] w]
puts $file "Hello, World"
close $file
Both programs work if you simply source them from tclsh86.exe, but they give the above errors when run as services, unwrapped and wrapped respectively.
Any ideas?

Related

PowerShell: Issue redirecting output from error stream when using Docker

I am working on a set of build scripts which are called from an Ubuntu-hosted CI environment. The PowerShell build script calls Jest via react-scripts via npm. Unfortunately Jest doesn't use stderr correctly and writes non-errors to the stream.
I have redirected the error stream using 3>&1 2>&1 and this works fine from plain PowerShell Core ($LASTEXITCODE is 0 after running, and no content from stderr is written in red).
However, when I introduce Docker via docker run, the build script misbehaves and prints the line that should have been redirected from the error stream in red (and crashes), i.e. something like: docker : PASS src/App.test.js. Error: Process completed with exit code 1.
Can anyone suggest what I am doing wrong? I'm a bit stumped. I include the sample PowerShell call below:
function Invoke-ShellExecutable
{
    param (
        [ScriptBlock]
        $Command
    )
    $Output = Invoke-Command $Command -NoNewScope | Out-String
    if ($LASTEXITCODE -ne 0) {
        $CmdString = $Command.ToString().Trim()
        throw "Process [$($CmdString)] returned a failure status code [$($LASTEXITCODE)]. The process may have outputted details about the error."
    }
    return $Output
}
Invoke-ShellExecutable {
    ($env:CI = "true") -and (npm run test:ci)
} 3>&1 2>&1
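For context, here is a minimal sketch of the stream merge I'm relying on (the function name is contrived); run on its own, outside Docker, it behaves as I expect:
function Write-MixedStreams {
    Write-Output  "stdout line"
    Write-Warning "warning line"   # warning stream (3)
    Write-Error   "error line"     # error stream (2)
}
# Merging streams 3 and 2 into stdout captures everything in $merged
# and nothing is rendered in red on the console:
$merged = Write-MixedStreams 3>&1 2>&1 | Out-String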

Batch file to check website status from a text file and restart service based on string

I need some batch guru to assist me in getting this resolved. We monitor the response from our websites using wget, writing the output to a couple of files. When a site is down we get the following response in test1.txt:
Connecting to 10.x.x.x:443... failed: Bad file descriptor.
whilst when the site is running, the response in test2.txt is:
Connecting to 10.x.x.x:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
I do not see any common pattern in the two outputs above on which I can base the logic. I need some assistance in determining, from the outputs above, whether:
if the website is running, do nothing
if the website is down, start service.
Note, we need to do this only on the basis of the output from these files.
Tried the provided solution but it didn't work:
TestScript>wget-1.14.exe --spider --no-check-certificate https://somesite | find "Bad file descriptor" 1>nul
Spider mode enabled. Check if remote file exists.
--2015-10-08 18:15:21-- https://somesite
Connecting to 10.x.x.x:443... failed: Bad file descriptor.
TestScript>if errorlevel 1 (echo site is up ) else (echo site is down )
site is up
Pipe the output of wget to find to look for Bad file descriptor and then use errorlevel:
wget --spider http://someurl 2>&1 | find "Bad file descriptor" >nul
if errorlevel 1 (
    echo site is up
) else (
    echo site is down
)
2>&1 redirects the messages into standard output so that they can be piped to find
--spider makes wget only check the URL without saving the result
Alternatively use the file you already have:
if exist test1.txt find "Bad file descriptor" test1.txt >nul
if not errorlevel 1 (echo start the service)
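Putting it together with the service start the question asks for might look like the following sketch; the service name "MyWebService" is a placeholder:
wget --spider http://someurl 2>&1 | find "Bad file descriptor" >nul
if errorlevel 1 (
    echo site is up
) else (
    rem site is down - start the service; substitute your real service name
    net start "MyWebService"
)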

Error while executing Lua script for Redis server

I was following this simple tutorial to try out a simple Lua script:
http://www.redisgreen.net/blog/2013/03/18/intro-to-lua-for-redis-programmers/
I created a simple hello.lua file with these lines
local msg = "Hello, world!"
return msg
And I tried running this simple command:
EVAL "$(cat /Users/rsingh/Downloads/hello.lua)" 0
And I am getting this error:
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
I can't find what is wrong here, and I haven't been able to find anyone who has come across this.
Any help would be deeply appreciated.
Your problem comes from the fact you are executing this command from an interactive Redis session:
$ redis-cli
127.0.0.1:6379> EVAL "$(cat /path/to/hello.lua)" 0
(error) ERR Error compiling script (new function): user_script:1: unexpected symbol near '$'
Within such a session you cannot use common command-line tools like cat et al. (here cat is used as a convenient way to get the content of your script in-place). In other words: you send "$(cat /path/to/hello.lua)" as a plain string to Redis, which is not Lua code (of course), and Redis complains.
To execute this sample you must stay in the shell:
$ redis-cli EVAL "$(cat /path/to/hello.lua)" 0
"Hello, world!"
If you are coming from Windows and trying to run a Lua script, you should use this format:
redis-cli --eval script.lua
Run this from the folder where your script is located; it will load a multi-line file and execute it.
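If your script also takes keys and arguments, note that --eval separates the two groups with a comma; the names below are placeholders:
redis-cli --eval myscript.lua somekey otherkey , firstarg secondarg
Inside the script these arrive as KEYS[1], KEYS[2] and ARGV[1], ARGV[2].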
On the off chance that anyone's come to this from Windows instead, I found I had to do a lot of juggling to achieve the same effect. I had to do this:
echo "local msg = 'Hello, world!'; return msg" > hello.lua
for /F "delims=" %i in ('type hello.lua') do @set cmd=%i
redis-cli eval "%cmd%" 0
.. if you want it saved as a file, although you'll have to have all the content on one line. If you don't, just roll the content into a set command:
set cmd="local msg = 'Hello, world!'; return msg"
redis-cli eval "%cmd%" 0

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which had practically the same problem I'm having with check_url, I decided to open a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com; echo $?
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I run the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name               {my-shared-web-server}
    service_description     URL: somedomain.com
    check_command           check_url!somedomain.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
I was making things WAY too complicated.
The built-in / installed by default plugin, check_http, can accomplish what I wanted and more. Here's how I have accomplished this:
My Service Definition:
define service{
    host_name               myers
    service_description     URL: my-url.com
    check_command           check_http_url!http://my-url.com
    max_check_attempts      5
    check_interval          3
    retry_interval          1
    check_period            24x7
    notification_interval   30
    notification_period     workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
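To sanity-check the command outside Nagios, you can run the plugin by hand with the macros substituted; a sketch, assuming the stock plugin path (the address is a placeholder):
/usr/local/nagios/libexec/check_http -I 192.0.2.10 -u http://my-url.com
echo $?    # 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN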
A better way to monitor URLs is by using WebInject, which can be used with Nagios.
The problem below occurs because you don't have the Perl package utils.pm; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
curl -Is $URL -k | grep HTTP | cut -d ' ' -f2
$URL is what you pass to the script as a parameter.
Then check the result: if the code is greater than 399 you have a problem; otherwise everything is OK. Finally, set the right exit code and message for Nagios, as sketched below.
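A minimal sketch of such a plugin, assuming curl is installed and taking the URL as its first parameter (all names are illustrative):
#!/bin/sh
# Hypothetical Nagios plugin: check_url_code <url>
URL="$1"
CODE=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)
if [ -z "$CODE" ] || [ "$CODE" -gt 399 ]; then
    echo "CRITICAL - $URL answered with HTTP code '$CODE'"
    exit 2
fi
echo "OK - $URL answered with HTTP code $CODE"
exit 0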

PHP CLI doesn't use stderr to output errors

I'm running the PHP CLI through an NSTask on macOS, but this question is more about the CLI itself.
I'm listening to the stderr pipe, but nothing is output there no matter what file I try to run:
If the file type is not plain text, stdout is just set to ?.
If the file is a php script with errors, the error messages are still printed to stdout.
Is there a switch to the interpreter to handle errors through stderr? Do I have an option to detect errors other than parsing stdout?
The display_errors directive (which can be set anywhere) optionally takes the value "stderr" to send errors to stderr instead of stdout, or error output can be disabled completely. Quoting from the PHP manual entry:
Value "stderr" sends the errors to stderr instead of stdout. The value is available as of PHP 5.2.4.
Alternatively, if you're using the command-line interface and you want to output the errors yourself, you can re-use the command-line input/output streams:
fwrite(STDERR, 'error message');
Here STDERR is an already opened stream to stderr.
Alternatively, if you want to do it just for this script and not only in the CLI, you can open a file handle to php://stderr and write the error messages there.
$fe = fopen('php://stderr', 'w');
fwrite($fe, 'error message');
If you want the error messages emitted by the PHP interpreter to go to the stderr pipe, you must set display_errors to stderr.
This is required so that error messages can be parsed properly once you return from the PHP realm to the shell environment. You still need exit(1), or whatever integer, to return an exit status code from PHP to the shell.
fwrite(STDERR, 'error message'); //output message into 2> buffer
exit(0x0a); //return error status code to shell
Then, your crontab entry will look like:
30 3 * * * /usr/bin/php /full/path/to/phpFile.php >> /logdir/fullpath/journal.log 2>> /logdir/fullpath/error_journal.log
You can also use file_put_contents() with "php://stderr" to output to standard error, like:
php -r 'file_put_contents("php://stderr", "Hiya, PHP!\n"); echo "Bye!\n";' 1>/dev/null
which outputs "Hiya, PHP!\n" to standard error and nothing to standard output when executed in a Bash shell.
