How to start a background process in QNX using a mkifs script?

I have a mkifs script file with a .build extension for building a QNX .ifs image. I would like to start a process in the background when QNX boots with this image. The process is a service which waits for incoming requests and never exits.
I'm wondering how I could define my process in the .build file so that it runs in the background.

Maybe this could be your answer:
"If you specify an ampersand (&) after the command line, the program runs in the background, and Neutrino doesn't wait for the program to finish before continuing with the next line in the script. If you don't specify the ampersand, and the program doesn't exit, then the rest of the script is never executed. The system isn't fully operational until the boot script finishes."[1]
So put this in your buildfile:
[+script] .script = {
    "do-stuff" &
}
Your buildfile should already have the script section.
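For the service described in the question, the script section might look like this sketch (my-service is a placeholder for your service binary; display_msg is a built-in boot script command):
[+script] .script = {
    display_msg "starting my-service in the background"
    my-service &
    display_msg "boot script finished"
}
Because of the &, the second display_msg runs immediately and the boot script finishes while my-service keeps running.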
[1] http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/building/building_nto.html

Related

Spawning a process with {create_group=True} / set_pgid hangs when starting Docker

Given a Linux system, in Haskell GHCi 8.8.3, I can run a Docker command with:
System.Process> withCreateProcess (shell "docker run -it alpine sh -c \"echo hello\""){create_group=False} $ \_ _ _ pid -> waitForProcess pid
hello
ExitSuccess
However, when I switch to create_group=True the process hangs. The effect of create_group is to call set_pgid with 0 in the child, and pid in the parent. Why does that change cause a hang? Is this a bug in Docker? A bug in System.Process? Or an unfortunate but necessary interaction?
This isn't a bug in Haskell or a bug in Docker, but rather just the way that process groups work. Consider this C program:
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Move this process into a new process group, as create_group=True does. */
    if (setpgid(0, 0)) {
        perror("setpgid");
        return 1;
    }
    /* Replace this process with the docker command. */
    execlp("docker", "docker", "run", "-it", "alpine", "echo", "hello", (char *)NULL);
    perror("execlp");
    return 1;
}
If you compile that and run ./a.out directly from your interactive shell, it will print "hello" as you'd expect. This is unsurprising, since the shell will have already put it in its own process group, so its setpgid is a no-op. If you run it with an intermediary program that forks a child to run it (sh -c ./a.out, \time ./a.out - note the backslash, strace ./a.out, etc.), then the setpgid will put it in a new process group, and it will hang like it does in Haskell.
The reason for the hang is explained in "Job Control Signals" in the glibc manual:
Macro: int SIGTTIN
A process cannot read from the user’s terminal while it is running as a background job. When any process in a background job tries to read from the terminal, all of the processes in the job are sent a SIGTTIN signal. The default action for this signal is to stop the process. For more information about how this interacts with the terminal driver, see Access to the Terminal.
Macro: int SIGTTOU
This is similar to SIGTTIN, but is generated when a process in a background job attempts to write to the terminal or set its modes. Again, the default action is to stop the process. SIGTTOU is only generated for an attempt to write to the terminal if the TOSTOP output mode is set; see Output Modes.
When you docker run -it something, Docker will attempt to read from stdin even if the command inside the container doesn't. Since you just created a new process group, and you didn't set it to be in the foreground, it counts as a background job. As such, Docker is getting stopped with SIGTTIN, which causes it to appear to hang.
Here's a list of options to fix this:
1. Redirect the process's standard input to somewhere other than the TTY.
2. Use signal or sigaction to make the process ignore the SIGTTIN signal.
3. Use sigprocmask to block the process from receiving the SIGTTIN signal.
4. Call tcsetpgrp(0, getpid()) to make your new process group be the foreground process group (note: this is the most complicated, since it will itself cause SIGTTOU, so you'd have to ignore that signal at least temporarily anyway).
Options 2 and 3 will also only work if the program doesn't actually need stdin, which is the case with Docker. When SIGTTIN doesn't stop the process, reads from stdin will still fail with EIO, so if there's actually data you want to read, then you need to go with option 4 (and remember to set it back once the child exits).
If you have TOSTOP set (which is not the default), then you'd have to repeat the fix for SIGTTOU or for standard output and standard error (except for option 4, which wouldn't need to be repeated at all).
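For example, here is a minimal sketch of option 2, applied to the test program from above (it assumes, as with Docker, that the child doesn't actually need stdin):

#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Option 2: ignore SIGTTIN. Ignored dispositions survive execlp(),
       so the exec'd docker won't be stopped when it reads from the
       terminal as a background job; its reads just fail with EIO. */
    signal(SIGTTIN, SIG_IGN);

    if (setpgid(0, 0)) {
        perror("setpgid");
        return 1;
    }
    execlp("docker", "docker", "run", "-it", "alpine", "echo", "hello", (char *)NULL);
    perror("execlp");
    return 1;
}

Run via an intermediary (e.g. sh -c ./a.out), this now prints "hello" instead of hanging.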

Ensuring all .sh curl download scripts download using gnu parallel

I'm executing the following command which executes a group of scripts with each script being a curl download.
parallel --resume-failed --joblog logshd.log {1} ::: SH/*.sh
The set of files downloaded is quite large. I've noticed some files don't download.
I hoped that the resume-failed parameter would ensure that all the downloads that fail resume and complete.
I'm not clear on whether that means I need to run the command a second time, or whether the retries happen within the single run.
From the GNU documentation:
Where --resume-failed reads the commands from the command line (and ignores the commands in the joblog), --retry-failed ignores the command line and reruns the commands mentioned in the joblog.
I'm not clear on what ignoring the command line or ignoring the commands in the joblog means. Could that be clarified?
Can --resume-failed and --retry-failed be declared within the same command and if so what is the effect of that?
Regards
Conteh
If we assume the downloads fail intermittently, then your answer is --retries 10. It will run the command up to 10 times before giving up.
--resume-failed and --retry-failed are both used when GNU Parallel has finished, and you then figure out that you want to retry some of the jobs again.
The difference between the two is in how to retry the command.
--retry-failed will run exactly the same command as failed before. It does that by looking in the joblog for the command. This is typically what you want.
--resume-failed is used if you figure out that the failing command actually needed some other parameter: i.e. GNU Parallel should not run exactly the same command, but it should run a (typically slightly changed) command with the same parameters instead.
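Putting that together with the command from the question, a sketch (logshd.log and SH/*.sh as above):

# during the run, retry each failing download up to 10 times
parallel --retries 10 --joblog logshd.log {1} ::: SH/*.sh

# after the run, rerun exactly the commands the joblog marked as failed
parallel --retry-failed --joblog logshd.log {1} ::: SH/*.sh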

Docker - Handling multiple services in a single container

I would like to start two different services in my Docker container and exit the container as soon as one of them exits. I looked at supervisor, but I can't find how to get it to quit as soon as one of the managed applications exits. It tries to restart them up to three times, as is the standard setting and then just sits there doing nothing. Is supervisor able to do this or is there any other tool for this? A bonus would be if there also was a way to let both managed programs write to stdout, tagged with their application name, e.g.:
[Program 1] Some output
[Program 2] Some other output
[Program 1] Output again
Since you asked if there was another tool: we designed and wrote a powerful replacement for supervisord that is designed specifically for Docker. It automatically terminates when all applications quit, has special service settings to control this behavior, and will redirect stdout with tagged, syslog-compatible output lines as well. It's open source, and being used in production.
Here is a quick start for Docker: http://garywiz.github.io/chaperone/guide/chap-docker-simple.html
There is also a complete set of tested base-images which are a good example at: https://github.com/garywiz/chaperone-docker, but these might be overkill and the earlier quickstart may do the trick.
I found solutions to both of my requirements by reading through the docs some more.
Exit supervisord on application exit
This can be achieved by using a custom eventlistener. I had to add the following segment into my supervisord configuration file:
[eventlistener:shutdownevent]
command=/shutdownhandler.sh
events=PROCESS_STATE_EXITED
supervisord will start the referenced script and, when the given event is triggered (PROCESS_STATE_EXITED fires after one of the managed programs exits without being restarted automatically), will send a line containing data about the event on the script's stdin.
The referenced shutdownhandler-script contains:
#!/bin/bash
# Event listener protocol: announce readiness, wait for one event line,
# then shut down supervisord and acknowledge the event.
while :
do
    echo -en "READY\n"
    read line
    kill $(cat /supervisord.pid)
    echo -en "RESULT 2\nOK"
done
The script has to indicate that it is ready by sending "READY\n" on its stdout, after which it may receive an event data line on its stdin. For my use case, upon receipt of a line (meaning one of the managed programs has exited), a SIGTERM is sent to the supervisord process, found via the pid it leaves in its pid file (in the root directory by default). For technical completeness, I also included a positive answer for the eventlistener, though that one should never matter.
Tagged output on stdout
I did this by simply starting a tail process in the background before starting supervisord, tailing the programs output log and piping the lines through ts (from the moreutils package) to prepend a tag to it. This way it shows up via docker logs with an easy way to see which program actually wrote the line.
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
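For completeness, a hypothetical entrypoint script tying both parts together could look like this (paths and program names are illustrative):

#!/bin/bash
# tag each program's log lines and forward them to docker logs
tail -fn0 /var/log/supervisor/program1.log | ts '[Program 1]' &
tail -fn0 /var/log/supervisor/program2.log | ts '[Program 2]' &

# run supervisord in the foreground; the eventlistener above will kill it
# as soon as one of the managed programs exits
exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf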

Run new ant target without killing previous target

I've got an ant target ant server that runs a Java application which logs to the console. I need to run a new ant target ant server-gui which also logs to the console. But when I run ant server the logging prevents me from running any new ant targets.
When I enter ^c (which is the only way I know of to get out of situations like that) it kills the Java application. I need both to run. What keystroke will get me out of that "input" mode and able to run new terminal commands?
UPDATE: I haven't found a direct solution for getting out of that mode I mentioned, but opening a new tab/window in the terminal does the trick. I can run as many ant commands as I'd like that way. Still looking for a good solution to get out of the "input" mode, though!
UPDATE 2: @abcdef pointed out another post that has an even more elegant solution.
There are a few ways to do this, assuming you are on *nix:
1) Run the ant command with an & at the end to tell *nix to run the command in the background.
2) Run the command with nohup at the beginning (https://en.wikipedia.org/wiki/Nohup).
3) When the process is running, press Ctrl-Z, then enter the command bg. This manually forces the command to run in the background.
I hope this helps you out.
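As a sketch, any of these frees the terminal for the second target (target names taken from the question):

# 1) background it from the start
ant server &

# 2) as above, but also survive the terminal closing
nohup ant server &

# 3) if ant server is already running in the foreground:
#    press Ctrl-Z to suspend it, then resume it in the background with
bg

# in all three cases the prompt returns, so you can now run
ant server-gui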

launch a gui program from windows console and then make it 'detach' itself

I'm trying to modify a legacy Delphi 5 app so that it can be launched either from its icon/via Explorer, or from the console (command line). When it gets launched from the console, I want the program to detach itself from the console process, so that the console can continue to execute other instructions without waiting for my program to terminate.
I want to use it in a 'batch' file, such that I might have:
@echo off
rem step 1 - do some stuff here
rem
rem step 2 - launch my app
c:\myfolder\myapp
rem
rem step 3 - do some more stuff here
and that the console process moves on to step 3 straight after launching my app in step 2.
I'm sure I've done this before, many years ago, but I'm puzzled as to what exactly I did. I don't want to write a tiny console app 'launcher' for my main Windows app - I'm 95% sure that there was a way of doing this within a 'normal' Delphi GUI app.
I guess I could use vbscript or powershell or something to 'execute' my program with some kind of 'nowait' parameter but the client is familiar with batch files and I don't really want to upset the applecart by suggesting he change his scripts or install additional stuff - I'm making changes to the executable anyway and it would be great to tick this box for him too.
Anyone? :-)
I think the START command is the one you're looking for. It starts a process separately to the console and it's part of cmd.exe so no extra software required.
But I was of the opinion that GUI apps did this anyway. Maybe Delphi is different to MSVC.
Open up a console and type "start /?".
As itowlson states in the comments, GUI applications do generally detach themselves. It's the cmd.exe shell doing the trickery: it waits for the program to finish if it's run from a cmd file.
So "notepad" from the prompt will start it in the background, but "notepad" within a cmd file will wait. Within the cmd file, you need to use:
start notepad.exe
or whatever your application is called (not notepad, presumably).
try: start "" c:\myfolder\myapp (with the empty quotes)
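Applied to the batch file from the question, that becomes the following sketch:

@echo off
rem step 1 - do some stuff here
rem step 2 - launch the app; start returns immediately
start "" c:\myfolder\myapp
rem step 3 - do some more stuff here, without waiting for myapp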
I think Microsoft has solved this problem in Windows PowerShell.
At the command prompt, even if you use "start", you can't really detach your process from cmd: if you close the cmd window, the process dies suddenly. But in Windows PowerShell, you can detach your program or command from PowerShell by default.
So, if you prefer to use Windows PowerShell instead of the Command Prompt, just do this:
PS X:\> <your command>
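If you'd rather not rely on the default behaviour, PowerShell's Start-Process cmdlet explicitly launches a program without waiting for it (path taken from the question):

PS X:\> Start-Process c:\myfolder\myapp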
Here's one way that I've found. It works quite cleanly and doesn't leave any extra cmd windows around (the recommendation to use start c:\myfolder\myapp does not work):
cmd /c dir && c:\myfolder\myapp
To quote the CMD help:
/C Carries out the command specified by string and then terminates
Note that multiple commands separated by the command separator '&&'
are accepted for string if surrounded by quotes.
Apparently it notices that the dir command terminates and exits, even though your app was launched by the same command. Chalk it up to one of Windows' vagaries.
You should use the cd command, for example:
cd \
cd myfolder
start myapp
exit
