How to run u-boot CLI command before the boot delay timer starts? - startup

I have a GPIO pin that powers up in a random state. I need to set it to the desired state '0' as quickly as possible after a hard boot. I know the gpio command in U-Boot can do this. How can I run that CLI command immediately after the device is powered up, before the typical countdown (set by bootdelay) begins?
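For illustration, a minimal sketch of one common approach, assuming the board's U-Boot was built with preboot support (CONFIG_PREBOOT / CONFIG_USE_PREBOOT) and using 56 as a placeholder GPIO number: commands stored in the preboot environment variable run before the bootdelay countdown starts.
=> setenv preboot 'gpio clear 56'
=> saveenv
After saveenv, the gpio command runs on every subsequent power-up before the countdown is shown.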

Related

Is there a way to have a docker container wait for an external call?

I have looked for a bit on Stack Overflow for a way to have a container start up and wait for an external connection but have not seen anything.
Here is what my process looks like currently:
A non-Docker external process reaches out at X interval and tells the system to run a command.
The command runs.
The system should remain idle until the next interval.
Now, I have seen a few options using --wait or sleep, but I would think those would not allow the container to receive the connection.
I also looked at the wait-for-container script that is often recommended, but in this case I need the container to wait for a script to call it at undefined intervals.
I have tried having the container just run the help command for my process, but the container then fails after a while and that makes it a mess to find anything.
Additionally, I have tried to have the container start with no command, just to run the base OS and wait for the call, but that did not work either.
I was looking at this wrong.
I ended up just running it like any other web server and database server.
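A minimal sketch of that approach (image name, port, and task script are placeholders): keep the container alive as a long-running service and let the external process call into it at each interval.
# Run the container detached as a long-lived service listening on a port.
docker run -d --name my-worker -p 8080:8080 my-worker-image
# At each interval the external process triggers the work, either over the exposed port or with:
docker exec my-worker /usr/local/bin/run-task.sh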

How to stop docker container automatically?

I am trying to stop the docker container automatically after 1 hour. I mean, if there is no process going on or the container is idle for 1 hour, then stop that container. Is this possible to do programmatically within the Dockerfile? Any thoughts would be helpful.
Thanks in advance.
The closest solution to your problem that is supported by the Dockerfile would be the HEALTHCHECK directive, e.g. HEALTHCHECK [OPTIONS] CMD command. Here you can specify the interval (e.g. 1 hour) and the timeout, as in the sketch after the option list below.
--interval=DURATION (default: 30s)
--timeout=DURATION (default: 30s)
--start-period=DURATION (default: 0s)
--retries=N (default: 3)
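For illustration only, a directive along these lines (the probe command and process name are placeholders, and pgrep is assumed to be available in the image) checks once an hour whether the main process still appears alive. Note that a failing health check only marks the container as unhealthy; something else still has to act on that status.
HEALTHCHECK --interval=1h --timeout=30s --retries=1 CMD pgrep -f myapp || exit 1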
Other than that, you would have to create a custom shell script that is triggered by a cron job every hour. In this script you would stop the foreground process, and by that stop the running container.
As far as I know such a scenario is not part of the docker workflow.
The container is alive as long as its main process is alive. When that process (PID 1) exits (with error or success), the container also stops.
So the only way I see is to either build this logic inside your program (the main process in the container) or wrap the program in a shell script that kills the process based on some rule (like no log entries for a certain amount of time).
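A minimal sketch of such a wrapper, used as the container's entrypoint (the binary name, log path, and the one-hour idle rule are all placeholders/assumptions):
#!/bin/sh
# Start the real main process in the background and send its output to a log file.
/usr/local/bin/myapp >> /var/log/myapp.log 2>&1 &
APP_PID=$!
# Stop the app (and therefore the container) once the log has been quiet for 60 minutes.
while kill -0 "$APP_PID" 2>/dev/null; do
  sleep 60
  # find prints the path only if the file was modified in the last 60 minutes.
  if [ -z "$(find /var/log/myapp.log -mmin -60)" ]; then
    kill "$APP_PID"
    break
  fi
done
wait "$APP_PID"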

Using SLURM to run TCP client, server

I have a Docker image that needs to be run in an environment where I have no admin privileges, using Slurm 17.11.8 in RHEL. I am using udocker to run the container.
In this container, there are two applications that need to run:
[1] ROS simulation (there is a rosnode that is a TCP client talking to [2])
[2] An executable (TCP server)
So [1] and [2] need to run together and they share some common files as well. Usually, I run them in separate terminals. But I have no idea how to do this with SLURM.
Possible Solution:
(A) Use two containers of the same image, but their files will be stored locally. I could use volumes instead, but this requires me to change my code significantly and maybe break compatibility when I am not running it as containers (e.g. in Eclipse).
(B) Use a bash script to launch two terminals and run [1] and [2]. Then srun this script.
I am looking at (B) but have no idea how to approach it. I looked into other approaches but they address sequential executions of multiple processes. I need these to be concurrent.
If it helps, I am using xfce-terminal though I can switch to other terminals such as Gnome, Konsole.
This is a shot in the dark since I don't work with udocker.
In your Slurm submit script, to be submitted with sbatch, you could allocate enough resources for both jobs to run on the same node (so you just need to reference localhost for your client/server). Start your first process in the background with something like:
udocker container_name container_args &
The & should start the first container in the background.
You would then start the second container:
udocker 2nd_container_name more_args
This runs without & to keep the process in the foreground. Ideally, when the second container completes, the script completes and Slurm cleanup kills the first container. If both containers come to an end cleanly, you can put a wait at the end of the script.
Caveats:
Depending on how Slurm is configured, processes may not be properly cleaned up at the end. You may need to capture the PID of the first udocker as a variable and kill it before you exit.
The first container may still be processing when the second completes. You may need to add a sleep command at the end of your submission script to give it time to finish.
Any number of other gotchas may exist that you will need to find and hopefully work around.
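Putting the above together, a submit script might look roughly like this (container names, resource requests, and the sleep are placeholders, and the udocker invocation should be adjusted to however it is set up on your system):
#!/bin/bash
#SBATCH --job-name=ros-tcp
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --time=02:00:00

# Start the TCP server container in the background.
udocker run server_container &
SERVER_PID=$!

# Give the server a moment to start listening on localhost.
sleep 10

# Run the ROS simulation (TCP client) in the foreground.
udocker run client_container

# Clean up the background server once the client has finished.
kill "$SERVER_PID" 2>/dev/null
wait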

Launch Daemon Script on Jailbroken iOS

Is it possible to run a script through a launch daemon for an indefinite amount of time on jailbroken iOS 9? Would iOS 9 eventually kill a launch daemon that runs a script indefinitely, or would it just let the script keep running? Would a launch daemon be a feasible way of running said script on an iPhone?
Launchd doesn't do anything special unless you ask it to. It will parse your plist, launch the binary, and that's it. The daemon can run for as long as it wants. You can check out the Cydia auto-install script at /Library/LaunchDaemons/com.saurik.Cydia.Startup.plist. Using that plist as a reference, you can launch your script and it will run indefinitely; launchd will not do anything to it.
There are other components that can kill your process, but there are ways to prevent that. For example, if memory starts running low, the kernel will start killing processes and your daemon might be killed as well. That kernel component is called jetsam. All processes have a jetsam priority and memory limit associated with them and, depending on those, they will or will not be killed when memory runs low. You can read about that here. You can also just tell launchd to relaunch your process automatically if that fits your case.
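As a rough sketch (the label, script path, and filename are placeholders), a plist dropped into /Library/LaunchDaemons along these lines starts the script at boot and relaunches it if it ever exits:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/mydaemon.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <!-- KeepAlive tells launchd to relaunch the script if it exits. -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>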

How can I call a shell script to start a backend Java process?

After completing a Jenkins task, I execute a Linux shell script by using Jenkins' post-condition configuration section.
This Linux shell script needs to launch a standby service on the backend and must NOT cause Jenkins to pause.
I tried to use "nohup+&", etc., but it does not work.
Is there a good way to do it?
Jenkins is probably waiting for some pipes to close. Your background process has inherited some file descriptors and is keeping them open for as long as it runs.
If you are lucky, the only file descriptors are 0, 1 and 2 (the standard ones.) You might want to check the file descriptors of the background process using lsof -p PID where PID is the process id of the background process.
You should make sure all of those file descriptors (both inputs and outputs) are redirected for the background process, so start it with something like:
nohup daemon </dev/null >/dev/null 2>&1 &
Feel free to direct the output to a file other than /dev/null but make sure you keep the order of the redirections. The order is important.
If you plan to start background processes from a Jenkins job, be advised that Jenkins will kill background processes when build ends. See https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller on how to prevent that.
I had a similar problem with running a shell script from Jenkins as a background process. I fixed it by using the below command:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh &
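Combining both suggestions (the log path is a placeholder), a background launch that survives the end of the Jenkins build and keeps no inherited pipes open might look like:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh </dev/null >/var/log/fitnesse.log 2>&1 &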
