Launch Daemon Script on Jailbroken iOS

Is it possible to run a script through a launch daemon for an indefinite amount of time on jailbroken iOS 9? Would iOS 9 eventually kill a launch daemon that runs a script indefinitely, or would it just let the script keep running? Would a launch daemon be a feasible way of running said script on an iPhone?

Launchd doesn't do anything special unless you ask it to. It will parse your plist, launch the binary, and that's it. The daemon can run for as long as it wants. You can check out the Cydia auto-install script at /Library/LaunchDaemons/com.saurik.Cydia.Startup.plist. Using that plist as a reference you can launch your script and let it run indefinitely; launchd will not do anything to it.
There are other components that can kill your process, but there are ways to prevent that. For example, if memory starts running low, the kernel will start killing processes and your daemon might be killed as well. That kernel component is called jetsam. Every process has a jetsam priority and memory limit associated with it, and depending on those it will or will not be killed when memory runs low. You can read about that here. You can also just tell launchd to relaunch your process automatically if that fits your case.
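As a concrete illustration, a minimal plist along these lines could be dropped into /Library/LaunchDaemons/. The label and script path below are placeholders, and the keys shown are standard launchd keys rather than a copy of the Cydia file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label and script path; replace with your own -->
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/usr/local/bin/mydaemon.sh</string>
    </array>
    <!-- Start the script at boot -->
    <key>RunAtLoad</key>
    <true/>
    <!-- Optional: have launchd relaunch the script if it ever exits -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

The KeepAlive key is the "relaunch automatically" option mentioned above; leave it out if you only want the script started once at boot.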

Related

Does Dask LocalCluster shut down when the kernel restarts

If I restart my Jupyter kernel, will any existing LocalCluster shut down, or will the Dask worker processes keep running?
I know that when I used a SLURM cluster the processes kept running if I restarted my kernel without calling cluster.close(), and I had to use squeue to see them and scancel to cancel them.
For local processes, however, how can I tell that all the worker processes are gone after I have restarted my kernel? If they do not disappear automatically, how can I manually shut them down when I no longer have access to the cluster object (the kernel restarted)?
I try to remember to call cluster.close(), but I often forget. Using a context manager doesn't work for my Jupyter needs.
During the normal termination of your kernel's Python process, all objects are finalised. For the cluster object, this includes calling close() automatically, so you don't normally need to worry about it.
It is possible that close() does not get a chance to run if the kernel is killed forcibly rather than terminating normally. Since all LocalCluster processes are children of the kernel that started them, this will still result in the cluster stopping, but perhaps with some warnings about connections that didn't have time to clean themselves up. You should be able to ignore such warnings.
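If you want to convince yourself of this, a small sketch along these lines shows that the workers are ordinary child processes of the interpreter that created the cluster (psutil is an extra dependency used here for inspection, not something Dask requires):

# Minimal sketch: LocalCluster workers/nannies are child processes of the
# Python process that created them, so they die when the kernel process exits.
import psutil
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=2)
client = Client(cluster)

me = psutil.Process()  # the current (kernel) process
print([child.pid for child in me.children(recursive=True)])  # worker/nanny PIDs

# Explicit close() just makes the shutdown tidy; a kernel restart would take
# the children down anyway.
client.close()
cluster.close()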

Using SLURM to run TCP client, server

I have a Docker image that needs to be run in an environment where I have no admin privileges, using Slurm 17.11.8 in RHEL. I am using udocker to run the container.
In this container, there are two applications that need to run:
[1] ROS simulation (there is a rosnode that is a TCP client talking to [2])
[2] An executable (TCP server)
So [1] and [2] need to run together, and they share some common files as well. Usually, I run them in separate terminals, but I have no idea how to do this with SLURM.
Possible Solution:
(A) Use two containers of the same image, but their files will be stored locally; I could use volumes instead. However, this requires me to change my code significantly and may break compatibility when I am not running it as containers (e.g. in Eclipse).
(B) Use a bash script to launch two terminals and run [1] and [2]. Then srun this script.
I am looking at (B) but have no idea how to approach it. I looked into other approaches but they address sequential executions of multiple processes. I need these to be concurrent.
If it helps, I am using xfce-terminal, though I can switch to other terminals such as GNOME Terminal or Konsole.
This is a shot in the dark since I don't work with udocker.
In your Slurm submit script, to be submitted with sbatch, you could allocate enough resources for both jobs to run on the same node (so you just need to reference localhost for your client/server). Start your first process in the background with something like:
udocker container_name container_args &
The & should start the first container in the background.
You would then start the second container:
udocker 2nd_container_name more_args
This runs without & so that the process stays in the foreground. Ideally, when the second container completes, the script would complete and Slurm cleanup would kill the first container. If both containers come to an end cleanly, you can put a wait at the end of the script.
Caveats:
Depending on how Slurm is configured, processes may not be properly cleaned up at the end. You may need to capture the PID of the first udocker as a variable and kill it before you exit.
The first container may still be processing when the second completes. You may need to add a sleep command at the end of your submission script to give it time to finish.
Any number of other gotchas may exist that you will need to find and hopefully work around.
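Putting those pieces together, a submission script might look roughly like the sketch below. The udocker invocations, container names, and resource numbers are placeholders; check the run syntax for your udocker version.

#!/bin/bash
#SBATCH --job-name=ros-sim
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # enough cores for both processes; adjust as needed
#SBATCH --time=01:00:00

# Start the TCP server container [2] in the background and remember its PID.
udocker run server_container &
SERVER_PID=$!

# Give the server a moment to start listening before the client connects.
sleep 10

# Run the ROS simulation / TCP client [1] in the foreground; the job ends
# when it finishes.
udocker run client_container

# Clean up the background server ourselves rather than relying on Slurm.
kill "$SERVER_PID" 2>/dev/null
wait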

Does a docker container when paused have similar properties to a VM when snapshotted?

More specifically, is the memory of a docker container preserved the same way a snapshotted VM's is?
It's not the same. The typical definition of a VM snapshot is the filesystem at a point in time.
The container pause command freezes the container's processes, but they still exist in memory. This is not capturing a point in time to revert to, but rather a way to control an application running inside of a container. Originally this was just a SIGSTOP sent to each process, but it has since been changed to use a cgroup freezer setting that cannot be detected or trapped by the container. See the docs on the pause command here:
https://docs.docker.com/engine/reference/commandline/pause/
If you're looking more for a live migration type of functionality, there have been some projects to do this, but that does not exist directly in docker at this time.
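To see the difference in practice, here are a few commands. The container name is a placeholder, and the checkpoint commands use Docker's experimental, CRIU-backed checkpoint feature, which needs experimental mode enabled and CRIU installed:

# pause/unpause freeze and thaw the running processes in place; memory is
# retained only because the processes never go away.
docker pause mycontainer       # processes frozen via the cgroup freezer
docker unpause mycontainer     # processes resume exactly where they stopped

# A restart, by contrast, discards in-memory state entirely.
docker restart mycontainer

# Closer to a VM memory snapshot: the experimental checkpoint feature.
docker checkpoint create mycontainer checkpoint1
docker start --checkpoint checkpoint1 mycontainer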

How can I parallelize docker run?

If I run docker run ... to start a container, then run a job and shut down the container, doing this for 16 containers sequentially performs at about the same speed as launching 16 different processes that each run docker run. This is on a 16-core machine. How can I fix this?
Our docker images are all developed by different researchers and combined into a transcriptomics pipeline. Each researcher needs to install various configurations which prevents image sharing, though many share a common parent.
I need to test each image. Right now it takes a few minutes to run the full tests, whether I run the containers sequentially or launch 16 processes and run one container in each.
Anyone know why this is? Does dockerd only use one core?
I am assuming you are running docker natively in linux.
Hard to say without more information. I would guess that your containers are each using more than one thread and are competing for resources.
I would check resource use by monitoring docker stats as you launch each container to see what's actually happening. Your performance might also be limited by other factors, like I/O.
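As a rough sketch of launching the containers concurrently and watching their resource use (the image names and test command are placeholders):

# Launch all 16 test containers concurrently and wait for them all to exit.
for i in $(seq -w 1 16); do
    docker run --rm "image_$i" ./run_tests.sh &
done
wait    # block until every background docker run has finished

# In another shell, check per-container CPU/memory to see whether the
# containers are actually competing for cores or are blocked on I/O:
docker stats --no-stream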

Why would a service that works locally be getting kill signals frequently in docker?

I have a universal React app hosted in a Docker container in a minikube (Kubernetes) dev environment. I use VirtualBox, and I actually have more microservices on this VM.
In this react app, I use pm2 to restart my app on changes to server code, and webpack hmr to hot-reload client code on changes to client code.
Every 15-45 seconds or so, pm2 logs the message below, indicating that the app exited due to a SIGKILL.
App [development] with id [0] and pid [299], exited with code [0] via signal [SIGKILL]
I can't for the life of me figure out why it is happening. It is relatively frequent, but not so frequent that it happens every second. It's quite annoying because each time it happens, my webpack bundle has to recompile.
What are some reasons why pm2 might receive a SIGKILL in this type of dev environment? Also, what are some possible ways of debugging this?
I noticed that my services that use pm2 to restart on server changes do NOT have this problem when they are just backend services. I.e. when they don't have webpack. In addition, I don't see these SIGKILL problems in my prod version of the app. That suggests to me there is some problem with the combination of webpack hmr setup, pm2, and minikube / docker.
I've tried the app locally (not in Docker/minikube) and it works fine without any SIGKILLs, so it can't be webpack hmr on its own. Does Kubernetes kill services that use a lot of memory? (Maybe it thinks my app is using a lot of memory.) If that's not the case, what might be some reasons Kubernetes or Docker would send SIGKILL? Is there any way to debug this?
Any guidance is greatly appreciated. Thanks
I can't quite tell from the error message you posted, but usually this is a result of the kernel OOM Killer (Out of Memory Killer) taking out your process. This can be either because your process is just using up too much memory, or you have a cgroup setting on your container that is overly aggressive and causing it to get killed. You may also have under-allocated memory to your VirtualBox instance.
Normally you'll see Docker report that the container exited with code 137 in docker ps -a.
dmesg or your syslogs on the node in question may show the kernel OOM killer output.
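A few commands that may help confirm the OOM theory (container and pod names below are placeholders):

docker ps -a                        # look for "Exited (137)" against the container
docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' <container>
dmesg | grep -i -E 'killed process|out of memory'
kubectl describe pod <pod-name>     # "Reason: OOMKilled" in Last State if a memory limit was hit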
