How to stop an IPython cluster without the ipcluster command - ipython-parallel

I don't start my IPython cluster with the ipcluster command but with the individual commands ipcontroller and ipengine because I use several machines over a network. When starting the cluster with the ipcluster command, stopping the cluster is rather straightforward:
ipcluster stop
However, I haven't been able to find the equivalent procedure when using the individual commands.
Thanks for your help

The easiest way is by connecting a Client and issuing a shutdown command:
import ipyparallel as ipp
c = ipp.Client()
c.shutdown(hub=True)
Client.shutdown() shuts down engines; adding hub=True tells it to bring down the central controller process as well.
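For completeness, here is a minimal sketch of the engine-only variant, assuming the client machine can read the connection file written by ipcontroller; targets and block are regular Client.shutdown() arguments:
import ipyparallel as ipp

c = ipp.Client()        # connects using the controller's client connection file
print(c.ids)            # ids of the engines currently registered

c.shutdown(block=True)  # stop all engines, leave the controller running
# c.shutdown(targets=[0, 1], block=True)   # or stop just a subset of engines
# c.shutdown(hub=True)                     # stop engines and the controller, as above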

Related

Kubernetes: simultaneously executing two commands in kubernetes

I know that it is possible to execute multiple commands in Kubernetes; I've seen Multiple commands in kubernetes. But what I want to know is how to execute multiple commands simultaneously:
command: ["/bin/sh","-c"]
args: ["command one; command two"]
Here both command one and command two should execute in parallel: command one starts one server instance and command two starts another.
In my Docker environment I specify one command, then exec into the container and start the other command. But in a Kubernetes deployment that won't be possible. What should I do in this situation?
I will be using a Helm chart, so if there is any trick related to Helm charts, I can use that as well.
Fully agree with @David Maze and @masseyb.
Writing this answer as community wiki just to index it for future researchers.
You are not able to execute multiple commands simultaneously. You should instead create a few similar but separate Deployments and use a different command in each.

Using SLURM to run TCP client, server

I have a Docker image that needs to be run in an environment where I have no admin privileges, using Slurm 17.11.8 in RHEL. I am using udocker to run the container.
In this container, there are two applications that need to run:
[1] ROS simulation (there is a rosnode that is a TCP client talking to [2])
[2] An executable (TCP server)
So [1] and [2] need to run together, and they share some common files as well. Usually, I run them in separate terminals, but I have no idea how to do this with SLURM.
Possible Solution:
(A) Use two containers of the same image, but then their files will be stored locally (I could use volumes instead). But this requires me to change my code significantly and might break compatibility when I am not running it in containers (e.g. in Eclipse).
(B) Use a bash script to launch two terminals and run [1] and [2]. Then srun this script.
I am looking at (B) but have no idea how to approach it. I looked into other approaches, but they address sequential execution of multiple processes; I need these to run concurrently.
If it helps, I am using xfce-terminal though I can switch to other terminals such as Gnome, Konsole.
This is a shot in the dark since I don't work with udocker.
In your Slurm submit script, to be submitted with sbatch, you could allocate enough resources for both jobs to run on the same node (so you just need to reference localhost for your client/server). Start your first process in the background with something like:
udocker container_name container_args &
The & should start the first container in the background.
You would then start the second container:
udocker 2nd_container_name more_args
This would run without & to keep the process in the foreground. Ideally, when the second container completes, the script would complete and Slurm cleanup would kill the first container. If both containers come to an end cleanly, you can put a wait at the end of the script.
Caveats:
Depending on how Slurm is configured, processes may not be properly cleaned up at the end. You may need to capture the PID of the first udocker as a variable and kill it before you exit (the sketch after these caveats shows one way to handle that).
The first container may still be processing when the second completes. You may need to add a sleep command at the end of your submission script to give it time to finish.
Any number of other gotchas may exist that you will need to find and hopefully work around.
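If the PID bookkeeping in bash gets awkward, here is a rough Python sketch of the same pattern (first container in the background, second in the foreground, explicit cleanup) that could live inside the sbatch script. The udocker invocations are just the placeholder commands from above and will need real container names and arguments:
import subprocess

# Start the first container in the background (equivalent of the trailing &).
first = subprocess.Popen(["udocker", "container_name", "container_args"])

try:
    # Run the second container in the foreground and wait for it to finish.
    subprocess.run(["udocker", "2nd_container_name", "more_args"], check=True)
finally:
    # Explicit cleanup so a stray first container doesn't outlive the job.
    first.terminate()
    try:
        first.wait(timeout=30)
    except subprocess.TimeoutExpired:
        first.kill()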

Local Dask worker unable to connect to local scheduler

While running Dask 0.16.0 on OSX 10.12.6 I'm unable to connect a local dask-worker to a local dask-scheduler. I simply want to follow the official Dask tutorial. Steps to reproduce:
Step 1: run dask-scheduler
Step 2: Run dask-worker 10.160.39.103:8786
The problem seems to be related to the dask scheduler and not the worker, as I'm not even able to access the port by other means (e.g., nc -zv 10.160.39.103 8786).
However, the scheduler process is clearly still running on the machine.
My first guess is that due to network rules your computer may not accept network connections that look like they're coming from the outside world. You might want to try using dask-worker localhost:8786 and see if that works instead.
Also, as a reminder, you can always start a scheduler and worker directly from Python without creating dask-scheduler and dask-worker processes:
from dask.distributed import Client
# client = Client('scheduler-address:8786')
client = Client() # create scheduler and worker automatically
As a foolproof method you can also pass processes=False, which will avoid networking issues entirely:
client = Client(processes=False)
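If you want to confirm from Python that anything is listening at all, a short connection timeout makes the failure obvious. A quick sketch using the localhost address suggested above (timeout is the Client connection timeout in seconds):
from dask.distributed import Client

# Fails quickly with a timeout error if no scheduler is listening on this address.
client = Client('localhost:8786', timeout=5)

# Trivial round trip to confirm a worker picked up the task.
print(client.submit(lambda x: x + 1, 41).result())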

How can I run a dask-distributed local cluster from the command line?

I would like to do the equivalent of Client(LocalCluster()) from the command line.
When interacting with distributed from Jupyter notebooks, I end up restarting my kernel often and starting a new LocalCluster each time, as well as refreshing my bokeh webpage.
I would much rather have a process running in the background that I could just connect to. Is this possible?
The relevant doc page here is http://distributed.readthedocs.io/en/latest/setup.html#using-the-command-line
In one terminal, write the following:
$ dask-scheduler
In another terminal, write the following:
$ dask-worker localhost:8786
The defaults are a bit different here: LocalCluster creates N single-threaded workers, while dask-worker starts one N-threaded worker. You can change these defaults with the following keywords:
$ dask-worker localhost:8786 --nthreads 1 --nprocs 4
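From the notebook you then connect to that long-running pair instead of rebuilding a LocalCluster on every kernel restart. A small sketch, assuming the scheduler runs on the same machine on the default port:
from dask.distributed import Client, LocalCluster

# Attach to the scheduler started on the command line above;
# restarting the kernel no longer tears the cluster down.
client = Client('localhost:8786')

# For reference, the in-process equivalent of the CLI example above:
# four single-threaded workers, matching --nthreads 1 --nprocs 4.
# cluster = LocalCluster(n_workers=4, threads_per_worker=1)
# client = Client(cluster)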

How Can an Erlang Virtual Machine be run as a Daemon?

I would like to run the Erlang VM as a daemon on a UNIX server, in non-interactive mode.
The simplest thing is to give erl the -detached flag.
There are, however, many helpers out there for doing this; check out rebar's release handling, erlrc, and run_erl.
Also, rebar can generate a node that can be started as a daemon (with start, stop, and restart commands).
