Angstrom start-up processes [beaglebone] - startup

I have an RFID module attached to my BeagleBone and I'm reading ID tags with a Python script. Now I want that script to start running in the background automatically when I log in to my BeagleBone, without typing any commands, just like adding a program to the startup programs in Windows: when you log in to your Windows account, those programs start instantly. Do you have an idea how this can be done?
Regards

Create a new file in /lib/systemd/system/ (rfidreader.service in my example) with content like:
[Unit]
Description=Start Python RFID reader
[Service]
WorkingDirectory=/...Python script path.../
ExecStart=/usr/bin/python rfidreader.py
KillMode=process
[Install]
WantedBy=multi-user.target
Then execute the following command to enable the service so it starts automatically at boot:
systemctl enable rfidreader.service
To start the service, you can either reboot or execute:
systemctl start rfidreader.service
To check if the service is running and get the latest outputs from the script:
systemctl status rfidreader.service
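Since the unit file doesn't redirect output, systemd's journal captures whatever the script prints, which is what `systemctl status` shows. As a point of reference, a minimal sketch of what rfidreader.py could look like is below; the serial device, baud rate, and STX/ETX tag framing are assumptions about a typical reader, not details from the question:

```python
# Hypothetical rfidreader.py sketch. The device path, baud rate, and
# frame format (0x02 <tag id> 0x03) are assumptions -- adapt to your reader.
def parse_tag(frame: bytes) -> str:
    """Strip the assumed STX/ETX framing and line endings, decode the tag ID."""
    return frame.strip(b"\x02\x03\r\n").decode("ascii")

def read_loop(port):
    """Read frames forever and print tag IDs; systemd captures stdout."""
    while True:
        frame = port.readline()
        if frame:
            print(parse_tag(frame), flush=True)  # flush so the journal sees it immediately

# In the real script you would open the reader's UART with pyserial, e.g.:
#   import serial
#   read_loop(serial.Serial("/dev/ttyO1", 9600, timeout=1))
```

Because stdout goes to the journal, `journalctl -u rfidreader.service` shows the full history of tag reads, not just the tail that `systemctl status` prints.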

Take a look at how the Node.js application runs on port 3000 of the board; you can implement your module the same way. I think it's part of the init process.
http://www.softprayog.in/tutorials/starting-linux-services-with-init-scripts
http://www.linuxquestions.org/questions/linux-general-1/how-do-i-automatically-start-a-program-at-start-up-102154/

Related

How to get Docker audio and input with a Windows or Mac host?

I'm trying to create a Docker image that works with a speaker and microphone.
I've got it working with Ubuntu as host using:
docker run -it --device /dev/snd:/dev/snd <docker_container>
I'd also like to be able to use the Docker image on Windows and Mac hosts, but can't find the equivalent of /dev/snd to make use of the host's speaker/microphone.
Any help appreciated
I was able to get playback on Windows using pulseaudio.exe.
1) Download pulseaudio for Windows: https://www.freedesktop.org/wiki/Software/PulseAudio/Ports/Windows/Support/
2) Uncompress it and change the config files.
2a) Add the following line to $INSTALL_DIR/etc/pulse/default.pa:
load-module module-native-protocol-tcp listen=0.0.0.0 auth-anonymous=1
This is an insecure setting: there are IP-based restrictions that are more secure, but I think there's some Docker sorcery involved in leveraging them. While the process is running, anyone on your network will be able to push sound to this port; that risk will be acceptable for most users.
2b) Change the exit-idle-time line in $INSTALL_DIR/etc/pulse/daemon.conf to read:
exit-idle-time = -1
This will keep the daemon open after the last client disconnects.
3) Run pulseaudio.exe. You can run it as
start "" /B "pulseaudio.exe"
to background it, but that makes it trickier to kill than a plain foreground run.
4) In the container's shell:
export PULSE_SERVER=tcp:127.0.0.1
One of the articles I sourced this from (https://token2shell.com/howto/x410/enabling-sound-in-wsl-ubuntu-let-it-sing/) suggests recording may be blocked in Windows 10.
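Note that tcp:127.0.0.1 only reaches the host when the container shares the host's network stack (e.g. under WSL). On Docker for Windows/Mac the container can instead reach the host through the special DNS name host.docker.internal, so the variable can be passed at run time rather than exported inside the container; the image name below is a placeholder:

```
docker run -it -e PULSE_SERVER=tcp:host.docker.internal <docker_container>
```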

Services in CentOS 7 Docker image without systemd

I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need systemd in order for the services to start. I can use the systemd-enabled CentOS image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: For shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured the appropriate call into a script file and call that using the final CMD line in my Dockerfile.
rstudio-server was trickier. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After that, executing rstudio-server start did essentially nothing and produced no error. I traced the call through the various links and ended up in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function tests cat /proc/1/comm to determine how to start the server. For some reason that test was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually or in a Docker CMD line, it works.
I've taken those two CMD line requirements and put them into an sh script that gets called from a CMD line in the Dockerfile.
A bit of a hack, but not bad. I'm happy to hear any other suggestions.
You don't necessarily need an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about using supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
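For this particular pair of services, a supervisord.conf along the following lines can play the role of the init system inside the container. Treat it as a sketch rather than a drop-in config: the program paths and the --server-daemonize=0 flag (which keeps rserver in the foreground, as supervisord requires) should be checked against your installed versions:

```ini
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground as the container's main process

[program:shiny-server]
command=/usr/bin/shiny-server ; shiny-server stays in the foreground by default
autorestart=true

[program:rstudio-server]
; supervisord needs a foreground child, so disable rserver's self-daemonizing
command=/usr/lib/rstudio-server/bin/rserver --server-daemonize=0
autorestart=true
```

The Dockerfile then ends with something like CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"].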

How to stop an IPython cluster without the ipcluster command

I don't start my IPython cluster with the ipcluster command but with the individual commands ipcontroller and ipengine because I use several machines over a network. When starting the cluster with the ipcluster command, stopping the cluster is rather straightforward:
ipcluster stop
However, I haven't been able to find the procedure when the individual commands are used separately.
Thanks for your help
The easiest way is by connecting a Client and issuing a shutdown command:
import ipyparallel as ipp
c = ipp.Client()
c.shutdown(hub=True)
Client.shutdown() shuts down engines; adding hub=True tells it to bring down the central controller process as well.

Using SSH (Scripts, Plugins, etc) to start processes

I'm trying to finish a remote deployment by restarting the two processes that make my Python App work. Like so
process-one &
process-two &
I've tried to "Execute a Shell Script" by doing this
ssh -i ~/.ssh/id_... user@xxx.xxx ./startup.sh
I've tried using the Jenkins SSH Plugin and the Publish Over SSH Plugin to do the same thing. All of the previous steps (stopping the processes, restarting other services, pulling in new code) work fine, but when I get to the part where I start the services, it executes those two lines and then neither the plugins nor the default script execution can get off the server. They all either hang until I restart Jenkins or, in the case of the Publish Over SSH plugin, time out. So my build either requires a restart of Jenkins or is marked unstable.
Has anyone had any success doing something similar? I've tried
nohup process-one &
But the same thing happened. It's not that the services are misbehaving, either; they actually start properly. It's just that Jenkins doesn't seem to recognize that.
Any help would be greatly appreciated. Thank you.
What probably happens is that the spawned process (even with the &) keeps the same input and output streams as your ssh connection. Jenkins waits for these pipes to be emptied before the job closes, and thus waits for the processes to exit. You can verify that by killing your processes: you will see the Jenkins job terminate.
Dissociating outputs and starting the process remotely
There are multiple solutions to your problem:
(preferred) use proper daemon control tools. Your target platform probably has a standard way to manage those services, e.g. init.d scripts. Note: when writing init.d scripts, make sure you detach the process into the background AND ensure the daemon's input/output are detached from the shell that starts it. There are several techniques and tools for this, like start-stop-daemon (http://www.unix.com/man-page/Linux/8/start-stop-daemon/), daemonize, daemontools, or the shell script described under https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+as+a+Unix+daemon (take note of the su -s bin/sh jenkins -c "YOUR COMMAND; ...disown" part). I also list some Python-specific techniques below.
ssh server 'program < /dev/null > /dev/null 2>&1 &'
or
ssh server 'program < /dev/null >> logfile.log 2>&1 &' if you want to have a crude output management (no log rotation, etc...)
potentially using setsid: https://superuser.com/questions/172043/how-do-i-fork-a-process-that-doesnt-die-when-shell-exits . In my quick tests I wasn't able to get it to work, though...
Python daemons
The initial question was focused on SSH, so I didn't fully describe how to run the Python process as a daemon. This is mostly covered in other questions:
with start-stop-daemon: start-stop-daemon and python
with upstart on ubuntu: Run python script as daemon at boot time (Ubuntu)
some more python oriented approaches:
How to make a Python script run like a service or daemon in Linux
Can I run a Python script as a service?

How Can an Erlang Virtual Machine be run as a Daemon?

I would like to run the Erlang VM as a daemon on a UNIX server, in non-interactive mode.
The simplest thing is to give erl the -detached flag.
There are, however, many helpers out there for doing this; check out rebar's release handling, erlrc, and run_erl.
Also rebar can generate a node that can be started as a daemon (with start, stop, restart commands).
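For the non-interactive case alone, the -detached flag mentioned above is enough; giving the node a name lets you reach it later (to stop it or attach a remote shell). The node and module names here are placeholders:

```
erl -detached -sname mynode -s my_app start
```

-detached implies -noinput, so the VM starts with no console attached, and -s my_app start calls my_app:start() as the entry point.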