Starting Nitrogen Web framework at boot time - erlang

I have always started Nitrogen as a daemon using the command below:
sudo /home/someuser/myapp/bin/nitrogen start
It works well, but I have to repeat this step every time the server reboots.
Most web servers start at boot time by default. When Nitrogen is started, it starts the underlying Erlang web server. Unfortunately, I have not found a single resource that talks about starting Nitrogen at boot time.
How do you start nitrogen as a daemon at system boot time?

The easiest solution is to use the /etc/rc.local file. By default, it's empty.
Since rc.local runs as root, you can use it as such (though if you prefer to run Nitrogen as a separate user, using su -c "command" username works well).
Anyway, the simple solution is to add to your rc.local file the following:
To run as root:
/home/someuser/myapp/bin/nitrogen start
To run as another user:
su -c "/home/someuser/myapp/bin/nitrogen start" someuser
That will launch Nitrogen appropriately and will let you connect to the VM using bin/nitrogen attach.
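For reference, a complete /etc/rc.local might look like the following (a minimal sketch; the path and username are taken from the question, and rc.local must end with exit 0):
#!/bin/sh -e
# Start Nitrogen at boot as someuser
su -c "/home/someuser/myapp/bin/nitrogen start" someuser
exit 0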
My previous recommendation of using sudo is not sufficient, as it doesn't reset the environment to the user you want.
I'm using this in production on Ubuntu 14.04 on a Linode VPS.
I hope that helps.

Related

Docker containers restart order

I have two containers running on Ubuntu Server 22.04 LTS.
One of them is a Selenium Grid container and the other is a Python container that connects to the Selenium container mentioned above.
How can I get these two containers correctly restarted after system poweroff or reboot?
I tried this:
docker update --restart [on-failure|always|unless-stopped] container_grid
docker update --restart [on-failure|always|unless-stopped] container_python
The Selenium Grid container restarts correctly, but Python container keeps restarting in a loop.
I suppose it cannot, for some reason, establish a connection to the Selenium container, exits with code 1, and keeps restarting.
How can I avoid this? Maybe there is a solution that adds a delay or sets the order in which containers restart after the system comes back up? Or should I simply add some delay in the Python code, because there is no simple solution to this?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
Solved this problem via crontab.
The Selenium container starts in accordance with the --restart on-failure option.
My Python container starts with a delay, via this crontab entry:
@reboot sleep 20 && docker start [python_container]
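An alternative to a fixed sleep is to have the Python container wait until the Grid actually answers before starting its work, for example with a small wrapper script (a sketch only; the hostname selenium-grid, port 4444, the /status endpoint and the entrypoint path are assumptions, not taken from the question):
#!/bin/sh
# Poll the Selenium Grid status endpoint until it responds, then start the client.
# "selenium-grid:4444" and /app/main.py are hypothetical; adjust to your setup.
until curl -sf http://selenium-grid:4444/status > /dev/null; do
  echo "Waiting for Selenium Grid..."
  sleep 2
done
exec python /app/main.py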

Have to restart docker every ddev stop

I'm just starting out playing with DDEV and hit an annoying roadblock.
It doesn't matter whether I'm adding a new project or running ddev start on an existing one: the project will only start and be accessible the first time after a Docker restart. If I restart Docker, run ddev start on a site, leave it running for a short time and then run ddev stop, I have to restart Docker before ddev start will work again.
In the terminal, after starting then stopping a site, I run ddev start again and can see ddev run through the steps of creating/re-creating the containers, but it stops at a different point on each attempt. If I hit CTRL+C and run ddev start again, another container will get recreated.
I can only do two starts, and the process always stops at a "Container ddev-projectname-web Recreated" line.
The only difference I can see after restarting Docker and then running ddev start on the same project is that
Network ddev_default created
shows on the first line, followed by
Container ddev-ssh-agent Started
I then get
ssh-agent container is running: If you want to add authentication to the ssh-agent container, run 'ddev auth ssh' to enable your keys.
The containers then start (dba, db, web). I get a couple of warnings with the same text:
Project type has no settings paths configured, so not creating settings file.
I can't find much that relates to this message and what I'm experiencing.
The site is then accessible. Running ddev stop and then ddev start afterwards gets stuck at the web container recreating.
There are no messages in Docker, as the containers don't start. I've done a factory reset on Docker, which made no difference, and I've cleared all images as per the support docs.
Any help around? I moved to Craft Nitro so I wouldn't have to fight MAMP each day, but Nitro has been abandoned and DDEV is now being pushed instead.

Avoid docker exec zombie processes when connecting to containers via bash

Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an Ubuntu 16.04 image and, as far as I understand, the only way to do this without installing ssh into the container is via docker exec -it <container_name> bash.
However, as is well documented, each bash shell process you spawn leaves a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with thousands of idle shells, a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection, as they would be over ssh?
One way is to make sure a proper init process runs in your container.
In recent versions of Docker there is an --init option to docker run that should do this. It uses tini to run init, which can also be used with previous versions.
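For example (a minimal sketch; the container name and the long-running command are just placeholders):
# Run the container with tini as PID 1 so orphaned shells are reaped
# when an exec session is interrupted. "mycontainer" is a placeholder name.
docker run --init -d --name mycontainer ubuntu:16.04 sleep infinity
docker exec -it mycontainer bash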
Another option is something like the phusion-baseimage project that provides a base docker image with this capability and many others (might be overkill).

Enable etcd service auto start in CoreOS via systemd

I have deployed a standalone CoreOS server from the VMware image, following this guide, to try out CoreOS.
After the deployment succeeded, I found that my CoreOS only enables the Docker service; the etcd and fleet services are not running. I know how to use systemd to start the etcd and fleet services manually, and I also know that a proper cloud-config can install CoreOS so that etcd and fleet start automatically.
But I want to know that:
Is it possible to place a unit file in /etc/systemd/system to make systemd starts etcd service automatically?
If can, what is the content of the unit file?
If cannot, what is the other way?
Thanks
Yes. You must have an etcd.service and fleet.service with an [Install] section. I've added WantedBy=default.target in mine.
They are already present on CoreOS systems in /usr/lib64/systemd/system/. You can copy them to /etc/systemd/system/:
$ cp /usr/lib64/systemd/system/etcd.service /etc/systemd/system/
$ cp /usr/lib64/systemd/system/fleet.service /etc/systemd/system/
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/fleet.service
$ echo -e '[Install]\nWantedBy=default.target' >> /etc/systemd/system/etcd.service
$ systemctl enable etcd.service
$ systemctl enable fleet.service
I'll also give you the general warning that I have no idea what changes to /etc/systemd/ do in the long run, given CoreOS's upgrade system. An upgrade could wipe out /etc/systemd/, leaving you confused about what happened to your customized systemd units that aren't managed by cloud-init.
The proper way to do this is with cloud-config. Specifically for VMware, you'll need to serve the cloud-config via config-drive as documented.
It's kind of a pain, but it'll work.
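For reference, the relevant part of such a cloud-config would look roughly like this (a minimal sketch of the CoreOS cloud-config units section; adapt it to your own setup):
#cloud-config
coreos:
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start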

Unicorn init script - not starting at boot

I'm very new to system administration and have no idea how init.d works, so maybe I'm doing something wrong here.
I'm trying to start unicorn on boot, but somehow it just fails to start every time. I'm able to manually start/stop/restart it with service app_name start and so on. I can't understand why unicorn doesn't start at boot if manually starting and stopping the service works. Some user permission issue, maybe?
My unicorn init script and the unicorn config files are available here https://gist.github.com/1956543
I'm setting up a development environment on Ubuntu 11.1 running inside a VM.
UPDATE: Could this be because of the VM? I'm currently sharing the entire codebase (folder) with the VM, which also happens to contain the unicorn config needed to start unicorn.
Any help would be greatly appreciated !
Thanks
To get Unicorn to run when your system boots, you need to associate the init.d script with the default set of "runlevels", which are the modes that Ubuntu enters as it boots.
There are several different runlevels, but you probably just want the default set. To install Unicorn here, run:
sudo update-rc.d <your service name> defaults
For more information, check out the update-rc.d man page.
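To check that the boot-time links were actually created, you can list the runlevel directories (unicorn_myapp is a hypothetical service name; use your own):
# update-rc.d creates S/K symlinks in the rcN.d directories
ls -l /etc/rc?.d/ | grep unicorn_myapp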
You can configure a cron job to start the unicorn server on reboot
crontab -e
and add
@reboot /bin/bash -l -c 'service unicorn_<your service name> start >> /<path to log file>/cron.log 2>&1'