uWSGI: --master with --emperor spawns two emperors

I can see that if I start uwsgi like this:
sudo /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www --gid www
it creates one emperor process. But if I additionally start it with --master (as recommended here), it creates two emperor processes. Does it make sense to use --master with --emperor? I would say no, but if I run it without that option I get this warning:
*** WARNING: you are running uWSGI without its master process manager ***

Here is what official documentation says:
The emperor should generally not be run with --master, unless master features like advanced logging are specifically needed.
If you're wondering what the master option does, here is the answer:
master: uWSGI's built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments it's not really a good idea not to use master mode.
So, to summarize:
Use --master for a regular uWSGI instance.
Do not use --master for the uWSGI Emperor.

I disagree - the documentation says it is not a good idea NOT to use it, in production anyway; I guess the double negative could be written more clearly.
Therefore it would appear it IS a good idea to use it, hence the warning.
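For illustration, here is a minimal ini sketch of the emperor invocation from the question with the master switch turned on (the paths and uid/gid are copied from the question; drop the last line if you prefer to follow the Emperor documentation to the letter):
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www
gid = www
; enables master-only features such as advanced logging
master = true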

I'd like to add some specific information about using harakiri for vassals running under the Emperor. If master is not also set in the vassal config, harakiri has no effect, regardless of whether master/harakiri is specified in the emperor config. Given an example emperor config:
[uwsgi]
emperor = ...
daemonize = ...
emperor-pidfile = ...
vassal-set = enable-metrics=1
emperor-stats = 127.0.0.1:6000
The vassals will need the following for harakiri to work:
[uwsgi]
strict
processes = 4
stats = 127.0.0.1:5000
memory-report
daemonize = ...
pidfile = ...
close-on-exec
py-tracebacker = /tmp/tbsocket
master
harakiri = 5
harakiri-verbose
Note that master and harakiri are both present in the vassal config; setting them in the emperor config has no effect on the vassals (as of uWSGI 2.0.12).

Related

Make one instance of multiple uWSGI workers perform an extra function

I have a Python Flask app running on uWSGI with a config file that specifies it to spawn multiple workers (which I am assuming are identical processes).
Everything works well except for one part: the Python app uses a scheduler to run a bash command every day that downloads and updates a database. The command needs to run only once, but with multiple workers it runs several times simultaneously, corrupting the downloaded file.
Is there a way to run this bash command from only one of the uWSGI workers? I can't run the bash command as a separate cron job (the database update has to integrate seamlessly with the app).
Check The uWSGI cron-like interface:
uWSGI's master has an internal cron-like facility that can generate events at predefined times. You can use it for exactly this kind of job.
For example, you can set the options to:
[uwsgi]
; every two hours
cron = 0 -2 -1 -1 -1 /usr/bin/backup_my_home --recursive
Is that sufficient?
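Applied to this question, a minimal sketch of the relevant ini bit (the script path and the 03:30 schedule are made up for illustration). Because the cron entry is executed by the master process, it runs exactly once no matter how many workers you spawn; it does require the instance to be started with the master enabled (master = true):
[uwsgi]
; run the database update once a day at 03:30; -1 means "any" for that field
cron = 30 3 -1 -1 -1 /usr/local/bin/update_db.sh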

How to debug an Elixir application in production?

This is not particularly about my current problem, but more like in general. Sometimes I have a problem that only happens in production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (docker).
In dev I can use IEx.pry, but since mix is unavailable in production, that does not seem to be an option.
For Erlang, https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help applying them to Elixir code.
First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running on docker from iex on your dev node using Node.connect(:"remote#hostname"). In order to do this, you have to make sure both the epmd and the node ports on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex, run :debugger.start() which opens the debugger with the GUI. Now in the local iex, run :int.ni(<Module you want to debug>) and it will make the module visible to the debugger and you can go ahead and add breakpoints and start debugging.
You can find a tutorial with steps and screenshots here.
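Condensed into a rough sketch (the node names, host, cookie, and MyApp.Worker module are placeholders; the cookie must match the one the remote node was started with):
Start the local node, for example with: iex --name debug@10.0.0.2 --cookie my_secret_cookie -S mix run --no-start
Then, inside that iex session:
Node.connect(:"remote@10.0.0.5")   # returns true if epmd and the node's distribution port are reachable
:debugger.start()                  # opens the graphical debugger on your dev machine
:int.ni(MyApp.Worker)              # interpret the module on all connected nodes so breakpoints can be set
:int.break(MyApp.Worker, 42)       # optional: break at line 42 of that module's source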
In the case that you are running your production on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, configure an environment variable for where exactly erl_crash.dump gets written, for example:
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump
      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you map a volume for your Docker container in Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
  {
    "HostDirectory": "/var/log",
    "ContainerDirectory": "/opt/log"
  }
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implicitly implies) with a Docker deployment, as opposed to AWS ECS, the container's stdout/stderr is by default redirected to /var/log/eb-docker/containers/eb-current-app/stdouterr.log, which you will also find in CloudWatch.
The main purpose of erl_crash.dump is to at least know when your application crashed, thereby taking the container down. AWS EB will normally restart the container, thus keeping you ignorant about the restart. This understanding can also be obtained from other docker related logs, and you can configure alarms to listen for them and be notified accordingly when your docker had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that if need be, you can always export it later to S3, download the file and import it inside :observer to do analysis of what went wrong.
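For that analysis step, a minimal sketch using OTP's crash dump viewer, which ships as part of the :observer application (the dump path is a placeholder for wherever you downloaded the file):
:crashdump_viewer.start()                              # opens a GUI file picker
:crashdump_viewer.start(~c"/path/to/erl_crash.dump")   # or load a specific dump (the path is passed as a charlist)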
If after consulting the logs, you still require a more intimate interaction with your production application, then you need to leverage remsh to your node. If you use distillery, you would configure the cookie and the node name of your production application with your release like this:
inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you set the release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can directly connect to your production node running inside docker by first ssh-ing inside the EC2-instance that houses the docker container, and run the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
Once inside, you can poke around or, if need be and at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that is to create a file inside the container and invoke Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
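A rough sketch of that hot-patching trick (the node name, file name, and path are hypothetical, and this is very much at your own peril):
Node.spawn_link(:"p@127.0.0.1", fn ->
  Code.eval_file("patch.exs", "/tmp")   # evaluates /tmp/patch.exs on the target node, redefining whatever modules it contains
end)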
In case your production node is already running and you do not know the cookie, you can go inside the running container, run ps aux > t.log, then cat t.log to figure out what random cookie has been applied, and use it accordingly.
Docker is an impediment to the way epmd communicates with other nodes. The best option may therefore be to create your own AWS AMI image using Packer and do bare-metal deployments instead.
Amazon has recently released a new feature for AWS ECS, AWS VPC networking mode, which may facilitate inter-container epmd communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
In the case that you are running on a provider other than AWS, then figuring out how to get easy access to your remote logs with some SSM agent or some other service is a must.
I would also recommend using some sort of exception-tracking tool; so far I have had great experiences with Sentry.

Prevent default redirection from port 80 to 5000 on Synology NAS (DSM 5)

I would like to use an nginx front server on my Synology NAS for reverse-proxying purposes. The goal is to provide a facade for the non-standard port numbers used by the various web servers hosted on the NAS. nginx should be listening on port 80, otherwise all this wouldn't make any sense.
However, DSM comes out of the box with an Apache server already listening on port 80. What it does is really silly: it simply redirects to port 5000, which is the entry point to the NAS web manager (DSM).
What I would like to do is disable this functionality, making port 80 available for my nginx server. How can I do this?
Since Google also sends people here for recent Synology DSM versions, here is an answer for DSM 6 (based on http://tonylawrence.com/posts/unix/synology/freeing-port-80/).
Since DSM 6, nginx is used as the HTTP server and handles the redirection. The following commands will leave nginx in place, but run it on port 8880 instead of 80.
ssh into your Synology
sudo -s
cd /usr/syno/share/nginx
Make a backup of server.mustache, DSM.mustache, WWWService.mustache
cp server.mustache server.mustache.bak
cp DSM.mustache DSM.mustache.bak
cp WWWService.mustache WWWService.mustache.bak
sed -i "s/80/8880/g" server.mustache
sed -i "s/80/8880/g" DSM.mustache
sed -i "s/80/8880/g" WWWService.mustache
Optionally, you can also move 443 to 8881:
sed -i "s/443/8881/g" server.mustache
sed -i "s/443/8881/g" DSM.mustache
sed -i "s/443/8881/g" WWWService.mustache
Quit the shell (e.g., via Ctrl+D)
Go to the Control Panel and change any setting, which makes DSM regenerate the nginx configuration from the templates (e.g. Application Portal -> Reverse Proxy to forward http://YOURSYNOLOGYHOSTNAME:80 to http://localhost:8181 - 8181 is the port suggested by the pi-hole on DSM tutorial).
tl;dr Edit /usr/syno/etc/synoservice.d/httpd-user.cfg to look like:
{
  "init_job_map": {"upstart": ["httpd-user"]},
  "user_controllable": "no",
  "mtu_sensitive": "yes",
  "auto_start": "no"
}
Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
... then reboot.
Background information
The answer given by Backslash36 is not the easiest solution, and it may also be more difficult to maintain. Here I give a solution that also doesn't involve starting Web Station, which most other solutions require. Note: for updated documentation see here, which gives a lot of general information about Synology systems.
It is important to note that newer DSM versions (> 5.x) use upstart, so much of the older documentation is no longer correct. There are two httpd jobs which run by default on Synology machines:
httpd-sys: serves the administration page(s) and is located on 5000/5001 by default.
httpd-user: somewhat confusingly, this always runs, even if the Web Station program is not enabled.
If webstation:
is enabled: then this program serves the user webpages.
is not enabled: then this program sets /usr/syno/synoman/phpsrc/web as its DocumentRoot (/usr/syno/synoman/phpsrc/web/index.cgi -> /usr/syno/synoman/webman/index.cgi), meaning that a call to http://address.of.my.dsm will call the index.cgi file. This cgi file is what drives the redirect to 5000 (or whatever you have set the admin_port to be).
From the command line, you can check what the [secure_]admin_port is set to:
Syno-Server> get_key_value /etc/synoinfo.conf admin_port
5184
Syno-Server> get_key_value /etc/synoinfo.conf secure_admin_port
5185
where I have set mine differently.
OK, now to the solution. The best solution is simply to stop the httpd-user daemon from starting. This is presumably what you want anyway (e.g. to start another server such as nginx in a Docker container). To do this, edit the relevant upstart configuration file:
Syno-Server> cat /usr/syno/etc/synoservice.d/httpd-user.cfg
{
  "init_job_map": {"upstart": ["httpd-user"]},
  "user_controllable": "no",
  "mtu_sensitive": "yes",
  "auto_start": "no"
}
so that the "auto_start" entry is "no" (as it is above). It will presumably be "yes" on your machine by default. Then edit the stop on runlevel line to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
This last step ensures that the httpd-user service does actually start, but then automatically stops, because a number of other services depend on it starting. Reboot your machine and you will now see that nothing is listening (or forwarding) on port 80.
Done! It was tricky, but now I have it working just fine. Here is how I did it.
What follows requires connecting to the NAS with ssh, and may not be recommended if you want to keep the warranty on your product (even though it's completely safe IMHO).
TL;DR: In the following files, replace all occurrences of port 80 with a non-standard port (for example, 8080). This will release port 80 and make it available for whatever you want.
/etc/httpd/conf/httpd.conf
/etc/httpd/conf/httpd.conf-user
/etc/httpd/conf/httpd.conf-sys
/etc.defaults/httpd/conf/httpd.conf-user
/etc.defaults/httpd/conf/httpd.conf-sys
Note that modifying a subset of these files is probably sufficient (I could observe that the first one is actually generated from several of the others). I would guess modifying the files in /etc.defaults/ is enough, but if not, the worst-case scenario is to modify all of these files and you will be just fine.
Once this is done, don't forget to restart your NAS!
For those interested in how I found out
I'm not that familiar with the Linux filesystem, and even less with Apache configuration. But I knew that scripts dealing with startup processes live in /etc/init, so the Apache server performing the redirection would most likely be launched from there.
This is where I had to get my hands dirty. I ran cat <filename> | grep 80 on the files in that directory I considered relevant, hoping to find a configuration line that sets a port number to 80.
That intuition paid off: /etc/init/httpd-user.conf contained the line echo "DocumentRoot \"/usr/syno/synoman/phpsrc/web\"" >> "${HttpdConf}" #port 80 to 5000. Bingo!
Looking at the top of the file, I discovered that the HttpdConf variable was referring to /etc/httpd/conf/httpd.conf. This is where the actual configuration was taking place.
From there it was relatively straightforward, even for the Jon Snows out there who know nothing about Apache configuration. The trick was to notice that httpd.conf is instantiated from a template at startup (so changing that file alone is not enough). Performing a find / -name "*httpd.conf*", combined with some grep 80, gave me the list of files to modify.
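Something along these lines reproduces that search (a sketch; run as root on the NAS):
find / -name "*httpd.conf*" 2>/dev/null | xargs grep -l "80"
# grep -l prints just the names of the files that contain "80"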
When you look back all this looks obvious of course.
However I wish Synology gave us more flexibility, so we don't have to perform dirty hacks like that...

Bash Script to run three different rails apps on the local server?

I have three apps that I want to run with the rails server at the same time, and I also want the option to kill all the servers from one location.
I don't have much experience with Bash, so I'm not sure what command I would use to launch the server for a specific app. Since the script won't be in the app directory, plain rails s won't work.
From there, I suppose if I can gather the PIDs of the processes the three servers are running on, I can have the script prompt for user input and kill the three processes whenever something is entered. I'm just unsure of how to get the PIDs.
Additionally, each app has a few environment variables that I want to have different values from those assigned in the apps' config files. Previously, I was using export var=value before rails s, but I'm not sure how to guarantee each separate process gets the right variables.
Any help is much appreciated!
The Script
You could try something like the following:
#!/bin/bash
case "$1" in
start)
pushd app/directory
(export FOO=bar; rails s ... & echo $! > pid1)
(export FOO=bar; rails s ... & echo $! > pid2)
(export FOO=bar; rails s ... & echo $! > pid3)
popd
;;
stop)
kill $(cat pid1)
kill $(cat pid2)
kill $(cat pid3)
rm pid1 pid2 pid3
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
;;
esac
exit 0
Save this script into a file such as script.sh and chmod +x script.sh. You'd start the servers with ./script.sh start, and you can kill them all with ./script.sh stop. You'll need to fill in the details on the three lines that start up the servers.
Explanation
First is the pushd: this changes the directory to where your apps live. The popd after the three startup lines returns you to the location where the script lives. The parentheses around (export blah blah) create a subshell, so the environment variables you set inside them via export don't exist outside the parentheses. The trailing & puts each rails s in the background, which is what lets the script record its PID and move on to the next server. Additionally, if your three apps live in different directories, you can put a cd inside each of the three sets of parentheses to move to the app's directory before the rails s. The lines would then look something like: export FOO=bar; cd app1/directory; rails s ... & echo $! > pid1. Don't forget the semicolon after the cd command! In this case, you can also remove the pushd and popd lines.
In Bash, $! is the process ID of the most recent background command. We echo that and redirect it (with >) to a file called pid1 (or pid2 or pid3). Later, when we want to kill the servers, we run kill $(cat pid1). The $(...) runs a command and substitutes its output inline. Since the pid files contain only the process ID, cat pid1 returns the process ID number, which is then passed to kill. We also delete the pid files after we've killed the servers.
Disclaimer
This script could use some more work in terms of error checking and configuration, and I haven't tested it, but it should work. At the very least, it should give you a good starting point for writing your own script.
Additional Info
My favorite bash resource is the Advanced Bash-Scripting Guide. Bash is actually a fairly powerful language with some neat features. I definitely recommend learning how bash works!
Why don't you try Capistrano, a framework for executing commands in parallel on multiple remote machines via SSH? It has lots of recipes for this.
You are probably better off setting up pow.cx, which would run each server as it's needed, rather than having to spin up and shut down servers manually.
You could use Foreman to run, monitor, and manage your processes.
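For example, a sketch of a Procfile for Foreman (the app paths, ports, and variables are placeholders); foreman start brings all three up and Ctrl+C stops them together:
app1: sh -c 'cd /path/to/app1 && FOO=bar bundle exec rails s -p 3000'
app2: sh -c 'cd /path/to/app2 && FOO=baz bundle exec rails s -p 3001'
app3: sh -c 'cd /path/to/app3 && FOO=qux bundle exec rails s -p 3002'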
I realize I'm late to the party here, but after searching the internet for a good solution to this (and finding this page but few others, none with a full solution), and after trying unsuccessfully to get prax working, I decided to write my own solution and give it back to the community!
Check out my rdev bash script gist - a bash script you put in your ~/bin directory. It will create a new tab in gnome-terminal for each Rails app, with the app name and port in the tab's title. It verifies that each app launched successfully by checking that the port is in use and the process is actually running, and it likewise verifies shutdown by ensuring the port is no longer in use and the process is no longer running.
Setup is super easy, just change these two config values:
# collection of rails apps you want to start in development (should match directory name of rails project)
# note: the first app in the collection will receive port 3000, the second 3001 and so on
#
rails_apps=(app1 app2 app3 etc)
#
# The root directory of your rails projects (~/ is assumed, do not include)
#
projects_root="ruby/projects/root/path"
With this script you can start all your Rails apps with one command or stop them all, and you can also stop, start, and restart individual apps. While the OP asked about three apps, this lets you run as many as you need, with ports assigned in order starting at 3000 for the first app in the list. Each app is started with the proper Ruby version thanks to chruby, and the .env is sourced on the way up so your app has everything it needs. Once you are done developing, just run rdev stop and all your Rails apps will be killed and the terminal windows closed.
# Usage Examples:
#
# Show Help
# ~/> rdev
# Usage: rdev {start|stop|restart} [app port]
#
# start all rails apps
# ~/> rdev start
#
# start a single rails app
# ~/> rdev start app port
#
# stop all rails apps
# ~/> rdev stop
#
# stop a single rails app
# ~/> rdev stop app port
#
# restart a single rails app
# ~/> rdev restart app port
For the record, all testing was done on Ubuntu 18.04. This script requires: bash, chruby, gnome-terminal, lsof and takes advantage of the BASH_POST_RC trick.

how to kill uWSGI process

So I have finally gotten nginx + uWSGI running successfully for my Django install.
However, the problem I have now is that when I make changes to the code, I need to restart the uWSGI process to see my changes.
I feel like I am running the correct commands here (I am very new to Linux as well, btw):
uwsgi --stop /var/run/uwsgi.pid
uwsgi --reload /var/run/uwsgi.pid
I get no errors when I run these commands, yet my old code is still what loads.
I also know it's not a coding issue, because I ran my Django app in its development server and everything worked fine.
The recommended way to signal reloading of application data is the --touch-reload option. Sample syntax in a .ini file is:
touch-reload = /var/run/uwsgi/app/myapp/reload
where myapp is your application name and /var/run/uwsgi/app is the recommended place for such files (it could be anywhere). The reload file is an empty file whose timestamp is watched by uWSGI; whenever it changes (for example, via touch), uWSGI detects the change and restarts the corresponding application instance.
So, whenever you update your code you should touch the file in order to update the in-memory version of the application. For example, on bash:
sudo touch /var/run/uwsgi/app/myapp/reload
Note that --reload is an undocumented option in the current uWSGI version.
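If you prefer to keep driving reloads through the pidfile, sending signals to the master also works (a sketch, assuming the instance was started with a master process and a pidfile at /var/run/uwsgi.pid as in the question):
kill -HUP $(cat /var/run/uwsgi.pid)   # graceful reload of all workers
kill -INT $(cat /var/run/uwsgi.pid)   # stop the instance entirely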
