How to debug an Elixir application in production? - erlang

This is not particularly about my current problem, but more like in general. Sometimes I have a problem that only happens in production configuration, and I'd like to debug it there. What is the best way to approach that in Elixir? Production runs without a graphical environment (docker).
In dev I can use IEx.pry, but since Mix is unavailable in production, that does not seem to be an option.
For Erlang, https://stackoverflow.com/a/21413344/1561489 mentions dbg and redbug, but even if they can be used, I would need help applying them to Elixir code.

First, start a local node running iex on your dev machine using iex -S mix. If you don't want the application that's running locally to cause breakpoints to be activated, you need to disable the app from starting locally. To do this, you can simply comment out the application function in mix.exs or run iex -S mix run --no-start.
Next, you need to connect to the remote node running in Docker from iex on your dev node using Node.connect(:"remote@hostname"). In order to do this, you have to make sure both the epmd port and the node's distribution port on the remote machine are reachable from your local node.
Finally, once your nodes are connected, from the local iex run :debugger.start(), which opens the debugger GUI. Then, still in the local iex, run :int.ni(ModuleToDebug); this makes the module visible to the debugger, and you can go ahead and add breakpoints and start debugging.
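Putting those steps together, a minimal sketch of the local iex session might look like this (the node name remote@hostname, the cookie my_cookie and the module MyApp.Worker are hypothetical placeholders; both nodes must share the same cookie):

# On your dev machine, started with: iex --name local@127.0.0.1 --cookie my_cookie -S mix run --no-start
Node.connect(:"remote@hostname")   # must return true
:debugger.start()                  # opens the Debugger GUI locally
:int.ni(MyApp.Worker)              # interpret the module so the debugger can see it on all connected nodes
:int.break(MyApp.Worker, 42)       # optionally set a breakpoint at line 42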
You can find a tutorial with steps and screenshots here.

If you are running your production system on AWS, then you should first and foremost leverage CloudWatch to your advantage.
In your Elixir code, configure your logger like this:
config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true,
  metadata: [:application, :module, :function, :file, :line]

config :logger,
  backends: [
    {LoggerFileBackend, :shared_error}
  ]

config :logger, :shared_error,
  path: "#{logging_dir}/verbose-error.log",
  level: :error
Inside your Dockerfile, set an environment variable for where exactly erl_crash.dump gets written to, for example:
ENV ERL_CRASH_DUMP=/opt/log/erl_crash.dump
Then configure awslogs inside a .config file under .ebextensions as follows:
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [erl_crash.dump]
      log_group_name=/aws/elasticbeanstalk/your_app/erl_crash.dump
      log_stream_name={instance_id}
      file=/var/log/erl_crash.dump
      [verbose-error.log]
      log_group_name=/aws/elasticbeanstalk/your_app/verbose-error.log
      log_stream_name={instance_id}
      file=/var/log/verbose-error.log
And ensure that you map a volume for your Docker container in Dockerrun.aws.json:
"Logging": "/var/log",
"Volumes": [
{
"HostDirectory": "/var/log",
"ContainerDirectory": "/opt/log"
}
],
After that, you can inspect your error messages under CloudWatch.
Now, if you are using Elastic Beanstalk (which my example above implies) with a Docker deployment, as opposed to AWS ECS, the container's stdout/stderr is written by default to /var/log/eb-docker/containers/eb-current-app/stdouterr.log, which awslogs then ships to CloudWatch.
The main purpose of logging erl_crash.dump is to at least know when your application crashed and took the container down with it. AWS EB will normally restart the container, which would otherwise keep you ignorant about the restart. You can also detect this from other Docker-related logs, and you can configure alarms to listen for them and be notified when your container had to restart. But another advantage of logging erl_crash.dump to CloudWatch is that, if need be, you can always export it later to S3, download the file and load it into :crashdump_viewer (part of the observer application) to analyse what went wrong.
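For reference, a quick sketch of opening a downloaded dump locally (the path is a placeholder):

# In a local iex (or erl) session on your dev machine:
:crashdump_viewer.start()                           # opens the Crash Dump Viewer GUI and asks for a file
:crashdump_viewer.start('/path/to/erl_crash.dump')  # or load the dump directly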
If, after consulting the logs, you still require a more intimate interaction with your production application, then you need to open a remote shell (remsh) to your node. If you use Distillery, you would configure the cookie and the node name of your production release like this:
Inside rel/config.exs, set the cookie:
environment :prod do
  set include_erts: false
  set include_src: false
  set cookie: :"my_cookie"
end
and under rel/templates/vm.args.eex you set variables:
-name <%= node_name %>
-setcookie <%= release.profile.cookie %>
and inside rel/config.exs, you define the release like this:
release :my_app do
  set version: "0.1.0"
  set overlays: [
    {:template, "rel/templates/vm.args.eex", "releases/<%= release_version %>/vm.args"}
  ]
  set overlay_vars: [
    node_name: "p@127.0.0.1"
  ]
end
Then you can connect directly to your production node running inside Docker by first ssh-ing into the EC2 instance that houses the container, and running the following:
CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie"
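Alternatively, assuming the node name p@127.0.0.1 and the cookie from the Distillery config above, you can attach straight to the running node with --remsh instead of just starting a second node (a sketch, adapt the names to your release):

sudo docker exec -it $CONTAINER_ID bash -c "iex --name q@127.0.0.1 --cookie my_cookie --remsh p@127.0.0.1"

A Distillery release also ships a bin/my_app remote_console command that achieves much the same thing.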
Once inside, you can then poke around or, if need be, at your own peril, dynamically inject modified code for the module you would like to inspect. An easy way to do that is to create a file inside the container and invoke Node.spawn_link(target_node, fn -> Code.eval_file(file_name, path) end).
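A minimal sketch of that approach, assuming the node names and cookie from the configuration above and a patched module saved as /tmp/my_module_patched.ex inside the container (all names are hypothetical):

# From the iex session started inside the container (q@127.0.0.1):
target_node = :"p@127.0.0.1"
true = Node.connect(target_node)

# Evaluate the patched source on the production node; a defmodule in the
# file recompiles and reloads that module there.
Node.spawn_link(target_node, fn ->
  Code.eval_file("my_module_patched.ex", "/tmp")
end)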
In case your production node is already running and you do not know the cookie, you can go inside your running container, run ps aux > t.log, and cat t.log to figure out which (possibly randomly generated) cookie has been applied, and use it accordingly.
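For example (a rough sketch; the exact flag depends on whether the cookie was set via -setcookie in vm.args or --cookie on the command line):

ps aux | grep beam | grep -oE '(-setcookie|--cookie) [^ ]+'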
Docker gets in the way of how epmd communicates with other nodes, so the best option may be to build your own AWS AMI with Packer and deploy straight onto instances instead.
Amazon has recently released a new feature for AWS ECS, awsvpc networking mode, which may facilitate inter-container epmd communication and thus allow connecting to your node directly. I have not tried it out yet, so I may be wrong.
If you are running on a provider other than AWS, then figuring out how to get easy access to your remote logs, with an agent similar to SSM or some other service, is a must.

I would recommend using some sort of exception-tracking tool; so far I have had a great experience with Sentry.

Related

Tweak logging of standalone Neo4j server

When running the Neo4j standalone community Docker, some logs are written to stdout and some to a file inside the container: debug.log.
I would like to be able to set log4j options, like log-level, appenders etc. The reasons are:
I can't access the log files when running in e.g. AWS ECS
The debug logs are quite verbose
log4j-properties are convenient to deploy and manage
So the question is, how can I set a custom log4j properties for a Neo4j server that's running inside a container?
It's not documented whether it's even possible, but I have tried the usual things one starts to think of. First, adding a log4j.xml to the classpath, but to no avail. I have also tried setting dbms.jvm.additional to
-Dlog4j.configuration=<path_to_file>
-Dlog4j.configurationFile=<path_to_file>
-Dlog4j2.configurationFile=<path_to_file>
I've verified that the Neo4j process has the correct jvm-arguments:
/usr/local/openjdk-11/bin/java -cp /var/lib/neo4j/plugins:/var/lib/neo4j/conf:/var/lib/neo4j/lib/*:/var/lib/neo4j/plugins/*
-Xms2048m -Xmx2048m -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch
-XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC
-XX:MaxInlineLevel=15 -XX:-UseBiasedLocking -Djdk.nio.maxCachedBufferSize=262144
-Dio.netty.tryReflectionSetAccessible=true -Djdk.tls.ephemeralDHKeySize=2048
-Djdk.tls.rejectClientInitiatedRenegotiation=true -XX:FlightRecorderOptions=stackdepth=256
-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints -Dlog4j2.disable.jmx=true
-Dlog4j.configuration=file:/var/lib/neo4j/conf/log4j.properties
-Dlog4j.configurationFile=file:/var/lib/neo4j/conf/log4j.xml
-Dlog4j2.configurationFile=file:/var/lib/neo4j/conf/log4j2.xml
-Dfile.encoding=UTF-8 org.neo4j.server.CommunityEntryPoint
--home-dir=/var/lib/neo4j --config-dir=/var/lib/neo4j/conf
I have verified that none of the lib jars contain a log4j properties file that takes precedence.
And whatever I try, no changes are made to the way the server is logging. The same events are written to /logs/debug.log. No exceptions regarding bad log4j config or something like that.
I have created a small project for easier debugging.
I would write the logs out to stdout.
# forward logs to docker log collector
RUN ln -sf /dev/stdout /logs/debug.log
So this is a 'Docker' solution, but it's handy and also works for other use cases.

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are created during the installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option because the config contains some installation-dependent options.
The most reasonable solution I have found was running a script, which edits the config, manually after the installation has finished with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the twelve-factor app methodology, see here.
As I understand your case, you need to start your container by creating the config from some template.
I see a number of options to do it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint generates the config from the template and then starts the application; see the sketch after this list. To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". In your container you will have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
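As a minimal sketch of the first option, using envsubst from gettext instead of Jinja2/Mustache for brevity (template path, target path and service command are hypothetical):

#!/bin/sh
# entrypoint.sh: render the config from a template using the current
# environment variables, then hand control over to the real service.
envsubst < /templates/myConfig.conf.tpl > /xy/myConfig.conf
exec "$@"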

Looking for a convenient way to start and stop applications with docker-compose

For each of my projects, I have configured a docker development environment consisting of several containers. I often switch between projects. That requires stopping one set of containers and starting another. I currently do it like this:
$ cd project1
$ docker-compose stop
$ cd ../project2
$ docker-compose up -d
So I need to remember which application is currently running, cd into the directory where its docker-compose.yml is, stop it, then remember what other project I want to run, cd there and start it.
Is there a better way? Like a utility that remembers which multicontainer applications I have, can stop the currently running one and run another one without manual cding and docker-composeing?
(By the way, what's the correct term for a set of containers hosting parts of a single application?)
I hope docker-compose-ui will help you manage your applications.
I think the real problem here is this:
That requires stopping one set of containers and starting another.
You shouldn't need to stop one project to start another.
Instead of mapping to the same host ports, I would not map any ports at all. Then use a script to look up the IP of the container and connect directly to it:
#!/bin/bash
# look up the IP address of the container whose name/ID is passed as $1
cip=$(docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}} {{ $value.IPAddress}} {{end}}' $1)
This will look up the container ip. Combine that with a command to open the url:
url=http://$cip:8080/
xdg-open $url || open $url
All together this will let you run the application without having to map any host ports. When host ports don't exist, you don't have to stop other projects.
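For example, assuming the two snippets above are combined into a single executable script called open-app (a hypothetical name) that takes the container name as its argument:

./open-app project2_web_1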
If you are a bit proficient in Ruby, you can use some scaffolding for this.
A bare-bones example using threads (to start the different docker-compose sessions from one process and then stop them all together):
require 'docker-compose'

threads = []
project_paths = %w(/project/path1 /project/path2 /project/path3 /project/path)

project_paths.each do |path|
  # run each docker-compose project in its own thread so they start together
  threads.push Thread.new { Docker::Compose::Session.new(dir: path).up }
end

begin
  threads.each do |thread|
    thread.join
  end
rescue SystemExit, Interrupt
  threads.each do |thread|
    thread.kill
  end
rescue Exception => e
  handle_exception e
end
source
It uses
docker-compose gem
threads
Just set project_paths to the folders of your projects. If you want to end them all, use CTRL+C.
You can of course go beyond that, using a daemon and trying to start/stop some of them by "name" and such, but I guess as a starting point for scaffolding, this should be enough.

Prevent default redirection from port 80 to 5000 on Synology NAS (DSM 5)

I would like to use an nginx front server on my Synology NAS for reverse-proxying purposes. The goal is to provide a facade for the non-standard port numbers used by the various webservers hosted on the NAS. nginx should be listening on port 80, otherwise all this wouldn't make any sense.
However, DSM comes out of the box with an Apache server that is already listening on port 80. What it does is really silly: it simply redirects to port 5000, which is the entry point to the NAS web manager (DSM).
What I would like to do is disable this behaviour, making port 80 available for my nginx server. How can I do this?
Since Google also lands here for recent Synology DSM versions, I'll answer for DSM 6 (based on http://tonylawrence.com/posts/unix/synology/freeing-port-80/).
Since DSM 6, nginx is used as the HTTP server and handles the redirection. The following commands will leave nginx in place, but run it on port 8880 instead of 80.
ssh into your Synology
sudo -s
cd /usr/syno/share/nginx
Make a backup of server.mustache, DSM.mustache, WWWService.mustache
cp server.mustache server.mustache.bak
cp DSM.mustache DSM.mustache.bak
cp WWWService.mustache WWWService.mustache.bak
sed -i "s/80/8880/g" server.mustache
sed -i "s/80/8880/g" DSM.mustache
sed -i "s/80/8880/g" WWWService.mustache
Optionally, you can also move 443 to 8881:
sed -i "s/443/8881/g" server.mustache
sed -i "s/443/8881/g" DSM.mustache
sed -i "s/443/8881/g" WWWService.mustache
Quit the shell (e.g., via Ctrl+D)
Go to the Control Panel and change any setting (e.g. the Application portal -> Reverse Proxy to forward http://YOURSYNOLOGYHOSTNAME:80 to http://localhost:8181 - 8181 is the port suggested by the pi-hole on DSM tutorial).
tl;dr Edit /usr/syno/etc/synoservice.d/httpd-user.cfg to look like:
{
  "init_job_map":{"upstart":["httpd-user"]},
  "user_controllable":"no",
  "mtu_sensitive":"yes",
  "auto_start":"no"
}
Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
... then reboot.
Background information
The answer given by Backslash36 is not the easiest solution, and it may also be more difficult to maintain. Here, I give a solution that also doesn't involve starting Web Station, which most other solutions require. Note: for updated documentation see here, which gives a lot of general info about the Synology systems.
It is important to note that newer DSM versions (> 5.x) use upstart now, so much of the previous documentation is not correct. There are two httpd jobs which run by default on Synology machines:
httpd-sys : serves the administration page(s) and is located on 5000/5001 by default.
httpd-user : this, somewhat confusingly, always runs even if the webstation program is not enabled.
If webstation:
is enabled: then this program serves the user webpages.
is not enabled: then this program sets /usr/syno/synoman/phpsrc/web as its DocumentRoot (/usr/syno/synoman/phpsrc/web/index.cgi -> /usr/syno/synoman/webman/index.cgi), meaning that a call to http://address.of.my.dsm will call the index.cgi file. This cgi file is what drives the redirect to 5000 (or whatever you have set the admin_port to be).
From the command line, you can check what the [secure_]admin_port is set to:
Syno-Server> get_key_value /etc/synoinfo.conf admin_port
5184
Syno-Server> get_key_value /etc/synoinfo.conf secure_admin_port
5185
where I have set mine differently.
OK, now to the solution. The best solution is simply to stop the httpd-user daemon from starting. This is presumably what you want anyway (e.g. to start another server like nginx in a Docker container). To do this, edit the relevant upstart configuration file:
Syno-Server> cat /usr/syno/etc/synoservice.d/httpd-user.cfg
{
  "init_job_map":{"upstart":["httpd-user"]},
  "user_controllable":"no",
  "mtu_sensitive":"yes",
  "auto_start":"no"
}
so that the "auto_start" entry is "no" (as it is above). It will presumably be "yes" on your machine and by default. Then edit the stop on runlevel to be [0123456] in /etc/init/httpd-user.conf:
Syno-Server> cat /etc/init/httpd-user.conf
description "start httpd-user daemon"
author "Development Infrastructure Team"
console log
reload signal SIGUSR1
start on syno.share.ready and syno.network.ready
stop on runlevel [0123456]
...
This last step is to ensure that the httpd-user service does actually start, but then automatically stops. This is because there are otherwise a number of services that depend upon it actually starting. Reboot your machine and you will now see that nothing is listening (or forwarding) on Port 80.
Done! It was tricky, but now I have it working just fine. Here is how I did it.
What follows requires connecting to the NAS with ssh, and may not be recommended if you want to keep the warranty on your product (even though it's completely safe IMHO).
TL;DR: In the following files, replace all occurrences of port 80 by a non-standard port (for example, 8080). This will release port 80 and make it available for whatever you want.
/etc/httpd/conf/httpd.conf
/etc/httpd/conf/httpd.conf-user
/etc/httpd/conf/httpd.conf-sys
/etc.defaults/httpd/conf/httpd.conf-user
/etc.defaults/httpd/conf/httpd.conf-sys
Note that modifying a subset of these files is probably sufficient (I could observe that the first one is actually computed from several others). I guess modifying the files in /etc.defaults/ would be enough, but if not, the worst-case scenario is to modify all those files and you will be just fine.
Once this is done, don't forget to restart your NAS!
For those interested in how I found out
I'm not that familiar with the Linux filesystem, and even less with Apache configuration. But I knew that scripts dealing with startup processes are located in /etc/init. The Apache server that was performing the redirection would certainly be launched from there.
This is where I had to get my hands dirty. I performed some cat <filename> | grep 80 for the files in that directory I considered relevant, hoping to find a configuration line that would set a port number to 80.
That intuition paid off: /etc/init/httpd-user.conf contained the line echo "DocumentRoot \"/usr/syno/synoman/phpsrc/web\"" >> "${HttpdConf}" #port 80 to 5000. Bingo!
Looking at the top of the file, I discovered that the HttpdConf variable was referring to /etc/httpd/conf/httpd.conf. This is where the actual configuration was taking place.
From there it is relatively straightforward, even for the Jon Snows out there who know nothing about Apache configuration. The trick was to notice that httpd.conf was instantiated from a template at startup (so changing this file alone was not enough). Performing find / -name "*httpd.conf*", combined with some grep 80, gave me the list of files to modify.
When you look back, all this looks obvious of course.
However, I wish Synology gave us more flexibility, so we don't have to perform dirty hacks like this...

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode, then all my environment specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. This looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the Docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder named .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with Dockerrun.aws.json and the Dockerfile and upload it to Beanstalk
To see the result, inside the EC2 instance, execute docker inspect CONTAINER_ID and you will see the environment variable.
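For example, to print only the environment block of the container (a sketch; --format filters the inspect output):

CONTAINER_ID=$(sudo docker ps --format '{{.ID}}')
sudo docker inspect --format '{{json .Config.Env}}' $CONTAINER_ID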
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current which is where your code should be within the EB instance
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
I don't think the docs are a miss, as Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs say, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open-ended. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.
