Industry standard for running daemonized Erlang/OTP applications? - erlang

Some modern infrastructure components, such as Yaws and CouchDB, can be installed on Ubuntu with
sudo apt-get install
and then they can be started with
{service_name} start or simply {service_name}
I wonder if there are any differences between this approach and the Erlang/OTP way, that is, uploading an OTP release to the production server and then starting it with a rebar-generated script.
What is the industry standard? Install/start a repository-based binary or upload/start a rebar-generated release?
Also, do the repository-based binaries have all the OTP functionality in place, like the OTP application structure, supervision, and the option to attach to the live application via a console?
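For concreteness, the two workflows I am comparing look roughly like this (the couchdb package and the myapp release name are just placeholders):
# Repository-based: install a packaged binary and run it as a system service
sudo apt-get install couchdb
sudo service couchdb start
# OTP-release-based: upload a rebar-generated release and start it with its script
scp myapp-1.0.0.tar.gz prod:/opt/myapp/
ssh prod 'cd /opt/myapp && tar xzf myapp-1.0.0.tar.gz && bin/myapp start'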

Every time you start Erlang, you are starting a release. Even if you only type erl in a terminal, it runs a release (you can find the .boot, .rel and .script files in erlang_rootdir/releases/name_of_release).
I think this answers your last question: packages installed from a repository still have the OTP structure, supervision and all the goodies. There are just a couple of bash scripts that make starting and stopping them friendlier to someone used to service_name start. You should also be able to find a script similar to the one generated by rebar.
Check where those files are and simply cat them. You should see that they simply start an Erlang release.
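For example, on a typical Ubuntu install you could check both of these like this (exact paths vary by distribution and Erlang version, so treat them as illustrative):
# Where the default release files (.rel, .boot, .script) live
erl -noshell -eval 'io:format("~s~n", [code:root_dir()]), halt().'
ls /usr/lib/erlang/releases/
# How the packaged service actually boots an Erlang release
cat /etc/init.d/couchdb        # on newer systems: systemctl cat couchdb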
The only thing that you can't do with package managers is hot code upgrade. Most package managers simply stop the running application, install the new version and start it again. In Erlang, you can upgrade a running release, but this is quite advanced stuff. Besides that, there are no disadvantages to using OS packages.
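If you ever do need hot upgrades, a rebar3/relx-generated release supports roughly the following workflow (a sketch only; myapp and the version number are placeholders, and the exact paths depend on your rebar setup):
# Build the new release version plus the upgrade instructions (relup)
rebar3 release
rebar3 relup
rebar3 tar
# Copy the tarball next to the running release and upgrade it in place
cp _build/default/rel/myapp/myapp-0.2.0.tar.gz /opt/myapp/releases/
/opt/myapp/bin/myapp upgrade 0.2.0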

Related

Emacs workflow with development containers

New to Emacs and recently been trying to get used to it. Loving it so far!
One thing I cannot seem to figure out by myself, nor find any proper examples of, is the following workflow:
Since I work on multiple projects with different languages and like to keep my work and private projects separated as much as possible in my OS, I've been working with development containers using Docker and VScode for the past years.
This allowed me to keep both my project dependencies and the development tools in one container, where I just attached my VScode instance to a project and used extensions such as language servers / linting / debugging from within that container.
Currently I can open my projects in Emacs as the code is local and mounted to the containers, but I'm looking for a way to either:
Allow my local Emacs to use the language/linting/debugging services installed in the container.
Install emacs in the dev containers and mount my configs to keep this synchronized.
Or better alternatives?
Most valuable would be to get language servers working again.
In case it matters, I'm working in DOOM Emacs on Arch. Mostly Python, PHP and NodeJS projects.
... use the language/linting/debugging services installed in the container
This is difficult to do with Docker: by design, the host system can't directly access files or binaries installed in a container. Without a lot of tricks around bind mounts, user IDs, paths and permissions, it is very difficult to run a program in a container in a way that looks like it's on the host system. A couple of tools have those tricks built in, but it's not at all universal. (Jenkins, for example, generates about 5 lines' worth of docker run command options if you ask it to run a step inside a container.)
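To give a flavour of the tricks involved, even a simple containerized check against host code looks something like this (the image and command are stand-ins for a real linter or language server):
# Bind-mount the project, match the host user ID, and set the working directory
docker run --rm \
  -v "$PWD":/src -w /src \
  -u "$(id -u):$(id -g)" \
  python:3.11 \
  python -m compileall -q .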
My Emacs experience has generally been much better using a host-based per-language version manager and per-project packaging tool (a per-project node_modules directory, rbenv plus Ruby gem sets, pipenv for Python programs, ...).
In short: Emacs can't use language servers, language interpreters, or other tools from Docker images instead of the host system, at least not without writing a lot of Lisp (and if you do, consider publishing it to MELPA and to GitHub).
Most valuable would be to get language servers working again.
M-x lsp-install-server will download one of the language servers lsp-mode knows about and save it under your $HOME/.emacs.d directory. If you activate lsp-mode and it doesn't already have a language server for the current major mode, it will offer to download it for you. There's usually not much to "get working".

Docker query on containerizing

Our requirement is to create a container for legacy apps over Docker.
We don't have the operating system support/application server support available, nor do we have knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know if there is any specific procedure or steps associated with it.
If no, please tell me why, and what the possible workarounds are.
if we get this ISO image, can we export it as a Docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure taking place in the Virtual Machine world. You take a snapshot of a server, move the .iso file somewhere else and create a new VM that will run on a Hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another for Apache, etc. Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall. For example, if we are talking about something built on an old PHP/MySQL stack, you might find ready-to-use images on hub.docker.com (see the example just below). But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be.
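For instance, if the legacy app really is an old PHP/MySQL stack, a first experiment could be as simple as this (tags and paths are illustrative):
# Try the ready-made images from hub.docker.com before building anything yourself
docker pull php:7.4-apache
docker pull mysql:5.7
# Mount the legacy code into the web container and see how far it gets
docker run --rm -p 8080:80 -v /path/to/legacy/app:/var/www/html php:7.4-apache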
You will need to reverse engineer the application dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages/dependencies that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, composer, or maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like twig and monolog loaded through composer. If you are using SASS you may need to install Ruby as well.
Your job is to track all these down and create a Dockerfile that reproduces the ISO image (a sketch follows below). If you are using a common stack like a J2EE app in Tomcat, or a PHP app fronted by Apache or nginx, there will be base Docker images that will get you most of the way to where you need to go.
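A minimal sketch of such a Dockerfile, written here as a shell heredoc for the PHP example above (every package name and path is an assumption to be replaced with what you actually find on the server):
# Recreate the ISO's contents as a Dockerfile, then build and run it
cat > Dockerfile <<'EOF'
FROM php:7.4-apache
# OS-level packages the app needs (derive the real list from the running server)
RUN apt-get update \
 && apt-get install -y --no-install-recommends libzip-dev \
 && rm -rf /var/lib/apt/lists/*
# Application code (plus vendor/ if the composer dependencies are vendored)
COPY . /var/www/html/
EOF
docker build -t legacy-app .
docker run --rm -p 8080:80 legacy-app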
It does look like there are some tools that can do this for you automatically: Dependency Walker equivalent for Linux?. I can't vouch for any of them. But you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app is using a dependency manager like composer or pip, there is usually a file that lists all the language-specific dependencies.
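For example, on the running legacy server you could dump those inventories like this (only the tools the app actually uses will apply):
# Language-level dependency lists
pip freeze > python-packages.txt          # Python
composer show > composer-packages.txt     # PHP
npm ls --depth=0 > node-packages.txt      # Node.js
# OS-level packages on a Debian/Ubuntu box
dpkg-query -W -f='${Package}\t${Version}\n' > os-packages.txt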
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

erlang zero dependencies installer for Debian and Ubuntu

In order to ease the installation process of RabbitMQ, I want to use a simple, zero-dependency installer for Erlang.
There is a great project on Github from the guys at RabbitMQ - erlang-rpm, but this is RPM only.
I'm aware of erlang-solutions, but it is not a single, no-dependencies installer.
EDIT: Adding some context here. I need this to be able to prepare a simple, offline install for my app, which needs RabbitMQ. Offline means NO access to the internet for downloading dependencies.
Any ideas?
There is no zero-dependency Erlang Ubuntu/Debian package for RabbitMQ.
But the Erlang Solutions package can be installed easily.
We built the CentOS package because the dependencies are hard to install on CentOS (if you don't want to use EPEL).
If you want to create an offline setup, you can do it as for other packages (https://askubuntu.com/questions/974/how-can-i-install-software-or-packages-without-internet-offline). Or you can create your own package by building Erlang from source code.
Note also that RabbitMQ has dependencies such as socat.
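One rough way to stage everything on a machine that does have internet access and then install on the offline server (esl-erlang is the Erlang Solutions package name; the USB path is just an example):
# On an online machine of the same Ubuntu/Debian release: fetch the .deb files
sudo apt-get install --download-only --reinstall -y esl-erlang rabbitmq-server socat
cp /var/cache/apt/archives/*.deb /media/usb/offline-debs/
# On the offline server: install from the copied .deb files
sudo dpkg -i /media/usb/offline-debs/*.deb
# re-run dpkg -i if it complains about installation order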

Live application migration between two different versions of Erlang VMs

I have an Erlang application running on the R13 version of the Erlang VM.
I want to upgrade the VM to R14 without stopping the application.
How can I do a live migration of the application between different versions of the Erlang VM?
You need two things:
a) Lots and lots of planning.
b) An application that runs on several nodes.
The usual trick is to take down part of your node cluster and upgrade that to the new version. It should be able to talk to the older nodes. Then bump the older nodes when the new version of Erlang has taken over.
Erlang does not provide this kind of upgrade for free. It does make it much less painful though.
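The per-node step of such a rolling upgrade looks roughly like this (myapp and the package name are placeholders; how you drain traffic from a node is entirely application-specific):
# Take one node out of service and stop it
/opt/myapp/bin/myapp stop
# Upgrade the Erlang runtime on that machine
sudo apt-get install -y esl-erlang
# Start the node again so it rejoins the cluster alongside the older nodes
/opt/myapp/bin/myapp start
# Check distribution before moving on, e.g. from a remote console:
#   net_adm:ping('myapp@other_host').
/opt/myapp/bin/myapp remote_console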
Short answer: you cannot.
This is because applications compiled for the new version of the BEAM emulator (Erlang VM) would not run properly on older versions of the emulator.
Or in more details:
Upgrading an application without stopping it can only be done using the release handler. But the release handler can only upgrade applications running in the VM, not the VM itself. You can still prepare a relup file, but when systools:make_relup/3 notices that the new release uses a different version of the BEAM emulator, it will add the instruction restart_new_emulator (Low-Level) to restart the node during the upgrade (see the description of restart_new_emulator on the page I mentioned earlier).
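You can see this for yourself by building the new version against the newer emulator, generating the relup, and inspecting it (paths assume a rebar3/relx layout and are only indicative):
# Generate the upgrade instructions for the new release version
rebar3 release
rebar3 relup
# If the ERTS version changed, the relup carries a restart_new_emulator instruction
grep -r restart_new_emulator _build/default/rel/myapp/releases/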
What you could do, however, is upgrading the Erlang VM without interrupting the service (assuming your application provides some service to an external party). But that very much depends on the architecture of your application and has no generic answer.

Distribute CouchDB as part of a Rails app?

I am working on a Rails project and the architect has asked me to investigate bundling CouchDB into the application so that it can be deployed by Capistrano across multiple platforms and managed by Rake.
My expectation was that I could set up the Erlang VM on the various environments and then distribute the CouchDB application with Capistrano. However, I can't find any option to download CouchDB without the Erlang runtime. I can, however, see an option to build CouchDB from source, which I assume is platform-dependent.
I am new to Erlang and CouchDB; am I missing something? Is there a way to bundle CouchDB into a Rails app and distribute it across multiple platforms?
Have a look at some of the tools for provisioning Rails services (such as passenger_stack). Passenger Stack will download, make and install the ancillary services for your Rails app ... it might be something you can adapt or use as a base to install Erlang and CouchDB.
There are a bunch of alternatives to this as well. Deprec contains recipes for provisioning with Capistrano. Essential idea is the same though.
I think you will not find a silver bullet. Distributing Erlang is similar to distributing Ruby; however, Ruby has the advantage of being included in many default OS installs.
I know ejabberd has pre-built binaries for many distros. You might investigate how they do it.
The correct solution probably depends on how many "multiple platforms" you are targeting. If it's "Ubuntu 8.04 plus Ubuntu 10.04" that is different from several Linux distros, plus OSX, plus FreeBSD. Typically only open source projects support those many platforms and ideally you can get patches from the community. For internal projects, I have seen teams standardize on a Linux build and use virtualization on Mac/Windows.
But back to your question:
Building from source is a reasonable option. You could build when you deploy, or pre-build for all platforms and then deploy the binaries. Both Erlang and CouchDB use Autoconf, which means you can --prefix them to a dedicated location (more-or-less standalone apps). It will take some trial and error, but your build script can do the following (a sketch follows after the two steps):
1. Platform-specific dependency setup: gcc, make, autoconf, everything you need. Use apt-get on Ubuntu, yum on RHEL, MacPorts, or whatever you need to get a common platform on your development and deployment systems.
2. Compile and install the rest using the tools from step 1. Use configure --prefix=/opt/my_software to keep it all in one place. (You can completely uninstall with rm -rf.)
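A rough Ubuntu-flavoured sketch of those two steps (the tarball names are placeholders; CouchDB of that era also needed SpiderMonkey and ICU development packages, so adjust the dependency list to your platform):
# Step 1: toolchain and build dependencies
sudo apt-get install -y build-essential autoconf libncurses5-dev libssl-dev libicu-dev
# Step 2: build Erlang, then CouchDB, into one dedicated prefix
tar xzf otp_src.tar.gz && cd otp_src
./configure --prefix=/opt/my_software && make && sudo make install
cd ..
tar xzf apache-couchdb.tar.gz && cd apache-couchdb
./configure --prefix=/opt/my_software && make && sudo make install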
This is a medium-level challenge, mostly trial and error. If possible, work within a build framework such as Rake or Toby's suggestion, passenger_stack. Good luck!
