Do you use CEAN, copy the source and compile it, copy the BEAM files, or something else? I need to distribute some Erlang code and I'm not sure which approach to choose.
I've been working on EPM, an Erlang package manager. It pulls from GitHub. It's non-invasive and doesn't require installing anything on your system other than downloading an escript. It works as follows:
jvorreuter$ ./epm install ibrowse mochiweb
epm v0.1.0, 2010
===============================
Install the following packages?
===============================
+ epm-mochiweb-master
+ cmullaparthi-ibrowse-master
([y]/n) y
+ downloading http://github.com/epm/mochiweb/tarball/master
+ running mochiweb build command
+ running mochiweb install command
+ downloading http://github.com/cmullaparthi/ibrowse/tarball/master
+ running ibrowse build command
+ running ibrowse install command
You can read more about it at http://www.jkvor.com/erlang-package-manager
The linked blog post is blank as of August 2013. The GitHub repository is at https://github.com/JacobVorreuter/epm
I clone the git repository, build it, and add the directory to my ERL_LIBS path. I hack the source for my private customizations. For those commits that are sensible, I publish.
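As a rough sketch of that workflow (the repository URL, directory names, and build command are made up for the example; the build step depends on the project):

git clone https://github.com/some-user/some_app.git ~/erlang-libs/some_app
cd ~/erlang-libs/some_app
make                                    # or rebar3 compile, depending on the project
export ERL_LIBS=$HOME/erlang-libs:$ERL_LIBS

With ERL_LIBS pointing at a directory of checkouts like this, erl adds each application's ebin directory to the code path automatically.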
I am a much happier developer after finding git and being able to manage my own changes and still be able to get upstream changes that I can rebase my stuff on.
I realize that this looks raw for end users, but I am my own end user.
If I were to ship something to other end users, I would look into using .ez zip archive files that the Erlang code loader can use; see the section "Loading of Code From Archive Files" in the code module documentation. Then provide a script that invokes erl with the correct arguments.
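A minimal sketch of that approach, assuming a hypothetical application my_app, version 1.0, already compiled under my_app-1.0/ebin:

# package the compiled application as a code archive
zip -r my_app-1.0.ez my_app-1.0/ebin my_app-1.0/priv

# wrapper script: start erl with the archive on the code path
erl -pa my_app-1.0.ez/my_app-1.0/ebin -s my_app

The archive is named after the application directory it contains; the code loader can then read BEAM files straight out of the .ez file.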
If the repository isn't available as a Git repository, I clone it with git-svn. If I can't do that, I tend to stay away from it.
I use faxien (a package manager for Erlang releases and applications) from the Erlware project: http://www.erlware.org. It and sinan are essential tools for Erlang development :).
I use Agner on Mac, Linux, and Unix; and CEAN on Windows. CEAN's the only Erlang package manager with Windows support.
I package them in a Debian repository on Launchpad.
I'm trying to figure out how to properly set up an ejabberd project that allows for easy compilation of custom beam files. So far, we've been using an existing project that is cumbersome to manage and uses erlide as the IDE.
I would like to set up the project in a way that I can use a more helpful IDE like vscode, and somehow streamline the compiling and copying of the beam files and updating the module on the server.
Writing code in Elixir is fine as well- I just want the project to be set up in a way that is dev friendly.
Apologies if the question is too broad, but I'm not exactly sure how else to best phrase it. If you feel like I'm missing something in my current flow, please let me know, as I've basically inherited this project. If there are any clarifications required, let me know as well.
Thanks.
"easy compilation of custom beam files"
"somehow streamline the compiling and copying of the beam files and updating the module on the server."
If the task is about compiling and loading additional modules: a running ejabberd node can compile, load, and start additional modules at runtime; see
https://docs.ejabberd.im/developer/extending-ejabberd/modules/#ejabberd-contrib
Usually the modules come from
https://github.com/processone/ejabberd-contrib
but you can tell ejabberd to download other modules from other git repositories, or you can copy a module's source code and tell ejabberd to install it. Those modules can be written in Erlang or Elixir. Full example: https://docs.ejabberd.im/developer/extending-ejabberd/elixir/#elixir-module-in-ejabberd-contrib
Basically:
1. You write the module on your development machine and test it...
2. When happy with the source code, copy mod_whatever.erl to the production machine, into $HOME/.ejabberd-modules/sources/mod_whatever, as explained in the example mentioned earlier.
3. Run ejabberdctl module_install mod_whatever.
In step 2, instead of copying the source code yourself, you can have a git repository just for your module and tell ejabberd the module's git URL, similarly to https://github.com/processone/ejabberd-contrib/tree/master/extra
BTW, for step 3, starting in ejabberd 22.10, there's a page in ejabberd webadmin to install and uninstall those modules (copying the files requires manual administration of course).
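For steps 2 and 3, a rough sketch of what that looks like on the production machine (mod_whatever is the placeholder from the example above; the .spec file and src/ layout follow what ejabberd-contrib modules typically use):

mkdir -p $HOME/.ejabberd-modules/sources/mod_whatever/src
cp mod_whatever.erl  $HOME/.ejabberd-modules/sources/mod_whatever/src/
cp mod_whatever.spec $HOME/.ejabberd-modules/sources/mod_whatever/

ejabberdctl module_check mod_whatever
ejabberdctl module_install mod_whatever
ejabberdctl module_uninstall mod_whatever   # if you need to remove it again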
I would like to set up the project in a way that I can use a more helpful IDE like vscode
What a coincidence, these days I'm playing with VSCode variants (VSCode, VSCodium, Coder's code-server and Github Codespaces) and how to develop ejabberd using them. This is useful for step 1 that I mentioned earlier (write module and test it). If you are interested in ejabberd + VSCode, tell me.
Let's say, I have a completely new VPS server which I've just rolled out, which I haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server, Linux-distribution- and version-wise.
In the directory _build/prod/rel/my_app123 there have been generated 4 subdirectories:
bin
erts-12.3
lib
releases
Will copying the content of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well, such as Elixir and Erlang?
How about production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" or similar files, and doesn't use a database.
When you run mix release, it bundles all of your Elixir/Erlang dependencies for the MIX_ENV in question into the release directory, along with the Erlang BEAM runtime/VM that you were using in your build and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
Things that are not included:
any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version of it installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM. The compiled BEAM code needs to run on a substantially similar architecture: build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included on the docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on a similar OS to what you're using for production, and make sure the production AMI (image) has those non-Elixir dependencies on it, or will at the time of deployment, perhaps using apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether you have the option of customizing the base machine image (maybe with Packer or Ansible).
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
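To make the copy-and-run part concrete, a minimal sketch (host and target paths are placeholders; the bin/my_app123 commands are the standard ones mix release generates):

# on the build machine
scp -r _build/prod/rel/my_app123 user@your-vps:/opt/my_app123

# on the VPS
/opt/my_app123/bin/my_app123 start     # run in the foreground
/opt/my_app123/bin/my_app123 daemon    # run in the background
/opt/my_app123/bin/my_app123 remote    # attach a shell to a running node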
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I often find useful is generating a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/my_app-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
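Deploying that tarball is then roughly the following (server name and paths are illustrative):

# copy and unpack on the target machine
scp _build/dev/my_app-1.0.0.tar.gz user@server:/opt/my_app/
ssh user@server 'cd /opt/my_app && tar -xzf my_app-1.0.0.tar.gz'

# start the release
ssh user@server '/opt/my_app/bin/my_app start'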
We want to have the same VSCode settings for the whole crew of developers. It would also be nice to have a one-line command to tear VSCode down and restart it from scratch with predefined settings and plugins, so that you do not have to worry about trying out plugins and getting back to a known state. Kind of Config-as-Code for VSCode.
I already found:
https://code.visualstudio.com/docs/editor/extension-gallery#_command-line-extension-management
https://github.com/microsoft/vscode-dev-containers
https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync&ssr=false#qna
https://github.com/gantsign/ansible-role-visual-studio-code-extensions
https://code.visualstudio.com/docs/remote/containers
https://github.com/gantsign/ansible-role-visual-studio-code
But none of these provides a good solution for me.
We are using Mac and Windows machines and develop most of the time locally (not remotely in the cloud or the like).
I imagine like having a script like
.... projectname up
or
.... projectname reset
(or
.... projectname down)
to receive/reset the configured settings and newest plugins that have been configured for the project.
Have any ideas or use a similar solution already?
After doing a lot of research, playing with Docker, Ansible and so on... it seems that, although I excluded it at first, the Settings Sync plugin from Shan Khan is the way to go. It has roughly 1 million installs!
Only dependency: you need a GitHub account to host your configs. That is what held me back at first, but it should not be that much of a problem to get one for everyone on the team and connect it to something like a company GitHub account.
Copy the files settings.json and keybindings.json to your target machine(s) to copy the settings. You can find those files here:
Win: ~\AppData\Roaming\Code\User
Mac: ~/Library/Application Support/Code/User/
Linux: ~/.config/Code/User
You can copy extensions from ~/.vscode/extensions or C:\Users\username\.vscode\extensions on Linux/Mac or Windows respectively.
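The command-line extension management mentioned in the question's first link can also be scripted; a small sketch for exporting one machine's extensions and installing them on another (bash shown; the same code commands work from PowerShell on Windows):

# on the machine with the desired setup
code --list-extensions > extensions.txt

# on the target machine
while read ext; do
  code --install-extension "$ext"
done < extensions.txt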
FalcoGer's answer should explain how to copy the files in a way VS Code will pick them up. If you only need to copy the config files once, this solution would be fine.
If you need to "sync" these config files on a regular basis, I would advise to create a Git repository where all config files will be stored.
When cloning the repo to local machines, you can symlink the files to the config destinations (see FalcoGer's answer). Then when you need to "sync", you only have to run git pull and restart VS Code to apply the changes.
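As a sketch of that setup on a Mac (the repository name and layout are made up; use the paths from the previous answer for Windows/Linux):

git clone git@github.com:your-org/vscode-config.git ~/vscode-config

ln -sf ~/vscode-config/settings.json    "$HOME/Library/Application Support/Code/User/settings.json"
ln -sf ~/vscode-config/keybindings.json "$HOME/Library/Application Support/Code/User/keybindings.json"

# later, to "sync":
cd ~/vscode-config && git pull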
For your other script-related question, you could create a CLI for this. Python would be the most friendly way to do this. You can find an example here.
I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary 100kb large waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
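In practice the non-installation workflow looks roughly like this (the project path is hypothetical):

# build the waf script once, from a waf checkout
./waf-light configure build

# copy the generated script into your project and commit it there
cp waf ~/projects/my_project/
cd ~/projects/my_project
./waf configure
./waf build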
Fedora (at least Fedora 22) has a yum package for waf, so a system install of waf is possible, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
1. Copy waf to /usr/bin/waf.
2. As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
3. mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
4. If you want to use waf with Python 3, repeat steps 2-3 running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
5. (Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
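Put together as shell commands, the steps above look roughly like this (the version and hash in the directory name will differ for your waf build):

sudo cp waf /usr/bin/waf
sudo /usr/bin/waf                      # unpacks to /usr/bin/.waf-2.0.19-<hash>
sudo mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 \
        /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18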
That said, here's something to consider, as another answer said: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that a project will fail to build when there are newer versions of waf. Thus, the one-global-version use case seems to be not officially supported.
Someone once showed me an open-source tool for downloading, building and managing multiple Erlang SDKs on a single machine, but I can't for the life of me remember what it was called.
Are you thinking about kerl?
First you'd build a release:
kerl build 18.0 my-18.0
And then install it:
kerl install my-18.0 ~/erl-18.0/
To use it, either run the programs in ~/erl-18.0/bin directly, or source ~/erl-18.0/activate to add that directory to PATH:
source ~/erl-18.0/activate
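For completeness, a few related kerl commands that may come in handy (check kerl's README for your version):

kerl list releases         # versions available to build
kerl list builds           # what you have built locally
kerl list installations    # what you have installed and where
kerl_deactivate            # shell function added by 'activate'; restores your previous PATH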