apache karaf clean start - maven-3

I'm trying to clean-start Karaf on Windows using the clean option.
It does delete the data directory with the bundle cache, but it copies new bundles into the data directory from the system directory, not from the local Maven repository. And the system directory has stale jars compared to the local Maven repository, which causes Karaf to start with stale bundles.
Is this a 'feature' of the clean option? Am I missing something? How can I start Karaf with the latest code from the Maven repo without dealing with the file system?

You can't, as the system directory is the one used by default.
The clean option is meant to clear up bundles in an awkward state and is only rarely used. This sometimes happens if you start and stop the Karaf container in quick succession: a bundle might be left in an incomplete state, and since bundle state is persisted, only a clean will help. Another way of cleaning is to delete the data folder.
So what you seem to intend is to update certain bundles that are installed from the system folder. For that you need to install the newer version, because Karaf knows which versions are in the system folder; those bundles are defined in the framework feature, which is the basic feature used by Karaf itself.
If you have your own bundles in the system folder, there is no way of updating those, as they are regarded as boot features. In case you want to update them, you'll need to make sure those features aren't boot features anymore; after that, just install the newer versions of your bundles and uninstall the older ones. This can be done from the command shell, as sketched below.
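In a recent Karaf shell (4.x command syntax), the swap could look like this; the Maven coordinates and the bundle ID are placeholders for your own:

karaf@root()> bundle:install -s mvn:com.example/my-bundle/1.1.0
karaf@root()> bundle:list | grep my-bundle
karaf@root()> bundle:uninstall 85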
One side note: it's usually best to ask these questions on the Karaf users mailing list; you'll get more people answering your questions there.

Related

What files or directories of a release are the bare minimum to run a release?

Let's say I have a completely new VPS server which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server distribution- and version-wise.
In the directory _build/prod/rel/my_app123, 4 subdirectories have been generated:
bin
erts-12.3
lib
releases
Will copying the contents of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well? Elixir and Erlang?
How about the production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" and similar files, and doesn't use a database.
When you run mix release, it bundles all of your Elixir/Erlang dependencies for the MIX_ENV in question into the release directory, along with the Erlang BEAM runtime/VM you used for the build and any files you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
Things that are not included:
Any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure a binary-compatible version of it is installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. BEAM isn't like the Java VM: the compiled BEAM code needs to run on a substantially similar architecture. Build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included in the Docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on an OS similar to what you're using in production, and make sure the production AMI (image) has those non-Elixir dependencies on it, or will at deployment time, perhaps via apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether you have the option of customizing the base machine image (maybe with Packer or Ansible); a sketch of the package-manager route follows.
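For instance, on a Debian/Ubuntu-based image, rough equivalents of the Alpine packages above might be installed like this (package names are illustrative and vary by distribution release; check yours):

sudo apt-get update
# on older releases the ncurses package is libncurses5 instead of libncurses6
sudo apt-get install -y openssl libstdc++6 libncurses6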
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol-consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details of how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function, but how you organize the options is up to you.
What I often find useful is generating a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/my_app-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
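From there, deployment is mostly copying the tarball over and unpacking it; a minimal sketch, with a hypothetical host and install path:

scp _build/prod/my_app-1.0.0.tar.gz deploy@my-vps:/opt/my_app/
ssh deploy@my-vps 'cd /opt/my_app && tar -xzf my_app-1.0.0.tar.gz'
# start in the foreground ('daemon' instead of 'start' runs it in the background)
ssh deploy@my-vps '/opt/my_app/bin/my_app start'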

Same VSCode-Settings for the whole crew?

We want to have the same VSCode settings for the whole crew of developers. It would also be nice to have a one-line command to tear VSCode down and restart it from scratch with predefined settings and plugins, so that you don't have to worry about trying out plugins and getting back to the known state. Kind of Config-as-Code for VSCode.
I already found:
https://code.visualstudio.com/docs/editor/extension-gallery#_command-line-extension-management
https://github.com/microsoft/vscode-dev-containers
https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync&ssr=false#qna
https://github.com/gantsign/ansible-role-visual-studio-code-extensions
https://code.visualstudio.com/docs/remote/containers
https://github.com/gantsign/ansible-role-visual-studio-code
But none of these provides a good solution for me.
We are using Mac and Windows machines and develop most of the time locally (not remotely in the cloud or the like).
I imagine like having a script like
.... projectname up
or
.... projectname reset
(or
.... projectname down)
to receive/reset the configured settings and newest plugins that have been configured for the project.
Have any ideas, or do you already use a similar solution?
After doing a lot of research, playing with Docker, Ansible and so on... it seems that, although I excluded it at first, the Settings Sync plugin from Shan Khan is the way to go. It has roughly one million installs!
Only dependency: you need a GitHub account to host your configs. That is what held me back at first, but it shouldn't be much of a problem to create one for everyone on the team and connect it to something like a company GitHub account.
To transfer the settings, copy the files settings.json and keybindings.json to your target machine(s). You can find those files here:
Win: ~\AppData\Roaming\Code\User
Mac: ~/Library/Application Support/Code/User/
Linux: ~/.config/Code/User
You can copy extensions from ~/.vscode/extensions or C:\Users\username\.vscode\extensions on Linux/Mac or Windows respectively.
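Alternatively, the code CLI can export the extension list on one machine and re-install it on another, which avoids copying the extensions folder wholesale (the file name is arbitrary):

# on the source machine
code --list-extensions > extensions.txt
# on the target machine
xargs -n 1 code --install-extension < extensions.txt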
FalcoGer's answer should explain how to copy the files in a way VS Code will pick them up. If you only need to copy the config files once, this solution would be fine.
If you need to "sync" these config files on a regular basis, I would advise to create a Git repository where all config files will be stored.
When cloning the repo to local machines, you can symlink the files to the config destinations (see FalcoGer's anwser). Then when you need to "sync", you only have to run git pull and restart VS Code to apply the changes.
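On a Mac, for example, the setup might look like this (repo URL is a placeholder; see the per-OS config locations listed above):

git clone git@github.com:your-org/vscode-config.git ~/vscode-config
ln -sf ~/vscode-config/settings.json "$HOME/Library/Application Support/Code/User/settings.json"
ln -sf ~/vscode-config/keybindings.json "$HOME/Library/Application Support/Code/User/keybindings.json"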
For your other script-related question, you could create a CLI for this. Python would be the friendliest way to do it. You can find an example here.

How do I install a project built with bazel?

I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?
Unless you want to build a specific package (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that produces an executable (e.g. a shell script) that does the install for you (e.g. does checksums to check whether the installed files have changed and only copies the files that have). You would have to use the extension language to do that; it would look similar to what the Docker rules do to load an image with the incremental loader.
Addendum: I forgot to say that the install itself is run using the run command: bazel run install, if the rule is named install in the top-level BUILD file.
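A minimal sketch of what the wrapped install script could do, assuming the rule runs it with the files to install available next to it (all file names and layout here are hypothetical):

#!/bin/sh
# install.sh -- invoked via: bazel run //:install -- PREFIX
PREFIX="${1:?usage: bazel run //:install -- PREFIX}"
mkdir -p "$PREFIX/lib" "$PREFIX/include"
for f in libmylib.a mylib.h LICENSE; do
  case "$f" in
    *.a) dest="$PREFIX/lib/$f" ;;
    *.h) dest="$PREFIX/include/$f" ;;
    *)   dest="$PREFIX/$f" ;;
  esac
  # copy only when contents differ, so unchanged files keep their timestamps
  cmp -s "$f" "$dest" || cp "$f" "$dest"
done

Because the prefix is an argument to the script rather than state tracked by Bazel, re-running with a different prefix works without confusing the build.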

Trigger.io continuous development

I'd like to know if there is any way to develop continuously with Trigger.io and avoid the forge build step with every file change I want to test in my browser or simulator.
I was faced with the same problem, and I have a working solution that uses watchr and watch to automatically rebuild each time I make a change to a source file. If you are running a "web" version of your app, you can change a source file and go directly to your browser to see the effect of your changes fairly quickly, depending on how long the build takes.
Prerequisites: Ruby, watchr, the Unix 'watch' command, and a terminal.
gem install watchr.
Create a new Ruby file that tells watchr which files to monitor and what to do when it sees a change. I named my file 'my_watch.rb': https://gist.github.com/3153167
Open two terminals. Terminal 1 will run watchr and terminal 2 will run 'forge build ...'.
In terminal 1 run 'watchr my_watch.rb', making sure the path to my_watch.rb is correct, and make sure you've edited my_watch.rb for your setup so that the path inside watch(...) reflects the files to be watched. My example watches all files in the same directory (and beneath) as the my_watch.rb script. You can place my_watch.rb in the 'src' folder of your Trigger.io app if you want to match my example and run watchr my_watch.rb directly from the src folder. Also note that the shell command and path in the block need to be updated to reflect your environment. Again, in my example 'my_watch.rb' is inside 'src/', so when a change is detected we go up one directory and call 'forge build'.
I tend to develop actively against the 'web' version of my app, so I can just open terminal 2 in my forge project directory and 'forge run web'. When I am testing in simulators and on devices, yes, I have to run forge build every time I want to see a change. However, I typically don't have to wait for forge build to finish, because watchr kicked off the build as soon as I made a change, and it happens pretty quickly.
I know this is not an ideal solution, but so far developing new features in the 'web' version first and then implementing them in the mobile versions has been very smooth for me. I've never needed to kill the 'web' version after a build, but maybe I've just been lucky. As for running build each time you want to test the mobile versions: if you are good with your keyboard shortcuts, it really isn't bad at all. XCode makes you build and run after changes are made to source code when creating native iOS apps, so I don't think Trigger is unique in requiring this build step.
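If you'd rather not depend on Ruby and watchr, a crude shell polling loop achieves the same auto-rebuild effect (a sketch; run it from the project root and adjust the src path):

touch .last_build
while true; do
  # rebuild whenever anything under src/ is newer than the marker file
  if [ -n "$(find src -newer .last_build -print | head -n 1)" ]; then
    forge build && touch .last_build
  fi
  sleep 1
done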
I hope this helps and that my answer isn't too specific to me and my setup.
The build phase makes some changes to your source to enable the forge.* APIs - therefore, trying to just use the raw files in your src directory won't work.
You may be tempted to change files directly in the development directory, but this is a pretty bad idea: we delete those files with impunity when we need to!
We have plans on our medium-term roadmap to add a file-system watcher to start builds automatically when changes have occurred.
In the meantime, I just use forge build && forge run PLATFORM which tends to only take a few seconds...
While not perfect... this works for me:
go into development/web
rm src
link to your root src, i.e. ln -s ../../src src
copy the all.js from web/forge and add it to your index.html with a script tag
start nodemon web.js
open in browser.
Note that you will need to comment out the all.js script tag for non-web builds.

"Bundling" external libraries in Erlang?

I have an Erlang application I have been writing which uses the erldis library for communicating with Redis.
Being a bit of a newbie at actually deploying Erlang applications to production, I wanted to know if there was any way to 'bundle' these external libraries with the application rather than installing them into my system-wide /usr/lib/erlang/lib/ folder.
Currently my directory structure looks like...
\
--\conf
--\ebin
--\src
I have a basic Makefile that I stole from a friend's project, but I am unsure how to write one properly.
I suspect this answer could involve telling me how to write my Makefile properly rather than just which directory to plonk some external library code into.
You should really try to avoid project nesting whenever possible. It can lead to all sorts of problems because of how module/application versioning is structured within Erlang.
In my development environment, I do a few things to simplify dependencies across the multiple projects I develop. Specifically, I keep most of my projects sourced in a dev directory and create symlinks into an elib dir that is set in the ERL_LIBS environment variable.
~/dev/ngerakines-etap
~/dev/jacobvorreuter-log_roller
~/dev/elib/etap -> ~/dev/ngerakines-etap
~/dev/elib/log_roller -> ~/dev/jacobvorreuter-log_roller
For projects that are deployed, I've had either package-rpm or package-apt make targets that create individual packages per project. Applications get boot scripts and init.d scripts for easy start/stop control, but libraries and dependency projects just get listed as package dependencies.
I use a mochiweb-inspired style. To see an example of this, get your copy of mochiweb:
svn checkout http://mochiweb.googlecode.com/svn/trunk/ mochiweb
and use
path/to/mochiweb/scripts/new_mochiweb.erl new_project_name
to create a sample project with this structure (feel free to delete everything inside src afterwards and use it for your project).
It looks like this:
/
/ebin/
/deps/
/src/
/include/
/support/
/support/include.mk
Makefile
start.sh
ebin contains *.beam files
src contains *.erl files and local *.hrl files
include contains global *.hrl files
deps contains symlinks to root directories of dependencies
The Makefile and include.mk take care of including the appropriate paths when the project is built.
start.sh takes care of including the appropriate paths when the project is run.
So using symlinks in the deps directory you are able to fine-tune the versions of libraries you use for every project. It is advisable to use relative paths, so that afterwards it is enough to rsync this structure to the production server and run it.
On more global scale I use the following structure:
~/code/erlang/libs/*/
~/code/category/project/*/
~/code/category/project/*/deps/*/
Where every symlink in deps points to the library in ~/code/erlang/libs/ or to another project in the same category.
The simplest way to do this would be to just create a folder named erldir, put the beams you need into it, and then in your start script use the -pa flag to the Erlang runtime to point out where it should fetch the beams, as in the sketch below.
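For example (the application name is a placeholder; -s my_app makes the runtime call my_app:start()):

# start script sketch: put both your app's ebin and the bundled beams on the code path
erl -pa ./ebin -pa ./erldir -s my_app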
The correct way (at least if you buy into the OTP distribution model) would be to create a release using reltool (http://www.erlang.org/doc/man/reltool.html) or systools (http://www.erlang.org/doc/man/systools.html), which includes both your application and erldis.
Add the external libraries that you need, anywhere you want them, and add them to your ERL_LIBS environment variable. Separate the paths with a colon on Unix or a semicolon on DOS.
Erlang will add the "ebin"-named subdirs to its code loading path.
Have your *.app file point out the other applications it depends on.
This is a good halfway-there approach for setting up larger applications.
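For example, setting ERL_LIBS on Unix (the paths are placeholders):

export ERL_LIBS="$HOME/dev/elib:/opt/erlang-libs"
erl    # the ebin subdir of each application under these paths is now on the code path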
Another way is to put your lib paths in ~/.erlang:
code:add_pathz("/Users/brucexin/sources/mochiweb/ebin").
code:add_pathz("/Users/brucexin/sources/webnesia/ebin").
code:add_pathz("./ebin").
code:add_pathz("/Users/brucexin/sources/erlang-history/ebin/2.15.2").
