I've never really used the .app files for my projects. I understand those are required for loading an application through the application module.
Is there another use of such files?
The *.app file, along with the *.rel file, is used to generate boot scripts. A boot script is used to automatically launch my application when Erlang starts up. An application resource file describes which applications need to be running before my application can be launched. For instance, if I use mnesia and indicate that in the .app file for my application, then when I generate a boot script and use it to start my application, it will start mnesia for me before starting my own application.
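As an illustration (all names here are made up), a minimal my_app.app resource file declaring a dependency on mnesia might look like this:

```erlang
%% my_app.app -- illustrative application resource file
{application, my_app,
 [{description, "Example application that uses mnesia"},
  {vsn, "1.0.0"},
  {modules, [my_app_app, my_app_sup]},
  {registered, [my_app_sup]},
  %% kernel and stdlib are always needed; mnesia is the extra dependency
  {applications, [kernel, stdlib, mnesia]},
  {mod, {my_app_app, []}}]}.
```

With this in place, a generated boot script will arrange for mnesia to be started before my_app.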
While you may get automatic dependency installation/handling with other package managers, the boot script is useful for managing dependencies during the startup of your application, which is important in an OTP application setup.
Note: an application in OTP refers to a bundle of running processes and/or code. Applications can depend on other applications in a number of ways: either they require the code to be installed, or they require the application to be running.
They're used when building releases (together with *.rel files, to generate boot scripts). I recommend just starting with the *.app file and the application behaviour callback though. As for starting with OTP, a nice sweet spot in development is to do:
-module(foo).
-export([start/0]).
start() ->
    [application:start(A) || A <- [sasl, inets, x, y, etc]].
to start all the applications you depend on having running for your application with a simple
$ erl -s foo
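When you later move to the application behaviour, the callback module is only a few lines. A sketch, assuming a hypothetical app foo with a supervisor module foo_sup:

```erlang
-module(foo_app).
-behaviour(application).
-export([start/2, stop/1]).

%% Called by application:start(foo) once all dependent
%% applications are running.
start(_StartType, _StartArgs) ->
    foo_sup:start_link().

stop(_State) ->
    ok.
```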
If your project is to be used as a service or framework to other projects, another benefit is that those Erlang applications can in turn depend on, or include, your application. An Erlang application is the unit in which to provide services and even libraries (stdlib is a library, no "moving parts", just library modules).
Being an Erlang application gives you a simple way to pass configuration parameters to your application. When you pass -mnesia dir '"/some/path"' to erl, it is accessed by the mnesia application as application:get_env(mnesia, dir). If you define an application called foo, you can pass -foo key 'some-Erlang-literal' to erl. The *.app file can contain defaults in the env section, the *.rel file overrides them, and the command line overrides that.
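Sketched out for a hypothetical application foo with a port parameter:

```erlang
%% In foo.app, a default in the env section:
%%   {env, [{port, 8080}]}
%% Overridden at the command line:
%%   $ erl -foo port 9090
%% Read from anywhere in the application:
{ok, Port} = application:get_env(foo, port).
```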
The *.app file has sections where you list the modules and registered processes your application introduces. This is used when building releases to check that there are no collisions between applications.
Some additional features of Erlang applications that I don't know well:
Start phases. Used for applications that have complex start-up needs, such as being "partially running" for a while.
Distributed applications. Used to define where the application should run in a cluster.
Takeover. Used to release a machine from the distributed application, such as if it needs maintenance or upgrading.
Eventually you will want to start your application set in a more structured way, and that is when the whole release / boot-script mechanism will become more understandable. Or the opposite: you will decide it is overkill for your specific needs. Just start with a simple *.app and a module with the application behaviour callbacks. You won't look back.
New to Emacs and recently been trying to get used to it. loving it so far!
One thing I cannot seem to figure out by myself, nor find any proper examples of, is the following workflow:
Since I work on multiple projects with different languages and like to keep my work and private projects separated as much as possible in my OS, I've been working with development containers using Docker and VS Code for the past years.
This allowed me to keep both my project dependencies and the development tools in one container, where I just attached my VS Code instance to a project and used extensions such as language servers / linting / debugging from within that container.
Currently I can open my projects in Emacs as the code is local and mounted into the containers, but I'm looking for a way to either:
Allow my local emacs to use the language/linting/debugging services installed in the container.
Install emacs in the dev containers and mount my configs to keep this synchronized.
Or better alternatives?
Most valuable would be to get language servers working again.
In case it matters, I'm working in DOOM Emacs on Arch. Mostly Python, PHP and NodeJS projects.
... use the language/linting/debugging services installed in the container
By design this is difficult to do with Docker: the host system can't directly access files or binaries installed in a container. Without a lot of tricks around bind mounts and user IDs and paths and permissions, it's very difficult to run a program in a container in a way that looks like it's on the host system. A couple of tools have those tricks built in, but it's not at all universal. (Jenkins, for example, generates about 5 lines' worth of docker run command options if you ask it to run a step inside a container.)
My Emacs experience has generally been much better using a host-based per-language version manager and per-project packaging tool (a per-project node_modules directory, rbenv plus Ruby gem sets, pipenv for Python programs, ...).
In short: Emacs can't use language servers, language interpreters, or other tools from Docker images instead of the host system (without writing a lot of Lisp (and if you do consider publishing it to MELPA (and also to GitHub))).
Most valuable would be to get language servers working again.
M-x lsp-install-server will download one of the language servers lsp-mode knows about and save it under your $HOME/.emacs.d directory. If you activate lsp-mode and it doesn't already have a language server for the current major mode, it will offer to download it for you. There's usually not much to "get working".
Our requirement is to create a container for legacy apps over docker.
We don't have the operating system support/application server support available, nor do we have knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know if there is any specific procedure or steps associated with it.
If no, please tell me why, and the possible workarounds for the same.
if we get this ISO image, can we export it as a Docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure taking place in the Virtual Machine world. You take a snapshot of a server, move the .iso file somewhere else and create a new VM that will run on a Hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another one for Apache, etc... Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall... For example, if we are talking about something working with old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be...
You will need to reverse engineer the application's dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages/dependencies that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, Composer, or Maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like Twig and Monolog loaded through Composer. If you are using Sass you may need to install Ruby as well.
Your job is to track all these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack, like a J2EE app in Tomcat or a PHP app fronted by Apache or nginx, there will be base Docker images that will get you most of the way to where you need to go.
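For instance, a reconstruction of a discovered PHP stack might start from a sketch like this (image tags, extensions and paths are assumptions to be replaced with what you actually find on the legacy server):

```dockerfile
# Hypothetical reconstruction of the legacy PHP app
FROM php:7.4-apache

# OS-level dependencies discovered on the legacy server
RUN docker-php-ext-install pdo_mysql

# Language-level dependencies recovered from composer.json
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
COPY composer.json composer.lock /var/www/html/
RUN cd /var/www/html && composer install --no-dev

# Finally, the app code itself
COPY . /var/www/html/
```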
It does look like there are some tools that can do this for you automatically: see "Dependency Walker equivalent for Linux?". I can't vouch for any of them. But you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app uses a dependency manager like Composer or pip, there is usually a file that lists all the language-specific dependencies.
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.
I am very new to Electron applications. I need some help with Electron installation.
I have an Electron desktop application and a windows service.
I can start and stop my pre-installed services by using the sudo-prompt package.
I am creating a Windows installer by using the electron-winstaller package.
But I want to bundle my Windows service along with my Electron application. My requirement is that when I install my Electron package it should install my service too, and when I uninstall my package that service should be uninstalled.
Please help me out. Any clue or suggestion will be appreciated.
If you think this should be achieved with something else, then please do suggest it.
Electron's Windows installer packager strikes me as a special-case tool that would likely hit limitations in scenarios like this. I would use a general-case tool instead, such as the free and open source Windows Installer XML Toolset, aka WiX. Alongside it I would use another FOSS tool called Industrial Strength Windows Installer XML, aka IsWiX.
WiX allows you to describe and build MSI databases using an XML/XSD domain-specific language. It supports MSBuild for easy integration with your CI/CD pipeline. IsWiX* is a set of project templates and graphical designers that provide opinionated project structuring (scaffolding), greatly shortening the learning curve and implementation time. For example, the installer you describe could be done without writing a single line of XML.
For more information see: https://github.com/iswix-llc/iswix-tutorials
The desktop-application and windows-service tutorials should** show you everything you need to know to author this installer. Basically, follow the desktop-application tutorial all the way through and then skip to the final portion of the windows-service tutorial, where you define the Windows service.
* I'm the maintainer of IsWiX.
** This assumes your service exe is a proper Windows service that interfaces with the Windows service control manager. If it's really just a console app that runs as a service, you will need to include a program such as srvany.exe. This will require one line of hand-crafted XML to extend the service definition in the registry with the proper command-line value to be passed to your exe. An example can be found here: Wix installer to replace INSTSRV and SRVANY for user defined service installation
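For orientation, the service definition in WiX authoring typically boils down to a ServiceInstall/ServiceControl pair next to the service executable. A sketch with illustrative names and IDs:

```xml
<Component Id="MyServiceComponent" Guid="*">
  <File Id="MyServiceExe" Source="MyService.exe" KeyPath="yes" />
  <ServiceInstall Id="MyServiceInstall" Name="MyService"
                  DisplayName="My Service" Type="ownProcess"
                  Start="auto" ErrorControl="normal" />
  <ServiceControl Id="MyServiceControl" Name="MyService"
                  Start="install" Stop="both" Remove="uninstall" Wait="yes" />
</Component>
```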
I'm exploring Yaws and I've read the Yaws manual and the Building Web Applications with Erlang book. I've followed every step in the book's last chapter where a multi_cast app is built, but I can't run it. I believe it's the Erlang code which isn't being found or used.
I've got a project dir in my home directory and I've pointed the Yaws docroot for this particular virtual server to my project directory, where a htdocs dir with the .yaws files resides.
<server localhost>
port = 8001
listen = 127.0.0.1
docroot = <my_path_here>/erlang_yaws/multi_cast
</server>
and I've changed the ebin_dir to also point to the project's ebin directory:
ebin_dir = <my_path_here>/erlang_yaws/multi_cast/_build/default/lib/multi_cast/ebin
Upon starting Yaws with yaws -i I can go to some simple .yaws files I've got which do not rely on any of the project's .beam files. However, when I point my browser at a .yaws file which relies on the app's compiled files, I get the following error:
=ERROR REPORT==== 21-Jan-2017::14:27:17 ===
ERROR erlang code threw an uncaught exception:
File: <my_path_here>/erlang_yaws/multi_cast/htdocs/status.yaws:1
Class: exit
Exception: {noproc,{gen_server,call,[multi_cast_front,{get_etag}]}}
Req: {http_request,'GET',{abs_path,"/htdocs/status.yaws"},{1,1}}
Stack: [{gen_server,call,2,[{file,"gen_server.erl"},{line,204}]},
{m_27449121_1,out,1,
[{file,"<my_path_here>/.yaws/yaws/default/m_27449121_1.erl"},
{line,35}]},
{yaws_server,deliver_dyn_part,8,
[{file,"yaws_server.erl"},{line,2872}]},
{yaws_server,aloop,4,[{file,"yaws_server.erl"},{line,1242}]},
{yaws_server,acceptor0,2,[{file,"yaws_server.erl"},{line,1065}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]
I believe this is due to my gen_server app not starting correctly, or not being found? The source code seems to compile successfully.
I'm on MacOS Sierra with Yaws 2.0.4, Erlang 19 and I did a regular Yaws install with homebrew.
Also, is there a resource which thoroughly explains how to structure Yaws and a simple application? I've read both the manual and the book, and both fail at explaining this (or I'm really thick), and it's quite frustrating to get stuck from a user-experience viewpoint.
You've correctly added your modules to the load path using the ebin_dir configuration variable, but that's not enough to actually start and run your application. The book you're following doesn't address this; see page 101, where it says, "It is also possible to run Yaws as an OTP app inside an existing Erlang setup, but this is beyond the scope of this book."
An easy way to start your server is to use the runmod configuration variable, documented in the yaws_config man page:
runmod = ModuleName
    At startup Yaws will invoke ModuleName:start() in a separate process. It is possible to have several runmods. This is useful if we want to reuse the Yaws startup shell script for our own application.
Assuming your gen_server provides a start/0 function, set the name of your module as the value of runmod and Yaws will start it. A problem with this approach, though, is that it works fine for experimentation but not for production because there's nothing supervising your gen_server.
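For example, judging from the error report your front-end process is registered as multi_cast_front; assuming the module of the same name exports a start/0 function, the yaws.conf entry would be:

```
runmod = multi_cast_front
```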
A better approach is to have a full OTP application running alongside Yaws. With this approach, your application's supervisors manage your workers. One way to do this is by using yapps (Yaws applications). Another way is to create a release containing both Yaws and your application. Still another way is to let Yaws supervise your workers.
Addressing the second part of your question requires defining what you mean by "a simple application". Is it a simple static page application? A simple application using .yaws pages? A simple SSE application? A simple appmod application? A simple websocket application? A simple application running as a sibling of Yaws? There are a variety of possible definitions. The best short answer is to always structure your application using OTP design principles. You might also consider sending questions on this topic to the Yaws mailing list.
Some modern infrastructure components, such as Yaws and CouchDB, can be installed on Ubuntu with
sudo apt-get install
and then they can be started with
{service_name} start or simply {service_name}
I wonder if there are any differences between this approach and the Erlang/OTP way, that is, uploading an OTP release to the production server and then starting it with a rebar-generated script.
What is the industry standard? Install/start a repository-based binary or upload/start a rebar-generated release?
Also, do the repository-based binaries have all the OTP functionality in place, like OTP applications structure, supervision and the option to attach to the live application via console?
Every time you start Erlang, you are starting a release. Even if you only type erl in a terminal, it runs a release (you can find the .boot, .rel and .script files in erlang_rootdir/releases/name_of_release).
I think this answers your last question: packages installed from a repository still have the OTP structure, supervision and all the goodies. There are just a couple of bash scripts that make starting and stopping them friendlier to someone used to service_name start. You should also be able to find a script similar to the one generated by rebar.
Check where those files are and simply cat them. You should see that they simply start an Erlang release.
The only thing that you can't do with package managers is hot code upgrade. Most package managers simply stop the running application, install the new version and start it back up. In Erlang, you can upgrade a running release, but this is quite advanced stuff. Besides that, there are no disadvantages to using OS packages.