I am configuring IntelliJ IDEA to open an Erlang REPL by setting it up as an external tool; however, the working directory parameter is ignored. Is there a way, once the REPL is open, to switch the working directory?
Within the shell use the command cd("some/path") and it will work pretty much the same way you would expect from an ordinary shell.
This means you can move around your project directories and run c(module_name) and be in the local loading path as well -- which can be pretty convenient when hand-tweaking/testing things.
As an aside... most folks don't use an IDE with Erlang, because the shell has so much built into it already, and your OS itself already has whatever other tools you usually want. I've yet to see someone start with an IDE and stick with it in Erlang (they usually wind up either becoming Emacs users or going the vim + coreutils route).
Also, pwd() and ls() work as you'd expect.
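A quick illustration from a shell session (the path and module name are placeholders, and the printed output is approximate):

1> cd("some/path").
/home/me/some/path
ok
2> pwd().
/home/me/some/path
ok
3> ls().
my_module.erl
ok
4> c(my_module).
{ok,my_module}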
Regarding IDEs: I find the Erlang IntelliJ plugin (http://ignatov.github.io/intellij-erlang/) very usable, and when doing more than relatively short one-offs in vim (with the Erlang plugin), I find IDE functionality like code completion and Find Usages useful.
Give it a shot - YMMV.
I started looking into Docker lately, and I think I understand a lot of the benefits it offers: you can quickly create a Docker container and run it on different machines. Building (compiling) is also relatively easy; you can download the Maven image, for example, and just build your code. That works fine. So building is easy, testing is easy, and deploying (and running) in production is easy.
What I don't understand is how Docker can make the development phase easier. By the development phase I mean: starting up your IDE, reading code, quickly navigating through method definitions using the navigation the IDE provides, using IntelliSense, and so on. Then changing something, running a unit test, trying a different third-party library, etc. All things you can do with your IDE. But I don't understand how to do this with a Docker image. I've read a few posts about starting the IDE from within your Docker container, but that requires setting things up with a window manager, and I'm not sure that's the way to go.
Of course I can set up my laptop in such a way that I can do all of this with my IDE, but that way I bypass all of the benefits Docker should offer. I still have to download dependencies, set up environment variables, apply a lot of manual settings, etc. And not just me, but everyone on the team.
So, not a very concrete question, possibly a duplicate, but I just can't wrap my head around it: how do you use an IDE together with Docker?
Yeah, it's hard. It also depends on what language/framework you're using. But the things you mention are all easy to accomplish. For example, we use Ruby a lot, and someone on my team uses RubyMine to work with his code. The source code is mapped into the container, so the changes are reflected immediately. If you want to run a test, I'm sure you can override the command your IDE runs by default with something custom like docker run --rm myapp ./run_tests.sh or similar. At least that's what I do with Vim.
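As a rough sketch of that kind of source mapping (the image name, mount path and test script are placeholders, not anything from a real setup):

# mount the current source tree into the container so edits on the host
# show up immediately, then run the test script inside it
docker run --rm -v "$(pwd)":/app -w /app myapp ./run_tests.sh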
Probably the most important missing piece when doing development with Docker is debugging. I think JetBrains is starting to add Docker features to their IDEs, but I'm not sure of the status of that.
Also, almost every IDE or good editor has an integrated console. You could keep a docker exec session open there and run all your app commands, like tests, generators or anything else, and even do some basic debugging.
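For instance (the container name is a placeholder):

# keep an interactive shell open inside the running container
docker exec -it myapp_container /bin/sh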
Hope it helps.
Is there any VM for Erlang that allows you to compile on the fly instead of compiling beforehand?
It turns out it is possible to compile from the shell -- thanks, Martin.
Are there any pros or cons to doing this?
Will you still be able to hot swap code?
Is this the usual way to handle code?
What benefits does the compiler give you in the end, then?
From the Erlang shell, you can compile a module on the fly using c("path/to/module.erl"). You can also access this functionality through the compile module, specifically the compile:file/{1,2} functions.
For example, suppose we have a file mymod.erl:
-module(mymod).
-export([myfun/0]).
myfun() -> io:format("Hello Joe~n").
Now, from the Erlang shell (or some other module!):
1> compile:file("mymod.erl").
{ok,mymod}
2> mymod:myfun().
Hello Joe
See Erldocs on the compile module for more information.
You can do a great deal with the Erlang compiler in runtime. For example, you can dynamically generate code for a module (use erl_syntax!) and then compile it without even writing it to a file using compile:forms/{1,2}.
(Insert standard speech on great power and great responsibility.)
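As an illustration, here is a minimal sketch of compiling from a source string entirely in memory. It uses erl_scan/erl_parse on a plain string rather than generating forms with erl_syntax, and the module name and helper function are my own invention, not anything from the answer:

-module(dyncompile).
-export([load_from_string/1]).

%% Compile an Erlang module given as a source string and load it,
%% without ever writing a .beam file to disk.
load_from_string(Src) ->
    {ok, Tokens, _} = erl_scan:string(Src),
    Forms = split_forms(Tokens),
    {ok, Mod, Bin} = compile:forms(Forms),
    {module, Mod} = code:load_binary(Mod, "nofile", Bin),
    Mod.

%% Split the flat token list into one token list per form (each ends
%% with a dot token) and parse each into an abstract form.
split_forms(Tokens) ->
    split_forms(Tokens, [], []).

split_forms([], [], Acc) ->
    lists:reverse(Acc);
split_forms([{dot, _} = Dot | Rest], Cur, Acc) ->
    {ok, Form} = erl_parse:parse_form(lists:reverse([Dot | Cur])),
    split_forms(Rest, [], [Form | Acc]);
split_forms([Tok | Rest], Cur, Acc) ->
    split_forms(Rest, [Tok | Cur], Acc).

You could, for example, pass it the source of mymod above as a string and call mymod:myfun() immediately afterwards.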
Will you still be able to hot swap code?
Yes.
Is this the usual way to handle code?
No. Normally Erlang code is compiled ahead of time into BEAM bytecode. Depending on whether Erlang was started in embedded or interactive mode, the modules are either all loaded at startup or loaded dynamically as they are first referenced. If you are building a release, you essentially have to compile ahead of time.
What benefits does the compiler give you in the end, then?
Well, for one thing, we can build compact releases without unnecessary components like the compiler. Of course, we also get all the traditional benefits of ahead-of-time compilation, particularly not having to waste time recompiling on every run.
To sum it up, unless you fully understand the implications and have a very good reason not to compile your code ahead of time, please follow the standard practices.
The Erlang VM can only run compiled code! If you want to interpret Erlang code then you need an interpreter. The module erl_eval implements an Erlang interpreter and is part of the standard Erlang/OTP distribution. It is used by the Erlang shell to interpret the expressions entered.
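A small sketch of what that looks like (mine, not from the answer), evaluating an expression string with erl_eval:

%% scan, parse and interpret an expression string
{ok, Tokens, _} = erl_scan:string("1 + 2 * 3."),
{ok, Exprs} = erl_parse:parse_exprs(Tokens),
{value, 7, _Bindings} = erl_eval:exprs(Exprs, erl_eval:new_bindings()).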
All code handling in the Erlang VM, whether compiling, loading or updating, is done at the module level, so it is impossible to compile or load just one function. The Erlang compiler is written in Erlang, is always available, and can compile either to a file or to a binary which can be immediately loaded into the system. As @MartinTörnwall has pointed out, compiling a module from the shell using c(module) is in essence compiling on the fly.
So there would be no problem in automatically compiling code on the fly when it is used, at the module level. It is just that the current system is not designed to work that way; by default, when it tries to load a module, it only looks for the pre-compiled object file, the .beam file.
Erlang has an interpreter, escript. An entire Erlang program can be written as a script, and almost all language features are available.
By default, the script is interpreted. You can force it to be compiled by including -mode(compile). in the script.
Though it depends on the way you design your application, the regular practice is to have .erl files which are compiled and run, rather than escript files.
So now you have many options.
1. Compile the .erl file to .beam using c(my_module); this automatically loads the .beam file, so the running VM can use it on the fly. Or, in code, you can use functions from the compile and code modules (such as compile:file, code:purge and code:load_file) to compile, load and run modules on the fly.
2. Compile the .erl files ahead of time using erlc, erl -make, rebar, etc. (Erlang has rich build support) and then run the compiled code. You can build archives, boot scripts, releases, etc. to manage running and releasing the Erlang software. This is usually the practice for production.
3. Use escript and run everything in interpreted mode.
4. Use escript with the -mode(compile) directive to tell the Erlang VM to compile the code at runtime (when the escript starts) and run the compiled code in memory (see the sketch just below).
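A tiny escript illustrating options 3 and 4 (the file name and body are arbitrary):

#!/usr/bin/env escript
%% With this directive the VM compiles the script when it starts;
%% leave it out (the default) and the script is interpreted instead.
-mode(compile).

main(_Args) ->
    io:format("Hello from escript~n").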
Are there any pros or cons to doing this?
Compiled code is faster than interpreted code. I don't see any other differences right now in Erlang, as pretty much everything is supported in both. Erlang even supports the combination (calling compiled code from interpreted code).
Will you still be able to hot swap code?
Yes, in all cases. Your code should also be written to handle this.
Is this the usual way to handle code?
Option 2 for production. Option 1 for learning / simple development. Options 3 and 4 on an as-needed basis for specific requirements (perhaps one-time runs).
What benefits does the compiler give you in the end, then?
To make it clear: the erlc program provides a common way to run all compilers in the Erlang system, and the compile module gives an interface to the Erlang compilers. The compiler produces an intermediate binary .beam file, which makes Erlang code run faster than its interpreted counterpart. It also catches syntax errors (compilation errors).
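For example, a typical ahead-of-time workflow from the command line might look like this (the module name, paths and entry function are placeholders):

# compile with debug info into ebin/, then run a function from the compiled module
erlc +debug_info -o ebin src/my_module.erl
erl -pa ebin -noshell -eval 'my_module:main(), halt().'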
My company has a program that uses Lua embedded in its runtime, loading up .lua files from disk and executing functions defined in them repeatedly.
Is there a way to attach to the running process and set breakpoints in my .lua files? (I'd accept either gdb-style command-line debugging as part of the Lua distribution, or perhaps a third-party IDE that provides Visual-Studio-like GUI breakpoints.)
Or is what I'm asking for entirely nonsensical and impossible given the nature of the runtime loading up random files from disk?
Edit: Looks like it's not nonsensical, given that Lua's own debug.getinfo() function can determine the source file for a given function, and debug.sethook() allows a callback for each new line of code entered. So, it's reasonable to load source code from disk and be able to tell when the interpreter is executing a particular line of code from that file. The question remains: how do I latch onto an existing process that has a Lua interpreter and inject my own trace function (which can then watch for file/line number pairs and pause execution)?
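For reference, a minimal sketch of such a trace function in plain Lua (the file name and line number are placeholders; this shows only the hook itself, not the attach-to-a-running-process part):

-- pause whenever a particular line of a particular file is executed
local target_file, target_line = "myscript.lua", 42

debug.sethook(function(event, line)
  local info = debug.getinfo(2, "S")
  if line == target_line and info.short_src:find(target_file, 1, true) then
    print("breakpoint hit at " .. info.short_src .. ":" .. line)
    debug.debug()  -- Lua's built-in interactive prompt; type cont to resume
  end
end, "l")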
If you can modify the .lua files, you can insert the following call just before anything you need to debug:
require 'remdebug.engine'.start()
It starts the RemDebug Lua debugger engine and tries to connect to a controller. If it cannot connect, it will just continue running as normal. I did some fixes to the debugger engine, such as dealing with temporary variables, and my student is working on a debugger GUI (due next year).
In the meantime, you can see whether Lua Development Tools works for you. It features a debugger similar to RemDebug, which should be possible to set up as follows:
require("debugger")(host, port, idekey)
Alternatively, you can use SciTE-debug, which is an extension to the SciTE editor, and can serve as a controller to RemDebug. Just make sure you insert the call to remdebug.engine.start somewhere in your Lua code and insert this into the SciTE output window:
:debug.target=remote.lua
When you start your program, SciTE should show the source and current line.
I've been using the Decoda editor for that. It allows you to attach to a running C++ application; after that it detects that you're running a Lua interpreter within your C++ code and shows your Lua source code, where you can add breakpoints and inspect variables as usual.
This is an alternative I use after much searching. If you have an external executable that loads Lua, you can get this working in a few minutes. The author is very responsive, and it has an interactive debugger which loads your code and lets you place breakpoints interactively. It doesn't have an editor, but I use SciTE or Crimson Editor and start the executable; one line in your main Lua module enables the debugger.
http://www.cushy-code.com/grld/ - this link seems dead now
I've since moved to eclipse.org/ldt, which has an IDE and an integrated debugger; recommended.
hth
The Lua plugin for IntelliJ has a working debugger with no special setup required other than pointing to your Lua interpreter.
Here's a screencast of it:
http://www.screencast.com/t/CBWIkoZPg
Similar to what Michal Kottman described, I have implemented a debugger based on RemDebug, but with additional fixes and features (on github: https://github.com/pkulchenko/MobDebug).
You can update your .lua file with require("mobdebug").start("localhost", 8171) at the point where you want the debugging to start. You can then use the command line debugger to execute commands, set breakpoints, evaluate/execute expressions and so on.
As an alternative, you can use ZeroBrane Studio IDE, which integrates with the debugger and gives you a front-end to load your code and execute same debugger commands in a nice GUI. If you want to see the IDE in action, I have a simple demo here: http://notebook.kulchenko.com/zerobrane/live-coding-in-lua-bret-victor-style.
You should probably use Decoda.
Go to Debug -> Processes -> Attach to attach your process. This should work fine.
Well, the easiest way is this, thanks to the genius author:
https://github.com/slembcke/debugger.lua
You don't need to set up a remote debug server; just require one file and simply call dbg(), and it will pause, just like gdb.
A tutorial is also shipped with it; check it out.
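Usage is roughly like this (assuming debugger.lua is somewhere on your package.path):

-- pull in the single-file debugger and drop a breakpoint anywhere
local dbg = require("debugger")

local function buggy(x)
  dbg()  -- execution pauses here with a gdb-style prompt
  return x * 2
end

print(buggy(21))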
I'm trying to learn Lua, but I don't really know which binary to download. There are two choices:
Lua Binaries
Lua for Windows
The second option, Lua for Windows, seems to be the recommended one, but the installer weighs in at 26.6 MB, which is pretty hefty for what is supposed to be a very lightweight language.
I'm thinking of using Lua as a scripting language for games, and perhaps as a fast development language for file processing, the way Python or Ruby are used. So it must be something lightweight, not a 26.6 MB download.
Which is the appropriate one to download and start?
Lua for Windows, no doubt. It's simpler, easier and faster.
The installer comes with lots of stuff (the SciTE editor and several extra libraries, if I remember correctly), but it asks you before installing all that extra stuff. Just install the minimum and you will be fine.
Lua for Windows includes a handful of other useful libraries and tools. The actual Lua executable included is still tiny, in the 1-2 MB range, as expected.
Having the extras there already will only make things easier, and disk space is cheap: go with Lua for Windows.
You may also want to check out ZeroBrane Studio, which is only a 4 MB download on Windows and is based on the same editor component as the SciTE that comes with Lua for Windows. ZBS also comes with 50+ Lua examples and a few simple lessons to get started quickly with Lua programming.
Quoting from here.
Installation
The LuaBinaries files are intended for advanced users and programmers who want to incorporate Lua in their applications or distributions and would like to keep compatibility with LuaBinaries, so they also will be compatible with many other modules available on the Internet.
If what you want is a full Lua installation, please check other projects such as the Lua for Windows and LuaRocks.
Seems quite clear to me that you should download Lua for Windows.
My problem is creating an Ant target that automates running our installer in console mode.
The installer is created using InstallAnywhere 2008, which UniversalExtractor recognizes as a 7-Zip archive. Once I have the archive unpacked, it appears that the task can use an input file to drive the console (at the very least, it appears that emitting a quit shuts everything down correctly, and output is captured).
So it looks to me as though I have all of the pieces I need to prove out this idea, except a clean way to perform the self-extraction and then stop. Searching for a command-line argument to stop the automatic execution has not produced a likely candidate, and the only suitable Ant task I've found (http://www.pharmasoft.be/7z/) isn't so clearly documented that I have a lot of confidence in it.
The completed solution is expected to work on Windows, Linux, and a small handful of other Unix environments.
What's the best practice to use here?
Since you control the installer creation, can you run the self-extraction step on your machine, package the results in a ZIP file before the installer is launched, etc., and use that instead of the single-file executable? Not very elegant, but it may work.
Also, I am a bit hesitant to blatantly promote my own project :) but since it has been a while since you asked the question and nobody has answered: have you considered an alternative? Our project, InstallBuilder, allows you to install in unattended mode directly, without having to auto-extract the contents. Just invoke the executable with --mode unattended, pass any additional options you may need on the command line or in an external file, and you are good to go. We have a lot of ex-InstallAnywhere customers :)
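Something along these lines (the installer name and option file are placeholders):

# run the installer unattended, reading any remaining settings from an options file
./myapp-1.0-installer.run --mode unattended --optionfile install-options.txt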