What is the recommended way to load and reload fsx files? Just experimenting... yes yes, right language for the right job, etc. etc.
I love how the following can be done in FSI:
#load "script.fsx";
open Script
> let p = script.x 1
Error: This expression was expected to have type string but here has int...
(* edit script.fsx x to make it int -> int *)
>
> #load "script.fsx"
> let p = script.x 1
val it : int = 2
But how do we do this for an application that we are running via fsi blah.fsx? Maybe something that is sitting in a while loop. It seems #load and the other directives cannot appear inside a let or a module, i.e. you cannot write something like let reload script = #load script. I wonder why?
My original method was to have .fs files and recompile + relaunch each time I wanted to add or fix something. That method feels primitive.
My second method was to try using the #load directive inside a module, which turns out not to work (which kind of makes sense in terms of scoping)...
module test1 =
#load #"C:\users\pc\Desktop\test.fsx"
open Test
module test2 =
...
Another way would be to create a new process for every module by launching fsi module.fsx via System.Diagnostics.Process, but this seems horrible and inefficient, ugh.
I have a feeling deep in my heart that this will not be trivial in .NET, but I would like to pose the question anyway; FSI does it... I wonder if I can leverage the FSI API or something (or at least copy their code)?
TL;DR I read the following about Erlang and want it for myself in F#.
Erlang: Is there a way to reload changed modules into an already running node with rebar?
"...any time a module in your program changes on disk, the reloader will replace the running copy automatically."
I don't know if this would work in F#, but in ML you can load a master file that loads all the files in your project, then executes whatever code you need to knit them together, and runs your application. To see an example of a massive app run from inside a REPL, look at the Isabelle/HOL site from the University of Cambridge Computer Laboratory: http://www.cl.cam.ac.uk/research/hvg/Isabelle/installation.html. After downloading the app, look in the source code directory for any file called root.ml. There will be half a dozen of them controlling various levels of the implementation. This is recursive, because a top-level file can call a file in several sub-directories that loads that particular sub-feature. This lets you target your application to various scenarios depending on which top-level file is executed.
Typical .NET Framework applications cannot unload/reload assemblies unless those assemblies are in an App Domain separate from the primary one that starts with the application. This is essentially how most plugin systems are designed for applications that run on the full .NET Framework. Things may be changing post .NET Standard 2.0 in .NET Core with the Collectible Assemblies feature.
References:
https://github.com/dotnet/coreclr/issues/552
https://github.com/dotnet/corefx/issues/19773
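If you do end up on .NET Core 3.0 or later, the Collectible Assemblies feature surfaces as collectible AssemblyLoadContext instances. Below is a minimal F# sketch of the load/use/unload cycle; the context name, plugin path and entry-point convention are placeholders for illustration, not anything prescribed by the feature itself.
open System.Runtime.Loader

// Sketch: load an assembly into a collectible context, use it, then unload it
// so the file on disk can be rebuilt and loaded again later (.NET Core 3.0+).
let loadUseUnload (assemblyPath: string) =
    let ctx = new AssemblyLoadContext("reloadable", isCollectible = true)
    try
        let asm = ctx.LoadFromAssemblyPath(assemblyPath)   // path must be absolute
        printfn "Loaded %s" asm.FullName
        // ... reflect over asm and invoke whatever entry point your convention defines ...
    finally
        ctx.Unload()   // the context and its assemblies become eligible for collection
Note that unloading is cooperative: the context is only actually collected once nothing still references types or objects loaded from it.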
I want to block the io library so that community created scripts don't have access to it.
I could simply create a Lua state without it, but here's the dilemma: community scripts should still be able to use the io library by calling loadfile() on libraries created by the dev team that have wrapped io functions in them.
I found no way to achieve this duality of blocking functions/libraries from community scripts while still allowing those scripts to run the offending functions/libraries when they are wrapped (for sanitization purposes) in a dev-maintained library that community scripts can load with loadfile(). I'm resorting to the ugly method of blacklisting certain strings: if a script contains them, it doesn't run. The blacklist is checked on the C++ side, where the script to run is a string variable that is fed to the VM only if it's clean.
If I blacklist...
"_G", "io.", "io .", "io}", "io }", "io ", "=io", "= io", "{io", "{ io", ",io", ", io", " io"
...is it still possible to call io library functions, or do I have everything covered? The reason blocking _G is a must is this:
a = "i"
b = "o"
boom = _G[a..b]
I want to know if another 'hack' is possible. Also, I welcome alternatives on how I can achieve the aforementioned duality without blacklisting strings.
Write your own loadfile function that checks the location of the loaded file (presumably all dev-maintained libraries have a defined location) and adds the io library to the environment made available to scripts loaded from that location (using the env parameter of loadfile in Lua 5.2+). The sandbox itself won't have access to the io library, but the dev libraries will.
I'm reading the source code of a project that is a combination of C++ and Lua, intertwined through Luabind.
There is a file la.lua, which contains a function exec(arg). The Lua file also uses functions/variables from other Lua files, so it has statements like the following at the beginning:
module(..., package.seeall);
print("Loading "..debug.getinfo(1).source.."...")
require "client_config"
Now I want to run la.exec() from an interactive terminal (on Linux), but I get errors like
attempt to index global 'lg' (a nil value)
If I try to import la.lua, I get
require "la"
Loading #./la.lua...
./la.lua:68: attempt to index global 'ld' (a nil value)
stack traceback:
./lg.lua:68: in main chunk
[C]: in function 'require'
stdin:1: in main chunk
[C]: ?
What can I do?
Well, what could be going wrong?
(Really general guesswork follows; there's not much information in what you provided…)
One option is that you're missing dependencies because the files don't properly require all the things they depend on. (If A depends on & requires B and then C, and C depends on B but doesn't require it because it's implicitly loaded by A, directly loading C will fail.) So if you throw some hours at tracking down & fixing dependencies, things might suddenly work.
(However, depending on how the modules are written this may be impossible without a lot of restructuring. As an example, unless you set package.loaded["foo"] to foo's module table in foo before loading submodules, those submodules cannot require "foo". (Luckily, module does that; in newer code without module it is often forgotten – and then you'll get an endless loop (until the stack overflows) of foo loading other modules which load foo which loads other modules which …) Further, while "fixing" things so they load in the interpreter, you might accidentally break the load order used by the program/library under normal operation, which you won't notice until you try to run it normally again. So it may simply cost too much time to fix dependencies. You might still be able to track down enough to construct a long lua -lfoo -lbar … one-off dependency list which might get things to run, but don't depend on it.)
Another option is that there are missing parts provided by C(++) modules. If these are written in the style of a Lua library (i.e. they have luaopen_FOO), they might load in the interpreter. (IIRC that's unlikely for C++ because it expects the main program to be C++-aware but lua is (usually? always?) plain C.) It's also possible that these modules don't work that way and need to be loaded in some other way. Yet another possibility might be that the main program pre-defines things in the Lua state(s) that it creates, which means that there is no module that you could load to get those things.
While there are some more variations on the above, these should be all of the general categories. If you suspect that your problem is the first one (merely missing dependency information), maybe throw some more time at this as you have a pretty good chance of getting it to work. If you suspect it's one of the latter two, there's a very high chance that you won't get it to work (at least not directly).
You might be able to side-step that problem by patching the program to open up a REPL and then do whatever it is you want to do from there. (The simplest way to do that is to call debug.debug(). It's really limited (no multiline, no implicit return, crappy error information), but if you need/want something better, something that behaves very much like the normal Lua REPL can be written in ~30 lines or so of Lua.)
Let's say we have a fsi script like this:
#r "System.Core.dll"
let script = "Console.WriteLine(\"Hello, World!\")"
// fsi script ???
We can use #load to invoke fsi on a file, but is it possible to somehow invoke fsi on an in-memory string without writing that to a file first?
The use case is an API compatibility tester: given a dll, I would like to create a script that invokes all its public APIs and compile that script against a different version of the same dll.
I could always write the generated script to disk, but it would be much cleaner if I could run it directly.
Tasks of the kind you outlined in your question can be performed with the tools provided in the Microsoft.FSharp.Compiler.Interactive namespace, in particular with the help of the FsiEvaluationSession type from Microsoft.FSharp.Compiler.Interactive.Shell.
This gist authored by Ryan Riley demos exactly your scenario using a thin wrapper type FSharpEngine over FsiEvaluationSession, making programmatic use of fsi as convenient as:
....
let engine = new FSharpEngine()
engine.Execute("<some F# code>") |> processOutput
....
engine.Dispose()
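If you would rather skip the wrapper, here is a rough sketch of driving FsiEvaluationSession directly, based on the FSharp.Compiler.Service documentation; the exact namespace, NuGet package and fsi argument strings have shifted between compiler-services versions, so treat the details as assumptions rather than a drop-in recipe.
open System.IO
open Microsoft.FSharp.Compiler.Interactive.Shell

// Streams standing in for fsi's stdin/stdout/stderr.
let inStream = new StringReader("")
let outStream = new StringWriter()
let errStream = new StringWriter()

let config = FsiEvaluationSession.GetDefaultConfiguration()
let session =
    FsiEvaluationSession.Create(config, [| "fsi.exe"; "--noninteractive" |],
                                inStream, outStream, errStream)

// Evaluate the in-memory string directly - no temporary .fsx file needed.
session.EvalInteraction """System.Console.WriteLine("Hello, World!")"""
printfn "fsi output: %s" (outStream.ToString())
EvalExpression works the same way but hands the result back as an FsiValue option, which is handy when you want the evaluated value rather than just side effects.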
I would like to convert a bunch of fs files to fsx files.
Each of those fs files references classes defined in, say, base.fs.
So instead of being compiled into the project and relying on compiler resolution, everything would be file based.
That means that if all of those files have to include base.fsx, and one file references another, base.fsx would be included twice.
Does anyone know how to make a conditional include with fsx files?
The preprocessor documentation states
There is no #define preprocessor directive in F#. You must use the
compiler option or project settings to define the symbols used by the
#if directive.
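That said, fsi automatically defines the INTERACTIVE symbol (and the compiler defines COMPILED), so you can at least tell the two modes apart with #if. A small sketch follows, assuming a base.fsx that defines a module Base; this distinguishes script use from compiled builds, but it does not by itself stop base.fsx from being loaded twice.
// more.fsx
#if INTERACTIVE
#load "base.fsx"   // only seen by fsi; when compiling, Base is resolved from the project files
#endif
let more () = Base.test() + 1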
If you're loading all the files from a single fsx script, then you can load the individual files from the project in the right order and the individual library files do not need to load base.fs directly - the code will be defined, because it has been loaded before.
For example, if you have base.fs:
module Base
let test() = 10
and you have more.fs which does not load base.fs but uses the functions defined there:
module More
let more () =
Base.test() + 1
then you can load all files in F# interactive (in, say, script.fsx) and it will work fine:
#load "base.fs"
#load "more.fs"
More.more()
The only disadvantage is that you won't get IntelliSense when editing more.fs (because the editor does not know about base.fs). To work around that, it is probably a good idea to keep the files in a project in Visual Studio. But you can still load them in F# interactive for experimentation & testing.
I'm currently writing some list-related functions that could possibly be reused.
My question is:
Are there any conventions or best practices for organizing such functions?
To frame this question, I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for lists with new lists functions?
Note that I'm not looking to fork and change the standard lists module, but to find a way to define new functions in a new module also called lists, while avoiding the consequent naming collisions by using some kind of namespacing scheme.
Any advice or references would be appreciated.
Cheers.
To frame this question, I would ideally like to "extend" the existing lists module such that I'm calling my new function the following way: lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
No, so far as I know.
I read about Erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for lists with new lists functions?
They are experimental and not generally used. You could have a module called lists in a different namespace, but you would have trouble calling functions from the standard module in this namespace.
Here are some reasons not to use lists:your_function() and to use lists_extension:your_function() instead:
Generally, the Erlang/OTP Design Guidelines state that each "application" -- libraries are also applications -- contains modules. You can then ask the system which application introduced a specific module; that breaks down when a module's functions are fragmented across applications.
However, I do understand why you would want a lists:your_function/N:
It's easier for the author of your_function, because they need your_function(...) a lot when working with []. But when another Erlang programmer -- one who knows the stdlib -- reads this code, they will not know what it does. That is confusing.
It looks more concise than lists_extension:your_function/N. That's a matter of taste.
I think this method would work on any distro:
You can make an application that automatically rewrites the core Erlang modules of whichever distribution is running: append your custom functions to the core modules and recompile them before compiling and running your own application that calls the custom functions. This doesn't require a custom distribution, just some careful planning and use of the file tools and BIFs for compiling and loading.
* You want to make sure you don't append your functions every time. Once you rewrite the file, the change is permanent unless the user replaces the file later. You could use a check with module_info to confirm whether your custom functions already exist, to decide if you need to run the extension writer.
Pseudo Example:
%% Source text of the functions to append to lists.erl
%% (?LISTS_MODULE_PATH points at .../stdlib-*/src/lists.erl).
lists_funs() -> ["my_fun() -> <<\"things to do\">>."].

extend_lists() ->
    {ok, Io} = file:open(?LISTS_MODULE_PATH, [append]),
    lists:foreach(fun(Fun) -> io:format(Io, "~s~n", [Fun]) end, lists_funs()),
    file:close(Io),
    compile:file(?LISTS_MODULE_PATH).   %% c/1 is shell-only; compile:file/1 works from a module
* You may want to keep copies of the original modules to restore if the compilation fails; that way you don't have to do anything heavy if you make a mistake in your list of functions, and you also have a source to start from any time you want to rewrite the module to extend it with more functions.
* You could keep all of the logic for your functions in a lists_extension module and have the functions appended to lists simply delegate to it, e.g. funName(Args) -> lists_extension:funName(Args).
* You could also make an override system that searches for existing functions and rewrites them in a similar way, but that is more complicated.
I'm sure there are plenty of ways to improve and optimize this method. I use something similar to update some of my own modules at runtime, so I don't see any reason it wouldn't work on core modules also.
I guess what you want to do is to have some of your functions accessible from the lists module. It is good that you want to turn commonly used code into a library.
One way to do this is to test your functions well and, if they are fine, copy them and paste them into the lists.erl module (WARNING: make sure you do not overwrite existing functions; just paste at the end of the file). This file can be found at $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/src/lists.erl. Make sure that you add your functions to those exported by the lists module (in the -export([your_function/1,.....]) attribute) to make them accessible from other modules. Save the file.
Once you have done this, we need to recompile the lists module. You could use an Emakefile. The contents of this file would be as follows:
{"src/*", [verbose,report,strict_record_tests,warn_obsolete_guard,{outdir, "ebin"}]}.
Copy that text into a file called Emakefile. Put this file at the path $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/Emakefile.
Once this is done, open an Erlang shell whose pwd(), the current working directory, is the path where the Emakefile lives, i.e. $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/.
Call make:all() in the shell and you will see that the lists module is recompiled. Close the shell.
Once you open a new Erlang shell, and assuming you exported your functions in the lists module, they will work the way you want, right in the lists module.
Erlang being open source allows us to add functionality, recompile and reload the libraries. This should do what you want. Success!