I have one Yaws file (say a.yaws) containing a lot of functions that I use again and again, so I have decided to move those common functions into another Yaws file (say common.yaws) and include it from a.yaws.
What is the correct syntax for this?
I am using the following, but the file does not seem to be included:
-include("common.yaws").
Thanks in advance.
If the functions in your common file are basically plain Erlang functions, you can put them in an Erlang module and call them directly. As an example:
(in one.yaws)
<erl>
out(Arg) ->
    mycommonstuff:doIt(Arg).
</erl>
where mycommonstuff.erl contains the exported function doIt.
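For reference, here is a minimal sketch of what mycommonstuff.erl might look like; the body and return value here are just assumptions, it only needs to return something that out/1 can hand back to Yaws:
-module(mycommonstuff).
-export([doIt/1]).

%% Shared code used by several .yaws pages.
doIt(_Arg) ->
    {html, "<p>hello from the common module</p>"}.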
If your common.yaws file actually contains Yaws-specific features, you could use the server-side include feature of Yaws, which is explained here:
http://yaws.hyber.org/ssi.yaws
I want to make a small "library" to be used by my future Maxima scripts, but I am not quite sure how to proceed (I use wxMaxima). Maxima's documentation covers the save(), load() and loadfile() functions, yet does not provide examples. Therefore, I am not sure whether I am using the proper/best way or not. My current solution, which is based on this post, stores my library in the *.lisp format.
As a simple example, let's say that my library defines the cosSin(x) function. I open a new session and define this function as
(%i0) cosSin(x) := cos(x) * sin(x);
I then save it to a lisp file located in the /tmp/ directory.
(%i1) save("/tmp/lib.lisp");
I then open a new instance of Maxima and load the library:
(%i0) loadfile("/tmp/lib.lisp");
The cosSin(x) function is now defined and can be called:
(%i1) cosSin(%pi/4)
(%o1) 1/2
However, I noticed that a substantial number of the libraries shipped with maxima are of *.mac format: the /usr/share/maxima/5.37.2/share/ directory contains 428 *.mac files and 516 *.lisp files. Is it a better format? How would I generate such files?
More generally, what are the different ways a library can be saved and loaded? What is the recommended approach?
Usually people put the functions they need in a file named something.mac, and then load("something.mac"); loads the functions into Maxima.
A file can contain any number of functions. A file can load other files, so if you have somethingA.mac and somethingB.mac, then you can have another file that just says load("somethingA.mac"); load("somethingB.mac");.
One can also create Lisp files and load them too, but it is not required to write functions in Lisp.
Unless you are specifically interested in writing Lisp functions, my advice is to write your functions in the Maxima language and put them in a file, using an ordinary text editor. Also, I recommend that you don't use save to save the functions to a file as Lisp code; just type the functions into a file, as Maxima code, with a plain text editor.
Take a look at the files in share to get a feeling for how other people have gone about it. I am looking right now at share/contrib/ggf.mac and I see it has a lengthy comment header describing its purpose -- such comments are always a good idea.
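As a tiny sketch of what such a file could look like (reusing the cosSin example from the question; the file name is made up):
/* something.mac -- small trig helpers for my scripts. */
cosSin(x) := cos(x) * sin(x);
A new session can then load and use it:
(%i1) load("something.mac")$
(%i2) cosSin(%pi/4);
(%o2) 1/2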
For beginners, like me:
Menu Edit > Configure > Startup commands.
Copy all the functions you have verified into the first box (this will write your wxmaxima-init.mac in the location indicated below).
Restart wxMaxima.
Now you can access the functions without any load() command.
I have written some code that I want to provide in a package, but I also want to expose it to package consumers as a worker. For this purpose I have created a wrapper class that runs an isolate internally and communicates with it via send and listeners to provide the functionality.
The problem arises when I want to use this wrapper class from the bin or web directory: the Uri provided is resolved relative to the directory of the running/main Isolate instead of the package root. For bin it is packagename|bin/ and for web it is packagename|web.
I would like to export this class to consumers so they can choose an easier approach than constructing their own Isolate, but I am not sure how to specify the main file that will be used in spawnUri.
Is there a way to specify the file so it is always resolved to the correct file, regardless of where the main Isolate is run from?
Structure:
// Exports the next file so the class in it will be package visible
packageroot -> lib/package_exports_code_that_spawns_isolate.dart
// This file should contain URI that always resolve to the next file
packageroot -> lib/code_that_spawns_isolate.dart
// The main worker/Isolate file
packageroot -> lib/src/worker/worker.dart
Thanks.
To refer to a library in your package, you should use a package: URI.
Something like:
var workerUri = Uri.parse("package:myPackage/src/worker/worker.dart");
var isolate = await Isolate.spawnUri(workerUri,...);
It's not perfect because it requires you to hard-wire your package name into the code, but I believe it's the best option currently available.
The Isolate.spawnUri function doesn't (and can't) resolve a relative URI reference with respect to the source file that called it; nothing in the Dart libraries depends on where it is called from, because that would simply be too fragile. So a relative URI isn't going to work. The only absolute URI referencing your worker is a package: URI, so that's what you have to use.
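For completeness, here is a rough sketch of what both sides could look like under that approach; "myPackage" and all other names are placeholders, not the package's real API:
// lib/code_that_spawns_isolate.dart
import 'dart:async';
import 'dart:isolate';

Future<SendPort> startWorker() async {
  // The package name is hard-wired, as noted above.
  final workerUri = Uri.parse('package:myPackage/src/worker/worker.dart');
  final receivePort = ReceivePort();
  await Isolate.spawnUri(workerUri, const <String>[], receivePort.sendPort);
  // The worker is expected to send back its own SendPort as the first message.
  return await receivePort.first as SendPort;
}

// lib/src/worker/worker.dart
import 'dart:isolate';

void main(List<String> args, dynamic replyTo) {
  final requests = ReceivePort();
  (replyTo as SendPort).send(requests.sendPort);
  requests.listen((message) {
    // handle requests from the spawning isolate here
  });
}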
I like using .fsi signature files to control visibility. However, if I have both Foo.fsi and Foo.fs files in my solution, and #load "Foo.fs" in a script, it doesn't seem like the corresponding signature file gets used. If I do:
#load "Foo.fsi"
#load "Foo.fs"
... then the desired visibility control happens. Is this the recommended way to achieve this, or is there a better way to do it? In a perfect world, one would like to see the signature file automatically loaded, too.
Not a final answer, but a better way.
From reading Expert F# 4.0, one can do:
#load "Foo.fsi" "Foo.fs" "Foo.fsx"
All three loads are on one line.
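To make the setup concrete, here is a minimal sketch of such a pair (the file names are from the question, the function names are made up); only publicFn appears in the signature, so helper stays hidden once both files are loaded in that order:
// Foo.fsi
module Foo
val publicFn : int -> int

// Foo.fs
module Foo
let helper x = x + 1            // not in the signature; hidden from callers
let publicFn x = helper x * 2   // exposed via Foo.fsi

// script.fsx
#load "Foo.fsi" "Foo.fs"
Foo.publicFn 3 |> printfn "%d"  // works
// Foo.helper 3                 // does not compile: helper is not visible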
TL;DR
The link to the book is via WorldCat; just put in a zip code and it will show you nearby locations where the book can be found.
I'm currently writing some functions related to lists that could possibly be reused.
My question is:
Are there any conventions or best practices for organizing such functions?
To frame this question, I would ideally like to "extend" the existing lists module such that I can call my new function as lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
I read about erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for Lists with new Lists functions?
Note that I'm not looking to fork and change the standard lists module, but to find a way to define new functions in a new module also called lists, and avoid the consequent naming collisions by using some kind of namespacing scheme.
Any advice or references would be appreciated.
Cheers.
To frame this question, I would ideally like to "extend" the existing lists module such that I can call my new function as lists:my_function(). At the moment I have lists_extensions:my_function(). Is there any way to do this?
No, so far as I know.
I read about erlang packages and that they are essentially namespaces in Erlang. Is it possible to define a new namespace for Lists with new Lists functions?
They are experimental and not generally used. You could have a module called lists in a different namespace, but you would have trouble calling functions from the standard module in this namespace.
Here are some reasons not to use lists:your_function() and to use lists_extension:your_function() instead:
Generally, the Erlang/OTP Design Guidelines state that each "application" -- libraries are also applications -- contains modules. You can then ask the system which application introduced a specific module; this breaks down when a module's functions are fragmented across applications.
However, I do understand why you would want a lists:your_function/N:
It's easier for the author of your_function, because they need your_function(...) a lot when working with lists. But when another Erlang programmer -- who knows the stdlib -- reads the code, they will not know what it does. This is confusing.
It looks more concise than lists_extension:your_function/N. That's a matter of taste.
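To make the lists_extension approach concrete, here is a minimal sketch; the function name and body are placeholders, not anything from the question:
-module(lists_extension).
-export([my_function/1]).

%% Lives alongside the stdlib lists module instead of inside it.
my_function(List) when is_list(List) ->
    lists:reverse(List).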
I think this method would work on any distro:
You can make an application that automatically rewrites the core Erlang modules of whichever distribution is running. Append your custom functions to the core modules and recompile them before compiling and running your own application that calls the custom functions. This doesn't require a custom distribution, just some careful planning and use of the file tools and BIFs for compiling and loading.
* You want to make sure you don't append your functions every time. Once you rewrite the file, the change is permanent unless the user replaces the file later. You could use a check with module_info to confirm whether your custom functions already exist, to decide whether you need to run the extension writer.
Pseudo Example:
lists_funs() ->
    ["myFun() -> <<\"things to do\">>."].

extend_lists() ->
    {ok, Io} = file:open(?LISTS_MODULE_PATH, [append]),
    lists:foreach(fun(Fun) -> io:format(Io, "~s~n", [Fun]) end, lists_funs()),
    file:close(Io),
    %% c/1 is a shell function; from module code use compile:file/1
    compile:file(?LISTS_MODULE_PATH).
* You may want to keep copies of the original modules to restore if the compilation fails; that way you don't have to do anything heavy if you make a mistake in your list of functions, and you can also use them as the source any time you want to rewrite the module to extend it with more functions.
* You could use a lists_extension module to keep all of the logic for your functions, and have the functions you append to lists simply delegate to it: funName(Args) -> lists_extension:funName(Args).
* You could also make an override system that searches for existing functions and rewrites them in a similar way but it is more complicated.
I'm sure there are plenty of ways to improve and optimize this method. I use something similar to update some of my own modules at runtime, so I don't see any reason it wouldn't work on core modules also.
I guess what you want is to have some of your functions accessible from the lists module. It is good that you want to convert commonly used code into a library.
One way to do this is to test your functions well, and if they are fine, copy them and paste them into the lists.erl module (WARNING: ensure you do not overwrite existing functions; just paste at the end of the file). This file can be found at $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/src/lists.erl. Make sure that you add your functions to those exported by the lists module (in the -export([your_function/1, ...]) attribute) to make them accessible from other modules. Save the file.
Once you have done this, we need to recompile the lists module. You could use an Emakefile. Its contents would be as follows:
{"src/*", [verbose,report,strict_record_tests,warn_obsolete_guard,{outdir, "ebin"}]}.
Copy that text into a file called Emakefile and put it at $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/Emakefile.
Once this is done, open an Erlang shell whose pwd(), the current working directory, is the directory containing the Emakefile, i.e. $ERLANG_INSTALLATION_FOLDER/lib/stdlib-{$VERSION}/.
Call make:all() in the shell and you will see that the lists module is recompiled. Close the shell.
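For example, from a shell started in that directory (up_to_date is what make:all() returns on success):
1> make:all().
up_to_date
2> q().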
Once you open a new Erlang shell, and assuming you exported your functions from the lists module, they will work the way you want, right in the lists module.
Erlang being open source allows us to add functionality, recompile and reload the libraries. This should do what you want; success!
The include directive is usually used for a .hrl file at the top of an .erl file.
But, I would like to use include from the Erlang console directly.
I am trying to use some functions in a module. I have compiled the erl file from the console. But, the functions I want to use do not work without access to the hrl file.
Any suggestions?
"But, the functions I want to use do not work without access to the hrl file."
This can't be true, but from this I'll take a shot at guessing that you want access to records in the hrl file that you don't (normally) have in the shell.
If you do rr(MODULE) you will load all records defined in MODULE (including those defined in an include file included by MODULE).
Then you can do everything you need to from the shell.
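For example (the module and record names here are made up):
1> rr(my_module).
[my_record]
2> #my_record{field = 1}.
#my_record{field = 1}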
(Another thing you may possibly want for testing is to add the line -compile(export_all) to your erl file. Ugly, but good sometimes for testing.)
Have you tried the compile:file option? You can pass a list of modules to be included thus:
compile:file("myfile.erl", [{i, "/path/1/"}, {i, "/path/2/"}])
It's worth noting that jsonerl.hrl doesn't contain any functions. It contains macros. As far as I know, macros are a compile-time-only construct in Erlang.
The easiest way to make them available would be to create a .erl file yourself that actually declares functions that are implemented in terms of the macro. Maybe something like this:
-module(jsonerl_helpers).
-include("jsonerl.hrl").
-export([record_to_struct_f/2]).

record_to_struct_f(RecordName, Record) ->
    ?record_to_struct(RecordName, Record).
... which, after you compile, you could call as:
jsonerl_helpers:record_to_struct_f(RecordName, Record)
I don't know why the author chose to implement those as macros; it seems odd, but I'm sure he had his reasons.