F# interactive script load management

Suppose I have the following scripts:
C.fsx:

namespace FruitSalad

type Cherry =
    {
        Cherry : int
    }

B.fsx:

#load "./C.fsx"

namespace FruitSalad

type Banana =
    {
        Banana : int
        Cherry : Cherry
    }

A.fsx:

#load "B.fsx"
#load "C.fsx"

open FruitSalad

type Apple =
    {
        Apple : int
        Banana : Banana
        Cherry : Cherry
    }

let c =
    {
        Cherry = 3
    }

let a =
    {
        Apple = 1
        Banana =
            {
                Banana = 2
                Cherry = c
            }
        Cherry = c
    }

printfn "%A" a
Running A.fsx gives this error:

error FS0001: This expression was expected to have type
    'FSI_0001.FruitSalad.Cherry'
but here has type
    'FSI_0002.FruitSalad.Cherry'
I can fix this by removing #load "C.fsx" from A.fsx.
However, maintaining this is awkward for much larger script projects.
Does F# support something like "include once" so that this is not necessary?

I agree with the comment by Fyodor that having a project might be easier. Note that you can create a project with a number of *.fs files in it, but still also have a *.fsx file that loads all the files from the project. This way, the IDE will read information from the project (and all your autocomplete will work), but you will also be able to run everything from a script file, which just needs to #load all the *.fs files in the correct order.
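For example, assuming the project compiles Cherry.fs, Banana.fs, and Apple.fs in that order (hypothetical file names), the companion script can be as small as:

// loadall.fsx (hypothetical): #load the project's files in compile order
#load "Cherry.fs"
#load "Banana.fs"
#load "Apple.fs"

open FruitSalad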
Another option that might help you if you prefer to stick to scripts is to use #load with multiple files at the same time. In the case above, you should be able to do:
#load "./B.fsx" "./C.fsx"
This way, F# interactive will know that you are loading these two files at the same time and will not create a separate version of C.fsx while loading B.fsx.
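So in A.fsx, the two separate directives become a single one:

A.fsx:

#load "./B.fsx" "./C.fsx"

open FruitSalad
// ... rest of A.fsx unchanged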

In DXL, how to get the handle of a module that I don't open myself in the DXL script

I have a DXL script that opens (reads or edits) modules and puts them in a skip list (so that I can close them at the end).
The skip list stores the module handle of each module read or edited:
if (MODIF_OTHER_MODULES)
{
    modSrc = edit(modSrc_Name, false)
}
else
{
    modSrc = read(modSrc_Name, false)
}
put(skp_openmodule, modSrc, modSrc)
But sometimes modules are already open outside my DXL script, so the following check fails:
mvSource = sourceVersion lr
modSrc_data = data mvSource
modSrc_Name = fullName(source lr)
if (null modSrc_data)
    // read/edit the modSrc_Name module and add the module to the skip list: OK, done
else
    // the module is already open, but maybe I didn't open it myself,
    // so I want to check whether the module is already in the skip list
    // and add the module behind modSrc_data to it if it isn't: I don't know how!

Is there a way to get the module of modSrc_data so that it can be added to skp_openmodule if it is not already present in the list?
I don't want to read/edit it again, because I don't know in which mode it was opened previously, and I would prefer to avoid it since I will do this for each object and each link!
It would also be great if I could retrieve the information about how the module was opened (read or edit).
I tried :
module (modSrc_data)
and
module(modSrc_Name)
but neither works.
Not sure if this is due to the excerpt you posted, but you should always turn off the autodeclare option and make sure you use the correct types for your variables, either by checking the DXL manual or by using alternative sources like the "undocumented perms list". data performed on a ModuleVersion returns type Module, so you already have what you need. An alternative would be the perm bool open(ModName_ modRef). Note that the perm module does not return a Module, but a ModName_.
Also, in addition to bool isRead(Module m), bool isEdit(Module m) and bool isShare(Module m) (!!), when you really want to close modules that were opened previously, you might want to check bool unsaved(Module m):
ModuleVersion mvSource = sourceVersion lr
Module modSrc_data = data mvSource
string modSrc_Name = fullName(source lr)

if (null modSrc_data)
    print "read/edit modSrc_Name module and add module in the skip list"
else
{
    print "the module is already open but maybe I don't open it myself"
    if isRead (modSrc_data) then print " - read"
    if isEdit (modSrc_data) then print " - edit"
    if isShare (modSrc_data) then print " - shared mode"
    if unsaved (modSrc_data) then print " - do not silently close me, otherwise the user might be infuriated"
}

What are the step-by-step actions needed to upgrade a Holochain Rust back end from 0.0.1 to 0.0.2?

I started by reviewing the API notes and comparing them:
https://developer.holochain.org/api/
What I have done so far:
Preparation:
Downloaded and installed 0.0.2, and then updated the bash_profile following this link:
https://developer.holochain.org/start.html
JSON PARSE/Stringify update
Updated all of the tests to remove any JSON.parse and JSON.stringify calls as they are no longer needed, for example replacing this:
JSON.stringify({})
with this:
{}
Derive function update
Updated the derive attributes in all zome definition files (lib.rs) to include Debug and DefaultJson, like this:
#[derive(Serialize, Deserialize, Debug, DefaultJson)]
JsonString update
Did a global find-and-replace across all zome files to switch the return types to JsonString, replacing
-> serde_json::Value
with
-> JsonString
so a handler now looks like this:
fn handle_create_action(action: Action, user_address: HashString) -> JsonString { ...
Current errors
I am running into these errors:
error: cannot find derive macro DefaultJson in this scope
error[E0412]: cannot find type JsonString in this scope
How can we import these into the lib.rs files?
Update
This is by no means a comprehensive answer, but here are some of the additional steps I have found with help.
You will also need to edit the Cargo.toml file of each zome (the dependencies section) to look like this:
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
hdk = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_core_types = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_core_types_derive = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
holochain_wasm_utils = { git = "https://github.com/holochain/holochain-rust", branch = "master" }
This was found in the specification app, which is already up to date with the release that happened last night, at this page:
https://github.com/holochain/dev-camp-tests-rust/blob/master/zomes/people/code/Cargo.toml
Each zome needed this as a replacement for everything above the #[derive] attribute:
#![feature(try_from)]
#[macro_use]
extern crate hdk;
extern crate serde;
#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate serde_json;
extern crate holochain_core_types;
#[macro_use]
extern crate holochain_core_types_derive;
use hdk::{
    holochain_core_types::{
        dna::zome::entry_types::Sharing,
        hash::HashString,
        json::JsonString,
        entry::Entry,
        entry::entry_type::EntryType,
        error::HolochainError,
        cas::content::Address,
    },
};
This resolved the initial errors on compile and revealed the next layer of changes needed, via terminal feedback, when I ran hc test to compile, build, and test the app. This is what I am seeing now:
Error 1
error[E0061]: this function takes 1 parameter but 2 parameters were supplied
--> src/lib.rs:56:11
|
56 | match hdk::commit_entry("metric", json!(metric)) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 1 parameter
Error 2
error[E0308]: mismatched types
--> src/lib.rs:60:24
|
60 | return json!({"link error": link_result.err().unwrap()});
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `holochain_core_types::json::JsonString`, found enum `serde_json::Value`
I will attempt to resolve this one by replacing the serde_json calls in the zome code with JsonString...
Error 3
error[E0609]: no field `links` on type `hdk::holochain_wasm_utils::api_serialization::get_links::GetLinksResult`
--> src/lib.rs:82:18
|
82 | .links
| ^^^^^ unknown field
Error 4
error[E0599]: no method named `to_json` found for type `hdk::error::ZomeApiError` in the current scope
--> src/lib.rs:97:32
|
97 | "error": hdk_error.to_json()
| ^^^^^^^
Update 2
@connorturland's answer got me through most of those errors, and now there appears to be just one more.
error[E0063]: missing fields `active`, `date_time`, `description` and 12 other fields in initializer of `Metric`
--> src/lib.rs:48:68
|
48 | let metric_entry = Entry::new(EntryType::App("metric".into()), Metric{
| ^^^^^^ missing `active`, `date_time`, `description` and 12 other fields
error: aborting due to previous error
For more information about this error, try `rustc --explain E0063`.
error: Could not compile `metrics`.
Which is in response to this zome definition:
fn handle_create_metric(metric: Metric, user_address: HashString) -> JsonString {
    let metric_entry = Entry::new(EntryType::App("metric".into()), Metric {
        // => Here is where the error triggers... it wants me to define 'title', 'time', etc.,
        // but as a core function I don't see the point; those will be inputted.
        // Not sure how to address this.
    });
    match hdk::commit_entry(&metric_entry) {
        Ok(metric_address) => {
            match hdk::link_entries(
                &user_address,
                &metric_address,
                "metric_by_user"
            ) {
                Ok(link_address) => metric_address.into(),
                Err(e) => e.into(),
            }
        }
        Err(hdk_error) => hdk_error.into(),
    }
}
For error 1, just check this example, and copy it:
https://developer.holochain.org/api/0.0.2/hdk/api/fn.commit_entry.html
For error 2, just do
link_result.into()
which converts it into a JsonString
For error 3, use
.addresses()
instead of .links; this can be seen here: https://developer.holochain.org/api/0.0.2/holochain_wasm_utils/api_serialization/get_links/struct.GetLinksResult.html
And for error 4 just do
hdk_error.into()
and remove the json! wrapping that it looks like you're attempting :)
In general, if you see a reference to something relating to the hdk, use the search feature of the API reference to find out more about it; it's very good.
Migrating from 0.0.1 to 0.0.2 is exactly what I did recently for the todo-list example. I created a branch for the old version so you can compare the two:
https://github.com/willemolding/holochain-rust-todo
https://github.com/willemolding/holochain-rust-todo/tree/working-on-v0.0.1
From memory, some of the gotchas are:
commit_entry now takes a single reference to an Entry object
Links must be included as part of the define_zome! or they cannot be created
move from serde_json Value to JsonString
Need to include holochain_core_types_derive = { git = "https://github.com/holochain/holochain-rust" , tag = "holochain-cmd-v0.0.2" } in the cargo.toml
Responses from get_entry are generic entry types and can be converted into a local type, say ListEntry, as ListEntry::try_from(entry.value())
There are a whole lot more, so it's probably best to check out the repo.

Can't get OpenCover to work in FAKE

EDITED to show the ignored return value, as pointed out by Fyodor, and the resulting error.
I have a .fsx file with several targets that work as expected, but I can't get a target for OpenCover to work. This is what I have for the Target code:
Target "Coverage" (fun _ ->
OpenCover
(fun p -> { p with ExePath = "./packages/OpenCover.4.6.519/tools/OpenCover.Console.exe"
TestRunnerExePath = "./packages/Machine.Specifications.Runner.Console.0.10.0-Unstable0005/tools/mspec-clr4.exe"
Output = reportDir + "MspecOutput.xml"
Register = "-register:user"
}
)
testDir ## "FakeTest2UnitTesting.dll" + "--xml " + reportDir + "MspecOutput.xml" |> ignore
)
But I now get the following build error:
build.fsx(45,3): error FS0039: The value or constructor 'OpenCover' is not defined. Maybe you want one of the following:
OpenCoverHelper
NCover
I don't know what I am doing wrong. Can someone show me how to use the OpenCoverHelper from the FAKE API?
Thanks
After a lot of playing around and googling, I finally came up with the solution. The basic problem was that I hadn't opened Fake.OpenCoverHelper. I had assumed it was in scope by default, since it is part of the FAKE API and there was no documentation saying otherwise. So, here is the code I use:
// include Fake lib
#r @"packages/FAKE.4.61.2/tools/FakeLib.dll"
open Fake
open Fake.OpenCoverHelper

Target "Coverage" (fun _ ->
    OpenCover (fun p -> { p with
                            ExePath = "./packages/OpenCover.4.6.519/tools/OpenCover.Console.exe"
                            TestRunnerExePath = "./packages/Machine.Specifications.Runner.Console.0.10.0-Unstable0005/tools/mspec-clr4.exe"
                            Output = "./report/MspecOutput.xml"
                            Register = RegisterUser })
        "./test/FakeTest2UnitTesting.dll --xml ./report/MspecOutput.xml"
)
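To actually execute the target, the script still needs the usual FAKE bootstrapping at the end; assuming "Coverage" should be the default target (a sketch, not from the original post), that is just:

// run the Coverage target when the script is invoked
RunTargetOrDefault "Coverage"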
Hopefully this will help someone in the future.

Groovy - How to reflect code from another groovy app to find Classes/Properties/Types within

First, I come from a .NET background, so please excuse my lack of Groovy lingo. Back when I was in a .NET shop, we were using TypeScript with C# to build web apps. In our controllers, we would always receive/respond with DTOs (data transfer objects). This got to be quite the headache: every time you created or modified a DTO, you had to update the TypeScript interface (the d.ts file) that corresponded to it.
So we created a little app (a simple exe) that loaded the dll from the web app, reflected over it to find the DTOs (filtering by specific namespaces), parsed through them to find each class name, their properties, and their properties' data types, generated that information into a string, and finally saved it into a d.ts file.
This app was configured to run on every build of the website. That way, when you ran/debugged/built the website, it would update your d.ts files automatically, which made working with TypeScript that much easier.
Long story short, how could I achieve this with a Grails Website if I were to write a simple groovy app to generate the d.ts that I want?
-- OR --
How do I get the IDE (ex IntelliJ) to run a groovy file (that is part of the app) that does this generation post-build?
I did find this, but I still need a way to run it on compile:
Groovy property iteration
class Foo {
    def feck = "fe"
    def arse = "ar"
    def drink = "dr"
}

class Foo2 {
    def feck = "fe2"
    def arse = "ar2"
    def drink = "dr2"
}

def f = new Foo()
def f2 = new Foo2()

f2.properties.each { prop, val ->
    if (prop in ["metaClass", "class"]) return
    if (f.hasProperty(prop)) f[prop] = val
}

assert f.feck == "fe2"
assert f.arse == "ar2"
assert f.drink == "dr2"
I've been able to extract the Domain Objects and their persistent fields via the following Gant script:
In scripts/Props.groovy:
import static groovy.json.JsonOutput.*

includeTargets << grailsScript("_GrailsBootstrap")

target(props: "Lists persistent properties for each domain class") {
    depends(loadApp)
    def propMap = [:].withDefault { [] }
    grailsApp.domainClasses.each {
        it?.persistentProperties?.each { prop ->
            if (prop.hasProperty('name') && prop.name) {
                propMap[it.clazz.name] << ["${prop.name}": "${prop.getType()?.name}"]
            }
        }
    }
    // do any necessary file I/O here (just printing it now as an example)
    println prettyPrint(toJson(propMap))
}

setDefaultTarget(props)
This can be run via the command line like so:
grails props
Which produces output like the following:
{
    "com.mycompany.User": [
        { "type": "java.lang.String" },
        { "username": "java.lang.String" },
        { "password": "java.lang.String" }
    ],
    "com.mycompany.Person": [
        { "name": "java.lang.String" },
        { "alive": "java.lang.Boolean" }
    ]
}
A couple of drawbacks to this approach are that we don't get any transient properties, and I'm not exactly sure how to hook this into the _Events.groovy eventCompileEnd event.
Thanks Kevin! Just wanted to mention a few steps I had to take to get this running in my case, which I thought I would share:
-> Open up the grails BuildConfig.groovy
-> Change tomcat from build to compile like this:
plugins {
    compile ":tomcat:[version]"
}
-> Drop your Props.groovy into the scripts folder on the root (noting the path to the grails-app folder for reference)
[application root]/scripts/Props.groovy
[application root]/grails-app
-> Open Terminal
gvm use grails [version]
grails compile
grails Props
Note: I was using Grails 2.3.11 for the project I was running this on.
That got everything in the script running successfully for me. Now to modify the println portion to generate TypeScript interfaces.
I will post a GitHub link when it is ready, so be sure to check back.

I want Canopy web testing results to show in VS 2013 test explorer... and I'm SO CLOSE

I'm trying to figure out how to get the test results for canopy to show in the VS test explorer. I can get my tests to show up and it will run them, but it always shows a pass. It seems like the run() function is "eating" the results, so VS never sees a failure.
I'm sure it is a conflict with how canopy nicely interprets the exceptions it gets into test results: normally you'd want run() to succeed regardless of the outcome and report its results through canopy's own reports.
Maybe I should be redirecting output and interpreting that in the MS testing code?
So here is how I have it set up right now...
The Visual Studio test runner looks at this file for what it sees as tests; these call the canopy methods that do the real testing.
open canopy
open runner
open System
open Microsoft.VisualStudio.TestTools.UnitTesting

[<TestClass>]
type testrun() =

    [<ClassInitialize>]
    static member public setup(context : TestContext) =
        // Look in the output directory for the web drivers
        canopy.configuration.ieDir <- "."
        canopy.configuration.chromeDir <- "."
        // start an instance of the browser
        start ie
        ()

    [<TestMethod>]
    member x.LocationNoteTest() =
        let myTestModule = new myTestModule()
        myTestModule.all()
        run()

    [<ClassCleanup>]
    static member public cleanUpAfterTesting() =
        quit()
        ()
myTestModule looks like this:
open canopy
open runner
open System

type myTestModule() =
    // some helper methods
    member x.basicCreate() =
        context "The meat of my tests"
        "Test1" &&& fun _ ->
            // some canopy test statements like...
            url "http://theURL.com/"
            "#title" == "The title of my page"
            // Does the text of the button match expectations?
            "#addLocation" == "LOCATION"
            // add a location note
            click ".btn-Location"

    member x.all() =
        x.basicCreate()
        // I could add additional tests here or I could decide to call them individually
I have it working now. I put the line below after the run() for each test:
Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())
so now my tests look something like this:

[<TestMethod>]
member x.LocationNoteTest() =
    let locationTests = new LocationNote()
    // Add the tests to the canopy suite.
    // Note: this just defines the tests to run; the canopy portion
    // of the tests does not actually execute until run is called.
    locationTests.all()
    // Tell canopy to run all the tests in the suites.
    run()
    Assert.IsTrue(canopy.runner.failedCount = 0, results.ToString())
Canopy and the UnitTesting infrastructure overlap in what they each want to take care of. I want the UnitTesting infrastructure to be the thing "reporting" the summary and details of all tests, so I needed a way to "reset" the canopy side rather than tracking canopy's last known state and comparing against it. For this to work, the canopy suite can only hold one test at a time, but we can still have as many tests as we want at the UnitTesting level. To adjust for that, we do the below in the [<TestInitialize>] member:
runner.suites <- [new suite()]
runner.failedCount <- 0
runner.passedCount <- 0
It might make sense for canopy to provide something that can be called or configured when the user wants to use a different unit testing infrastructure around canopy.
Additionally, I wanted the output that includes the error information to appear as it normally does when a test fails, so I capture the console output in a StringBuilder and clear it in [<TestInitialize>] as well. I set this up by including the line below in [<ClassInitialize>], where common.results is the StringBuilder I then use in the asserts:
System.Console.SetOut(new System.IO.StringWriter(common.results))
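Putting the two resets together, the per-test setup might look like this (a sketch following the approach above; the member name is mine, and common.results is the shared StringBuilder just mentioned):

[<TestInitialize>]
member x.resetCanopyState() =
    // give canopy a fresh suite and zeroed counters for this test
    runner.suites <- [new suite()]
    runner.failedCount <- 0
    runner.passedCount <- 0
    // drop console output captured by the previous test
    common.results.Clear() |> ignore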
Create a mutable type to pass into the myTestModule.all call, which can be updated accordingly upon failure and asserted on after run() completes.
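A minimal sketch of that idea (the TestOutcome type and the all(outcome) signature are hypothetical, not part of canopy):

type TestOutcome() =
    let messages = System.Text.StringBuilder()
    member val Failed = 0 with get, set
    member x.Messages = messages

[<TestMethod>]
member x.LocationNoteTest() =
    let outcome = TestOutcome()
    let locationTests = new LocationNote()
    // the test module records failures into outcome as it runs
    locationTests.all(outcome)
    run()
    Assert.IsTrue(outcome.Failed = 0, outcome.Messages.ToString())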
