I want to write a quantum program in F# but I don't know how to call Q# operations from F#. How exactly would I do this?
I've tried reading the C# version first but it doesn't seem to translate well to F#.
TL;DR: You have to create a Q# library project (which will yield a .csproj project containing only Q# files) and reference it from a purely F# application.
You cannot mix F# and Q# in the same project, because it won't compile: Q# compiles to C# for local simulation, and you can't have C# and F# in the same project. However, you can have two separate projects in different languages which both compile to MSIL and can reference each other.
The steps are:
Create Q# library QuantumCode and write your code in it.
Let's say your code has an entry point with the signature operation RunAlgorithm (bits : Int[]) : Int[] (i.e., it takes an array of integers as a parameter and returns another array of integers).
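For illustration, a minimal Q# operation with that signature might look like this (a placeholder that simply echoes its input; the real algorithm body is up to you):
namespace QuantumCode {
    operation RunAlgorithm (bits : Int[]) : Int[] {
        // Placeholder body; a real algorithm would allocate qubits and do quantum work here.
        return bits;
    }
}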
Create an F# application (for simplicity let's make it a console app targeting .NET Core) FsharpDriver.
Add a reference to the Q# library to the F# application.
Install the NuGet package Microsoft.Quantum.Development.Kit which adds Q# support to the F# application.
You will not be writing any Q# code in FsharpDriver, but you will need to use functionality provided by the QDK to create a quantum simulator to run your quantum code on, and to define data types used to pass the parameters to your quantum program.
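If you are working from the command line, the project reference and the package can be added with something like this (a sketch using the .NET Core CLI and the project names above; adjust the paths to your layout):
dotnet add FsharpDriver/FsharpDriver.fsproj reference QuantumCode/QuantumCode.csproj
dotnet add FsharpDriver/FsharpDriver.fsproj package Microsoft.Quantum.Development.Kit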
Write the driver in F#.
// Namespace in which quantum simulator resides
open Microsoft.Quantum.Simulation.Simulators
// Namespace in which QArray resides
open Microsoft.Quantum.Simulation.Core
[<EntryPoint>]
let main argv =
    printfn "Hello Classical World!"
    // Create a full-state simulator
    use simulator = new QuantumSimulator()
    // Construct the parameter
    // QArray is a data type for fixed-length arrays
    let bits = new QArray<int64>([| 0L; 1L; 1L |])
    // Run the quantum algorithm
    let ret = QuantumCode.RunAlgorithm.Run(simulator, bits).Result
    // Process the results
    printfn "%A" ret
    0 // return an integer exit code
I posted a full example of the project code here (originally that project dealt with using Q# from VB.NET, but for F# all the steps are the same).
After implementing a type checker in Rascal using TypePal, how do I hook it up to the Eclipse IDE? Any resources or repositories addressing this problem would be appreciated. Thanks.
In the Eclipse search path of the Rascal Navigator view in your project you will find the rascal_eclipse library, which contains a good example: demo::lang::Pico::Plugin.
In this module you can see how to register a language with Eclipse:
registerLanguage("Pico Language", "pico", parsePico); where parsePico is a function reference. Pass your own parameters here. The registerLanguage function comes from util::IDE.
Now you can open files with the "IMP editor" in Eclipse and they will use your parser and show syntax highlighting.
Next up is registering other effects with the IDE. The library function to call is registerAnnotator. You pass it a function that takes a parse tree for your language and annotates it with error messages:
The messages may be distributed over the tree using @message, or attached as a list of @messages at the top of the tree.
The error messages will be added as annotations in the editor and registered with the Problems view automatically.
So you have to wire the output of TypePal into the annotator function yourself, which should be a one-liner.
Alternatively, running type checks can also be useful after a "save" action. In this case you can use another type of contribution (also shown in the Pico demo), called a builder: builder(set[Message] ((&T<:Tree) tree) messages), and register it with the registerContributions function.
The Message ADT is the same for the annotator and the builder. Both have the effect of adding editor annotations and entries in the Problems view.
Here is some example code taken from an older open-source DSL project called "Bird", https://github.com/SWAT-engineering/bird:
Tree checkBird(Tree input){
    model = birdTModelFromTree(input, pathConf = config(input@\loc)); // your function that collects & solves
    types = getFacts(model);
    return input[@messages={*getMessages(model)}]
                [@hyperlinks=getUseDef(model)]
                [@docs=(l:"<prettyPrintAType(types[l])>" | l <- types)]
                ;
}
birdTModelFromTree calls collectAndSolve from TypePal and returns the TModel.
getMessages comes from TypePal's Utilities and extracts a list[Message] from the TModel that can be directly communicated to Eclipse via the @messages annotation.
The entire checkBird function is registered as an annotator using the util::IDE function registerAnnotator.
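For completeness, the registration itself might look roughly like this (a sketch: parseBird is a placeholder for your own parse function, birdBuilder a placeholder builder function, and the builder variant is only needed if you want the on-save behaviour described above):
import util::IDE;

void registerBird() {
    registerLanguage("Bird", "bird", parseBird);   // parseBird is your parser
    registerAnnotator("Bird", checkBird);          // the function shown above
    // or, to run the check on save instead:
    // registerContributions("Bird", {builder(birdBuilder)});
}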
This code hasn't been used for a while, so if you run into trouble, please get in touch again.
Elixir source may be injected using Code.eval_string/3. I don't see mention of running raw Erlang code in the docs:
https://hexdocs.pm/elixir/Code.html#eval_string/3
I am coming from the Scala world, in which Java objects are callable using Scala syntax, and Scala compiles to Java bytecode that can be inspected by intercepting the compiler output (generated directly with scalac).
I get the sense that Elixir does not provide such interoperating features, nor allow injection of custom Erlang into the runtime. Is this the case?
You can use the Erlang standard library modules from Elixir, as described here or here.
For example:
def random_integer(upper) do
  :rand.uniform(upper) # :rand is an Erlang library
end
You can also add Erlang packages to your mix.exs dependencies and use them in your project, as long as these packages are published on Hex or on GitHub.
You can also use Erlang and Elixir code together in a project, as described here.
So yeah, it's perfectly possible to call Erlang code from Elixir.
Vice versa is also possible; see here for more information:
Elixir compiles into BEAM byte code (via Erlang Abstract Format). This means that Elixir code can be called from Erlang and vice versa, without the need to write any bindings.
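For example, from the Erlang side an Elixir module is just a module whose name is the atom prefixed with 'Elixir.', so in an Erlang shell:
1> 'Elixir.Enum':reverse([1, 2, 3]).
[3,2,1]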
Expanding on what @zwippie has written:
All remote function calls (by that I mean calling a function with an explicitly given module/alias) are of the form:
<atom with module name>.<function name>(<arguments>)
# Technically it is the same as:
# apply(module, function_name_as_atom, [arguments])
And all "upper case module names" in Elixir are just atoms:
is_atom(Foo) == true
Foo == :"Elixir.Foo" # => true
So from Elixir's viewpoint there is no difference between calling Erlang functions and Elixir functions; it is just a different atom passed as the receiving module.
So you can easily call Erlang modules from Elixir. That means that, without much hassle, you should be able to parse and evaluate Erlang code from within Elixir as well:
"rand:uniform(100)"
|> :merl.quote()
|> :erl_eval.expr(#{})
No need for any mental translation.
Additionally, you can mix Erlang and Elixir code in a single Mix project without any problems, with a tree structure like:
.
|`- mix.exs
|`- src
|   `- example.erl
`- lib
    `- example.ex
Where example.erl is:
-module(example).
-export([hello/0]).
hello() -> <<"World">>.
And example.ex:
defmodule Example do
  def print_hello, do: IO.puts(:example.hello())
end
You can compile the project and run it with:
mix run -e "Example.print_hello()"
And see that the Erlang module was successfully compiled and called from the Elixir code in the same project without problems.
One more thing to watch for when calling Erlang code from Elixir: Erlang uses charlists for strings. When you call an Erlang function that takes a string, convert the Elixir string (a binary) to a charlist first, and convert any returned charlist back to a string.
Examples:
iex(17)> :string.to_upper "test"
** (FunctionClauseError) no function clause matching in :string.to_upper/1

    The following arguments were given to :string.to_upper/1:

        # 1
        "test"

    (stdlib 3.15.1) string.erl:2231: :string.to_upper/1
iex(17)> "test" |> String.to_charlist() |> :string.to_upper
'TEST'
iex(18)> "test" |> String.to_charlist() |> :string.to_upper |> to_string
"TEST"
iex(19)>
I'm trying to understand what is happening behind the Rcpp::sourceCpp() call in a parallelized environment. Recently, this was partially addressed in the question: Using Rcpp function in parLapply on Windows.
Within this post, Dirk said,
"You need to run the sourceCpp() call in each spawned process, or else get them your code."
This was in response to the questioner's attempt to distribute the Rcpp function to the worker processes. The questioner was sending the Rcpp function via:
clusterExport(cl = cl, varlist = "payoff")
I'm confused as to why this doesn't work. My thinking was that this is exactly what clusterExport() is for.
The issue here is that the compiled code is not "exportable" to the spawned processes without being embedded in a package due to how binaries are linked into R's processes.
Traditionally, the clusterExport() statement allows R-specific code to be distributed to workers.
By using clusterExport() on an Rcpp function, you are only receiving the R declaration and not the underlying shared library. That is to say, the shared library built by R CMD SHLIB (see Attributes.R) is not shared with / exported to the workers. As a result, when a call is then made to the Rcpp function on a worker, R cannot find the correct shared library.
Take the previous question's function:
Rcpp::cppFunction("NumericVector payoff( double strike, NumericVector data) {
return pmax(data - strike, 0);
}")
Note: I'm using cppFunction() instead of sourceCpp() but the results are equivalent since cppFunction() calls sourceCpp() to create the function.
Typing the function name:
payoff
Yields the R declaration with a shared library pointer.
function (strike, data)
.Primitive(".Call")(<pointer: 0x1015ec130>, strike, data)
This shared library is only available in the process that compiled the function.
Hence why it is always ideal to embed compiled code within a package and then distribute the package.
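A minimal sketch of the "compile in each spawned process" approach Dirk describes, reusing the payoff function above (the cluster size and inputs here are arbitrary):
library(parallel)

cl <- makeCluster(2)

# Compile the function on every worker so each process gets its own
# shared library; clusterExport() alone would only ship the R stub.
clusterEvalQ(cl, {
  Rcpp::cppFunction("NumericVector payoff(double strike, NumericVector data) {
    return pmax(data - strike, 0);
  }")
})

res <- parLapply(cl, 1:4, function(i) payoff(1.0, rnorm(5)))
stopCluster(cl)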
I'm just learning F#, and setting up a FAKE build harness for a hello-world-like application. (Though the phrase "Hell world" does occasionally come to mind... :-) I'm using a Mac and emacs (generally trying to avoid GUI IDEs by preference).
After a bit of fiddling about with documentation, here's how I'm invoking the F# compiler via FAKE:
let buildDir = @"./build-app/" // Where application build products go

Target "CompileApp" (fun _ ->                        // Compile application source code
    !! @"src/app/**/*.fs"                            // Look for F# source files
    |> Seq.toList                                    // Convert FileIncludes to string list
    |> Fsc (fun p ->                                 // which is what the Fsc task wants
        { p with                                     //
            FscTarget = Exe                          //
            Platform = AnyCpu                        //
            Output = (buildDir + "hello-fsharp.exe") }) // *** Writing to . instead of buildDir?
)                                                    //
That uses !! to make a FileIncludes of all the sources in the usual way, then uses Seq.toList to change that to a string list of filenames, which is then handed off to the Fsc task. Simple enough, and it even seems to work:
...
Starting Target: CompileApp (==> SetVersions)
FSC with args:[|"-o"; "./build-app/hello-fsharp.exe"; "--target:exe"; "--platform:anycpu";
"/Users/sgr/Documents/laboratory/hello-fsharp/src/app/hello-fsharp.fs"|]
Finished Target: CompileApp
...
However, despite what the console output above says, the actual build products go to the top-level directory, not the build directory. The message above makes it look like the -o argument is being passed to the compiler with an appropriate filename, but the executable gets put in . instead of ./build-app/.
So, 2 questions:
Is this a reasonable way to be invoking the F# compiler in a FAKE build harness?
What am I misunderstanding that is causing the build products to go to the wrong place?
This, or a very similar problem, was reported in FAKE issue #521 and seems to have been fixed in FAKE pull request #601, which see.
Explanation of the Problem
As is apparently well-known to everyone but me, the F# compiler as implemented in FSharp.Compiler.Service has a practice of skipping its first argument. See FSharp.Compiler.Service/tests/service/FscTests.fs around line 127, where we see the following nicely informative comment:
// fsc parser skips the first argument by default;
// perhaps this shouldn't happen in library code.
Whether it should or should not happen, it's what does happen. Since the -o came first in the arguments generated by FscHelper, it was dutifully ignored (along with its argument, apparently). Thus the assembly went to the default place, not the place specified.
Solutions
The temporary workaround was to specify --out:destinationFile in the OtherParams field of the FscParams setter in addition to the Output field; the latter is the sacrificial lamb to be ignored while the former gets the job done.
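In terms of the build script above, the workaround amounted to something like this (a sketch; it assumes the FscHelper of that era, where OtherParams takes extra raw compiler arguments):
Target "CompileApp" (fun _ ->
    !! @"src/app/**/*.fs"
    |> Seq.toList
    |> Fsc (fun p ->
        { p with
            FscTarget = Exe
            Platform = AnyCpu
            Output = (buildDir + "hello-fsharp.exe")                       // ignored due to the skipped first argument
            OtherParams = [ "--out:" + buildDir + "hello-fsharp.exe" ] })  // actually takes effect
)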
The longer-term solution is to fix the arguments generated by FscHelper to have an extra throwaway argument at the front; then these 2 problems will annihilate in a puff of greasy black smoke. (It's kind of balletic in its beauty, when you think about it.) This is exactly what was just merged into master by @forki23:
// Always prepend "fsc.exe" since fsc compiler skips the first argument
let optsArr = Array.append [|"fsc.exe"|] optsArr
So that solution should be in the newest version of FAKE (3.11.0).
The answers to my 2 questions are thus:
Yes, this appears to be a reasonable way to invoke the F# compiler.
I didn't misunderstand anything; it was just a bug and a fix is in the pipeline.
More to the point: the actual misunderstanding was that I should have checked the FAKE issues and pull requests to see if anybody else had reported this sort of thing, and that's what I'll do next time.
I come from a C# background to F#. So far I have written simple programs and spent a lot of time in F# Interactive.
I'm stuck creating a VS F# project with two .fs files.
Sample code:
// part 1: functions
let rec gcd (a : uint64) (b : uint64) =
    if b = 0UL then a
    else gcd b (a % b)
// part 2: main()
let a, b = (13UL, 15UL)
do printfn "gcd of %d %d = %d" a b (gcd a b)
I'd like to have two .fs files, namely Alg.fs and Program.fs, so that Program.fs contains the code I'm working on and Alg.fs the algorithms.
Steps taken:
I've created the two files. Compiler gave an error: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'
I've inserted module Program and module Alg. The compiled program executes only the code from Alg.fs, completely ignoring Program.fs...
I'm using F# 2.0 in Visual Studio 2010.
P.S. I've googled and checked some posts, read the documentation on modules, and looked at related questions before asking.
Sounds like this is an order-of-files-in-the-project issue. The last file contains the entry point ("main method"); it sounds like you have Alg.fs last, but you need Program.fs last. You can re-order the files via the right-click context menu in VS Solution Explorer.
There are at least three separate things that need to be looked at here:
As mentioned by @Brian, the order of source files in the project is also the compile order. This matters in F#, where type inference is heavily used. Make sure Alg.fs comes before Program.fs in your Visual Studio file list (try this: select Program.fs and hit Alt+Down Arrow until it's at the bottom).
Since Alg.fs and Program.fs are now in modules, you need to actually open the Alg module in Program to get access to its bindings (open Alg), or add the [<AutoOpen>] attribute on Alg.
As @Daniel says, the last problem could be the definition of the program's entry point. Either put an [<EntryPoint>] attribute on a top-level function that is the last binding in the last file and has the right signature (see Daniel's link), or rely on the default, where the top-level code of the last file acts as the entry point.
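Putting the three points together, here is a minimal sketch of the two files, with Alg.fs placed above Program.fs in the project's file order:
// Alg.fs
module Alg

let rec gcd (a : uint64) (b : uint64) =
    if b = 0UL then a
    else gcd b (a % b)

// Program.fs
module Program

open Alg

[<EntryPoint>]
let main argv =
    let a, b = (13UL, 15UL)
    printfn "gcd of %d %d = %d" a b (gcd a b)
    0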