Why are the elements of the matrix and vector types in the F# PowerPack mutable?

F# is often promoted as a functional language where data is immutable by default; however, the elements of the matrix and vector types in the F# PowerPack are mutable. Why is this?
Furthermore, why are sparse matrices implemented as immutable, as opposed to normal matrices?

The standard array type ('T[]) in F# is also mutable. You're mostly correct -- F# is a functional language where data immutability is encouraged, but not required. Basically, F# allows you to write both mutable/imperative code and immutable/functional code; it's up to you to decide the best way to implement the code for your specific application.
Another reason for having mutable arrays and matrices is performance -- it is possible to implement very fast algorithms with immutable types, but users writing scientific computations usually only care about one thing: achieving maximum performance. That being the case, it follows that the arrays and matrices should be mutable.
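To make the difference concrete, here is a small sketch using the standard array type mentioned above; the PowerPack matrix type likewise allows individual elements to be overwritten in place, which is exactly the mutability the question asks about.
// In-place update on the standard (mutable) F# array type: no copy is made.
let a = Array.create 3 0.0        // [|0.0; 0.0; 0.0|]
a.[0] <- 1.0                      // mutate one element directly

// The immutable style pays for a fresh array on every "update".
let b = a |> Array.mapi (fun i x -> if i = 0 then 2.0 else x)
For large matrices in numeric code, avoiding that copy on every element write is precisely the performance argument made above.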

For truly high performance, mutability is required in one specific case: provided that your code is perfectly optimized and that you master everything it is doing, down to your program's cache (L1, L2) access pattern, nothing beats a low-level, close-to-the-metal approach.
This mostly happens only when you have one well-specified problem that stays constant for 20 years, i.e., mostly in scientific tasks.
As soon as you depart from that specific case, in 99.99% of cases the bottlenecks arise from a representation that is too low-level (induced by a low-level language), in which you can't express the final, real-world optimization trade-offs of the problem at hand.
Bottom line: for performance, the following approach is (I think) the only way:
High-level / algorithmic optimization first
Only once every high-level avenue has been explored, low-level optimization
As a consequence of that:
You should never optimize anything without FIRST measuring the impact: improvements should only be made if they yield enormous performance gains and/or do not degrade your domain logic.
If your problem is stable and well defined, you will eventually reach the point where you have no choice but to go low-level and play with memory/mutability.

Related

Why the uniform representation of immediate values in GC'd languages?

Or is the correlation not a causation?
It seems to be the norm that garbage-collected languages follow the Lisp tradition of having all values be machine-word sized, even smaller values like bytes and short ints.
Deviations from that are the exception, and usually occur inside another (boxed) data structure like a bytestring or an array, for more compact memory usage. In many cases that optimization is even hardcoded into the language, and there is no first-class facility that users of the language can exploit to represent their data with less padding.
So my question is: why is it like that? Is there something inherent to GC performance, or to the way a tracing GC is architected, when all values are of the same size, that goes away as soon as we have differently sized values? Are there counterexamples of garbage collectors that handle non-uniform data?

How to determine the number of rounds in a TFF context

In TFF, it is necessary to specify the number of rounds. So, to obtain optimal performance from our model, how can we know the optimal number of rounds?
TFF does not necessarily need you to specify the number of rounds for federated training beforehand. TFF is more about specifying the federated aspect of your computation (which you can essentially think of as specifying the communication), and considers actually "running" the rounds to be at the system level.
When you write TFF, generally you are writing at three levels (explanation of this statement here); the question you are asking (and every concern TFF considers a "system concern") is at the Python level. Since Python controls the actual invocation of your computation written in TFF, you can stop training with any criterion expressible in Python. E.g. if you want to monitor performance on a validation set and use that as a stopping criterion, this is entirely doable. If you have a tff.utils.IterativeProcess ip and an evaluation function eval_fn (see here for an example), this could be implemented as something like:
state = ip.initialize()                  # assuming the usual initialize/next pair on the iterative process
while True:
  data = sample_client_data()            # user-defined sampling of client datasets
  state, metrics = ip.next(state, data)  # one federated round
  eval_metrics = eval_fn(state)          # evaluate the current global model
  if condition(eval_metrics):            # any stopping criterion expressible in Python
    break
Abstractly: since Python drives the experiment process, you can stop whenever you want to, based on any observable characteristic of the training procedure. Therefore you do not in fact need to know how many rounds you will be running beforehand.
A more direct answer to the original question is, I think at this point in the history of FL, not quite achievable for the general case; nobody (as far as I am aware) knows of reliable system-level settings for FL at this point. This is not surprising; it is somewhat akin to knowing beforehand how many epochs one should specify in datacenter training, which I think tends to be quite problem-dependent. FL is similar in this regard. Practically speaking, my advice tends to be: monitor performance on a validation set, run for as long as you can, and keep the state of your highest-performing model on the val set around. I think a more general answer than this may be quite difficult.

Incorporating Transition Probabilities in SARSA

I am implementing a SARSA(lambda) model in C++ to overcome some of the limitations of DP models (the sheer amount of time and space they require), which will hopefully reduce the computation time (it currently takes quite a few hours for similar research), and the reduced space requirements will allow adding more complexity to the model.
We do have explicit transition probabilities, and they do make a difference. So how should we incorporate them in a SARSA model?
Simply select the next state according to the probabilities themselves? Apparently SARSA models don't exactly expect you to use probabilities - or perhaps I've been reading the wrong books.
PS- Is there a way of knowing if the algorithm is properly implemented? First time working with SARSA.
The fundamental difference between Dynamic Programming (DP) and Reinforcement Learning (RL) is that the former assumes the environment's dynamics are known (i.e., a model), while the latter can learn directly from data obtained from the process, in the form of a set of samples, a set of process trajectories, or a single trajectory. Because of this feature, RL methods are useful when a model is difficult or costly to construct. However, it should be noted that both approaches share the same working principles (called Generalized Policy Iteration in Sutton's book).
Given they are similar, both approaches also share some limitations, namely, the curse of dimensionality. From Busoniu's book (chapter 3 is free and probably useful for your purposes):
A central challenge in the DP and RL fields is that, in their original form (i.e., tabular form), DP and RL algorithms cannot be implemented for general problems. They can only be implemented when the state and action spaces consist of a finite number of discrete elements, because (among other reasons) they require the exact representation of value functions or policies, which is generally impossible for state spaces with an infinite number of elements (or too costly when the number of states is very high).
Even when the states and actions take finitely many values, the cost of representing value functions and policies grows exponentially with the number of state variables (and action variables, for Q-functions). This problem is called the curse of dimensionality, and makes the classical DP and RL algorithms impractical when there are many state and action variables. To cope with these problems, versions of the classical algorithms that approximately represent value functions and/or policies must be used. Since most problems of practical interest have large or continuous state and action spaces, approximation is essential in DP and RL.
In your case, it seems quite clear that you should employ some kind of function approximation. However, given that you know the transition probability matrix, you can choose a method based on DP or RL. In the case of RL, transitions are simply used to compute the next state given an action.
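To make that last point concrete, here is a minimal one-step tabular SARSA sketch (in F# for brevity, not the poster's C++, and without the eligibility traces of SARSA(lambda)); the known transition probabilities appear only in the step that samples the successor state. All names and sizes (nStates, nActions, transition, reward) are placeholders for the example.
// Minimal tabular SARSA sketch: the known transition model is used only to sample s'.
let nStates, nActions = 10, 4
let rng = System.Random()

// transition.[s].[a] is an assumed-given probability distribution over next states.
let transition : float[][][] =
    Array.init nStates (fun _ ->
        Array.init nActions (fun _ ->
            let w = Array.init nStates (fun _ -> rng.NextDouble())
            let z = Array.sum w
            Array.map (fun x -> x / z) w))

// Illustrative reward: 1.0 for reaching the last state, 0.0 otherwise.
let reward (_s: int) (_a: int) (next: int) = if next = nStates - 1 then 1.0 else 0.0

// Draw the next state from the known distribution transition.[s].[a].
let sampleNext (s: int) (a: int) =
    let u = rng.NextDouble()
    let cumulative = Array.scan (+) 0.0 transition.[s].[a] |> Array.tail
    match Array.tryFindIndex (fun c -> u < c) cumulative with
    | Some i -> i
    | None -> nStates - 1

// Epsilon-greedy action selection over the tabular Q-function.
let epsilonGreedy (q: float[,]) (s: int) (eps: float) =
    if rng.NextDouble() < eps then rng.Next(nActions)
    else [| 0 .. nActions - 1 |] |> Array.maxBy (fun a -> q.[s, a])

// One episode of fixed length, applying the SARSA update:
//   Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
let runEpisode (q: float[,]) alpha gamma eps steps =
    let mutable s = 0
    let mutable a = epsilonGreedy q s eps
    for _ in 1 .. steps do
        let s' = sampleNext s a              // the only place the model is consulted
        let r = reward s a s'
        let a' = epsilonGreedy q s' eps
        q.[s, a] <- q.[s, a] + alpha * (r + gamma * q.[s', a'] - q.[s, a])
        s <- s'
        a <- a'

let q = Array2D.zeroCreate<float> nStates nActions
for _ in 1 .. 1000 do runEpisode q 0.1 0.95 0.1 50
Regarding the PS about checking the implementation: a simple sanity check is to run it on a tiny MDP whose optimal policy you can compute by hand (or via value iteration on the same transition matrix) and confirm that the greedy policy extracted from Q matches.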
Is it better to use DP or RL? Honestly, I don't know the answer, and the optimal method likely depends on your specific problem. Intuitively, sampling a set of states in a planned way (DP) seems safer, but maybe a big part of your state space is irrelevant to finding an optimal policy. In such a case, sampling a set of trajectories (RL) may be more computationally effective. In any case, if both methods are applied correctly, they should achieve a similar solution.
NOTE: when employing function approximation, the convergence properties are more fragile, and it is not rare to diverge during the iteration process, especially when a nonlinear approximator (such as an artificial neural network) is combined with RL.
If you have access to the transition probabilities, I would suggest not using methods based on a Q-value, as they require additional sampling in order to extract information that you already have.
It may not always be the case, but without additional information I would say that modified policy iteration is a more appropriate method for your problem.

Cutting down on Stanford parser's time-to-parse by pruning the sentence

We are already aware that the parsing time of the Stanford Parser increases with the length of a sentence. I am interested in finding creative ways to prune the sentence so that the parsing time decreases without compromising accuracy. For example, we can replace known noun phrases with single-word nouns. Similarly, can there be other smart ways of guessing a subtree beforehand, say, using the POS tag information? We have a huge corpus of unstructured text at our disposal, so we wish to learn some common patterns that can ultimately reduce the parsing time. References to publicly available literature in this regard would also be highly appreciated.
P.S. We already are aware of how to multi-thread using Stanford Parser, so we are not looking for answers from that point of view.
You asked for 'creative' approaches - the Cell Closure pruning method might be worth a look. See the series of publications by Brian Roark, Kristy Hollingshead, and Nathan Bodenstab. Papers: 1 2 3. The basic intuition is:
Each cell in the CYK parse chart 'covers' a certain span (e.g. the first 4 words of the sentence, or words 13-18, etc.)
Some words - particularly in certain contexts - are very unlikely to begin a multi-word syntactic constituent; others are similarly unlikely to end a constituent. For example, the word 'the' almost always precedes a noun phrase, and it's almost inconceivable that it would end a constituent.
If we can train a machine-learned classifier to identify such words with very high precision, we can thereby identify cells which would only participate in parses placing said words in highly improbable syntactic positions. (Note that this classifier might make use of a linear-time POS tagger, or other high-speed preprocessing steps.)
By 'closing' these cells, we can reduce both the asymptotic and average-case complexities considerably - in theory, from cubic complexity all the way to linear; in practice, we can achieve approximately n^1.5 without loss of accuracy.
In many cases, this pruning actually increases accuracy slightly vs. an exhaustive search, because the classifier can incorporate information that isn't available to the PCFG. Note that this is a simple, but very effective form of coarse-to-fine pruning, with a single coarse stage (as compared to the 7-stage CTF approach in the Berkeley Parser).
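As a rough illustration of the cell-closing step described above (a sketch only, not code from the cited papers), suppose a classifier has already produced per-word begin/end predictions. The helper below just enumerates the chart cells that can be skipped; canBegin and canEnd are assumed inputs standing in for the classifier's output.
// List the CYK chart cells that can be closed, given per-word predictions about
// whether each word can begin or end a multi-word constituent.
let closedCells (canBegin: bool[]) (canEnd: bool[]) =
    let n = canBegin.Length
    // Cell (i, j) covers words i .. j-1; only multi-word spans (j - i > 1) are candidates.
    [ for i in 0 .. n - 1 do
        for j in i + 2 .. n do
            if not canBegin.[i] || not canEnd.[j - 1] then
                yield (i, j) ]

// Toy example for "dogs chase the cat": "the" is predicted never to end a constituent,
// so every multi-word span ending on it is pruned before parsing begins.
let canBegin = [| true; true; true; true |]
let canEnd   = [| true; true; false; true |]
closedCells canBegin canEnd
|> List.iter (fun (i, j) -> printfn "closed cell covering words %d..%d" i (j - 1))
In the real method the begin/end predictions come from a trained, high-precision classifier, and the parser simply never fills the closed cells, which is where the speedup comes from.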
To my knowledge, the Stanford Parser doesn't currently implement this pruning technique; I suspect you'd find it quite effective.
Shameless plug
The BUBS Parser implements this approach, as well as a few other optimizations, and thus achieves throughput of around 2500-5000 words per second, usually with accuracy at least equal to what I've measured with the Stanford Parser. Obviously, if you're using the rest of the Stanford pipeline, the built-in parser is already well integrated and convenient. But if you need improved speed, BUBS might be worth a look, and it does include some example code to aid in embedding the engine in a larger system.
Memoizing Common Substrings
Regarding your thoughts on pre-analyzing known noun phrases or other frequently observed sequences with consistent structure: I did some evaluation of a similar idea a few years ago (in the context of sharing common substructures across a large corpus, when parsing on a massively parallel architecture). The preliminary results weren't encouraging. In the corpora we looked at, there just weren't enough repeated substrings of substantial length to make it worthwhile. And the aforementioned cell closure methods usually make those substrings really cheap to parse anyway.
However, if your target domains involved a lot of repetition, you might come to a different conclusion (maybe it would be effective on legal documents with lots of copy-and-paste boilerplate? Or news stories that are repeated from various sources or re-published with edits?)

F# Set.union argument order performance

Is there any recommended way to call Set.union if I know one of the two sets will be larger than the other?
Set.union large small
or
Set.union small large
Thanks
Internally, sets are represented as balanced trees (you can check the source online). When calculating the union of two sets, the algorithm splits the smaller set (tree), based on the value at the root of the larger set (tree), into a set of smaller and a set of larger elements. The splitting is always performed on the smaller set to do less work. Then it recursively unions the two left and right subsets and performs some re-balancing.
The summary is: the algorithm does not really depend on which of the sets is the first and which is the second argument. It will always choose the better option depending on the sizes of the sets (which are stored as part of the data structure).
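A quick way to convince yourself (a trivial sketch, not a benchmark): both argument orders produce the same set, so you can pass them in whichever order reads best.
// Either order gives the same result; the implementation picks the cheaper split internally.
let large = set [ 1 .. 100000 ]
let small = set [ 50; 500; 5000 ]

let u1 = Set.union large small
let u2 = Set.union small large
printfn "%b" (u1 = u2)   // true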
The intent behind your question seems to be to improve performance when using Set.union by exploiting undocumented features of that function's implementation. But Set.union abstracts you away from the implementation complexity, leaving just the set-theoretic meaning of the union operation, which is agnostic to the arguments' properties. Purposely breaking through this abstraction layer adversely affects the complexity and maintainability of your code and should be avoided.
Although sometimes you have no choice but to deal with leaky abstractions, Set.union is definitely not such a case. And it is good to hear from Tomas that the Set.union implementation does not have leaky-abstraction flaws.
Do whatever you want. You can also do small + large, and large - small for difference (of course also small - large).
