How can I change an Optuna trial's result?

I'm using Optuna on a complex ML algorithm where each trial takes around 3-4 days. After a couple of trials, I noticed that the values I was returning to Optuna were incorrect, but I do have the correct results in another file (saved as a backup). Is there any way I could change these defective results directly in the study object?
I know I can export the study to a pandas DataFrame using study.trials_dataframe() and change the values there. However, I need to visualize the results in optuna-dashboard, so I would need to change them directly in the study storage. Any suggestions?

Create a new study, use optuna.trial.create_trial to create trials with the correct values, and use Study.add_trials to insert them into the new study.
import optuna

old_trials = old_study.get_trials(deepcopy=False)

# Rebuild each finished trial, swapping in the correct objective value;
# correct_value() stands for whatever lookup reads your backup file.
correct_trials = [
    optuna.trial.create_trial(
        params=trial.params,
        distributions=trial.distributions,
        value=correct_value(trial.params),
    )
    for trial in old_trials
]

new_study = optuna.create_study(...)
new_study.add_trials(correct_trials)
Note that Optuna doesn't allow you to change existing trials once they are finished, i.e., once they have successfully returned a value, been pruned, or failed. (This is an intentional design decision: Optuna uses caching mechanisms intensively, and we don't want inconsistencies during distributed optimization.)
You can only create a new study containing the corrected trials, and optionally delete the old study.
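For optuna-dashboard, the corrected study must live in persistent storage. A minimal sketch of the full round trip, assuming a SQLite backend; the storage URL and the study names ("old-study", "fixed-study") are hypothetical placeholders:

storage = "sqlite:///example.db"  # hypothetical storage URL
old_study = optuna.load_study(study_name="old-study", storage=storage)
new_study = optuna.create_study(study_name="fixed-study", storage=storage)
new_study.add_trials(correct_trials)  # built from old_study as shown above
optuna.delete_study(study_name="old-study", storage=storage)

Pointing optuna-dashboard at the same storage URL will then show the corrected study.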

Related

How to access "key" in combine.perKey in beam

In How to create custom Combine.PerKey in beam sdk 2.0, I asked and got a correct answer on how to create a custom Combine.PerKey in the new beam sdk 2.0. However, I now need to create a custom combinePerKey such that within my custom CombinePerKey logic, I need to be able to access the contents of the key. This was easily possible in dataflow 1.x, but in the new beam sdk 2.0, I'm unsure how to do so. Any little code snippet/example would be extremely useful.
EDIT #1 (per Ben Chambers's request)
The real use case is hard to explain, but I'm going to try:
We have a 3d space composed of millions of little hills. We try to determine the apex of these millions of hills as follows: we create billions of "rectangular probes" for the whole 3d space, and then we ask each of these billions of probes to "move" in a greedy way to the apex. Once it hits the apex, it stops. The probe then returns the apex and itself. The apex is the KEY for which we'll do a custom combine by key.
Now, the custom combine function is going to finally return a final object (called a feature) which is derived from all the probes that reach the same apex (i.e., the same key). When generating this "feature" object, we need to know information about the final apex/key (i.e., the top of the hill). Hence, I need this key info.
One way to solve this is using a GroupByKey, but that was slow (at least in df 1.x); we got it to be fast (in df 1.x) using a custom combine fn. So, we'd like the key. That said, GroupByKey works in beam sdk 2.0.
Alternatively, we could stick the "apex" information into the "probe" objects themselves, but this means that each of our billions of probe objects now needs to be tripled in size just to hold this apex information (and this apex information repeats itself, since there are only, say, 1 million apexes but 1 billion probes, so this intuitively feels highly inefficient).
Rather than relying on the CombineFn to compute the entire result, could you instead have the CombineFn compute some partial result based only on information about the probes? Then your Combine.perKey(...) returns a PCollection<KV<Apex, InfoAboutProbes>> and you can use a ParDo to combine the information about the apex with the summary information about the probes. This allows you to use the CombineFn for efficiently combining information about many probes, while using a ParDo to access the key.
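A minimal sketch of that shape, using the Beam Python SDK for brevity (the question targets the Java SDK, but the pattern is identical); ProbeSummaryFn and make_feature are hypothetical stand-ins for your combiner and feature construction:

import apache_beam as beam

class ProbeSummaryFn(beam.CombineFn):
    # Combines per-apex probe info into a small summary (here just a count).
    def create_accumulator(self):
        return 0

    def add_input(self, acc, probe):
        return acc + 1

    def merge_accumulators(self, accs):
        return sum(accs)

    def extract_output(self, acc):
        return acc

def make_feature(apex, summary):
    # Both the key (apex) and the combined probe summary are visible here.
    return {"apex": apex, "n_probes": summary}

with beam.Pipeline() as p:
    (p
     | beam.Create([("apex1", "probe_a"), ("apex1", "probe_b"), ("apex2", "probe_c")])
     | beam.CombinePerKey(ProbeSummaryFn())  # -> (apex, summary); key not visible inside
     | beam.MapTuple(make_feature)           # ParDo step with full access to the key
     | beam.Map(print))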

ELKI: Normalization undo for result

I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before running LOF using -dbc.filter, but I would like to see the original data records, not the normalized ones, in the output.
It seems that there is some flag called -normUndo, which can be set if using the command-line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but has effectively been removed (for now), for several reasons:
- Only a few normalizations ever supported this; most would fail.
- There is no longer a well-defined "end" of the pipeline for the visualization. Some users will want to visualize the normalized data, others not.
- It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
- Due to the numerical imprecision of floating-point math, you would frequently not get back exactly the same values you put in.
- Keeping the original data in memory may be too expensive for some use cases, so we would need to add another parameter, "keep non-normalized data"; furthermore, you would need to choose which version (normalized or non-normalized) to use for analysis, and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (This is easy to do with Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest way around this is the following: add a (non-numerical) label column, and you can then identify the original objects in your original data by this label.
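For example, in a whitespace-separated input file, ELKI's default parser treats a trailing non-numeric column as a label rather than as data, so each output record can be matched back to its original row (the values below are made up):

0.12 0.85 0.33 obj_001
0.47 0.10 0.92 obj_002
0.88 0.64 0.05 obj_003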

Why does ALS.trainImplicit give better predictions for explicit ratings?

Edit: I tried a standalone Spark application (instead of PredictionIO) and my observations are the same. So this is not a PredictionIO issue, but still confusing.
I am using PredictionIO 0.9.6 and the Recommendation template for collaborative filtering. The ratings in my data set are numbers between 1 and 10. When I first trained a model with defaults from the template (using ALS.train), the predictions were horrible, at least subjectively. Scores ranged up to 60.0 or so but the recommendations seemed totally random.
Somebody suggested that ALS.trainImplicit did a better job, so I changed src/main/scala/ALSAlgorithm.scala accordingly:
val m = ALS.trainImplicit( // instead of ALS.train
  ratings = mllibRatings,
  rank = ap.rank,
  iterations = ap.numIterations,
  lambda = ap.lambda,
  blocks = -1,
  alpha = 1.0, // also added this line
  seed = seed)
Scores are much lower now (below 1.0) but the recommendations are in line with the personal ratings. Much better, but also confusing. PredictionIO defines the difference between explicit and implicit this way:
explicit preference (also referred as "explicit feedback"), such as "rating" given to item by users.
implicit preference (also referred as "implicit feedback"), such as "view" and "buy" history.
and:
By default, the recommendation template uses ALS.train() which expects explicit rating values which the user has rated the item.
source
Is the documentation wrong? I still think that explicit feedback fits my use case. Maybe I need to adapt the template with ALS.train in order to get useful recommendations? Or did I just misunderstand something?
A lot of it depends on how you gathered the data. Ratings that seem explicit can actually be implicit. For instance, suppose you only allow users to rate items that they have purchased or used before. Then the very fact that someone spent time evaluating a particular item already signals that the item is of high quality; items of poor quality are not rated at all, because people do not even bother to use them. So even though the dataset is intended to be explicit, you may get better results by treating it as implicit. Again, this varies significantly based on how the data is obtained.
Explicit data (like ratings) normally comes with a bias: people go and rate a product because they like it! Think about your experience shopping and then rating on Amazon.com :-)
On the contrary, implicit info can often truly reflect a user's preference for a product, like viewing duration, comment length, etc. Even a like/dislike is better than a rating, because it provides a simple 'bad' option without making the user wonder "should I give a 3, 3.5, or 4?".

Assigning labels to triples

I am currently trying to do stream reasoning using Jena, so I want to be able to reason over a certain set of triples that have occurred in a particular window of time, also taking into account some background static knowledge.
My problem is that I have an ontology that I read from several files, however I wish for the triples I obtain to have time stamps for when I receive them, which I thought I could just do by applying labels to the triples (I am just giving them all random time stamps for the moment as this is only a test).
While I didn't think that this would be a problem, I am struggling at the initial step of just applying a label to an existing triple and selecting it. I cannot seem to access triples from the OntModel without transforming it into a Graph, and while I could then create quads with the extra value being some literal for time, I can't find a way to then reason over this graph.
Any light that people can shed on this issue would help. I hope I am being clear.
I'm not sure exactly how you're putting labels on your triples, but you can get Statements from an OntModel, and Statement implements FrontsTriple through which you can access a corresponding Triple.

Mahout: how to make recommendations for new users

We plan to use Mahout for a movie recommendation system, and we also plan to use SVD for model building.
When a new user comes, we will require him/her to rate a certain number of movies (say 10).
The problem is that, in order to make a recommendation to this new user, we have to rebuild the entire model again.
Is there a better way to do this?
Thanks
Yes... though not in Mahout. The implementations there are by nature built around periodic reloading and rebuilding of a data model. Some implementations, like the neighborhood-based ones, still let you use new data on the fly; I don't think the SVD-based in-memory one does (I didn't write it).
In theory, you can start making recommendations from the very first click or rating by projecting the new user's ratings back into the feature space, a technique known as fold-in. To greatly simplify: if your rank-k approximate factorization of the input A is Ak = Uk * Sk * Vk', then for a new user u you want a new row Uk_u for your update, and what you have is the user's ratings row A_u.
Rearranging the factorization gives Uk = Ak * (Vk')^-1 * (Sk)^-1. The good news is that the two inverses on the right are trivial: (Vk')^-1 = Vk, because Vk has orthonormal columns, and (Sk)^-1 is just a matter of taking the reciprocal of Sk's diagonal elements.
So Uk_u = Ak_u * Vk * (Sk)^-1. You don't have Ak_u, but you have A_u, which is approximately the same, so you use that.
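To make the algebra concrete, here is a small numpy sketch of the fold-in on toy data; this only illustrates the formula, it is not Mahout code:

import numpy as np

# Toy ratings matrix: 4 existing users x 5 movies (0 = unrated).
A = np.array([
    [5., 3., 0., 1., 4.],
    [4., 0., 0., 1., 3.],
    [1., 1., 0., 5., 4.],
    [1., 0., 0., 4., 4.],
])

# Rank-k truncated SVD: A ~= Uk * Sk * Vk'
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, Sk, Vk = U[:, :k], np.diag(s[:k]), Vt[:k, :].T

# Fold-in: project a new user's ratings row into the feature space
# without refactorizing: Uk_u = A_u * Vk * Sk^-1
A_u = np.array([4., 3., 0., 1., 0.])
Uk_u = A_u @ Vk @ np.linalg.inv(Sk)

# Approximate predicted ratings for the new user:
preds = Uk_u @ Sk @ Vk.T
print(preds)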
If you like Mahout and like matrix factorization, I suggest you consider the ALS algorithm. It's a simpler process, so it's faster (but it makes the fold-in a little harder -- see the end of a recent explanation I gave). It works nicely for recommendations.
This also exists in Mahout, though the fold-in isn't implemented. Myrrix, which is where I am continuing work from Mahout, implements all of this.
