How to set a specific seed value in gds.beta.node2vec.write? (Neo4j)

I'm running node2vec in Neo4j, but when the algorithm runs again with the same parameters, the result changes. So I read the configuration and saw that there is a seed value.
I tried setting the seed to a specific number, but nothing changes.
https://github.com/aditya-grover/node2vec/issues/83

If you check the documentation you can see that the randomSeed parameter is available. However, the results are still non-deterministic, as per the docs:
Seed value used to generate the random walks, which are used as the
training set of the neural network. Note, that the generated
embeddings are still nondeterministic.
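A minimal sketch of passing randomSeed through the configuration map via the Neo4j Python driver; the connection details, the projected graph name 'myGraph', and the writeProperty value are placeholders, and as the quoted note says, the embeddings may still differ between runs:

from neo4j import GraphDatabase

# Placeholder connection details and projected graph name; adjust to your setup.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
CALL gds.beta.node2vec.write('myGraph', {
    writeProperty: 'embedding',
    randomSeed: 42
})
YIELD nodePropertiesWritten
RETURN nodePropertiesWritten
"""

with driver.session() as session:
    # randomSeed fixes the random walks; the embedding training itself stays nondeterministic.
    print(session.run(query).single()["nodePropertiesWritten"])

driver.close()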

Related

How can I change an Optuna trial's result?

I'm using Optuna on a complex ML algorithm, where each trial takes around 3-4 days. After a couple of trials, I noticed that the values I was returning to Optuna were incorrect, but I do have the correct results in another file (saved as a backup). Is there any way I could change these defective results directly in the study object?
I know I can export the study to a pandas DataFrame using study.trials_dataframe() and then change the values there. However, I need to visualize the study in optuna-dashboard, so I would need to change it directly in the study file. Any suggestions?
Create a new Study, use create_trial to create trials with correct values and use Study.add_trials to insert them into the new study.
import optuna

# Recreate each finished trial of the old study with the corrected objective value.
old_trials = old_study.get_trials(deepcopy=False)
correct_trials = [
    optuna.trial.create_trial(
        params=trial.params,
        distributions=trial.distributions,
        value=correct_value(trial.params),  # correct_value: your own lookup of the right result
    )
    for trial in old_trials
]
new_study = optuna.create_study(...)
new_study.add_trials(correct_trials)
Note that Optuna doesn't allow you to change existing trials once they are finished, i.e., successfully returned a value, got pruned, or failed. (This is an intentional design; Optuna uses caching mechanisms intensively and we don't want to have inconsistencies during distributed optimization.)
You can only create a new study containing correct trials, and optionally delete the old study.
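If you then want to get rid of the old study, optuna.delete_study works for storage-backed studies; the study name and storage URL below are placeholders:

import optuna

# Placeholder study name and storage URL; replace with the ones you actually use.
optuna.delete_study(study_name="old-study", storage="sqlite:///optuna.db")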

How to avoid data averaging when logging a metric across multiple runs?

I'm trying to log data points for the same metric across multiple runs (wandb.init is called repeatedly, once before each data point), and I'm unsure how to avoid the behavior seen in the attached screenshot.
Instead of getting a line chart with multiple points, I'm getting a single data point with associated statistics. In the attached example, the first data point was generated at step 1,470 and the second at step 2,940; rather than seeing two points, I'm getting a single point that is their average and appears at step 2,205.
My hunch is that using the resume run feature may address my problem, but even testing out this hunch is proving to be cumbersome given the constraints of the system I'm working with...
Before I invest more time in my hypothesized solution, could someone confirm that the behavior I'm seeing is, indeed, the result of logging data to the same metric across separate runs without using the resume feature?
If this is the case, can you confirm or deny my conception of how to use resume?
Initial run:
1. run = wandb.init()
2. wandb_id = run.id
3. cache wandb_id for successive runs
Successive run:
1. retrieve wandb_id from cache
2. wandb.init(id=wandb_id, resume="must")
Is it also acceptable / preferable to replace 1. and 2. of the initial run with:
wandb_id = wandb.util.generate_id()
wandb.init(id=wandb_id)
It looks like you’re grouping runs so that could be why it’s appearing as averaging across step - this might not be the case but it’s worth trying. Turn off grouping by clicking the button in the centre above your runs table on the left - it’s highlighted in purple in the image below.
Both of the ways you’re suggesting resuming runs seem fine.
My hunch is that using the resume run feature may address my problem,
Indeed, providing a cached id in combination with resume="must" fixed the issue.
Corresponding snippet:
import wandb
# wandb run associated with evaluation after first N epochs of training.
wandb_id = wandb.util.generate_id()
wandb.init(id=wandb_id, project="alrichards", name="test-run-3/job-1", group="test-run-3")
wandb.log({"mean_evaluate_loss_epoch": 20}, step=1)
wandb.finish()
# wandb run associated with evaluation after second N epochs of training.
wandb.init(id=wandb_id, resume="must", project="alrichards", name="test-run-3/job-2", group="test-run-3")
wandb.log({"mean_evaluate_loss_epoch": 10}, step=5)
wandb.finish()
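For jobs running as separate processes, the id can be cached on disk in between, along the lines of the steps listed above; a minimal sketch (the cache file name is arbitrary):

import os
import wandb

ID_FILE = "wandb_run_id.txt"  # arbitrary cache location shared by all jobs

if os.path.exists(ID_FILE):
    # Successive job: reuse the cached id and force-resume the existing run.
    run_id = open(ID_FILE).read().strip()
    wandb.init(id=run_id, resume="must")
else:
    # First job: generate an id, cache it, and start a fresh run.
    run_id = wandb.util.generate_id()
    with open(ID_FILE, "w") as f:
        f.write(run_id)
    wandb.init(id=run_id)

wandb.log({"mean_evaluate_loss_epoch": 15}, step=10)
wandb.finish()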

How does a Machine Learning algorithm retain learning from a previous execution?

I am reading the Hands-On Machine Learning book, where the author talks about the random seed used for the train/test split and at one point says that, over time, the machine will see your whole dataset.
The author uses the following function for splitting the data into train and test sets:
import numpy as np

def split_train_test(data, test_ratio):
    # Shuffle the row indices, then carve off the first test_ratio fraction as the test set.
    shuffled_indices = np.random.permutation(len(data))
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled_indices[:test_set_size]
    train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]
Usage of the function looks like this:
>>>train_set, test_set = split_train_test(housing, 0.2)
>>> len(train_set)
16512
>>> len(test_set)
4128
Well, this works, but it is not perfect: if you run the program again, it will generate a different test set! Over time, you (or your Machine Learning algorithms) will get to see the whole dataset, which is what you want to avoid.
Sachin Rastogi: Why and how will this impact my model's performance? I understand that my model's accuracy will vary on each run, as the train set will always be different. How will my model see the whole dataset over time?
The author also provides a few solutions:
One solution is to save the test set on the first run and then load it in subsequent runs. Another option is to set the random number generator’s seed (e.g., np.random.seed(42)) before calling np.random.permutation(), so that it always generates the same shuffled indices.
But both these solutions will break next time you fetch an updated dataset. A common solution is to use each instance’s identifier to decide whether or not it should go in the test set (assuming instances have a unique and immutable identifier).
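The second option from that quote, applied to the split_train_test function and the housing DataFrame from above, would look like this:

import numpy as np

np.random.seed(42)  # fix the RNG state so the permutation is identical on every run
train_set, test_set = split_train_test(housing, 0.2)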
Sachin Rastogi: Will this be a good train/test split? I think not; the train and test sets should contain elements from across the dataset to avoid any bias from the train set.
The author gives an example:
You could compute a hash of each instance’s identifier and put that instance in the test set if the hash is lower or equal to 20% of the maximum hash value. This ensures that the test set will remain consistent across multiple runs, even if you refresh the dataset.
The new test set will contain 20% of the new instances, but it will not contain any instance that was previously in the training set.
Sachin Rastogi: I am not able to understand this solution. Could you please help?
For me, these are the answers:
The point here is that you should set aside part of your data (which will constitute your test set) before training the model. What you want to achieve is the ability to generalize well to unseen examples. By running the code you have shown, you'll get a different test set each time; in other words, you'll always train your model on different subsets of your data (and possibly on data that you previously marked as test data). This in turn will affect training and, taken to the limit, there will be nothing left to generalize to.
This will indeed be a solution satisfying the previous requirement (of having a stable test set), provided that no new data are added.
As said in the comments to your question, by hashing each instance's identifier you can be sure that old instances always get assigned to the same subsets.
Instances that were put in the training set before the update of the dataset will remain there (as their hash value won't change - and so their left-most bit - and it will remain higher than 0.2*max_hash_value);
Instances that were put in the test set before the update of the dataset will remain there (as their hash value won't change and it will remain lower than 0.2*max_hash_value).
The updated test set will contain 20% of the new instances and all of the instances associated with the old test set, so it remains stable.
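A sketch of the hash-based split described above, close to the version in the book's companion repository, assuming each instance has a unique numeric identifier column:

from zlib import crc32

import numpy as np

def is_in_test_set(identifier, test_ratio):
    # crc32 maps the id to a stable pseudo-random 32-bit value; the instance goes into
    # the test set iff that value falls in the lowest test_ratio fraction of the range.
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32

def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda id_: is_in_test_set(id_, test_ratio))
    return data.loc[~in_test_set], data.loc[in_test_set]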
I would also suggest looking at the author's own explanation here: https://github.com/ageron/handson-ml/issues/71.

Parameter sharing in a network with nn.SpatialBatchNormalization

I have a network with three parallel branches, and I want to share all their parameters so that they are identical at the end of the training.
Let some_model be a standard nn.Sequential module made of cudnn.SpatialConvolution, nn.PReLU, nn.SpatialBatchNormalization. Additionally, there is a nn.SpatialDropout, but its probability is set to 0, so it has no effect.
ptb=nn.ParallelTable()
ptb:add(some_model)
ptb:add(some_model:clone('weight','bias', 'gradWeight','gradBias'))
ptb:add(some_model:clone('weight','bias', 'gradWeight','gradBias'))
triplet=nn.Sequential()
triplet:add(ptb)
I don't think the loss function is relevant, but just in case: I use nn.DistanceRatioCriterion. To check that all weights are correctly shared, I pass a table of three identical examples {A,A,A} to the network. Obviously, if the weights are correctly shared, the output of all three branches should be the same. This holds at the moment of network initialization, but once the parameters have been updated (say, after one mini-batch iteration), the results of the three branches become different.
Through layer-by-layer inspection, I have noticed that this discrepancy in the output comes from the nn.SpatialBatchNormalization layers in some_model. It therefore seems that the parameters of those layers are not properly shared. Following this, I have tried calling clone with the additional parameters running_mean and running_std, but the output of the batchnorm layers still differs. Moreover, this seems to cancel the sharing of all other network parameters as well. What is the proper way of sharing parameters between nn.SpatialBatchNormalization modules?
OK, I found the solution! It seems that the parameter running_std has been renamed to running_var since the discussion I linked to in the question. Calling clone with
ptb:add(some_model:clone('weight','bias', 'gradWeight','gradBias','running_mean','running_var'))
solves the problem.

Using a Butterworth filter in a case structure

I'm trying to use a Butterworth filter. The input data comes from an "Index Array" node (the data is acquired through DAQ, and I want to process the voltage signal, which is in an array of waveforms). When I use this filter inside a case structure, it doesn't work; yet when I use the filters from the "waveform conditioning" section, there is no problem. What exactly is the difference between these two types of filters?
A little add-on to my problem: the second picture is from when I tried to reassemble the initial combination, and the error happened.
You are comparing offline filtering to online filtering.
In LabVIEW, the Point-by-Point (PtByPt) VIs are intended to be used in an online setting, that is, iteratively.
For each new sample that is obtained, it is fed directly into the VI. The VI stores the state from previous iterations to perform the filtering.
The "normal" filter VIs are intended for offline analysis and expect an array containing the full data of the signal.
The following whitepaper explains the Point-by-Point VIs. Note that this paper is quite old; it should still explain the concepts but might otherwise be outdated.
http://www.ni.com/pdf/manuals/370152b.pdf
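As an analogy in Python with SciPy rather than LabVIEW: the "normal" VIs correspond to filtering the complete array in one call, while the Point-by-Point VIs correspond to filtering one sample per iteration while carrying the filter state along (the filter order and cutoff below are arbitrary):

import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(4, 0.1)                   # arbitrary 4th-order lowpass
x = np.random.randn(1000)               # stand-in for the acquired voltage samples

# Offline: the whole signal is available up front, so filter it in one call.
y_offline = lfilter(b, a, x)

# Online: one sample per iteration, carrying the filter state between calls.
zi = np.zeros(max(len(a), len(b)) - 1)  # zero initial conditions
y_online = np.empty_like(x)
for i, sample in enumerate(x):
    y_i, zi = lfilter(b, a, [sample], zi=zi)
    y_online[i] = y_i[0]

print(np.allclose(y_offline, y_online))  # True: same filter, two ways of feeding the data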
If VoltageBuf is an array of consecutive values of the same signal (the one that you want to filter) you only need to connect VoltageBuf directly to the filter.
