I would like to implement a neural network architecture in Caffe that behaves differently depending on some iteration-dependent variable. For example: the full network might use 10 layers for 4 out of 5 training or testing iterations, but for the remaining iterations it would truncate the network and use only the last 5 layers. This of course requires that the input to the first layer and the input to the 5th layer have the same dimensionality, but my primary question is how to implement this switching between the two architectures during training/testing.
I guess you can do that using pycaffe and caffe.NetSpec(), but the resulting code will not be very pretty...
On the other hand, why not train the full net for i iterations, save a snapshot, and then "warm start" the reduced model from the snapshot you saved?
That is: have 'full_trainval.prototxt' with 'full_solver.prototxt' configured to train the full net for i iterations, and 'top_trainval.prototxt' with 'top_solver.prototxt' configured to train only the top layers of the net. Then
~$ $CAFFE_ROOT/build/tools/caffe train -solver full_solver.prototxt
When this stage is through, make sure you have the final snapshot saved, and then
~$ $CAFFE_ROOT/build/tools/caffe train -solver top_solver.prototxt -snapshot full_train_last_snapshot.solverstate
Finally, you could use net_surgery to merge the weights of the two phases into a single full net.
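For the net-surgery step, a minimal pycaffe sketch might look like this (the file names are placeholders, and it assumes the layers shared by the two nets carry the same names in both prototxts):
import caffe

# Placeholder file names; substitute your own prototxts and snapshots.
full_net = caffe.Net('full_trainval.prototxt', 'full_stage.caffemodel', caffe.TEST)
top_net = caffe.Net('top_trainval.prototxt', 'top_stage.caffemodel', caffe.TEST)

# Copy the fine-tuned top-layer weights back into the full net, matching layers by name.
for layer_name in top_net.params:
    if layer_name in full_net.params:
        for i, blob in enumerate(top_net.params[layer_name]):
            full_net.params[layer_name][i].data[...] = blob.data

full_net.save('merged_full.caffemodel')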
I'd like to retrain a YOLOv7-tiny Object Detector on a custom dataset with 4 classes.
According to the README of the official git repo in order to transfer learn one should pass the pre-trained weights as argument:
python train.py --workers 8 --device 0 --batch-size 32 --data data/custom.yaml --img 640 640 --cfg cfg/training/yolov7-custom.yaml --weights 'yolov7_training.pt' --name yolov7-custom --hyp data/hyp.scratch.custom.yaml
As for the normal YOLOv7 model, there are two weight files, yolov7.pt and yolov7_training.pt, present in the repo (see here). What's the difference between these two, and which should be used on which occasion?
Also, there's a yolov7-tiny.pt weight file, but no corresponding yolov7-tiny_training.pt file. Is there a particular reason for that?
Also, I learned that for transfer learning it's helpful to "freeze" the base model's weights (make them untrainable) first, then train the new model on the new dataset so only the new weights get adjusted. After that you can unfreeze the frozen weights to fine-tune the entire model. The train.py script has a --freeze argument to freeze backbone layers.
Is this approach recommended for retraining YOLOv7? If so, should you freeze all 50 backbone layers of YOLOv7 (and would that be --freeze 50 or something different)? And how would you then unfreeze these layers and resume training the model?
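For context, my mental model of freezing/unfreezing at the parameter level is the plain PyTorch sketch below (a toy model, not the yolov7 train.py internals, which are exactly what I'm asking about):
import torch.nn as nn

# Toy model standing in for a detector: a "backbone" followed by a "head".
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()),  # pretend backbone
    nn.Conv2d(16, 4, 1),                            # pretend head for 4 classes
)

# Freeze the backbone: its parameters receive no gradient updates.
for p in model[0].parameters():
    p.requires_grad = False

# ... train only the head on the custom dataset ...

# Unfreeze later to fine-tune the whole model, typically with a lower learning rate.
for p in model[0].parameters():
    p.requires_grad = True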
I am playing with some recurrent neural network demos.
I noticed that the scale of my data differs a lot from column to column, so I am considering doing some preprocessing before I feed data batches into my RNN. The close column is the target I want to predict in the future.
open high low volume price_change p_change ma5 ma10 \
0 20.64 20.64 20.37 163623.62 -0.08 -0.39 20.772 20.721
1 20.92 20.92 20.60 218505.95 -0.30 -1.43 20.780 20.718
2 21.00 21.15 20.72 269101.41 -0.08 -0.38 20.812 20.755
3 20.70 21.57 20.70 645855.38 0.32 1.55 20.782 20.788
4 20.60 20.70 20.20 458860.16 0.10 0.48 20.694 20.806
ma20 v_ma5 v_ma10 v_ma20 close
0 20.954 351189.30 388345.91 394078.37 20.56
1 20.990 373384.46 403747.59 411728.38 20.64
2 21.022 392464.55 405000.55 426124.42 20.94
3 21.054 445386.85 403945.59 473166.37 21.02
4 21.038 486615.13 378825.52 461835.35 20.70
My question is: is preprocessing the data with, say, StandardScaler from sklearn necessary in my case? And why?
(You are welcome to edit my question)
It will be beneficial to normalize your training data. Feeding your model features with widely different scales will cause the network to weight them unequally, which can lead to a false prioritisation of some features over others in the learned representation.
Although the broader discussion on data preprocessing is somewhat controversial, both on when exactly it is necessary and on how to correctly normalize the data for a given model and application domain, there is a general consensus in machine learning that a mean subtraction step followed by a general normalization step is helpful.
In mean subtraction, the mean of every individual feature is subtracted from the data, which can be interpreted geometrically as centering the data cloud around the origin; this holds in every dimension.
Normalizing the data after the mean subtraction step brings every data dimension to approximately the same scale. Note that the different features lose any prioritization over each other after this step, as mentioned above. If you have good reasons to believe that the different scales of your features carry important information the network needs in order to truly understand the underlying patterns in your dataset, then normalization can be harmful. A standard approach is to scale the inputs to have a mean of 0 and a variance of 1.
Further preprocessing operations may be helpful in specific cases, such as performing PCA or whitening on your data. Look into the excellent CS231n notes (Setting up the data and the model) for further reference on these topics as well as for a more detailed explanation of the points above.
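In practice, the StandardScaler you mention does exactly this zero-mean / unit-variance scaling. A minimal sketch, assuming your frame is loaded into a pandas DataFrame (the file name below is a placeholder):
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('prices.csv')  # placeholder for however you load the frame shown above
feature_cols = [c for c in df.columns if c != 'close']  # 'close' is the prediction target

# Fitted on the whole frame here for brevity; in practice, fit on the training split only
# and reuse the same scaler to transform validation/test data.
scaler = StandardScaler()
df[feature_cols] = scaler.fit_transform(df[feature_cols])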
Definitely yes. Most neural networks work best with data between 0 and 1 or -1 and 1 (depending on the output function). Also, when some inputs are larger than others, the network will "think" they are more important. This can make learning very slow, since the network must first lower the weights on those inputs.
I found this paper: https://arxiv.org/abs/1510.01378
If you normalize, it may improve convergence, so you will get shorter training times.
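If you want the 0-1 or -1 to 1 range mentioned above, sklearn's MinMaxScaler is one way to get there; a minimal sketch with a toy array standing in for two differently scaled columns:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[20.64, 163623.62],
              [20.92, 218505.95],
              [21.00, 269101.41]])  # toy slice: a price column next to a volume column

scaler = MinMaxScaler(feature_range=(-1, 1))  # use (0, 1) if the output activation expects it
X_scaled = scaler.fit_transform(X)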
I would like to use the yolo architecture for object detection. Before training the network with my custom data, I followed these steps to train it on the Pascal VOC data: https://pjreddie.com/darknet/yolo/
The instructions are very clear.
But after the final step
./darknet detector train cfg/voc.data cfg/yolo-voc.cfg darknet19_448.conv.23
darknet immediately stops training and announces that weights have been written to the backups/ directory.
At first I thought that the pretraining was simply too good and that the stopping criteria would be reached at once.
So I've used the ./darknet detect command with these weights on one of the test images data/dog. Nothing is found.
If I don't use any pretrained weights, the network does train.
I've edited cfg/yolo-voc.cfg to use
# Testing
#batch=1
#subdivisions=1
# Training
batch=32
subdivisions=8
Now the training process has been running for many hours and is keeping my GPU warm.
Is this the intended way to train darknet?
How can I use pretrained weights correctly, without training just breaking off?
Is there any setting to create checkpoints, or get an idea of the progress?
Adding -clear 1 at the end of your training command will clear the count of how many images the model has seen in previous training. You can then fine-tune your model on a new dataset.
You can find more info about the usage in the function signature
void train_detector(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear)
at https://github.com/pjreddie/darknet/blob/b13f67bfdd87434e141af532cdb5dc1b8369aa3b/examples/detector.c
I doubt that increasing the max number of iterations is a good idea, as the learning rate is usually tied to the current iteration number. We usually increase the max number of iterations when we want to resume a previous training run that ended because it reached the max, but believe that more iterations will give better results.
FYI, when you have a small dataset, training from scratch or from a classification network may not be a great idea. You may still want to reuse the weights of a detection network trained on a large dataset like COCO or ImageNet.
This is an old question so I hope you have your answer by now, but here is mine just in case it helps.
After working with darknet for about a month, I've run into most of the roadblocks that people have asked/posted about on forums. In your case, I'm pretty certain it's because the weights have already been trained for the max number of batches, so when the pre-trained weights were read in, darknet assumed training was done.
Relevant personal experience: when I used one of the pretrained weights files, it started from iteration 40101 and ran until 40200 before cutting off.
I would stick to training from scratch if you have custom data, but if you want to try the pre-trained weights again, you might find that changing max_batches in the cfg file helps.
Also, if you are using AlexeyAB/darknet, there may be a problem with the -clear option:
in detector.c:
if (clear) *nets[k].seen = 0;
should really be:
if (clear) {*nets[k].seen = 0;*nets[k].cur_iteration = 0;}
otherwise the training loop will exit immediately.
Set the OpenCV flag in your darknet Makefile to 0:
OpenCV=0
I am experimenting with the tensorflow seq2seq_model.py model.
The target vocab size I have is around 200.
The documentation says:
For vocabularies smaller than 512, it might be a better idea to just use a standard softmax loss.
The source-code also has the check:
if num_samples > 0 and num_samples < self.target_vocab_size:
Running the model with a target vocabulary of only 200 words means the condition in this if statement is not satisfied.
Do I need to write a "standard" softmax loss function to ensure good training, or can I just let the model run as it comes?
Thanks for the help!
I am doing the same thing. To get my feet wet with different kinds of structure in the training data, I am working in an artificial test world with just 117 words in the (source and) target vocabulary.
I asked myself the same question and decided not to go through that hassle. My models train well even though I didn't touch the loss, and thus am still using the sampled_softmax_loss.
Further experiences with those small vocab sizes:
- a batch size of 32 is best in my case (smaller ones make training really unstable and I quickly run into NaN issues)
- I am using AdaGrad as the optimizer and it works like magic
- I am working with model_with_buckets (invoked through translate.py), and size 512 with num_layers 2 produces the desired outcome in many cases.
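If you do decide to swap in a standard (full) softmax after all, a minimal sketch of such a loss function might look like the following; note that the exact signature expected by model_with_buckets' softmax_loss_function changed between TensorFlow versions, so adapt the argument order to yours:
import tensorflow as tf

# Full-softmax cross-entropy over the whole target vocabulary.
# labels: [batch] integer word ids, logits: [batch, target_vocab_size].
def standard_softmax_loss(labels, logits):
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)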
I have 3113 training examples over a dense feature vector of size 78. The magnitudes of the features differ a lot: some are around 20, some around 200K. For example, here is one of the training examples in vowpal wabbit input format.
0.050000 1 '2006-07-10_00:00:00_0.050000| F0:9.670000 F1:0.130000 F2:0.320000 F3:0.570000 F4:9.837000 F5:9.593000 F6:9.238150 F7:9.646667 F8:9.631333 F9:8.338904 F10:9.748000 F11:10.227667 F12:10.253667 F13:9.800000 F14:0.010000 F15:0.030000 F16:-0.270000 F17:10.015000 F18:9.726000 F19:9.367100 F20:9.800000 F21:9.792667 F22:8.457452 F23:9.972000 F24:10.394833 F25:10.412667 F26:9.600000 F27:0.090000 F28:0.230000 F29:0.370000 F30:9.733000 F31:9.413000 F32:9.095150 F33:9.586667 F34:9.466000 F35:8.216658 F36:9.682000 F37:10.048333 F38:10.072000 F39:9.780000 F40:0.020000 F41:-0.060000 F42:-0.560000 F43:9.898000 F44:9.537500 F45:9.213700 F46:9.740000 F47:9.628000 F48:8.327233 F49:9.924000 F50:10.216333 F51:10.226667 F52:127925000.000000 F53:-15198000.000000 F54:-72286000.000000 F55:-196161000.000000 F56:143342800.000000 F57:148948500.000000 F58:118894335.000000 F59:119027666.666667 F60:181170133.333333 F61:89209167.123288 F62:141400600.000000 F63:241658716.666667 F64:199031688.888889 F65:132549.000000 F66:-16597.000000 F67:-77416.000000 F68:-205999.000000 F69:144690.000000 F70:155022.850000 F71:122618.450000 F72:123340.666667 F73:187013.300000 F74:99751.769863 F75:144013.200000 F76:237918.433333 F77:195173.377778
The training result was not good, so I thought I would normalize the features to bring them to the same magnitude. I calculated the mean and standard deviation of each feature across all examples, then applied newValue = (oldValue - mean) / stddev, so that each feature's new mean is 0 and its stddev is 1. For the same example, here are the feature values after normalization:
0.050000 1 '2006-07-10_00:00:00_0.050000| F0:-0.660690 F1:0.226462 F2:0.383638 F3:0.398393 F4:-0.644898 F5:-0.670712 F6:-0.758233 F7:-0.663447 F8:-0.667865 F9:-0.960165 F10:-0.653406 F11:-0.610559 F12:-0.612965 F13:-0.659234 F14:0.027834 F15:0.038049 F16:-0.201668 F17:-0.638971 F18:-0.668556 F19:-0.754856 F20:-0.659535 F21:-0.663001 F22:-0.953793 F23:-0.642736 F24:-0.606725 F25:-0.609946 F26:-0.657141 F27:0.173106 F28:0.310076 F29:0.295814 F30:-0.644357 F31:-0.678860 F32:-0.764422 F33:-0.658869 F34:-0.674367 F35:-0.968679 F36:-0.649145 F37:-0.616868 F38:-0.619564 F39:-0.649498 F40:0.041261 F41:-0.066987 F42:-0.355693 F43:-0.638604 F44:-0.676379 F45:-0.761250 F46:-0.653962 F47:-0.668194 F48:-0.962591 F49:-0.635441 F50:-0.611600 F51:-0.615670 F52:-0.593324 F53:-0.030322 F54:-0.095290 F55:-0.139602 F56:-0.652741 F57:-0.675629 F58:-0.851058 F59:-0.642028 F60:-0.648002 F61:-0.952896 F62:-0.629172 F63:-0.592340 F64:-0.682273 F65:-0.470121 F66:-0.045396 F67:-0.128265 F68:-0.185295 F69:-0.510251 F70:-0.515335 F71:-0.687727 F72:-0.512749 F73:-0.471032 F74:-0.789335 F75:-0.491188 F76:-0.400105 F77:-0.505242
However, this yields basically the same testing result (if not exactly the same, since I shuffle the examples before each training).
Wondering why there is no change in the result?
Here are my training and testing commands:
rm -f cache
cat input.feat | vw -f model --passes 20 --cache_file cache
cat input.feat | vw -i model -t -p predictions --invert_hash readable_model
(Yes, I'm testing on the training data right now since I have only very few data examples to train on.)
More context:
Some of the features are "tier 2": they were derived by manipulating or taking cross products of "tier 1" features (e.g. moving averages, 1st-3rd order derivatives, etc.). If I normalize the tier 1 features before calculating the tier 2 features, it actually improves the model significantly.
So I'm puzzled as to why normalizing the tier 1 features (before generating the tier 2 features) helps a lot, while normalizing all features (after generating the tier 2 features) doesn't help at all.
BTW, since I'm training a regressor, I'm using SSE as the metric to judge the quality of the model.
vw normalizes feature values for scale as it goes, by default.
This is part of the online algorithm. It is done gradually during runtime.
In fact it does more than that: vw's enhanced SGD algorithm also keeps separate per-feature learning rates, so the learning rates of rarer features don't decay as fast as those of common ones (--adaptive). Finally, there's an importance-aware update, controlled by a 3rd option (--invariant).
The 3 separate SGD enhancement options (which are all turned on by default) are:
--adaptive
--invariant
--normalized
The last option is the one that adjusts values for scale (discounting large values vs small ones). You may disable all these SGD enhancements by using the --sgd option, or partially enable any subset by specifying it explicitly.
All in all, you have 2^3 = 8 SGD option combinations to choose from.
A possible reason is that whatever training algorithm you used to get the result already did the normalization for you! In fact, many algorithms normalize the data before working on it. Hope this helps you :)