Does CPU inference make accuracy lower?

I currently do text-to-speech using Tacotron 2 and HiFi-GAN. It works well on a GPU, but after deploying to a server and running the model on a CPU, the results are not as good as before.
So my question is: does inference on a CPU lower the model's accuracy?
If yes, please explain or point me to a reference paper or article.
One more thing: I noticed that when running
model.cuda().eval().half()
and saving the Tacotron 2 model, the model size is reduced by half and it seems to run fine. If I use this half-size model, will it lower the accuracy too?

You want to look into mixed-precision training (see NVIDIA's and TensorFlow's documentation on the topic).
Machine learning doesn't usually need high-precision floating point.
GPUs and frameworks can take this into account to speed up training.
However, at deployment time the numerics may not match: the CPU does not necessarily use the same precision or the same kernels as the GPU, so the outputs will differ slightly.
The deeper your model, the more this may be an issue because the slight differences add up.
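To see how much half precision by itself changes the outputs, here is a minimal, self-contained sketch (a toy network, not the Tacotron 2 pipeline): it runs the same weights in FP32 and after an FP16 round trip, which is roughly what saving with .half() and loading the model back for CPU inference does.

import copy
import torch

# Toy stand-in for the real model; the point is only to measure how much an
# FP16 round trip of the weights perturbs the output.
torch.manual_seed(0)
model_fp32 = torch.nn.Sequential(
    torch.nn.Linear(80, 256), torch.nn.ReLU(), torch.nn.Linear(256, 80)
).eval()
x = torch.randn(1, 80)

# FP16 is slow or unsupported for many ops on CPU, so a half-saved model is
# usually converted back to float before CPU inference; the weights keep the
# precision they lost when they were cast down.
model_roundtrip = copy.deepcopy(model_fp32).half().float()

with torch.no_grad():
    y_fp32 = model_fp32(x)
    y_half = model_roundtrip(x)

print("max abs difference after FP16 round trip:",
      (y_fp32 - y_half).abs().max().item())

The per-layer difference is tiny, but as noted above, in a deep model these small errors accumulate.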

Related

Do you need to train your ml model equal no. of times before and after fine tuning while using transfer learning?

I'm trying to make a model that can classify 7 different denominations of bank notes. I'm using VGG19 as a convolutional base. I have a dataset of more than 10,000 images, with each category containing more than 1,000. How many layers should I add after the convolutional base, and what should the size of each layer be?
This question is too vague. Choosing the right architecture is not a simple task; it depends on your domain. With ready-made architectures you're prone to overshooting the problem: the capacity of such networks might be overkill. You'll get nice low entropy in your net, but it will overfit like crazy. Rule of thumb: start with smaller nets, slowly build up, and compare your metrics.
There's similar thread here.
There's ongoing research on Neural Architecture Search (NAS). AFAIK the only commercially available solution is Google's AutoML. Metaheuristics-based NAS is still in its infancy and won't be widely used for some time.
The most popular open-source NAS tool is AutoKeras.
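As a concrete illustration of that rule of thumb, here is a minimal sketch assuming Keras/TensorFlow, images resized to 224x224, and 7 classes; the single 256-unit dense head is just a small starting point to compare against, not a recommendation.

# Small classification head on a frozen VGG19 base (illustrative sizes).
import tensorflow as tf

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the convolutional base for the first experiments

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 denominations
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Train this first, look at the validation metrics, and only then grow the head (or unfreeze the top of the base) if it clearly underfits.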

Should I adapt loss weights for misclassified samples during epochs?

I am using FCNs (Fully Convolutional Networks) to do image segmentation. During training, some areas are mislabeled, and further training doesn't help much to make them go away. I believe this is because the network learns some features which might not be completely correct, but because there are enough correctly classified examples, it is stuck in a local minimum and can't get out.
One solution I can think of is to train for an epoch, then validate the network on the training images, and then adjust the weights for the mismatched parts to penalize mismatches more there in the next epoch.
Intuitively, this makes sense to me, but I haven't found any writing on it. Is this a known technique? If yes, what is it called? If no, what am I missing (what are the downsides)?
It highly depends on your network structure. If you are using the original FCN, the pooling operations degrade the segmentation performance on object boundaries. There have been quite a few variants of the original FCN for image segmentation, although they didn't go the route you're proposing.
To name a couple of examples: one approach is to use a Conditional Random Field (CRF) on top of the FCN output to refine the segmentation. You may search for the relevant papers to get more of an idea. In some sense it is close to your idea, but the difference is that the CRF is separate from the network, as a post-processing step.
Another very interesting work is U-Net. It borrows an idea from residual networks (ResNet): skip connections allow high-resolution features from lower levels to be integrated into higher levels, giving more accurate segmentation.
This is still a very active research area, so you may bring the next breakthrough with your own idea. Who knows! Have fun!
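For intuition, here is a minimal PyTorch sketch of the skip-connection idea behind U-Net (a single encoder/decoder level, not the full architecture): the high-resolution encoder features are concatenated with the upsampled decoder features before the final convolution.

import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, padding=1)       # high-resolution features
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Conv2d(16, 32, 3, padding=1)      # low-resolution features
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Conv2d(16 + 32, 8, 3, padding=1)  # works on the concatenation

    def forward(self, x):
        e = torch.relu(self.enc(x))
        m = torch.relu(self.mid(self.down(e)))
        u = self.up(m)
        return self.dec(torch.cat([e, u], dim=1))       # the skip connection

out = TinyUNetBlock()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 8, 64, 64])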
First, if I understand correctly, you want your network to overfit your training set? That's generally something you don't want to happen, because it would mean that during training your network has found some "rules" that give it great results on the training set, but also that it hasn't been able to generalize, so it will probably perform poorly on new samples. Moreover, you never mention a test set; have you divided your dataset into training and test sets?
Secondly, to give you something to look into: the idea of penalizing more where you don't perform well made me think of something called AdaBoost (it might be unrelated). This short video might help you understand what it is:
https://www.youtube.com/watch?v=sjtSo-YWCjc
Hope it helps
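To make the reweighting idea from the question concrete, here is a minimal sketch assuming a PyTorch segmentation setup; the weighting rule (boost the weight of pixels the previous epoch got wrong, then renormalize) is purely illustrative, not an established recipe.

import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, targets, pixel_weights):
    # logits: (N, C, H, W), targets: (N, H, W), pixel_weights: (N, H, W)
    per_pixel = F.cross_entropy(logits, targets, reduction="none")
    return (per_pixel * pixel_weights).mean()

def update_pixel_weights(logits, targets, pixel_weights, boost=2.0):
    # After validating on the training images, increase the weight of the
    # mismatched pixels; renormalize so the overall loss scale stays stable.
    preds = logits.argmax(dim=1)
    pixel_weights = torch.where(preds != targets, pixel_weights * boost, pixel_weights)
    return pixel_weights / pixel_weights.mean()

# Tiny usage example with random tensors (4 classes, 8x8 images).
logits = torch.randn(2, 4, 8, 8)
targets = torch.randint(0, 4, (2, 8, 8))
weights = torch.ones(2, 8, 8)
loss = weighted_seg_loss(logits, targets, weights)
weights = update_pixel_weights(logits, targets, weights)

Note the caveat from the answer above: pushing the network to fit every training pixel harder is a step toward overfitting, so watch a held-out validation set while doing this.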

Image Recognition Techniques for Identifying Logos (Brands)

I started learning image recognition a few days ago and I would like to do a project in which I need to identify different brand logos on Android.
For example: if I take a picture of a Nike logo on an Android device, it needs to display "Nike".
Low computational time is the main criterion for me.
For this, I have done some work and started learning from the OpenCV sample examples.
What would be the best image recognition approach for me?
1) I learned that the applicability of template matching is limited mostly by the available computational power, as identification of big and complex templates can be time consuming (so I don't want to use it).
2) Feature-based detectors like SIFT/SURF/STAR (as far as I know, this would be a better option for me).
3) How about deep learning and pattern recognition concepts? (I was digging into this and don't know whether it would be an option for me.) Can any of you tell me whether I can use this, and why it would be a better choice compared with 1 and 2?
4) Haar cascade classifiers (from one of the posts on SO, I learned that Haar is not rotation or scale invariant, so I haven't concentrated much on it). Would this be a better option for me if I focused on it?
I'm now running one of my pet projects that requires face recognition (detecting the area with a face in a photo, if one exists) on a Raspberry Pi, so I've done some analysis of that task.
And I found this approach. The key idea is to avoid scanning the entire picture with windows of different sizes, as in OpenCV; instead, divide the entire photo into 49 (7x7) squares and train the model not only to detect the presence of one of the classes inside each square, but also to determine the location and size of the detected object.
It's only 49 runs of the trained model, so I think it's possible to execute this in less than a second even on non-state-of-the-art smartphones. Anyway, it will be a trade-off between accuracy and performance.
About the model:
I will use a VGG-like model, probably a bit simpler than even VGG11 (configuration A).
In my case a ready dataset already exists, so I can train a convolutional network with it.
Why is the deep learning approach better than options 1-3 you mentioned? Because of its higher accuracy for this kind of task. It's practically proven; you can check it on Kaggle, where the majority of top models in classification competitions are based on convolutional networks.
The only disadvantage for you is that you will probably need to create your own dataset to train the model.
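For a rough idea of what the grid head looks like, here is a sketch assuming Keras: a small convolutional backbone downsamples the image to a 7x7 feature map, and each cell predicts class scores plus a bounding box (the number of classes and layer sizes are placeholders).

import tensorflow as tf

NUM_CLASSES = 10                      # e.g. 10 brand logos (assumption)
CELL_OUTPUTS = NUM_CLASSES + 4        # class scores + box (x, y, w, h) per cell

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(128, 3, strides=2, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(256, 3, strides=2, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(256, 3, strides=2, activation="relu", padding="same"),
    # 224 / 2^5 = 7, so the feature map is already the 7x7 grid
    tf.keras.layers.Conv2D(CELL_OUTPUTS, 1),  # one prediction vector per cell
])
print(model.output_shape)  # (None, 7, 7, NUM_CLASSES + 4)

In this formulation the whole grid comes out of a single forward pass, which is even cheaper than 49 separate runs of the model, and is what makes the approach fast on a phone.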
Here is a post that I think can be useful for you: Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition. Another one: Logo recognition in images.
2) Feature-based detectors like SIFT/SURF/STAR (as far as I know, this would be a better option for me).
Just remember that SIFT and SURF are both patented, so you will need a license for any commercial use (they are free for non-commercial use).
4) Haar cascade classifiers (from one of the posts on SO, I learned that Haar is not rotation or scale invariant, so I haven't concentrated much on it). Would this be a better option for me if I focused on it?
It works (if I understand your question correctly); much of this depends on how you trained your classifier. You could train it to detect all kinds of rotations and scales. Anyway, I would discourage you from going with this option, as I think the other possible solutions are better suited to this case.
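If you do want to try a cascade, detection with OpenCV is only a few lines; note that logo_cascade.xml below is hypothetical: you would have to train such a cascade yourself (OpenCV only ships pretrained cascades for faces, eyes and the like).

import cv2

cascade = cv2.CascadeClassifier("logo_cascade.xml")  # hypothetical, self-trained
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection speed against false positives
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)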

How to work with machine learning algorithms in embedded systems?

I'm doing a project to detect (classify) human activities using an ARM Cortex-M0 microcontroller (Freedom KL25Z) with an accelerometer. I intend to predict the user's activity using machine learning.
The problem is that the Cortex-M0 is not capable of running training or prediction algorithms itself, so I would probably have to collect the data, train a model on my computer and then embed it somehow, which I don't really know how to do.
I saw some posts on the internet saying that you can generate a matrix of weights and embed it in a microcontroller, so predicting becomes a straightforward function of the data you feed it. Would that be the right way of doing it?
Anyway, my question is: how could I embed a classification algorithm in a microcontroller?
I hope you can help me and give me some guidance; I'm kind of lost here.
Thank you in advance.
I've been thinking about doing this myself to solve a problem that I've had a hard time developing a heuristic for by hand.
You're going to have to write your own machine-learning code, because as far as I know there aren't any machine learning libraries out there suitable for low-end MCUs.
Depending on how hard the problem is, it may still be possible to develop and train a simple machine learning algorithm that performs well on a low-end MCU. After all, some of the older/simpler machine learning methods were used with satisfactory results on hardware with similar constraints.
Very generally, this is how I'd go about doing this:
Get the (labelled) data to a PC (through UART, an SD card, or whatever means you have available).
Experiment with the data in a machine learning toolkit (scikit-learn, Weka, Vowpal Wabbit, etc.). Make sure an off-the-shelf method is able to produce satisfactory results before moving forward.
Experiment with feature engineering and selection. Try to get the smallest feature set possible to save resources.
Write your own implementation of the machine learning method that will eventually be used on the embedded system. I would probably choose perceptrons or decision trees, because these don't necessarily need a lot of memory. Since you have no FPU, I'd use only integers and fixed-point arithmetic.
Do the normal training procedure, i.e. use cross-validation to find the best tuning parameters, integer bit widths, radix positions, etc.
Run the final trained predictor on the held-out testing set.
If the performance of your trained predictor is satisfactory on the testing set, move the relevant code (the code that calculates the predictions) and the trained model (e.g. the weights) to the MCU. The model/weights will not change, so they can be stored in flash (e.g. as a const array); a sketch of this export step is shown below.
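Here is a sketch of that export step, assuming scikit-learn and a small decision tree on the PC side; the feature data is a random stand-in for accelerometer features, and the Q8 fixed-point scale is just one possible choice. The printed arrays can be pasted into the firmware as const tables so the M0 only needs integer comparisons at prediction time.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: replace with your labelled accelerometer features/activities.
X = np.random.randn(500, 3)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
t = clf.tree_

Q = 256  # Q8 fixed point: thresholds are stored as value * 256, rounded to int
print("const int16_t node_feature[]   = {%s};" % ", ".join(map(str, t.feature)))
print("const int32_t node_threshold[] = {%s};" %
      ", ".join(str(int(round(th * Q))) for th in t.threshold))
print("const int16_t node_left[]      = {%s};" % ", ".join(map(str, t.children_left)))
print("const int16_t node_right[]     = {%s};" % ", ".join(map(str, t.children_right)))
print("const int16_t node_class[]     = {%s};" %
      ", ".join(str(int(np.argmax(v))) for v in t.value))

On the MCU side, prediction is then a small loop that walks these arrays: start at node 0, compare the selected feature (scaled to the same fixed-point format) against the threshold, follow node_left or node_right, and stop when node_feature is -2 (a leaf).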
I think you may be limited by your hardware, and you may want to get something a little more powerful. For your project you've chosen an M-series processor from ARM. This is the simplest platform that they offer, and the architecture doesn't lend itself to the kind of processing you're trying to do. ARM has three basic classifications, as follows:
M - microcontroller
R - real-time
A - applications
You want to get something that has strong hardware support for these complex calculations. Your starting point should be an A-series part. If you need to do floating-point arithmetic, you'll definitely need to start with the A-series, and probably get one with a NEON FPU.
TI's Discovery series is a nice place to start, or maybe just use the Raspberry Pi (at least for the development part)?
However, if you insist on using the M0, I think you might be able to pull it off using something lightweight like ROS-C. I know there are ROS packages that can do it; even though ROS is mainly for robotics, you may be able to adapt it to what you're doing.
Dependency Free ROS
Neural Networks and Machine Learning with ROS

Recommended local search optimization algorithm for control domain

Background: I am trying to find a set of floating-point parameters for a low-level controller that will lead to balance of a robot while it is walking.
Question: Can anybody recommend local search algorithms that will perform well for the domain I just described? The main criterion for me is the speed of convergence to the right solution.
Any help will be greatly appreciated!
P.S. Also, I conducted some research and found out that "Evolution Strategy" algorithms are a good fit for continuous state spaces. However, I am not entirely sure whether they will fit my particular problem well.
More info: I am trying to optimize 8 parameters (although it is possible for me to reduce the number of parameters to 4). I do have a simulator, and the relevant measure of speed for me is the number of trials, because simulation resets are costly (they take 10-15 seconds on average).
One of the best local search algorithms for a low number of dimensions (up to about 10 or so) is the Nelder-Mead simplex method. By the way, it is used as the default optimizer in MATLAB's fminsearch function. I personally used this method for finding the parameters of a textbook 2nd- or 3rd-order dynamic system (though a very simple one).
Another option is the already mentioned evolution strategies. Currently the best one is the Covariance Matrix Adaptation ES, or CMA-ES. There are variations of this algorithm, e.g. BIPOP-CMA-ES, that are probably better than the vanilla version.
You just have to try what works best for you.
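As a minimal sketch of how either option can be driven from Python, assume the simulator is wrapped as a cost function over the 8 parameters; evaluate_controller below is a placeholder that would, in practice, run one costly simulation episode and return something like the negative time walked before falling.

import numpy as np
from scipy.optimize import minimize

def evaluate_controller(params):
    # Placeholder cost; replace with a call into your simulator.
    return float(np.sum((params - 0.3) ** 2))

x0 = np.zeros(8)  # initial guess for the 8 controller parameters
result = minimize(evaluate_controller, x0, method="Nelder-Mead",
                  options={"maxfev": 200, "xatol": 1e-3, "fatol": 1e-3})
print(result.x, result.fun)

# CMA-ES via the cma package would look roughly similar:
#   import cma
#   xbest, es = cma.fmin2(evaluate_controller, list(x0), sigma0=0.5)

Capping the number of function evaluations (maxfev above) matters here, since each evaluation costs a 10-15 second simulator reset.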
In addition to evolutionary algorithms, I recommend you also check out reinforcement learning.
The right method depends a lot on the details of your problem: how many parameters? Do you have a simulator? Do you work in simulation only, or also with real hardware? Is speed measured in the number of trials, or in CPU time?
