I am training a machine learning model to classify between positive and negative tweets; here are the results I am getting:
                  precision    recall  f1-score   support

        negative       0.78      0.37      0.50       384
        positive       0.72      0.94      0.81       659

        accuracy                           0.73      1043
       macro avg       0.75      0.65      0.66      1043
    weighted avg       0.74      0.73      0.70      1043
I have googled around and read about precision and recall, but I just can't get my head around what they mean in terms of my example. Can someone show how these results were found, i.e. the calculations for precision, recall, etc. that would produce these numbers?
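Since the question asks for the actual calculations: the report alone does not show the confusion matrix, but one can be reconstructed that is consistent with the rounded numbers above (the exact counts below are therefore an inference, not taken from the original output). A minimal Python sketch that reproduces every figure in the report:

    # Confusion matrix inferred from the rounded report values (an assumption):
    #                  predicted negative   predicted positive
    # actual negative         142                  242
    # actual positive          40                  619
    tn, fp = 142, 242   # actual negatives: 142 found, 242 misclassified
    fn, tp = 40, 619    # actual positives: 40 misclassified, 619 found

    # Precision: of everything predicted as a class, how much was right.
    prec_neg = tn / (tn + fn)                  # 142 / 182 = 0.78
    prec_pos = tp / (tp + fp)                  # 619 / 861 = 0.72

    # Recall: of everything actually in a class, how much was found.
    rec_neg = tn / (tn + fp)                   # 142 / 384 = 0.37
    rec_pos = tp / (tp + fn)                   # 619 / 659 = 0.94

    # F1: harmonic mean of precision and recall.
    f1_neg = 2 * prec_neg * rec_neg / (prec_neg + rec_neg)    # 0.50
    f1_pos = 2 * prec_pos * rec_pos / (prec_pos + rec_pos)    # 0.81

    # Accuracy: all correct predictions over all 1043 samples.
    accuracy = (tn + tp) / 1043                # 761 / 1043 = 0.73

    # Macro avg: unweighted mean over the two classes.
    macro_prec = (prec_neg + prec_pos) / 2     # 0.75

    # Weighted avg: mean weighted by class support (384 and 659).
    weighted_prec = (384 * prec_neg + 659 * prec_pos) / 1043  # 0.74

In words: the high positive recall (0.94) combined with the low negative recall (0.37) means the model labels most tweets positive, so it catches nearly all actual positives but misses most actual negatives.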
I am new to machine learning and linear separability.
Let's say curve1 is sin(x) and curve2 is sin(x) + 0.5
What non-linear transformations (or any other method) should be applied to these curves to make them linearly separable?
The curves are already separable by a horizontal line if we increase the constant in the second curve to more than 2, since sin(x) only ranges over [-1, 1].
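One simple transformation that works here (my own sketch, not from the post): since both curves share the same sin(x) term, map each point (x, y) to the single feature y - sin(x). Points on curve1 map to exactly 0 and points on curve2 to exactly 0.5, so a threshold at 0.25 separates them linearly in the transformed space:

    import numpy as np

    # Sample points from both curves.
    x = np.linspace(0, 4 * np.pi, 200)
    curve1 = np.sin(x)          # class 0
    curve2 = np.sin(x) + 0.5    # class 1

    # Non-linear feature: phi(x, y) = y - sin(x).
    phi1 = curve1 - np.sin(x)   # all exactly 0.0
    phi2 = curve2 - np.sin(x)   # all exactly 0.5

    # A linear decision rule in the 1-D transformed space:
    threshold = 0.25
    print(np.all(phi1 < threshold))   # True
    print(np.all(phi2 > threshold))   # True

Note this assumes x is available as an input feature; if only the y values were given, the sin(x) term could not be subtracted out.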
In almost every case I come across, GPUs are involved in the execution part of deep learning. Why is that?
This has to do with GPU architecture versus CPU architecture. It turns out that gaming requires a lot of matrix multiplications, so GPU architecture was optimized for these kinds of operations; specifically, GPUs are optimized for high-throughput floating-point arithmetic.
It so happens that neural networks are mostly matrix multiplications.
For example:

    ŷ = σ(W_o · σ(W_h · x + b_h) + b_o)

is the mathematical formulation of a simple neural network with one hidden layer (the equation image is missing from the post; this is a reconstruction from the description, with the output-layer symbols W_o and b_o named by analogy with the hidden layer). W_h is a matrix of weights that multiplies your input x, to which we add a bias b_h. The affine part W_h·x + b_h can be compacted into a single matrix multiplication by folding the bias into the matrix. σ is a non-linear activation such as the sigmoid. The outer σ(...) wraps yet another matrix multiplication, this time by the output weights. Hence GPUs.
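To make that concrete, here is a minimal NumPy sketch of that forward pass (the layer sizes, and W_o and b_o as above, are illustrative assumptions); on a GPU, each of these matrix products runs as one massively parallel operation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy sizes: 784 inputs, 256 hidden units, 10 outputs, batch of 64.
    rng = np.random.default_rng(0)
    x   = rng.standard_normal((784, 64))    # input batch, one column per sample
    W_h = rng.standard_normal((256, 784))   # hidden-layer weights
    b_h = rng.standard_normal((256, 1))     # hidden-layer bias
    W_o = rng.standard_normal((10, 256))    # output-layer weights
    b_o = rng.standard_normal((10, 1))      # output-layer bias

    # The whole forward pass is two matrix multiplications plus
    # elementwise non-linearities -- exactly the workload GPUs
    # are optimized for.
    h = sigmoid(W_h @ x + b_h)   # hidden layer: shape (256, 64)
    y = sigmoid(W_o @ h + b_o)   # output layer: shape (10, 64)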
Normally we say that 1000 m = 1 km, 1000 ml = 1 liter, and 1000 g = 1 kg. So why do we say 1024 KB = 1 MB instead of 1000 KB = 1 MB? How can 1024 KB be 1 MB?
1 byte = 8 bits, which is 2 to the power of 3 bits. Memory is addressed in binary, so sizes fall naturally on powers of 2, and 1024 = 2^10 was chosen as the multiplier because it is the integer closest to 1000 that can be expressed as a power of 2. If 1 MB were defined as 1000 KB, converting between these units and bits (and vice versa) would be harder.
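A quick arithmetic check of the two conventions (the strict SI/IEC naming in the comments is my addition, not part of the original answer):

    # Binary convention (what the answer describes): powers of 2.
    # The IEC names these units KiB/MiB ("kibibyte"/"mebibyte").
    KiB = 2**10          # 1024 bytes
    MiB = 2**20          # 1,048,576 bytes
    print(MiB // KiB)    # 1024  -> "1024 KB = 1 MB" in this convention

    # Decimal (SI) convention: powers of 10, used e.g. on disk labels.
    kB = 10**3
    MB = 10**6
    print(MB // kB)      # 1000

    # The gap between the conventions at the mega scale is about 4.9%:
    print(MiB / MB)      # 1.048576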
I need a collection of sample images to train a Haar-based classifier for face detection. I read that a ratio of 2 negative examples for each positive example is acceptable. I searched around the web and found many databases containing positive examples to train my classifier (that is, images that contain faces); however, I can't find any database with negative examples. Right now, I have over 18000 positive examples. Where can I find 2000 or more negative examples?
Use http://tutorial-haartraining.googlecode.com/svn/trunk/data/negatives/ or any other image set that contains none of the objects you want to recognize.

Note: the number of samples you mention is quite large; you don't need that many to obtain high accuracy.
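For what it's worth, OpenCV's cascade trainer consumes negatives as a plain background file: one image path per line. A small sketch for generating it, assuming the downloaded negatives sit in a local negatives/ folder (the folder and file names here are my own choices):

    import os

    neg_dir = "negatives"                    # assumed folder of negative images
    exts = (".jpg", ".jpeg", ".png", ".bmp")

    # Write one relative image path per line; this file is then
    # passed to opencv_traincascade via its -bg option.
    with open("bg.txt", "w") as f:
        for name in sorted(os.listdir(neg_dir)):
            if name.lower().endswith(exts):
                f.write(os.path.join(neg_dir, name) + "\n")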
Let's say I have a dual-core CPU with a speed of 2.7 GHz. Does the 2.7 stand for the sum of the speeds of the cores, or the speed of each individual core?
It means that each CPU core runs 2.7 billion cycles per second. This has a lot less meaning than it used to, as the amount of "work" that is completed each cycle varies quite a bit (due to considerations like caching, pipelining, hyperthreading, memory access times, and so on).
If you want to know how fast a processor is, it is much more advisable to look at benchmarks related to the kind of tasks you are trying to accomplish with it than to look at the clock speed. Consider: a dual-core 2.4 GHz Core i5 is much, much faster than a 2.4 GHz Pentium D (also dual-core).
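As a crude illustration of why clock speed alone predicts little, here is a toy single-core micro-benchmark (a sketch only; real comparisons should use proper benchmark suites). Two CPUs at the same GHz can post very different numbers on it:

    import time

    def benchmark(n=10_000_000):
        # Time a simple arithmetic loop on a single core.
        start = time.perf_counter()
        total = 0
        for i in range(n):
            total += i * i
        elapsed = time.perf_counter() - start
        # Iterations per second -- not cycles per second: each iteration
        # costs many cycles, and that cost differs between CPU designs.
        print(f"{n / elapsed:,.0f} iterations/second")

    benchmark()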