Why are linear layers used in Binary Classification with Deep Learning? [closed] - machine-learning

In many examples of binary classification with deep learning, why are linear layers used? I've been searching the internet for information on why linear layers are used,
e.g.
https://github.com/StatsGary/PyTorch_Tutorials/blob/main/01_MLP_Thyroid_Classifier/PyTorch_Binary_From_Scratch.py
https://hutsons-hacks.info/building-a-pytorch-binary-classification-multi-layer-perceptron-from-the-ground-up

A linear layer is just another (slightly mathematically imprecise) name for a fully connected layer: the most standard, classic, and in some sense most powerful building block of neural networks. Networks built purely from fully connected layers (with non-linear activations between them) are universal approximators, and thus a good starting point for any sort of investigation.
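For illustration, here is a minimal sketch (my own toy example, not the code from the linked tutorials; the layer sizes are arbitrary) of a binary classifier built only from linear, i.e. fully connected, layers in PyTorch:

```python
import torch
import torch.nn as nn

# Two fully connected (Linear) layers with a non-linearity in between.
model = nn.Sequential(
    nn.Linear(20, 64),   # 20 input features -> 64 hidden units
    nn.ReLU(),           # non-linear activation between the linear layers
    nn.Linear(64, 1),    # 64 hidden units -> a single logit for the positive class
)

x = torch.randn(8, 20)                        # dummy batch of 8 samples
target = torch.randint(0, 2, (8, 1)).float()  # dummy 0/1 labels
loss = nn.BCEWithLogitsLoss()(model(x), target)
```

Each nn.Linear is just a matrix multiplication plus bias; stacking them with non-linearities between is what gives the universal-approximator behaviour described above.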

Related

What is MFCC simply? [closed]

I am new to music genre recognition and I am trying to do a project that classifies which genre a given music clip belongs to (I am using GTZAN).
I came across some open-source code on Kaggle and I have used its preprocessing
(link)
I saw them using MFCC, but I need to understand why they use these values (for example, why 13 coefficients?).
Moreover, I need to understand what MFCC is. I have basic knowledge of physics, but it is very difficult to understand what it represents and why they chose these values (I don't need the broad physics behind it, just as simple as possible, please).
Another question:
MFCC image example
For example, the x axis here represents time, but what do the squares, or the coefficients, represent on the y axis?
I have tried searching the internet, but there is a lot of physics and music theory behind it; I need a simple explanation.
Thanks.
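For reference, a minimal sketch of extracting 13 MFCCs per frame (assuming librosa is the library used in the linked notebook; the filename is a placeholder):

```python
import librosa

y, sr = librosa.load("clip.wav")                     # audio samples and sample rate
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, number_of_frames)
# Each column corresponds to one short time frame (the x axis of the image);
# each of the 13 rows (the y axis) is one coefficient summarising the spectral
# shape of that frame, with lower-order coefficients capturing the coarse envelope.
print(mfccs.shape)
```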

what's the meaning of ^ in this function [closed]

Supervised learning is divided into two processes, learning and prediction, which are carried out by the learning system and the prediction system. In the learning process, the learning system uses the given training data set to obtain a model through learning or training, which is expressed as a conditional probability distribution or a decision function.
I just want to know why a " ^ " is added on top of "f(x)" or "P(Y|X)".
In statistics, that symbol (the "hat") denotes an estimator: the hatted f(x) and P(Y|X) are the estimates of the unknown true decision function and conditional distribution, learned from the training data.
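As a standard textbook illustration of the hat convention (my own example, not from the quoted text):

```latex
% The hat marks a quantity estimated from data, as opposed to the unknown true one.
% For example, the sample mean is an estimator of the true mean \mu:
\[
  \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i
\]
% Likewise, \hat{f}(x) and \hat{P}(Y \mid X) are the decision function and
% conditional distribution learned by the system, approximating the true
% f(x) and P(Y \mid X).
```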

Can we predict y when y value is not numeric? [closed]

I am using a support vector regressor. I want to predict personality as shown in the screenshot. Is it possible to predict when y is in string format? I used a one-hot encoder, but it's not working.
This is not a regression task, but classification. "Not working" is not very informative; normally you'd just map classes to integers. Either sklearn.preprocessing.LabelEncoder, sklearn.preprocessing.label_binarize().argmax(axis=1), pandas.factorize() or a manual mapping should get the job done.
Worth noting that support vector machines don't handle multiclass problems natively, so you may run into trouble depending on the exact model you use. Recent sklearn versions handle it automatically for models like sklearn.svm.LinearSVC, building N binary (one-vs-rest) classifiers under the hood.
I'd also recommend getting acquainted with a more elegant way of ensembling SVMs for multiclass problems, using sklearn.multiclass.OutputCodeClassifier().
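A minimal sketch (with made-up toy features and class labels) of the LabelEncoder route with a linear SVM in scikit-learn:

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import LinearSVC

# Toy data: numeric features and string targets (hypothetical personality labels).
X = [[5.1, 3.5], [4.9, 3.0], [6.2, 3.4], [5.9, 3.0], [5.5, 2.8], [6.0, 3.1]]
y_str = ["introvert", "extrovert", "ambivert", "introvert", "extrovert", "ambivert"]

le = LabelEncoder()
y = le.fit_transform(y_str)   # map class strings to integers 0..K-1

clf = LinearSVC().fit(X, y)   # one-vs-rest binary classifiers under the hood

pred = clf.predict([[5.0, 3.2]])
print(le.inverse_transform(pred))  # map the integer prediction back to a string
```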

Any advice for Beginner Programmer studying Deep Learning? [closed]

Thanks for making it this far on my post!
I am studying engineering, yet have a passion for programming and wish to apply computer science knowledge in my own research.
My question pertains to any resources this community has available and any advice you are all willing to give about getting started in this broad field.
I'm mainly confused about 'neural networks' in relation to Deep Learning, as well as the implementation of algorithms.
I have slight Python and R knowledge.
Note: one of the other Stack Exchange sites is probably a better fit for this question.
In any case, for ML you can do just fine with basic Python/R. Most of the research and work done on ML is currently (2018) based on TensorFlow and similar frameworks. You don't really need a strong programming background to set up and train models with these frameworks (although it certainly helps). Actually, math/statistics will help you more, especially if you want to get to the bottom of it (i.e. reading the latest articles/papers, etc.).
Mainly I’m confused about ‘neural networks’ in relation to Deep Learning
"Deep Learning" is basically taking advantage of modern computing capabilities to train complex models (e.g. neural networks with many hidden layers) which a few years ago (e.g. 10 years ago) were unfeasible. Informally speaking, the more complex your network is, the more interesting are the things that it can learn.
as well as implementation of algorithms.
Typically, you will use an existing framework -- you won't implement the algorithms yourself. Although, of course, implementing a multilayer perceptron yourself is always a good and fun learning exercise.
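For example, a tiny model in a high-level framework (Keras here, purely as an illustration) takes only a few lines, with none of the underlying optimisation code written by hand:

```python
from tensorflow import keras

# Define and compile a small network; training is a single call.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=5)   # X_train / y_train are your own data
```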

Probability basics for machine learning [closed]

I have recently started studying machine learning and found that I need to refresh probability basics such as conditional probability, Bayes' theorem, etc.
I am looking for online resources where I can quickly brush up on probability concepts with respect to machine learning.
The online resources I have stumbled upon are either very basic or too advanced.
This might help: http://www.cs.cmu.edu/~tom/10601_fall2012/lectures.shtml
The above link is from Tom Mitchell's Machine Learning class at CMU. Videos are available too. You will gain a very good understanding of ML concepts if you go through all the videos (or just the first few for conditional probability, Bayes' theorem, etc.).
The notions of conditional probability and Bayes' theorem are very basic in themselves. It doesn't get any more basic than that in probabilistic modeling, you might say. Which suggests that you didn't look too closely at what you found, or didn't really search at all.
Off the top of my head, I can name two resources: first, any Coursera course dealing with probability or machine learning (see AI, Statistics One, or Probabilistic Graphical Models) contains these preliminaries. Second, there are a number of books on statistics freely available online, one example being Information Theory, Inference, and Learning Algorithms.
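For completeness, the two preliminaries mentioned above are just the standard definitions (not tied to any particular resource):

```latex
% Conditional probability and Bayes' theorem:
\[
  P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
  P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
\]
```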
