I've got a physical problem: to construct a product, 10 output parameters (width, length, material, etc.) are determined based on 10 input parameters (performance, temperature, capacity, etc.). The output parameters obviously depend on the input parameters, but I don't know how. For example, output parameter O1 could depend on input parameters I1, I2 and I3.
I've got the data of, let's say, 30k products with their input/output parameters. The database looks like this:
----------------------------------------------
| Product| I1 | I2 | I3 | ... | O1 | O2 | O3 |
----------------------------------------------
| Prod A | 1.2| 2.3| 4.2| ... | 5.3| 6.2| 1.2|
----------------------------------------------
| Prod B | 2.3| 4.1| 1.2| ... | 8.2| 5.2| 5.0|
----------------------------------------------
| Prod C | 6.3| 3.7| 9.1| ... | 3.1| 4.1| 7.7|
----------------------------------------------
| ... | |
----------------------------------------------
So what I need to do is to find output parameters O1-O10 based on input parameters I1-I10.
First question: if I get it right, this is a regression problem; based on some input values I want to find some output values (somewhere in the data there is a function/formula to determine the correct values). Is this correct?
My idea is to use/train a neural network (using Keras with TensorFlow as backend).
What would such a neural network look like? What is the best practice?
This is what I have so far:
An input layer with 10 inputs, two fully connected hidden layers with 100 neurons each, and an output layer with 10 outputs. In Keras this looks like this:
from keras.models import Sequential
from keras.layers import Dense

def baseline_model(self, callback):
    model = Sequential()
    model.add(Dense(100, input_dim=10, activation="relu"))
    model.add(Dense(100, activation="relu"))
    model.add(Dense(10))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=["accuracy"])
    model.fit(input_train, output_train, batch_size=5, epochs=2000, verbose=2,
              callbacks=[callback], shuffle=True, validation_data=(input_val, output_val))
    scores = model.evaluate(input_val, output_val, verbose=1)
    print("Scores:", scores)
Of course the model does not work as expected, that's why I'm asking for help... the training fails:
Epoch 1999/2000
7s - loss: 47634520366153.6016 - acc: 0.0000e+00 - val_loss: 9585392308285.4395 - val_acc: 0.0000e+00
Any suggestions what I should change? I thought about using "sigmoid" as activation and normalizing the data to [0, 1].
Thanks for any advice
If I get it right, this is a regression problem; based on some input values I want to find some output values
Yes, I think you are right.
What would such a neural network look like? What is the best practice?
That's a very broad question. I think you should split your data into a training and a validation set, start from the simplest network (maybe no hidden layer, or only one hidden layer) and then make it more and more complicated (add more layers and hidden units) while your validation error keeps decreasing. When your net becomes quite deep it's a good idea to add Batch Normalization layers between your dense layers. You can also look at residual connections, but I am not sure you really need them.
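For illustration only, a minimal Keras sketch of that progression (the layer sizes and the helper name here are arbitrary placeholders, not tuned values):

import keras
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

def build_model(n_hidden=1, units=50):
    # start small; add hidden layers/units only while the validation error keeps dropping
    model = Sequential()
    model.add(Dense(units, input_dim=10, activation="relu"))
    for _ in range(n_hidden - 1):
        model.add(BatchNormalization())  # useful once the net gets deeper
        model.add(Dense(units, activation="relu"))
    model.add(Dense(10))                 # linear outputs for regression
    model.compile(loss="mean_squared_error", optimizer="adam")
    return model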
Any suggestions what I should change? I thought about using "sigmoid" as activation and normalizing the data to [0, 1].
The activation function type depends on your output type. For categorical outputs sigmoid/softmax is probably a good choice; a linear activation should be OK for floating-point numbers.
Also, if one of your inputs is categorical (material type, for example), it's probably better to split it into several binary inputs (one-hot encoding).
It's almost always a good idea to normalize your inputs and outputs. Non-normalized data can really hurt the training process.
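For example, one possible sketch with scikit-learn scalers, reusing the variable names from your code (this is just one way to do it):

from sklearn.preprocessing import MinMaxScaler

# fit the scalers on the training split only, then reuse them for the validation data
in_scaler, out_scaler = MinMaxScaler(), MinMaxScaler()
input_train = in_scaler.fit_transform(input_train)
output_train = out_scaler.fit_transform(output_train)
input_val = in_scaler.transform(input_val)
output_val = out_scaler.transform(output_val)
# predictions can later be mapped back to the original units:
# real_outputs = out_scaler.inverse_transform(model.predict(input_val))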
Plot the error and check how it changes over time. A loss of 47634520366153.6016 is really big, but by itself it doesn't tell us much about the optimization. If it decreases, maybe you can increase the learning rate. If it grows, try to decrease the learning rate or try another optimization algorithm.
Check your gradients; if they are too big, try to use gradient clipping.
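In Keras, clipping can be requested through the optimizer; a sketch (the threshold 1.0 is just an example value, and model refers to the model from your question):

from keras.optimizers import Adam

# clip gradients whose L2 norm exceeds 1.0 before each update step
model.compile(loss="mean_squared_error", optimizer=Adam(clipnorm=1.0))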
Also try to start from a simple model, maybe linear regression.
Strictly speaking, neural network debugging is a big and complicated field, and I am not sure it's appropriate for a Stack Overflow discussion.
PS Sorry for my English
As @Dark_davier has already said, this is a field where you need some experience. It is not really possible to answer without actually doing some tests. But as a guideline, be careful with the size of your network: your network has roughly 10^4 parameters (a bit more, actually), and you said you have "only" 30k observations, so there is a high probability of overfitting. You need to be careful, and you may need more sophisticated techniques to avoid it (first cross-validation to check, then possibly regularisation). But this requires some experience in NN optimisation...
Related
Please see this example, as the project I am working on is quite similar but with ~8 regressors instead of 2, and I need to understand how each regressor impacts the forecast model: https://towardsdatascience.com/forecast-model-tuning-with-additional-regressors-in-prophet-ffcbf1777dda
Given a scenario like the one above with 2 additional regressors: how can we understand the impact of each regressor on the 'yhat' forecast (e.g. 'temp' has a 30% impact on the yhat prediction and 'weathersit' has a 70% impact, or something similar)? I have tried using "from fbprophet.utilities import regressor_coefficients" to see the regressor coefficients, but I'm not sure if that's the right approach.
Additionally, how do I interpret the regressor columns in the 'forecast' dataframe from '.predict()'?
Thanks for your help.
After running regressor_coefficients(model), you will get the center and coef of each additive regressor. For example:
regressor_coefficients(my_model)
|   | regressor | regressor_mode | center    | coef_lower | coef       | coef_upper |
|---|-----------|----------------|-----------|------------|------------|------------|
| 0 | temperat  | additive       | 6.346457  | -51.124462 | -51.124462 | -51.124462 |
| 1 | humidity  | additive       | 66.665910 | 7.736604   | 7.736604   | 7.736604   |
So the results from your prediction should be (for additive seasonal trends):
yhat = trend + yearly + extra_regressors_additive
where
extra_regressors_additive = (temperature_data - temperature_center) * temperature_coef
                          + (humidity_data - humidity_center) * humidity_coef
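As a rough sanity check of that formula (hedged sketch: the regressor column names in the future dataframe and the my_model/future variables are assumptions based on the example above):

from fbprophet.utilities import regressor_coefficients

coefs = regressor_coefficients(my_model).set_index("regressor")
# sum each regressor's contribution: (value - center) * coef
extra = sum((future[name] - row["center"]) * row["coef"] for name, row in coefs.iterrows())
# 'extra' should match forecast["extra_regressors_additive"], and each individual term
# should match the forecast column named after that regressor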
You can find more details about the regressors in the "forecast" dataframe: look for the columns that carry your regressor names. If you feel that fbprophet is underestimating the impact of your regressor, you can declare your regressor input values as binary instead. You can also cluster your regressor input values if binary values are not appropriate. If you still feel that your regressor is underestimated, have a look at the historical data of your regressor: does the y value increase on the same day your regressor behaviour changes? If not, then you need to fix that first.
You can also refer to the section "Coefficients of additional regressors" of this website: https://facebook.github.io/prophet/docs/seasonality,_holiday_effects,_and_regressors.html#additional-regressors
I'm very new to deep learning models and I am trying to train a multiple time series model using LSTM with Keras Sequential. There are 25 observations per year for 50 years = 1250 samples, so I am not sure whether LSTM is even usable for such a small dataset. However, I have thousands of feature variables, not including time lags. I'm trying to predict a sequence of the next 25 time steps of data. The data is normalized between 0 and 1. My problem is that, despite trying many obvious adjustments, I cannot get the LSTM validation loss anywhere close to the training loss (it is overfitting dramatically, I think).
I have tried adjusting the number of nodes per hidden layer (25-375), the number of hidden layers (1-3), dropout (0.2-0.8), batch_size (25-375), and the train/test split (90%:10% to 50%:50%). Nothing really makes much of a difference in the validation loss/training loss disparity.
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dropout, Dense

# SPLIT INTO TRAIN AND TEST SETS
# 25 observations per year; Allocate 5 years (2014-2018) for Testing
n_test = 5 * 25
test = values[:n_test, :]
train = values[n_test:, :]
# split into input and outputs
train_X, train_y = train[:, :-25], train[:, -25:]
test_X, test_y = test[:, :-25], test[:, -25:]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 5, newdf.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 5, newdf.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# design network
model = Sequential()
model.add(Masking(mask_value=-99, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(LSTM(375, return_sequences=True))
model.add(Dropout(0.8))
model.add(LSTM(125, return_sequences=True))
model.add(Dropout(0.8))
model.add(LSTM(25))
model.add(Dense(25))
model.compile(loss='mse', optimizer='adam')
# fit network
history = model.fit(train_X, train_y, epochs=20, batch_size=25, validation_data=(test_X, test_y), verbose=2, shuffle=False)
Epoch 19/20
14s - loss: 0.0512 - val_loss: 188.9568
Epoch 20/20
14s - loss: 0.0510 - val_loss: 188.9537
I assume I must be doing something obviously wrong, but I can't see it since I'm a newbie. I am hoping to either get some useful validation loss (compared to the training loss), or to learn that my data observations are simply not large enough for useful LSTM modeling. Any help or suggestions are much appreciated, thanks!
Overfitting
In general, if you're seeing much higher validation loss than training loss, then it's a sign that your model is overfitting - it learns "superstitions" i.e. patterns that accidentally happened to be true in your training data but don't have a basis in reality, and thus aren't true in your validation data.
It's generally a sign that you have a "too powerful" model, too many parameters that are capable of memorizing the limited amount of training data. In your particular model you're trying to learn almost a million parameters (try printing model.summary()) from a thousand datapoints - that's not reasonable, learning can extract/compress information from data, not create it out of thin air.
What's the expected result?
The first question you should ask (and answer!) before building a model is about the expected accuracy. You should have a reasonable lower bound (what's a trivial baseline? For time series prediction, e.g. linear regression might be one) and an upper bound (what could an expert human predict given the same input data and nothing else?).
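For the lower bound, a minimal baseline sketch with scikit-learn, reusing the train/test arrays from the question (flattening the 3-D inputs back to 2-D is an assumption about their shape):

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# trivial baseline: plain linear regression on the flattened inputs
baseline = LinearRegression().fit(train_X.reshape(len(train_X), -1), train_y)
pred = baseline.predict(test_X.reshape(len(test_X), -1))
print("linear baseline MSE:", mean_squared_error(test_y, pred))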
Much depends on the nature of the problem. You really have to ask: is this information sufficient to get a good answer? For many real-life problems with time series prediction, the answer is no - the future state of such a system depends on many variables that can't be determined by simply looking at historical measurements; to reasonably predict the next value, you need to bring in lots of external data other than the historical prices. There's a classic quote by Tukey: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."
I'm training a multilayer perceptron classifier. Here's my training set. The features are in sparse vector format.
df_train.show(10,False)
+------+---------------------------+
|target|features |
+------+---------------------------+
|1.0 |(5,[0,1],[164.0,520.0]) |
|1.0 |[519.0,2723.0,0.0,3.0,4.0] |
|1.0 |(5,[0,1],[2868.0,928.0]) |
|0.0 |(5,[0,1],[57.0,2715.0]) |
|1.0 |[1241.0,2104.0,0.0,0.0,2.0]|
|1.0 |[3365.0,217.0,0.0,0.0,2.0] |
|1.0 |[60.0,1528.0,4.0,8.0,7.0] |
|1.0 |[396.0,3810.0,0.0,0.0,2.0] |
|1.0 |(5,[0,1],[905.0,2476.0]) |
|1.0 |(5,[0,1],[905.0,1246.0]) |
+------+---------------------------+
First of all, I want to evaluate my estimator with a hold-out method; here's my code:
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
layers = [4, 5, 4, 3]
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=128, seed=1234)
param = trainer.setParams(featuresCol = "features",labelCol="target")
train,test = df_train.randomSplit([0.8, 0.2])
model = trainer.fit(train)
result = model.transform(test)
evaluator = MulticlassClassificationEvaluator(
    labelCol="target", predictionCol="prediction", metricName="accuracy")
print("Test set accuracy = " + str(evaluator.evaluate(result)))
But it fails with the error: Failed to execute user defined function($anonfun$1: (vector) => double). Is this because I have sparse vectors in my features? What can I do?
And for the cross-validation part, I coded as following:
X=df_train.select("features").collect()
y=df_train.select("target").collect()
from sklearn.model_selection import cross_val_score,KFold
k_fold = KFold(n_splits=10, random_state=None, shuffle=False)
print(cross_val_score(trainer, X, y, cv=k_fold, n_jobs=1,scoring="accuracy"))
And I get: it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method.
But when I look up the documentation, I can't find a get_params method. Can someone help me with this?
There are a number of issues with your question...
Focusing on the second part (it is actually a separate question), the error message's claim, i.e. that
it does not seem to be a scikit-learn estimator
is indeed correct, since you are using the MultilayerPerceptronClassifier from PySpark ML as trainer in the scikit-learn method cross_val_score (they are not compatible).
Additionally, your 2nd code snippet is not at all PySpark-like, but scikit-learn-like: while you correctly use the input in your 1st snippet (a single 2-column dataframe, with the features in one column and the labels/targets in the other), you seem to have forgotten this lesson in your second snippet, where you build separate dataframes X and y for input to your classifier (which is what scikit-learn expects, but not PySpark). See the CrossValidator docs for a straightforward example of the correct usage.
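A rough sketch of what PySpark-native cross-validation could look like here (the parameter grid is just a placeholder assumption):

from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol="target",
                                              predictionCol="prediction",
                                              metricName="accuracy")
grid = ParamGridBuilder().addGrid(trainer.maxIter, [50, 100]).build()  # example grid
cv = CrossValidator(estimator=trainer, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=10)
cv_model = cv.fit(df_train)   # single (features, target) dataframe, no separate X / y
print(cv_model.avgMetrics)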
From a more general viewpoint: if your data fit in the main memory (i.e. you can collect them as you do for your CV), there is absolutely no reason to bother with Spark ML, and you would be far better off with scikit-learn.
--
Regarding the 1st part: the data you have shown seem to have only 2 labels 0.0/1.0; I cannot be sure (since you show only 10 records), but if indeed you have only 2 labels you should not use MulticlassClassificationEvaluator but BinaryClassificationEvaluator - which however, does not have a metricName="accuracy" option... [EDIT: against all odds, seems that MulticlassClassificationEvaluator indeed can work for binary classification, too, and it is a handy way to get the accuracy, which is not provided with its binary counterpart!]
But this is not why you get this error (which, BTW, has nothing to do with the evaluator - you get it with result.show() or result.collect()); the reason for the error is that the number of nodes in your first layer (layers[0]) is 4, while your input vectors are evidently 5-dimensional. From the docs:
Number of inputs has to be equal to the size of feature vectors
Changing layers[0] to 5 resolves the issue (not shown). Similarly, if you indeed have only 2 classes, you should also change layers[-1] to 2 (you'll not get an error if you don't, but it won't make much sense from a classification point of view).
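In other words, a sketch of the corrected setup (continuing from the snippets above, not run here):

from pyspark.ml.classification import MultilayerPerceptronClassifier

# 5 input features, the same two hidden layers, 2 output classes
layers = [5, 5, 4, 2]
trainer = MultilayerPerceptronClassifier(maxIter=100, layers=layers, blockSize=128, seed=1234,
                                         featuresCol="features", labelCol="target")
model = trainer.fit(train)
result = model.transform(test)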
I'm trying to train a CNN to categorize text by topic. When I use binary cross-entropy I get ~80% accuracy, with categorical cross-entropy I get ~50% accuracy.
I don't understand why this is. It's a multiclass problem, doesn't that mean that I have to use categorical cross-entropy and that the results with binary cross-entropy are meaningless?
model.add(embedding_layer)
model.add(Dropout(0.25))
# convolution layers
model.add(Conv1D(nb_filter=32,
                 filter_length=4,
                 border_mode='valid',
                 activation='relu'))
model.add(MaxPooling1D(pool_length=2))
# dense layers
model.add(Flatten())
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
# output layer
model.add(Dense(len(class_id_index)))
model.add(Activation('softmax'))
Then I compile it like this, using categorical_crossentropy as the loss function:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
or
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Intuitively it makes sense why I'd want to use categorical cross-entropy; I don't understand why I get good results with binary and poor results with categorical.
The reason for this apparent performance discrepancy between categorical & binary cross entropy is what user xtof54 has already reported in his answer below, i.e.:
the accuracy computed with the Keras method evaluate is just plain
wrong when using binary_crossentropy with more than 2 labels
I would like to elaborate more on this, demonstrate the actual underlying issue, explain it, and offer a remedy.
This behavior is not a bug; the underlying reason is a rather subtle & undocumented issue in how Keras actually guesses which accuracy to use, depending on the loss function you have selected, when you include simply metrics=['accuracy'] in your model compilation. In other words, while your first compilation option
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
is valid, your second one:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
will not produce what you expect, but the reason is not the use of binary cross entropy (which, at least in principle, is an absolutely valid loss function).
Why is that? If you check the metrics source code, Keras does not define a single accuracy metric, but several different ones, among them binary_accuracy and categorical_accuracy. What happens under the hood is that, since you have selected binary cross entropy as your loss function and have not specified a particular accuracy metric, Keras (wrongly...) infers that you are interested in the binary_accuracy, and this is what it returns - while in fact you are interested in the categorical_accuracy.
Let's verify that this is the case, using the MNIST CNN example in Keras, with the following modification:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # WRONG way
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=2,  # only 2 epochs, for demonstration purposes
          verbose=1,
          validation_data=(x_test, y_test))
# Keras reported accuracy:
score = model.evaluate(x_test, y_test, verbose=0)
score[1]
# 0.9975801164627075
# Actual accuracy calculated manually:
import numpy as np
y_pred = model.predict(x_test)
acc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000
acc
# 0.98780000000000001
score[1]==acc
# False
To remedy this, i.e. to use indeed binary cross entropy as your loss function (as I said, nothing wrong with this, at least in principle) while still getting the categorical accuracy required by the problem at hand, you should ask explicitly for categorical_accuracy in the model compilation as follows:
from keras.metrics import categorical_accuracy
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[categorical_accuracy])
In the MNIST example, after training, scoring, and predicting the test set as I show above, the two metrics now are the same, as they should be:
# Keras reported accuracy:
score = model.evaluate(x_test, y_test, verbose=0)
score[1]
# 0.98580000000000001
# Actual accuracy calculated manually:
y_pred = model.predict(x_test)
acc = sum([np.argmax(y_test[i])==np.argmax(y_pred[i]) for i in range(10000)])/10000
acc
# 0.98580000000000001
score[1]==acc
# True
System setup:
Python version 3.5.3
Tensorflow version 1.2.1
Keras version 2.0.4
UPDATE: After my post, I discovered that this issue had already been identified in this answer.
It all depends on the type of classification problem you are dealing with. There are three main categories:
binary classification (two target classes),
multi-class classification (more than two exclusive targets),
multi-label classification (more than two non-exclusive targets), in which multiple target classes can be active at the same time.
In the first case, binary cross-entropy should be used and targets should be encoded as one-hot vectors.
In the second case, categorical cross-entropy should be used and targets should be encoded as one-hot vectors.
In the last case, binary cross-entropy should be used and targets should be encoded as binary indicator (multi-hot) vectors. Each output neuron (or unit) is treated as a separate random binary variable, and the loss for the entire vector of outputs is the sum of the losses of the single binary variables (the joint likelihood is the product of the per-unit likelihoods). So it is the sum of the binary cross-entropies of the individual output units.
The binary cross-entropy is defined as

    BCE = -(1/N) * sum_{i=1..N} [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ]

and categorical cross-entropy is defined as

    CCE = -(1/N) * sum_{i=1..N} sum_{c=1..C} y_{i,c} * log(p_{i,c})

where y is the target, p is the predicted probability, N is the number of samples, and c is the index running over the number of classes C.
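For concreteness, an illustrative (not authoritative) Keras sketch of the second and third cases; the layer sizes, input dimension, and metric choices are placeholder assumptions:

from keras.models import Sequential
from keras.layers import Dense

C = 10            # number of classes (example)
n_features = 100  # input dimension (example)

# multi-class (exclusive targets): softmax output + categorical cross-entropy, one-hot targets
multiclass = Sequential([Dense(64, activation="relu", input_dim=n_features),
                         Dense(C, activation="softmax")])
multiclass.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# multi-label (non-exclusive targets): sigmoid outputs + binary cross-entropy, multi-hot targets
multilabel = Sequential([Dense(64, activation="relu", input_dim=n_features),
                         Dense(C, activation="sigmoid")])
multilabel.compile(loss="binary_crossentropy", optimizer="adam", metrics=["binary_accuracy"])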
I came across an "inverted" issue - I was getting good results with categorical_crossentropy (with 2 classes) and poor results with binary_crossentropy. It seems the problem was the wrong activation function. The correct settings were:
for binary_crossentropy: sigmoid activation, scalar target
for categorical_crossentropy: softmax activation, one-hot encoded target
It's a really interesting case. Actually, in your setup, the following statement is true:
binary_crossentropy = len(class_id_index) * categorical_crossentropy
This means that, up to a constant multiplicative factor, your losses are equivalent. The weird behaviour that you are observing during the training phase might be an example of the following phenomenon:
At the beginning, the most frequent class dominates the loss - so the network learns to predict mostly this class for every example.
After it has learnt the most frequent pattern, it starts discriminating among the less frequent classes. But when you are using adam, the learning rate has a much smaller value than it had at the beginning of training (it's because of the nature of this optimizer). It makes training slower and makes it less likely that your network will, e.g., leave a poor local minimum.
That's why this constant factor might help in the case of binary_crossentropy: after many epochs, the learning rate value is greater than in the categorical_crossentropy case. I usually restart training (and the learning phase) a few times when I notice such behaviour, and/or adjust the class weights using the following pattern:
class_weight = 1 / class_frequency
This makes the loss from the less frequent classes balance the influence of the dominant class loss at the beginning of training and in the later part of the optimization process.
EDIT:
Actually, I checked that even though, mathematically,
binary_crossentropy = len(class_id_index) * categorical_crossentropy
should hold, in the case of Keras it is not true, because Keras automatically normalizes all outputs to sum up to 1. This is the actual reason behind this weird behaviour, since in the multi-class case such normalization harms training.
After commenting on @Marcin's answer, I more carefully checked one of my students' code, where I found the same weird behaviour, even after only 2 epochs! (So @Marcin's explanation was not very likely in my case.)
And I found that the answer is actually very simple: the accuracy computed with the Keras method evaluate is just plain wrong when using binary_crossentropy with more than 2 labels. You can check that by recomputing the accuracy yourself (first call the Keras method "predict" and then compute the number of correct answers returned by predict): you get the true accuracy, which is much lower than the Keras "evaluate" one.
A simple example under a multi-class setting to illustrate:
Suppose you have 4 classes (one-hot encoded) and below is just one prediction:
true_label = [0,1,0,0]
predicted_label = [0,0,1,0]
When using categorical_crossentropy, the accuracy is just 0: it only cares about whether you get the concerned class right.
However, when using binary_crossentropy, the accuracy is calculated over all classes; it would be 50% for this prediction. The final result will be the mean of the individual accuracies in both cases.
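Numerically, a plain NumPy sketch mirroring how the two accuracy metrics behave on this single prediction:

import numpy as np

true_label = np.array([0, 1, 0, 0])
predicted  = np.array([0, 0, 1, 0])

cat_acc = float(np.argmax(true_label) == np.argmax(predicted))  # 0.0 - the arg-max class is wrong
bin_acc = np.mean((predicted > 0.5) == (true_label > 0.5))      # 0.5 - 2 of the 4 entries match
print(cat_acc, bin_acc)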
It is recommended to use categorical_crossentropy for multi-class problems (classes are mutually exclusive) but binary_crossentropy for multi-label problems.
As it is a multi-class problem, you have to use categorical_crossentropy; binary cross-entropy will produce bogus results and will most likely only evaluate the first two classes.
50% for a multi-class problem can be quite good, depending on the number of classes. If you have n classes, then 100/n percent is the minimum performance you can get by outputting a random class.
You are passing a target array of shape (x-dim, y-dim) while using categorical_crossentropy as the loss. categorical_crossentropy expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via:
from keras.utils import to_categorical
y_binary = to_categorical(y_int)
Alternatively, you can use the loss function sparse_categorical_crossentropy instead, which does expect integer targets.
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
When using the categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index corresponding to the class of the sample).
Take a look at the equation and you will find that binary cross-entropy not only punishes label = 1, predicted = 0, but also label = 0, predicted = 1.
However, categorical cross-entropy only punishes the case label = 1 but predicted close to 0, i.e. only the prediction for the positive class matters. That's why it assumes there is only ONE positive label.
The main point is answered satisfactorily by the brilliant piece of sleuthing by desernaut. However, there are occasions when BCE (binary cross-entropy) could give different results from CCE (categorical cross-entropy) and may be the preferred choice. While the rules of thumb shared above (which loss to select) work fine for 99% of the cases, I would like to add a few new dimensions to this discussion.
The OP had a softmax activation, which outputs a probability distribution as the predicted value. It is a multi-class problem. The preferred loss is categorical CE. Essentially this boils down to -ln(p), where 'p' is the predicted probability of the lone positive class in the sample. This means that the negative predictions don't play a role in calculating CE. This is by intention.
On a rare occasion, it may be needed to make the -ve voices count. This can be done by treating the above sample as a series of binary predictions. So if the expected vector is [1 0 0 0 0] and the predicted one is [0.1 0.5 0.1 0.1 0.2], this is further broken down into:
expected = [1,0], [0,1], [0,1], [0,1], [0,1]
predicted = [0.1, 0.9], [.5, .5], [.1, .9], [.1, .9], [.2, .8]
Now we proceed to compute 5 different cross-entropies - one for each of the above 5 expected/predicted combinations - and sum them up. Then:
CE = -[ ln(.1) + ln(0.5) + ln(0.9) + ln(0.9) + ln(0.8)]
The CE has a different scale but continues to be a measure of the difference between the expected and predicted values. The only difference is that in this scheme, the -ve values are also penalized/rewarded along with the +ve values. In case your problem is such that you are going to use the output probabilities (both +ve and -ves) instead of using the max() to predict just the 1 +ve label, then you may want to consider this version of CE.
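For the numbers above, a small NumPy check of the two schemes (just the arithmetic, not a Keras loss call):

import numpy as np

expected  = np.array([1, 0, 0, 0, 0])
predicted = np.array([0.1, 0.5, 0.1, 0.1, 0.2])

# categorical CE: only the lone positive class contributes
cat_ce = -np.log(predicted[expected == 1]).sum()            # -ln(0.1), about 2.30
# "series of binary predictions" CE: every output contributes
bin_ce = -(expected * np.log(predicted) +
           (1 - expected) * np.log(1 - predicted)).sum()    # about 3.43, as in the sum above
print(cat_ce, bin_ce)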
How about a multi-label situation where expected = [1 0 0 0 1]? Conventional approach is to use one sigmoid per output neuron instead of an overall softmax. This ensures that the output probabilities are independent of each other. So we get something like:
expected = [1 0 0 0 1]
predicted is = [0.1 0.5 0.1 0.1 0.9]
By definition, CE measures the difference between 2 probability distributions. But the above two lists are not probability distributions. Probability distributions should always add up to 1. So conventional solution is to use same loss approach as before - break the expected and predicted values into 5 individual probability distributions, proceed to calculate 5 cross entropies and sum them up. Then:
CE = -[ ln(.1) + ln(0.5) + ln(0.9) + ln(0.9) + ln(0.9)] = 3.3
The challenge happens when the number of classes may be very high - say a 1000 and there may be only couple of them present in each sample. So the expected is something like: [1,0,0,0,0,0,1,0,0,0.....990 zeroes]. The predicted could be something like: [.8, .1, .1, .1, .1, .1, .8, .1, .1, .1.....990 0.1's]
In this case the CE =
- [ ln(.8) + ln(.8) for the 2 +ve classes and 998 * ln(0.9) for the 998 -ve classes]
= 0.44 (for the +ve classes) + 105 (for the negative classes)
You can see how the -ve classes are beginning to create a nuisance value when calculating the loss. The voice of the +ve samples (which may be all that we care about) is getting drowned out. What do we do? We can't use categorical CE (the version where only +ve samples are considered in calculation). This is because, we are forced to break up the probability distributions into multiple binary probability distributions because otherwise it would not be a probability distribution in the first place. Once we break it into multiple binary probability distributions, we have no choice but to use binary CE and this of course gives weightage to -ve classes.
One option is to drown the voice of the -ve classes by a multiplier. So we multiply all -ve losses by a value gamma where gamma < 1. Say in above case, gamma can be .0001. Now the loss comes to:
= 0.44 (for the +ve classes) + 0.105 (for the negative classes)
The nuisance value has come down. A couple of years back, Facebook did that and much more in a paper, where they also multiplied the -ve losses by p raised to the power of x, where 'p' is the probability of the output being +ve and x is a constant > 1. This discounted the -ve losses even further, especially the ones where the model is already pretty confident (where 1-p is close to 1). This combined effect of down-weighting the negative class losses, with harsher down-weighting of the easily classified cases (which accounted for the majority of the -ve cases), worked beautifully for Facebook, and they called it focal loss.
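A bare-bones sketch of the simple gamma multiplier described above (not the full focal loss; the function and argument names are made up for illustration):

import numpy as np

def downweighted_binary_ce(y_true, y_pred, gamma=1e-4, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    pos_loss = -y_true * np.log(y_pred)              # losses for the +ve classes, kept as-is
    neg_loss = -(1 - y_true) * np.log(1 - y_pred)    # losses for the -ve classes
    return (pos_loss + gamma * neg_loss).sum()       # drown the voice of the -ve classes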
So, in response to the OP's question of whether binary CE makes any sense at all in his case, the answer is: it depends. In 99% of the cases the conventional rules of thumb work, but there could be occasions when these rules could be bent, or even broken, to suit the problem at hand.
For a more in-depth treatment, you can refer to: https://towardsdatascience.com/cross-entropy-classification-losses-no-math-few-stories-lots-of-intuition-d56f8c7f06b0
binary_crossentropy(y_target, y_predict) is not restricted to binary classification problems.
In the source code of binary_crossentropy(), TensorFlow's nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output) is actually used.
And, in the documentation, it says that:
Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time.
I am trying to map electrical signals (specifically EEG signals) to actions. I have the raw data from the EEG device; it has 14 channels, so for each training data instance I end up with a 14x128 matrix (14 channels, 128 samples, a 1-second window). Currently what I do is apply a Hamming window on each channel and then apply an FFT to classify using frequency. What I cannot wrap my head around is that an SVM (or other classification algorithm) expects a matrix of the following form:
Feature 1 | Feature 2 | Feature 3 | .... | Feature N | Class
but in the case of EEG each channel is the feature, yet instead of having a single value each channel has a vector of 128 values. What would be the best way to transform this matrix into a form that an SVM can understand? Do I just modify the 14x128 matrices, add a new class column, and append them one after the other? So for a 1-second record of the EEG signal I would end up with 128 pos/neg classes?
You almost certainly need some feature extraction prior to handing the raw data to the SVM. With temporal data like this, the important features are generally not represented well by individual point readings. Rather, they are captured by relationships over time.
I did some work about 10 years ago with SVMs on EEG data[1], and what we did at the time was split the data into windows, but then build autoregression models of each window. Our features for the classifiers were not the raw sensor readings, but the AR coefficients for each channel. This gives you much more useful information for the classifier to use.
I haven't kept working in that area, and I can't say for sure what people are doing now 10+ years later, but certainly I would expect the state of the art to still involve some sort of feature extraction.
[1] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1214704 (pdf available from my personal page http://www.ru.is/kennarar/deong/pubs/ieee_eeg_final.pdf)
Edit: In light of the discussion in the comments, I'm editing the answer to provide a bit more detail. Signal processing is not my strongest area, so if I'm completely mistaking your description of what it is you're doing, feel free to ignore.
Yes, the answer to the question you asked is that when you have multiple channels of data and so your instance is a matrix, you just concatenate the rows into a row vector. So if for each training instance, you're getting a 14x128 matrix, you'd just convert that into a 1x1792 vector and then stick the class label on the end. Like
c1x1 | c1x2 | c1x3 | ... | c1x128 | c2x1 | c2x2 | ... | c14x127 | c14x128 | class
where cNxM = channel N, sample M. That would be the standard way to make a single feature vector out of a sort of feature matrix.
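In NumPy terms, that flattening could look like this (the array names, sizes, and random data are made up purely for illustration):

import numpy as np

n_trials = 100
X_raw = np.random.randn(n_trials, 14, 128)   # hypothetical stack of raw 14x128 windows
y = np.random.randint(0, 2, n_trials)        # hypothetical class labels, one per window

X_flat = X_raw.reshape(n_trials, -1)         # -> (n_trials, 1792), one row per training instance
dataset = np.column_stack([X_flat, y])       # optionally append the class as the last column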
However...read on to see why I think this is not what you really want to do.
I'm still not clear what it is you're describing. In particular, where does the 128 come from? I see two possibilities here. (A) is that you sample each of the 14 electrodes 128 times for each item you want to classify. This is what I'm calling the raw data. (B) is that you've already run the DFT and you've ended up with 128 coefficients per channel. I think (A) is what you mean, and that's what I assume here, but it's not entirely clear.
For classification, you need meaningful features. Features are just whatever you decide to make them. You could take each of the 14 sensors, compute the mean and variance of the 128 points, and use those as your features. In that case, your training instances would look like
mean_ch1 | var_ch1 | mean_ch2 | var_ch2 | ... | mean_ch14 | var_ch14 | class
For EEG classification, mean and variance aren't going to be very good though -- they're not likely to provide enough useful information to discriminate between the classes. That's what I mean by meaningful features. If you want to predict whether, for example, an invasive species will thrive in a lake, you might need to know the temperature. You could then pass the classifier the estimated velocity of every water molecule in the lake separately, but that's entirely the wrong level of detail, and it's really unlikely the classifier would learn anything. You need to give it the temperature already computed.
So in your case, you could instead take an FFT of each window of 128 points. That would give you some small number of non-zero coefficients per channel. Your training data would then look like
dft_coeff1_ch1 | dft_coeff2_ch1 | dft_coeff3_ch1 | dft_coeff1_ch2 | dft_coeff2_ch2 | ... | class
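A sketch of that pipeline (array names, shapes, and random data are assumptions; it keeps all one-sided FFT magnitudes per channel for simplicity, whereas in practice you might keep only a few dominant coefficients or band powers):

import numpy as np
from sklearn.svm import SVC

n_trials = 200
X_raw = np.random.randn(n_trials, 14, 128)        # hypothetical raw windows: 14 channels x 128 samples
y = np.random.randint(0, 2, n_trials)             # hypothetical action labels

windowed = X_raw * np.hamming(128)                # Hamming window applied to every channel
spectra = np.abs(np.fft.rfft(windowed, axis=2))   # per-channel magnitude spectrum: (n_trials, 14, 65)
X_feat = spectra.reshape(n_trials, -1)            # one feature row per trial, all channels' coefficients
clf = SVC(kernel="rbf").fit(X_feat, y)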
You could also just dump the 128 values per channel into the feature vector unmodified, giving you 14*128=1792 features per input, but those features are probably terribly unhelpful -- you're giving it the velocities of molecules rather than the temperature again. In principle, most learning algorithms would be capable of learning the target concept, but the requirements on the amount of training data and time needed may be vast.
Features should capture the level of detail the classifier can use. For most time series data, that usually means high-level conceptual things like "sloping upward", "V-shaped", "flat for a while, then decreasing", "oscillating at these frequencies", etc. Whatever you as a human think might be relevant. This is really the reason to use something like a Fourier transform -- the frequency domain gives you a much higher level, and probably more useful, description of the signal with many fewer degrees of freedom than the time domain.