Correct response and predictor parameters when using caret and roc() - r-caret

I'm practicing logistic regression models and cross-validation. I would like to plot a ROC curve to estimate performance, but I'm not sure which response and predictor I should pass to the roc() function.
Here is an example I tried. Why are the two plots different? What is my error? Thanks.
library(caret)
library(ggplot2)
library(lattice)
library(pROC)
data(mtcars)
mtcars$am = factor(ifelse(mtcars$am == 1, 'one', 'zero'))
ctrl = trainControl(method = "cv", number = 5,
                    summaryFunction = twoClassSummary, classProbs = TRUE,
                    savePredictions = TRUE)
m = train(am ~ qsec, data = mtcars, method = "glm",
          family = binomial, metric = "ROC",
          trControl = ctrl)
curve1 = roc(response = mtcars$am,
             predictor = as.numeric(predict(m)),
             plot = TRUE, legacy.axes = TRUE, percent = TRUE,
             main = 'Test Curve1',
             xlab = 'False Positive Percentage (1 - Specificity)',
             ylab = 'True Positive Percentage (Sensitivity)',
             print.auc = TRUE, print.auc.x = 100, print.auc.y = 100,
             col = '#20B2AA', lwd = 4)
curve2 = roc(response = m$pred$obs,
             predictor = as.numeric(m$pred$pred),
             plot = TRUE, legacy.axes = TRUE, percent = TRUE,
             main = 'Test Curve2',
             xlab = 'False Positive Percentage (1 - Specificity)',
             ylab = 'True Positive Percentage (Sensitivity)',
             print.auc = TRUE, print.auc.x = 100, print.auc.y = 100,
             col = '#20B2AA', lwd = 4)

Related

About PyTorch's reduction='mean'

I want to use L1Loss and BCELoss with reduction='mean' in my VAE reconstruction loss, but they produce the same result for every different input (i.e. the result for the landmarks), so I used reduction='sum', which gives the expected behaviour of different outputs for different inputs.
How can I use the mean reduction?
L1Loss = nn.L1Loss(reduction='mean').to(device)
BCELoss = nn.BCELoss(reduction='mean').to(device)
kld_criterion = KLDLoss(reduction='mean').to(device)
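# (note: KLDLoss above is a custom criterion from the question's own codebase;
#  it is not part of torch.nn)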
In the training loop:
rec_m, (rec_f, mean_f, logvar_f), (rec_l, mean_l, logvar_l) = model(origin)
lm_loss = CELoss(rec_l, lm)
f_loss = L1Loss(rec_f, f)
m_loss = CELoss(rec_m, m)
lm_kld_loss = kld_criterion(mean_l, logvar_l)
f_kld_loss = kld_criterion(mean_f, logvar_f)
loss = 4000*(f_loss + m_loss) + 30 * (lm_kld_loss + f_kld_loss) + 2000 * lm_loss
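(CELoss is used above but never defined in the snippet; presumably it is another criterion from the same codebase.)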
And the model code:
class VAE_NET(nn.Module):
    def __init__(self, nc=3, ndf=32, nef=32, nz=128, isize=128, device=torch.device("cuda:0"), is_train=True):
        super(VAE_NET, self).__init__()
        self.nz = nz
        # Encoders
        self.l_encoder = Encoder(nc=nc, nef=nef, nz=nz, isize=isize, device=device)
        self.f_encoder = Encoder(nc=nc, nef=nef, nz=nz, isize=isize, device=device)
        # Decoders
        self.l_decoder = Decoder(nc=nc, ndf=ndf, nz=nz, isize=isize)
        self.m_decoder = Decoder(nc=nc, ndf=ndf, nz=nz * 2, isize=isize)
        self.f_decoder = Decoder(nc=nc, ndf=ndf, nz=nz * 2, isize=isize)
        if not is_train:
            # freeze all encoder and decoder weights
            for module in (self.l_encoder, self.f_encoder,
                           self.l_decoder, self.m_decoder, self.f_decoder):
                for param in module.parameters():
                    param.requires_grad = False

    def forward(self, x):
        latent_l, mean_l, logvar_l = self.l_encoder(x)
        latent_f, mean_f, logvar_f = self.f_encoder(x)
        concat_latent = torch.cat((latent_l, latent_f), 1)
        rec_l = self.l_decoder(latent_l)
        rec_m = self.m_decoder(concat_latent)
        rec_f = self.f_decoder(concat_latent)
        # return the logvars (not the sampled latents) so the unpacking in
        # the training loop feeds the right tensors to the KLD criterion
        return rec_m, (rec_f, mean_f, logvar_f), (rec_l, mean_l, logvar_l)
l is for face landmark
m is for face mask
f is for face part
reduction='sum' and reduction='mean' differ only by a scalar multiple, and there is nothing wrong with your implementation as far as I can see. If your model only produces correct results with reduction='sum', it is likely that your learning rate is too low ('sum' makes up for that difference by amplifying the gradient).
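To make the scale difference concrete, here is a minimal, self-contained sketch (standard PyTorch only, no names from the question's codebase) showing that the two reductions differ exactly by the element count, which is also the factor by which 'sum' amplifies the gradients:
import torch
import torch.nn as nn

pred = torch.rand(8, 3)    # e.g. a batch of 8 predictions with 3 values each
target = torch.rand(8, 3)

l1_mean = nn.L1Loss(reduction='mean')(pred, target)
l1_sum = nn.L1Loss(reduction='sum')(pred, target)

# 'sum' equals 'mean' times the number of elements, so it also scales
# the gradients by that factor
assert torch.allclose(l1_sum, l1_mean * pred.numel())
So keeping reduction='mean' and raising the learning rate (or the loss weights) by roughly that factor should reproduce the behaviour you currently get with reduction='sum'.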

GAN generator producing distinguishable output

I am trying to train a special type of GAN called a Model-Assisted GAN (https://arxiv.org/pdf/1812.00879) using Keras. It takes as input a vector of 13 input parameters plus Gaussian noise and generates a vector of 6 outputs. The mapping between the input and output vectors is non-trivial, as it is related to a high-energy-physics simulation (in particular, the simulation has some inherent randomness), and the first five inputs have the biggest influence on the final outputs. The biggest difference from a traditional GAN is the use of a Siamese network as the discriminator. It takes two inputs at a time, so for the same input parameters we provide two sets of possible outputs per training step (possible because of the randomness of the simulation); there are in a sense 12 output distributions, but only 6 are unique, and those are what we aim to generate. We used a 1D convolutional neural network for both the discriminator and the generator.
The current model reproduces the output distributions for an independent testing sample to reasonably good accuracy (see the overlaid histograms below), but there are still some clear differences between the distributions, and the eventual goal is for the model to produce data indistinguishable from the simulation. So far I have tried: varying the learning rate and adding varying amounts of learning-rate decay; tweaking the network architectures; changing some of the optimiser's hyperparameters; adding noise to the discriminator training by implementing label smoothing and swapping the order of inputs; adding label smoothing to the generator training; increasing the batch size; and increasing the amount of noise inputs. I still cannot get the model to reproduce the output distributions perfectly. I am struggling to come up with ideas for what to do next, and I was wondering whether anyone else has had a similar problem, whereby the output is not quite perfect, and if so, how they went about solving it. Any thoughts or tips would be greatly appreciated!
I have included the full code for the training, as well as some plots of the input and output distributions (before applying the Quantile Transformer), the loss plots for the adversarial network and the discriminator (A for Adversarial, S for Siamese (Discriminator)) and then the overlay of the histograms for the generated and true output distributions for the independent testing sample (which is where you can see the small differences that arise).
Thanks in advance.
TRAINING CODE
"""
Training implementation
"""
net_range = [-1,1]
gauss_range = [-5.5,5.5]
mapping = interp1d(gauss_range, net_range)
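# `mapping` linearly rescales the quantile-transformed (approximately
# standard-normal) features from roughly [-5.5, 5.5] into [-1, 1], matching
# the tanh output range of the emulator defined below.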
class ModelAssistedGANPID(object):
    def __init__(self, params=64, observables=6):
        self.params = params
        self.observables = observables
        self.Networks = Networks(params=params, observables=observables)
        # build the emulator here as well: train() calls self.emulator.predict()
        self.emulator = self.Networks.emulator()
        self.siamese = self.Networks.siamese_model()
        self.adversarial1 = self.Networks.adversarial1_model()
    def train(self, pretrain_steps=4500, train_steps=100000, batch_size=32, train_no=1):
        print('Pretraining for ', pretrain_steps, ' steps before training for ', train_steps, ' steps')
        print('Batch size = ', batch_size)
        print('Training number = ', train_no)
        '''
        Pre-training stage
        '''
        # Number of tracks for the training + validation sample
        n_events = 1728000 + 100000
        n_train = n_events - 100000
        # Parameters for Gaussian noise
        lower = -1
        upper = 1
        mu = 0
        sigma = 1
        # import simulation data
        print('Loading data...')
        kaon_data = pd.read_hdf('PATH')
        kaon_data = kaon_data.sample(n=n_events)
        kaon_data = kaon_data.reset_index(drop=True)
        kaon_data_train = kaon_data[:n_train]
        kaon_data_test = kaon_data[n_train:n_events]
print("Producing training data...")
# add all inputs
P_kaon_data_train = kaon_data_train['TrackP']
Pt_kaon_data_train = kaon_data_train['TrackPt']
nTracks_kaon_data_train = kaon_data_train['NumLongTracks']
numRich1_kaon_data_train = kaon_data_train['NumRich1Hits']
numRich2_kaon_data_train = kaon_data_train['NumRich2Hits']
rich1EntryX_kaon_data_train = kaon_data_train['TrackRich1EntryX']
rich1EntryY_kaon_data_train = kaon_data_train['TrackRich1EntryY']
rich1ExitX_kaon_data_train = kaon_data_train['TrackRich1ExitX']
rich1ExitY_kaon_data_train = kaon_data_train['TrackRich1ExitY']
rich2EntryX_kaon_data_train = kaon_data_train['TrackRich2EntryX']
rich2EntryY_kaon_data_train = kaon_data_train['TrackRich2EntryY']
rich2ExitX_kaon_data_train = kaon_data_train['TrackRich2ExitX']
rich2ExitY_kaon_data_train = kaon_data_train['TrackRich2ExitY']
# add different DLL outputs
Dlle_kaon_data_train = kaon_data_train['RichDLLe']
Dlle2_kaon_data_train = kaon_data_train['RichDLLe2']
Dllmu_kaon_data_train = kaon_data_train['RichDLLmu']
Dllmu2_kaon_data_train = kaon_data_train['RichDLLmu2']
Dllk_kaon_data_train = kaon_data_train['RichDLLk']
Dllk2_kaon_data_train = kaon_data_train['RichDLLk2']
Dllp_kaon_data_train = kaon_data_train['RichDLLp']
Dllp2_kaon_data_train = kaon_data_train['RichDLLp2']
Dlld_kaon_data_train = kaon_data_train['RichDLLd']
Dlld2_kaon_data_train = kaon_data_train['RichDLLd2']
Dllbt_kaon_data_train = kaon_data_train['RichDLLbt']
Dllbt2_kaon_data_train = kaon_data_train['RichDLLbt2']
# convert to numpy array
P_kaon_data_train = P_kaon_data_train.to_numpy()
Pt_kaon_data_train = Pt_kaon_data_train.to_numpy()
nTracks_kaon_data_train = nTracks_kaon_data_train.to_numpy()
numRich1_kaon_data_train = numRich1_kaon_data_train.to_numpy()
numRich2_kaon_data_train = numRich2_kaon_data_train.to_numpy()
rich1EntryX_kaon_data_train = rich1EntryX_kaon_data_train.to_numpy()
rich1EntryY_kaon_data_train = rich1EntryY_kaon_data_train.to_numpy()
rich1ExitX_kaon_data_train = rich1ExitX_kaon_data_train.to_numpy()
rich1ExitY_kaon_data_train = rich1ExitY_kaon_data_train.to_numpy()
rich2EntryX_kaon_data_train = rich2EntryX_kaon_data_train.to_numpy()
rich2EntryY_kaon_data_train = rich2EntryY_kaon_data_train.to_numpy()
rich2ExitX_kaon_data_train = rich2ExitX_kaon_data_train.to_numpy()
rich2ExitY_kaon_data_train = rich2ExitY_kaon_data_train.to_numpy()
Dlle_kaon_data_train = Dlle_kaon_data_train.to_numpy()
Dlle2_kaon_data_train = Dlle2_kaon_data_train.to_numpy()
Dllmu_kaon_data_train = Dllmu_kaon_data_train.to_numpy()
Dllmu2_kaon_data_train = Dllmu2_kaon_data_train.to_numpy()
Dllk_kaon_data_train = Dllk_kaon_data_train.to_numpy()
Dllk2_kaon_data_train = Dllk2_kaon_data_train.to_numpy()
Dllp_kaon_data_train = Dllp_kaon_data_train.to_numpy()
Dllp2_kaon_data_train = Dllp2_kaon_data_train.to_numpy()
Dlld_kaon_data_train = Dlld_kaon_data_train.to_numpy()
Dlld2_kaon_data_train = Dlld2_kaon_data_train.to_numpy()
Dllbt_kaon_data_train = Dllbt_kaon_data_train.to_numpy()
Dllbt2_kaon_data_train = Dllbt2_kaon_data_train.to_numpy()
# Reshape arrays
P_kaon_data_train = np.array(P_kaon_data_train).reshape(-1, 1)
Pt_kaon_data_train = np.array(Pt_kaon_data_train).reshape(-1, 1)
nTracks_kaon_data_train = np.array(nTracks_kaon_data_train).reshape(-1, 1)
numRich1_kaon_data_train = np.array(numRich1_kaon_data_train).reshape(-1, 1)
numRich2_kaon_data_train = np.array(numRich2_kaon_data_train).reshape(-1, 1)
rich1EntryX_kaon_data_train = np.array(rich1EntryX_kaon_data_train).reshape(-1, 1)
rich1EntryY_kaon_data_train = np.array(rich1EntryY_kaon_data_train).reshape(-1, 1)
rich1ExitX_kaon_data_train = np.array(rich1ExitX_kaon_data_train).reshape(-1, 1)
rich1ExitY_kaon_data_train = np.array(rich1ExitY_kaon_data_train).reshape(-1, 1)
rich2EntryX_kaon_data_train = np.array(rich2EntryX_kaon_data_train).reshape(-1, 1)
rich2EntryY_kaon_data_train = np.array(rich2EntryY_kaon_data_train).reshape(-1, 1)
rich2ExitX_kaon_data_train = np.array(rich2ExitX_kaon_data_train).reshape(-1, 1)
rich2ExitY_kaon_data_train = np.array(rich2ExitY_kaon_data_train).reshape(-1, 1)
Dlle_kaon_data_train = np.array(Dlle_kaon_data_train).reshape(-1, 1)
Dlle2_kaon_data_train = np.array(Dlle2_kaon_data_train).reshape(-1, 1)
Dllmu_kaon_data_train = np.array(Dllmu_kaon_data_train).reshape(-1, 1)
Dllmu2_kaon_data_train = np.array(Dllmu2_kaon_data_train).reshape(-1, 1)
Dllk_kaon_data_train = np.array(Dllk_kaon_data_train).reshape(-1, 1)
Dllk2_kaon_data_train = np.array(Dllk2_kaon_data_train).reshape(-1, 1)
Dllp_kaon_data_train = np.array(Dllp_kaon_data_train).reshape(-1, 1)
Dllp2_kaon_data_train = np.array(Dllp2_kaon_data_train).reshape(-1, 1)
Dlld_kaon_data_train = np.array(Dlld_kaon_data_train).reshape(-1, 1)
Dlld2_kaon_data_train = np.array(Dlld2_kaon_data_train).reshape(-1, 1)
Dllbt_kaon_data_train = np.array(Dllbt_kaon_data_train).reshape(-1, 1)
Dllbt2_kaon_data_train = np.array(Dllbt2_kaon_data_train).reshape(-1, 1)
inputs_kaon_data_train = np.concatenate((P_kaon_data_train, Pt_kaon_data_train, nTracks_kaon_data_train, numRich1_kaon_data_train, numRich2_kaon_data_train, rich1EntryX_kaon_data_train,
rich1EntryY_kaon_data_train, rich1ExitX_kaon_data_train, rich1ExitY_kaon_data_train, rich2EntryX_kaon_data_train, rich2EntryY_kaon_data_train, rich2ExitX_kaon_data_train, rich2ExitY_kaon_data_train), axis=1)
Dll_kaon_data_train = np.concatenate((Dlle_kaon_data_train, Dllmu_kaon_data_train, Dllk_kaon_data_train, Dllp_kaon_data_train, Dlld_kaon_data_train, Dllbt_kaon_data_train), axis=1)
Dll2_kaon_data_train = np.concatenate((Dlle2_kaon_data_train, Dllmu2_kaon_data_train, Dllk2_kaon_data_train, Dllp2_kaon_data_train, Dlld2_kaon_data_train, Dllbt2_kaon_data_train), axis=1)
        print('Transforming inputs and outputs using Quantile Transformer...')
        scaler_inputs = QuantileTransformer(output_distribution='normal', n_quantiles=int(1e5), subsample=int(1e10)).fit(inputs_kaon_data_train)
        scaler_Dll = QuantileTransformer(output_distribution='normal', n_quantiles=int(1e5), subsample=int(1e10)).fit(Dll_kaon_data_train)
        scaler_Dll2 = QuantileTransformer(output_distribution='normal', n_quantiles=int(1e5), subsample=int(1e10)).fit(Dll2_kaon_data_train)
        inputs_kaon_data_train = scaler_inputs.transform(inputs_kaon_data_train)
        Dll_kaon_data_train = scaler_Dll.transform(Dll_kaon_data_train)
        Dll2_kaon_data_train = scaler_Dll2.transform(Dll2_kaon_data_train)
        inputs_kaon_data_train = mapping(inputs_kaon_data_train)
        Dll_kaon_data_train = mapping(Dll_kaon_data_train)
        Dll2_kaon_data_train = mapping(Dll2_kaon_data_train)
        # REPEATING FOR TESTING DATA
        print("Producing testing data...")
        inputs_kaon_data_test = kaon_data_test[input_cols].to_numpy()
        Dll_kaon_data_test = kaon_data_test[dll_cols].to_numpy()
        Dll2_kaon_data_test = kaon_data_test[dll2_cols].to_numpy()
        print('Transforming inputs and outputs using Quantile Transformer...')
        inputs_kaon_data_test = scaler_inputs.transform(inputs_kaon_data_test)
        Dll_kaon_data_test = scaler_Dll.transform(Dll_kaon_data_test)
        # use the scaler fitted on the training Dll2 set
        Dll2_kaon_data_test = scaler_Dll2.transform(Dll2_kaon_data_test)
        inputs_kaon_data_test = mapping(inputs_kaon_data_test)
        Dll_kaon_data_test = mapping(Dll_kaon_data_test)
        Dll2_kaon_data_test = mapping(Dll2_kaon_data_test)
        # Producing testing data
        params_list_test = np.random.normal(loc=mu, scale=sigma, size=[len(kaon_data_test), self.params])
        # the first 13 slots carry the physics inputs; the rest stay Gaussian noise
        params_list_test[:, :13] = inputs_kaon_data_test[:, :13]
        # simulated observables, shaped (n_events, observables, 1)
        obs_simu_1_test = Dll_kaon_data_test.reshape(len(kaon_data_test), self.observables, 1)
        obs_simu_2_test = Dll2_kaon_data_test.reshape(len(kaon_data_test), self.observables, 1)
        event_no_par = 0
        event_no_obs_1 = 0
        event_no_obs_2 = 0
        d1_hist, d2_hist, d_hist, g_hist, a1_hist, a2_hist = list(), list(), list(), list(), list(), list()
        print('Beginning pre-training...')
        '''
        Pre-training stage
        '''
        for train_step in range(pretrain_steps):
            log_mesg = '%d' % train_step
            noise_value = 0.3
            params_list = np.random.normal(loc=mu, scale=sigma, size=[batch_size, self.params])
            y_ones = np.ones([batch_size, 1])
            y_zeros = np.zeros([batch_size, 1])
            # add physics parameters + noise to params_list
            params_list[:, :13] = inputs_kaon_data_train[event_no_par:event_no_par + batch_size, :13]
            event_no_par += batch_size
            # Step 1
            # simulated observables (number 1)
            obs_simu_1 = Dll_kaon_data_train[event_no_obs_1:event_no_obs_1 + batch_size].reshape(batch_size, self.observables, 1)
            event_no_obs_1 += batch_size
            obs_simu_1_copy = np.copy(obs_simu_1)
            # simulated observables (Gaussian smeared - number 2)
            obs_simu_2 = Dll2_kaon_data_train[event_no_obs_2:event_no_obs_2 + batch_size].reshape(batch_size, self.observables, 1)
            event_no_obs_2 += batch_size
            obs_simu_2_copy = np.copy(obs_simu_2)
            # emulated DLL values
            obs_emul = self.emulator.predict(params_list)
            obs_emul_copy = np.copy(obs_emul)
            # decay the learning rate
            if train_step % 1000 == 0 and train_step > 0:
                siamese_lr = K.eval(self.siamese.optimizer.lr)
                K.set_value(self.siamese.optimizer.lr, siamese_lr * 0.7)
                print('lr for Siamese network updated from %f to %f' % (siamese_lr, siamese_lr * 0.7))
                adversarial1_lr = K.eval(self.adversarial1.optimizer.lr)
                K.set_value(self.adversarial1.optimizer.lr, adversarial1_lr * 0.7)
                print('lr for Adversarial1 network updated from %f to %f' % (adversarial1_lr, adversarial1_lr * 0.7))
            loss_simu_list = [obs_simu_1_copy, obs_simu_2_copy]
            loss_fake_list = [obs_simu_1_copy, obs_emul_copy]
            input_val = 0
            # swap which inputs to give to the Siamese network
            if np.random.random() < 0.5:
                loss_simu_list[0], loss_simu_list[1] = loss_simu_list[1], loss_simu_list[0]
            if np.random.random() < 0.5:
                loss_fake_list[0] = obs_simu_2_copy
                input_val = 1
            # label smoothing / noise
            y_ones = np.array([np.random.uniform(0.97, 1.00) for x in range(batch_size)]).reshape([batch_size, 1])
            y_zeros = np.array([np.random.uniform(0.00, 0.03) for x in range(batch_size)]).reshape([batch_size, 1])
            if input_val == 0:
                if np.random.random() < noise_value:
                    for b in range(batch_size):
                        if np.random.random() < noise_value:
                            obs_simu_1_copy[b], obs_simu_2_copy[b] = obs_simu_2[b], obs_simu_1[b]
                            obs_simu_1_copy[b], obs_emul_copy[b] = obs_emul[b], obs_simu_1[b]
            if input_val == 1:
                if np.random.random() < noise_value:
                    for b in range(batch_size):
                        if np.random.random() < noise_value:
                            obs_simu_1_copy[b], obs_simu_2_copy[b] = obs_simu_2[b], obs_simu_1[b]
                            obs_simu_2_copy[b], obs_emul_copy[b] = obs_emul[b], obs_simu_2[b]
            # train siamese
            d_loss_simu = self.siamese.train_on_batch(loss_simu_list, y_ones)
            d_loss_fake = self.siamese.train_on_batch(loss_fake_list, y_zeros)
            d_loss = 0.5 * np.add(d_loss_simu, d_loss_fake)
            log_mesg = '%s [S loss: %f]' % (log_mesg, d_loss[0])
            #print(log_mesg)
            #print('--------------------')
            #noise_value *= 0.999
            # Step 2
            # train emulator
            a_loss_list = [obs_simu_1, params_list]
            a_loss = self.adversarial1.train_on_batch(a_loss_list, y_ones)
            log_mesg = '%s [E loss: %f]' % (log_mesg, a_loss[0])
            print(log_mesg)
            print('--------------------')
            noise_value *= 0.999
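# In each pre-training step above, the Siamese network is trained on a
# real/real pair (labels smoothed towards 1) and a real/emulated pair
# (labels smoothed towards 0); the emulator is then updated through the
# frozen Siamese via the adversarial model.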
if __name__ == '__main__':
    params_physics = 13
    params_noise = 51  # 51 looks ok, 61 is probably best, 100 also works
    params = params_physics + params_noise
    observables = 6
    train_no = 1
    magan = ModelAssistedGANPID(params=params, observables=observables)
    magan.train(pretrain_steps=11001, train_steps=10000, batch_size=32, train_no=train_no)
NETWORKS
class Networks(object):
    def __init__(self, noise_size=100, params=64, observables=5):
        self.noise_size = noise_size
        self.params = params
        self.observables = observables
        self.E = None   # emulator
        self.S = None   # siamese
        self.SM = None  # siamese model
        self.AM1 = None # adversarial model 1

    '''
    Emulator: generate observable parameters identical to those of the simulator S
    when both E and S are fed the same input parameters
    '''
    def emulator(self):
        if self.E:
            return self.E
        # input params
        # the model takes as input an array of shape (*, self.params)
        input_params_shape = (self.params,)
        input_params_layer = Input(shape=input_params_shape, name='input_params')
        # architecture
        self.E = Dense(1024)(input_params_layer)
        self.E = LeakyReLU(0.2)(self.E)
        self.E = Dense(self.observables * 128, kernel_initializer=initializers.RandomNormal(stddev=0.02))(self.E)
        self.E = LeakyReLU(0.2)(self.E)
        self.E = Reshape((self.observables, 128))(self.E)
        self.E = UpSampling1D(size=2)(self.E)
        self.E = Conv1D(64, kernel_size=7, padding='valid')(self.E)
        self.E = LeakyReLU(0.2)(self.E)
        self.E = UpSampling1D(size=2)(self.E)
        self.E = Conv1D(1, kernel_size=7, padding='valid', activation='tanh')(self.E)
        # model
        self.E = Model(inputs=input_params_layer, outputs=self.E, name='emulator')
        print("Emulator")
        self.E.summary()
        return self.E
    '''
    Siamese: determine the similarity between output values produced by the simulator and the emulator
    '''
    def siamese(self):
        if self.S:
            return self.S
        # input DLL "images"
        input_shape = (self.observables, 1)
        input_layer_anchor = Input(shape=input_shape, name='input_layer_anchor')
        input_layer_candid = Input(shape=input_shape, name='input_layer_candidate')
        input_layer = Input(shape=input_shape, name='input_layer')
        # shared CNN encoder
        cnn = Conv1D(64, kernel_size=8, strides=2, padding='same',
                     kernel_initializer=initializers.RandomNormal(stddev=0.02))(input_layer)
        cnn = LeakyReLU(0.2)(cnn)
        cnn = Conv1D(128, kernel_size=5, strides=2, padding='same')(cnn)
        cnn = LeakyReLU(0.2)(cnn)
        cnn = Flatten()(cnn)
        cnn = Dense(128, activation='sigmoid')(cnn)
        cnn = Model(inputs=input_layer, outputs=cnn, name='cnn')
        # left and right encodings
        encoded_l = cnn(input_layer_anchor)
        encoded_r = cnn(input_layer_candid)
        # merge the two encodings with the L1 or L2 distance between them
        L1_distance = lambda x: K.abs(x[0] - x[1])
        L2_distance = lambda x: (x[0] - x[1] + K.epsilon()) ** 2 / (x[0] + x[1] + K.epsilon())
        both = Lambda(L2_distance)([encoded_l, encoded_r])
        prediction = Dense(1, activation='sigmoid')(both)
        # model
        self.S = Model([input_layer_anchor, input_layer_candid], outputs=prediction, name='siamese')
        print("Siamese:")
        self.S.summary()
        print("Siamese CNN:")
        cnn.summary()
        return self.S
    '''
    Siamese model
    '''
    def siamese_model(self):
        if self.SM:
            return self.SM
        # optimizer
        optimizer = Adam(lr=0.004, beta_1=0.5, beta_2=0.9)
        # input DLL values
        input_shape = (self.observables, 1)
        input_layer_anchor = Input(shape=input_shape, name='input_layer_anchor')
        input_layer_candid = Input(shape=input_shape, name='input_layer_candidate')
        input_layer = [input_layer_anchor, input_layer_candid]
        # discriminator
        siamese_ref = self.siamese()
        siamese_ref.trainable = True
        self.SM = siamese_ref(input_layer)
        # model
        self.SM = Model(inputs=input_layer, outputs=self.SM, name='siamese_model')
        self.SM.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[metrics.binary_accuracy])
        print("Siamese model")
        self.SM.summary()
        return self.SM
    '''
    Adversarial 1 model (adversarial pre-training phase): the emulator is trained
    through the frozen Siamese network so that it learns to generate DLL values
    for a given set of physics inputs
    '''
    def adversarial1_model(self):
        if self.AM1:
            return self.AM1
        optimizer = Adam(lr=0.0004, beta_1=0.5, beta_2=0.9)
        # input 1: simulated DLL values
        input_obs_shape = (self.observables, 1)
        input_obs_layer = Input(shape=input_obs_shape, name='input_obs')
        # input 2: params
        input_params_shape = (self.params,)
        input_params_layer = Input(shape=input_params_shape, name='input_params')
        # emulator (trainable in this phase)
        emulator_ref = self.emulator()
        emulator_ref.trainable = True
        self.AM1 = emulator_ref(input_params_layer)
        # siamese (frozen in this phase)
        siamese_ref = self.siamese()
        siamese_ref.trainable = False
        self.AM1 = siamese_ref([input_obs_layer, self.AM1])
        # model
        input_layer = [input_obs_layer, input_params_layer]
        self.AM1 = Model(inputs=input_layer, outputs=self.AM1, name='adversarial_1_model')
        self.AM1.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=[metrics.binary_accuracy])
        print("Adversarial 1 model:")
        self.AM1.summary()
        return self.AM1
[Figures omitted: the input distributions; the output distributions (before the Quantile Transformer); the loss plots for the adversarial (A) and Siamese (S) networks; and the generated output (orange) overlaid on the true output (blue) for the independent testing sample.]
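As an aside on quantifying "indistinguishable": a per-observable two-sample test gives a number to track instead of eyeballing the overlaid histograms. This is only a suggestion, not part of the original code; a minimal sketch (scipy assumed, and generated/true are hypothetical arrays standing in for the emulator output and the simulated DLL values on the testing sample):
import numpy as np
from scipy.stats import ks_2samp

def distribution_report(generated, true):
    # two-sample Kolmogorov-Smirnov test per output observable
    for i in range(generated.shape[1]):
        stat, p = ks_2samp(generated[:, i], true[:, i])
        print('observable %d: KS statistic = %.4f (p = %.3g)' % (i, stat, p))

# purely synthetic demo data with a small deliberate shift
generated = np.random.normal(0.02, 1.0, size=(10000, 6))
true = np.random.normal(0.0, 1.0, size=(10000, 6))
distribution_report(generated, true)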

How to create stratified folds for repeatedcv in caret?

The way to create stratified folds for cv in caret is like this
library(caret)
library(data.table)
train_dat <- data.table(group = c(rep("group1", 10), rep("group2", 5)),
                        x1 = rnorm(15), x2 = rnorm(15),
                        label = factor(c(rep("treatment", 10), rep("control", 5))))
# trainControl(index = ...) expects the *training* rows of each resample
folds <- createFolds(train_dat[, group], k = 5, returnTrain = TRUE)
fitCtrl <- trainControl(method = "cv", index = folds, classProbs = TRUE,
                        summaryFunction = twoClassSummary)
train(label ~ ., data = train_dat[, !c("group"), with = FALSE],
      trControl = fitCtrl, method = "xgbTree", metric = "ROC")
To balance group1 and group2, the fold indexes are created from the "group" variable.
However, is there any way to create folds like this for repeatedcv in caret, so that I get a balanced split for each repeat? Should I combine several createFolds results and pass them to trainControl?
trControl = trainControl(method = "cv", index = many_repeated_folds)
Thanks!
createMultiFolds is probably what you are interested in. It is the repeated-CV analogue of createFolds (e.g. createMultiFolds(train_dat[, group], k = 5, times = 3)): it returns the training indices for k folds across the requested number of repeats, and the result can be passed directly to trainControl(index = ...).

"All input arrays and target arrays must have the same number of samples" - training on a single image to check whether the model works in Keras

def obcandidate(inputvgg, outputmodel):
    graph = Graph()
    graph.add_input(name='input1', input_shape=(512, 14, 14))
    graph.add_node(Convolution2D(512, 1, 1), name='conv11', input='input1')
    graph.add_node(Convolution2D(512, 14, 14), name='conv112', input='conv11')
    graph.add_node(Flatten(), name='flatten11', input='conv112')
    graph.add_node(Dense(3136), name='dense1', input='flatten11')
    graph.add_node(Activation('relu'), name='relu', input='dense1')
    graph.add_node(Reshape((56, 56)), name='reshape', input='relu')
    sgd = SGD(lr=0.001, decay=.00005, momentum=0.9, nesterov=True)
    graph.add_output(name='output1', input='reshape')
    graph.compile(optimizer=sgd, loss={'output1': 'binary_crossentropy'})
    print 'compile success'
    history = graph.fit({'input1': inputvgg, 'output1': outputmodel}, nb_epoch=1)
    predictions = graph.predict({'input1': inputvgg})
    return graph
""
"main function"
""
if __name__ == "__main__":
model = VGG_16('vgg16_weights.h5')
sgdvgg = SGD(lr = 0.1, decay = 1e-6, momentum = 0.9, nesterov = True)
model.compile(optimizer = sgdvgg, loss = 'categorical_crossentropy')
finaloutputmodel = outputofconvlayer(model)
finaloutputmodel.compile(optimizer = sgdvgg, loss = 'categorical_crossentropy')
img = cv2.resize(cv2.imread('000032.jpg'), (224, 224))
mean_pixel = [103.939, 116.779, 123.68]
img = img.astype(np.float32, copy = False)
for c in range(3):
img[: , : , c] = img[: , : , c] - mean_pixel[c]
img = img.transpose((2, 0, 1))
img = np.expand_dims(img, axis = 0)
imgout = np.asarray(cv2.resize(cv2.imread('000032seg.png',0), (56, 56)))
imgout[imgout!=0]=1
out=imgout
inputvgg = np.asarray(finaloutputmodel.predict(img))
obcandidate(inputvgg,out)
Hi, above is my code, where I am trying to segment object candidates through a Graph model. I want to check whether the code works for a single input, so I am giving it one input image and the corresponding output image. But Keras gives me this error: "All input arrays and target arrays must have the same number of samples."
Can anyone tell me what to do to check that my model runs? I am training on one input so that I can verify the model is correct before starting the real training. Is there any other way to do this?
In the line history = graph.fit({'input1': inputvgg, 'output1': outputmodel}, nb_epoch=1), inputvgg and outputmodel must have the same number of samples, i.e. the same first (batch) dimension.
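In this script the most likely culprit is the target: the input image is given a batch axis with np.expand_dims(img, axis=0), but the (56, 56) mask is not, so Keras sees 1 input sample and 56 target samples. A minimal sketch of the fix, reusing the variable names from the question:
import numpy as np

# imgout has shape (56, 56); the Graph model expects targets of shape
# (n_samples, 56, 56), so add the missing batch axis
out = np.expand_dims(imgout, axis=0)   # shape (1, 56, 56)
obcandidate(inputvgg, out)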

How to get a prediction using Torch7

I'm still familiarizing myself with Torch and so far so good. I have however hit a dead end that I'm not sure how to get around: how can I get Torch7 (or more specifically the dp library) to evaluate a single input and return the predicted output?
Here's my setup (basically the dp demo):
require 'dp'
--[[hyperparameters]]--
opt = {
nHidden = 100, --number of hidden units
learningRate = 0.1, --training learning rate
momentum = 0.9, --momentum factor to use for training
maxOutNorm = 1, --maximum norm allowed for output neuron weights
batchSize = 128, --number of examples per mini-batch
maxTries = 100, --maximum number of epochs without reduction in validation error.
maxEpoch = 1000 --maximum number of epochs of training
}
--[[data]]--
datasource = dp.Mnist{input_preprocess = dp.Standardize()}
print("feature size: ", datasource:featureSize())
--[[Model]]--
model = dp.Sequential{
models = {
dp.Neural{
input_size = datasource:featureSize(),
output_size = opt.nHidden,
transfer = nn.Tanh(),
sparse_init = true
},
dp.Neural{
input_size = opt.nHidden,
output_size = #(datasource:classes()),
transfer = nn.LogSoftMax(),
sparse_init = true
}
}
}
--[[Propagators]]--
train = dp.Optimizer{
loss = dp.NLL(),
visitor = { -- the ordering here is important:
dp.Momentum{momentum_factor = opt.momentum},
dp.Learn{learning_rate = opt.learningRate},
dp.MaxNorm{max_out_norm = opt.maxOutNorm}
},
feedback = dp.Confusion(),
sampler = dp.ShuffleSampler{batch_size = opt.batchSize},
progress = true
}
valid = dp.Evaluator{
loss = dp.NLL(),
feedback = dp.Confusion(),
sampler = dp.Sampler{}
}
test = dp.Evaluator{
loss = dp.NLL(),
feedback = dp.Confusion(),
sampler = dp.Sampler{}
}
--[[Experiment]]--
xp = dp.Experiment{
model = model,
optimizer = train,
validator = valid,
tester = test,
observer = {
dp.FileLogger(),
dp.EarlyStopper{
error_report = {'validator','feedback','confusion','accuracy'},
maximize = true,
max_epochs = opt.maxTries
}
},
random_seed = os.time(),
max_epoch = opt.maxEpoch
}
xp:run(datasource)
You have two options.
One. Use the encapsulated nn.Module to forward your torch.Tensor:
mlp = model:toModule(datasource:trainSet():sub(1,2))
mlp:float()
input = torch.FloatTensor(1, 1, 32, 32) -- replace this with your input
output = mlp:forward(input)
Two. Encapsulate your torch.Tensor into a dp.ImageView and forward that through your dp.Model:
input = torch.FloatTensor(1, 1, 32, 32) -- replace this with your input
inputView = dp.ImageView('bchw', input)
outputView = model:forward(inputView, dp.Carry{nSample=1})
output = outputView:forward('b')
