Can I define my own activation function and use it in the TensorFlow Train API, i.e. the high-level API with pre-defined estimators like DNNClassifier?
For example, I want to use this code but replace the activation function tf.nn.tanh with something of my own:
tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[5, 10, 5],
    n_classes=3,
    optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=0.01,
                                                l1_regularization_strength=0.0001),
    activation_fn=tf.nn.tanh)
If your custom function can be expressed in terms of built-in TensorFlow ops, then it's fairly straightforward. For example:
DNNClassifier(feature_columns=feature_columns,
              ...,
              activation_fn=lambda x: 2*tf.nn.tanh(x) + 3*tf.nn.relu(x) + 1)
In general, activation_fn can be any callable that accepts a tensor of arbitrary shape (because it will be applied after each layer). TensorFlow will be able to backpropagate through this expression without any problem.
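For instance, a named function works just as well as a lambda. The following is only an illustration (the scaled softsign and the surrounding argument values are made up, and feature_columns is assumed to be defined elsewhere, as in the question):

import tensorflow as tf

def scaled_softsign(x):
    # any composition of built-in ops; applied element-wise after each hidden layer
    return 1.5 * tf.nn.softsign(x)

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,   # assumed to exist, as in the question
    hidden_units=[5, 10, 5],
    n_classes=3,
    activation_fn=scaled_softsign)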
However, if you want a completely new custom op, not expressible via existing ones, you'll have to register it and compute its gradient manually. See this question for the details.
Related
I am currently implementing a CNN with a custom error function.
The problem I am trying to solve is physics-based, so I can calculate the maximal achievable precision, or, to put it another way, I know the best possible (i.e. minimal) standard deviation I can achieve. These best possible precisions are calculated during the generation of the training data using the Cramér-Rao lower bound (CRLB).
Right now, my error function looks something like this (in Keras):
from keras import backend as K

def customLoss(yTrue, yPred):
    STD = yTrue[:, 10:20]    # the CRLBs packed into the target vector
    yTrue = yTrue[:, 0:10]   # the actual target parameters
    dev = K.mean(K.abs(K.abs(yTrue - yPred) - STD))
    return dev
In this case I have 10 parameters to estimate, so there are 10 CRLBs. I put the CRLBs in the target vector just to be able to handle them in the error function.
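For illustration only, the packing could look like this (the shapes and random arrays are hypothetical stand-ins for the real training data):

import numpy as np

params = np.random.rand(1000, 10)   # ground-truth values of the 10 parameters
crlb = np.random.rand(1000, 10)     # best achievable standard deviations (CRLB)

# columns 0:10 are the true parameters, columns 10:20 the CRLBs,
# matching the slicing done inside customLoss
y_train = np.concatenate([params, crlb], axis=1)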
To my question. This method works, but it is not what I want. The problem is that the error is calculated from a single prediction of the network, but to be correct the network would have to predict the same dataset/batch multiple times. That way I would be able to see the standard deviation of the predictions and use that to calculate the error (I'm using a Bayesian CNN).
Does anyone have an idea how to implement such a function in Keras or TensorFlow (I would also not mind switching to PyTorch)?
I have a custom activation function and its derivative. Although I can use the custom activation function, I don't know how to tell Keras what its derivative is.
It seems like it finds one itself, but I have a parameter that has to be shared between the function and its derivative, so how can I do that?
I know there is a relatively easy way to do this in TensorFlow, but I have no idea how to implement it in Keras (here is how you do it in TensorFlow).
Edit: based on the answer I got, maybe I wasn't clear enough. What I want is to implement a custom derivative for my activation function so that it uses my derivative during the backpropagation. I know how to implement a custom activation function.
Take a look at the source code where the activation functions of Keras are defined:
keras/activations.py
For example:
def relu(x, alpha=0., max_value=None):
    """Rectified Linear Unit.

    # Arguments
        x: Input tensor.
        alpha: Slope of the negative part. Defaults to zero.
        max_value: Maximum value for the output.

    # Returns
        The (leaky) rectified linear unit activation: `x` if `x > 0`,
        `alpha * x` if `x < 0`. If `max_value` is defined, the result
        is truncated to this value.
    """
    return K.relu(x, alpha=alpha, max_value=max_value)
Also note how Keras layers call the activation functions: self.activation = activations.get(activation); the activation can be a string or a callable.
Thus, similarly, you can define your own activation function, for example:
def my_activ(x, p1, p2):
    ...
    return ...
Suppose you want to use this activation in a Dense layer. The layer expects a callable that takes a single tensor, so bind the extra parameters, e.g. with a lambda:
x = Dense(128, activation=lambda t: my_activ(t, p1, p2))(input)
If you mean you want to implement your own derivative:
If your activation function is written in terms of TensorFlow/Keras functions whose operations are differentiable (e.g. K.dot(), tf.matmul(), tf.concat(), etc.), then the derivatives will be obtained by automatic differentiation (https://en.wikipedia.org/wiki/Automatic_differentiation). In that case you don't need to write your own derivative.
If you still want to re-write the derivatives, check this document https://www.tensorflow.org/extend/adding_an_op, where you need to register your gradients using tf.RegisterGradient.
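If the goal is mainly to share a parameter between the activation and its hand-written derivative, a lighter-weight alternative to registering a new op is tf.custom_gradient (available in newer TensorFlow versions). This is only a sketch; the leaky-ReLU-style function and the alpha parameter are illustrative, not taken from the question:

import tensorflow as tf

alpha = 0.1  # parameter shared by the forward pass and the hand-written derivative

@tf.custom_gradient
def my_activation(x):
    y = tf.where(x > 0.0, x, alpha * x)   # forward pass
    def grad(dy):
        # hand-written derivative: 1 where x > 0, alpha elsewhere
        return dy * tf.where(x > 0.0, tf.ones_like(x), alpha * tf.ones_like(x))
    return y, grad

With a TensorFlow backend, a callable like this should be usable as a Keras layer's activation argument just like any built-in function.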
Since the two operations Conv2DBackpropFilter and Conv2DBackpropInput account for most of the execution time in lots of applications (AlexNet/VGG/GAN/Inception, etc.), I am analyzing the complexity of these two operations (back-propagation) in TensorFlow. I found that there are three implementation versions (custom, fast and slow) for Conv2DBackpropFilter (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/conv_grad_filter_ops.cc) and Conv2DBackpropInput (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/conv_grad_input_ops.cc). When I profile, all computations go through the "custom" version instead of the "fast" or "slow" versions, which directly call the Eigen function SpatialConvolutionBackwardInput.
The issue is:
Conv2DBackpropFilter uses Eigen's "TensorMap.contract" to do the tensor contraction, and Conv2DBackpropInput uses Eigen's "MatrixMap.transpose" to do the matrix transposition in the Compute() function. Besides these two functions, I didn't see any convolution operations, which are needed for back-propagation theoretically. Besides convolutions, what else is run inside these two operations during back-propagation? Does anyone know how to analyze the computational complexity of the "back propagation" operations in TensorFlow?
I am looking for any advice/suggestions. Thank you!
In addition to the transposition and contraction, the gradient op for the filter and the gradient op for the input must transform their input using Im2Col and Col2Im respectively. Approximately speaking, these transformations enable the convolution operation to be implemented using tensor contraction. For more information, see the CS231n page on Convolutional Networks (specifically, the paragraphs titled "Implementation as Matrix Multiplication" and "Backpropagation").
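As a rough illustration of the "implementation as matrix multiplication" idea (this is plain numpy, not TensorFlow's actual kernel code, and the shapes are arbitrary), a valid, stride-1 forward convolution can be written as im2col followed by a single GEMM; the backward ops reuse the same machinery:

import numpy as np

def im2col(x, kh, kw):
    # x: (H, W, C) -> patch matrix of shape (out_h*out_w, kh*kw*C)
    H, W, C = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((out_h * out_w, kh * kw * C))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw, :].ravel()
    return cols

def conv2d_gemm(x, w):
    # x: (H, W, C), w: (kh, kw, C, F) -> output of shape (out_h, out_w, F)
    kh, kw, C, F = w.shape
    cols = im2col(x, kh, kw)                    # im2col transform
    out = cols.dot(w.reshape(kh * kw * C, F))   # the single GEMM / tensor contraction
    return out.reshape(x.shape[0] - kh + 1, x.shape[1] - kw + 1, F)

x = np.random.rand(5, 5, 3)
w = np.random.rand(3, 3, 3, 8)
print(conv2d_gemm(x, w).shape)   # (3, 3, 8)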
mrry, I got it. It means that Conv2D, Conv2DBackpropFilter and Conv2DBackpropInput all implement convolution the same way, as a GEMM via Im2Col/Col2Im. Another issue: while profiling a GAN in TensorFlow, the execution times of Conv2DBackpropInput and Conv2DBackpropFilter are around 4-6 times slower than Conv2D with the same input size. Why?
I have an image with 8 channels. I have a conventional algorithm where weights are applied to each of these channels to get an output of '0' or '1'. This works fine with several samples and complex scenarios. I would like to implement the same in machine learning using a CNN.
I am new to ML and started looking at tutorials, which seem to deal exclusively with image-processing problems: handwriting recognition, feature extraction, etc.
http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/neural_networks.html
I have set up Keras with Theano as the backend. Basic Keras samples work without problems.
What steps do I need to follow in order to achieve the same result using a CNN? I do not understand the use of filters, kernels, and strides in my use case. How do we provide training data to Keras if the pixel channel values and outputs are in the form below?
Pixel #1: f(C1, C2, ..., C8) = 1
Pixel #2: f(C1, C2, ..., C8) = 1
Pixel #3: f(C1, C2, ..., C8) = 0
...
Pixel #N: f(C1, C2, ..., C8) = 1
I think you should treat this the same way you would use a CNN to do semantic segmentation. For an example, look at
https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf
You can use the same architecture as they are using, but for the first layer, instead of using filters for 3 channels, use filters for 8 channels.
For the loss function you can use the same loss function, or something more specific to binary classification.
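A minimal sketch of what that first-layer change could look like in Keras (assuming a channels-last data format; the filter count and kernel size are placeholders):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(64, (3, 3), padding='same', activation='relu',
                 input_shape=(None, None, 8)))   # 8 input channels instead of RGB's 3
# ... the rest of the FCN-style architecture as in the linked implementations ...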
There are several implementations for Keras with a TensorFlow backend:
https://github.com/JihongJu/keras-fcn
https://github.com/aurora95/Keras-FCN
Since the input is in the form of per-pixel channel values, i.e. a short sequence, I would suggest using Convolution1D. Here you take each pixel's channel values as the input and you need to predict an output for each pixel. Try something like this, e.g.:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

# filter counts, kernel sizes and the pool size below are just placeholders
model = Sequential()
model.add(Conv1D(32, 3, strides=1, padding='valid', activation='relu',
                 input_shape=(8, 1)))   # each pixel: 8 channel values, 1 feature per step
model.add(Conv1D(32, 3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
# ... add as many layers as you want ...
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
Use binary_crossentropy as the loss function.
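As a purely hypothetical illustration of how the training data could be fed to such a model (random arrays stand in for the real per-pixel channel values and labels; the batch size and epoch count are placeholders):

import numpy as np

X = np.random.rand(100000, 8).reshape(-1, 8, 1)   # (N pixels, 8 steps, 1 feature) for Conv1D
y = np.random.randint(0, 2, size=(100000, 1))     # 0/1 label per pixel

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, batch_size=256, epochs=10, validation_split=0.1)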
My question is about the details of the frequency domain adaptive filter (fdaf) function provided in the DSP toolbox. This can be called as h = adaptfilt.fdaf which returns a structure, I think, in the variable h. This structure has all the parameters required to implement the filter, and the actual filtering of data is carried out using the function
[y, e] = filter(h, x, d)
where x is the input to be filtered and d is the desired output. y is an estimate of d.
adaptfilt.fdaf(...) can be passed many arguments, but I do not understand the use of most of them, especially LAMBDA and LEAKAGE. The source code for the filter(h,x,d) function can be viewed, and most of it is a straightforward implementation of the overlap-save algorithm (described in J.J. Shynk, "Frequency-domain and multirate adaptive filtering," IEEE Signal Processing Magazine, vol. 9, no. 1, pp. 14-37, Jan. 1992), but the theory does not include anything about the leakage or the lambda parameter (which appears in the filter function as an averaging factor). I am assuming that the designers of the filter function have modified their implementation to make it as general as possible, and so these concepts are related to some general filter theory, but I am unable to find any references to what they mean, how they affect the filter performance, and why they are there in the filter function. Please help me if anyone has any ideas about this, or has used the fdaf function of the Matlab DSP toolbox before.