How can I set a global weight filler in Caffe?

Now I'm writing the weight filler layer by layer, like
layer {
  name: "Convolution1"
  type: "Convolution"
  bottom: "data"
  top: "Convolution1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
  }
}
How can I set a global weight filler type?
Thanks.

It seems there is currently no way to do this. In the caffe.proto file, NetParameter is defined as follows; there is no option such as default_weight_filler or similar.
message NetParameter {
  optional string name = 1; // consider giving the network a name
  // DEPRECATED. See InputParameter. The input blobs to the network.
  repeated string input = 3;
  // DEPRECATED. See InputParameter. The shape of the input blobs.
  repeated BlobShape input_shape = 8;
  // 4D input dimensions -- deprecated. Use "input_shape" instead.
  // If specified, for each input blob there should be four
  // values specifying the num, channels, height and width of the input blob.
  // Thus, there should be a total of (4 * #input) numbers.
  repeated int32 input_dim = 4;
  // Whether the network will force every layer to carry out backward operation.
  // If set False, then whether to carry out backward is determined
  // automatically according to the net structure and learning rates.
  optional bool force_backward = 5 [default = false];
  // The current "state" of the network, including the phase, level, and stage.
  // Some layers may be included/excluded depending on this state and the states
  // specified in the layers' include and exclude fields.
  optional NetState state = 6;
  // Print debugging information about results while running Net::Forward,
  // Net::Backward, and Net::Update.
  optional bool debug_info = 7 [default = false];
  // The layers that make up the net. Each of their configurations, including
  // connectivity and behavior, is specified as a LayerParameter.
  repeated LayerParameter layer = 100; // ID 100 so layers are printed last.
  // DEPRECATED: use 'layer' instead.
  repeated V1LayerParameter layers = 2;
}
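There is no global filler option, but as a workaround (not part of the original answer) you could generate the prototxt from Python with pycaffe's NetSpec and set the filler once in a small helper. A minimal sketch, assuming pycaffe is installed; the helper name conv_xavier and the layer shapes are made up:
# Sketch: build the prototxt with pycaffe's NetSpec so the weight filler
# is specified once, in a helper, instead of in every layer definition.
import caffe
from caffe import layers as L

def conv_xavier(bottom, nout, ks):  # hypothetical helper
    # Every convolution built through this helper gets the same filler.
    return L.Convolution(bottom, num_output=nout, kernel_size=ks,
                         weight_filler=dict(type='xavier'))

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 1, 28, 28]))
n.Convolution1 = conv_xavier(n.data, 20, 5)
n.Convolution2 = conv_xavier(n.Convolution1, 50, 5)

with open('net.prototxt', 'w') as f:
    f.write(str(n.to_proto()))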

Related

Dot Product vs Element-Wise Multiplication for Backpropagation

I am trying to backpropagate a very primitive / simple ANN.
I've almost got it working. I'm trying to implement the formulas, and the article I'm reading does not specify whether to use the dot product, element-wise multiplication, or some other kind of multiplication.
Article: https://ml-cheatsheet.readthedocs.io/en/latest/backpropagation.html
Here's the formula for calculating the error (or delta) of a single hidden layer; the article gives Eh = Eo * Wo * R'(Zh), or, as I read it in the context of my algorithm,
Delta = prev_delta * prev_weight * zprime
where delta is the error of this layer, prev_delta is the delta of the previous layer, prev_weight is the weight of the previous layer, and zprime is the derivative of the activation function of the current layer.
Also, for a single output layer the article gives Eo = (yHat - y) * R'(Zo), or, as I read it in the context of my algorithm,
Delta = (output - target) % zprime;
where output is the final output of the feed-forward pass and target is the target values.
I've written this code to run this calculation:
void Layer::backward(Matrix & prev_delta, Matrix & prev_weight) {
    // all variables are matrices
    // except for prev_layer, that's a pointer to a layer object.
    // I'm using Armadillo for linear algebra / matrices
    // delta, weight, output, and zprime refer to the current layer.
    // prev_delta, prev_weight belong to the previous layer.
    if (next_layer == nullptr) {
        // if next layer is null, this is the output layer.
        // in that case, prev_delta is target.
        // yHat - y * R'(Zo)
        delta = (output - prev_delta) * zprime;
    }
    else {
        // Eo * Wo * R'(Zh)
        delta = prev_delta * prev_weight * zprime;
    }
    // tell the next layer to backpropagate
    if (prev_layer != nullptr)
        prev_layer -> backward(delta, weight);
}
matrix * matrix indicates a matrix multiplication (dot product)
matrix % matrix indicates element-wise multiplication
The issue I'm having is that these matrices don't seem to multiply properly. I've made sure everything lines up the same way the article has it, but the pieces just don't seem to fit. How should these matrices be multiplied to get the result?
Edit: to clarify, I get "invalid size" errors when I try to take the dot product of these matrices. I've tried using element-wise multiplication, but then things get weird there too.
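For reference, here is a minimal NumPy sketch of the usual convention (not from the original thread; the layer sizes are made up): the downstream delta is multiplied by the transposed weight matrix, and the activation derivative is applied element-wise.
import numpy as np

# Made-up sizes: 3 inputs -> 4 hidden units -> 2 outputs, single sample.
rng = np.random.default_rng(0)
W_o = rng.normal(size=(2, 4))        # output-layer weights (out x in)
z_h = rng.normal(size=(4, 1))        # hidden-layer pre-activations
z_o = rng.normal(size=(2, 1))        # output-layer pre-activations
target = np.zeros((2, 1))

relu = lambda z: np.maximum(z, 0)
relu_prime = lambda z: (z > 0).astype(float)
output = relu(z_o)

# Output layer: element-wise product with the activation derivative.
delta_o = (output - target) * relu_prime(z_o)     # shape (2, 1)

# Hidden layer: matrix product with the TRANSPOSED downstream weights,
# then element-wise product with this layer's activation derivative.
delta_h = (W_o.T @ delta_o) * relu_prime(z_h)     # shape (4, 1)

print(delta_o.shape, delta_h.shape)               # (2, 1) (4, 1)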

16 bit logic/computer simulation in Swift

I’m trying to make a basic simulation of a 16 bit computer with Swift. The computer will feature
An ALU
2 registers
That’s all. I have enough knowledge to create these parts visually and understand how they work, but it has become increasingly difficult to make larger components with more inputs while using my current approach.
My current approach has been to wrap each component in a struct. This worked early on, but it is becoming increasingly difficult to manage multiple inputs this way while staying true to the principles of computer science.
The primary issue is that the components aren’t updating with the clock signal. I have the output of the component updating when get is called on the output variable, c. This, however, neglects the idea of a clock signal and will likely cause further problems later on.
It’s also difficult to make getters and setters for each variable without getting errors about mutability. Although I have worked through these errors, they are annoying and slow down the development process.
The last big issue is updating the output. The output doesn’t update when the inputs change; it updates when told to do so. This isn’t accurate to the qualities of real computers and is a fundamental error.
This is an example. It is the ALU I mentioned earlier. It takes two 16 bit inputs and outputs 16 bits. It has two unary ALUs, which can make a 16 bit number zero, negate it, or both. Lastly, it either adds the inputs or performs a bitwise AND on them, depending on the f flag, and inverts the output if the no flag is selected.
struct ALU {
    //Operations are done in the order listed. For example, if zx and nx are 1, it first makes input 1 zero and then inverts it.
    var x : [Int] //Input 1
    var y : [Int] //Input 2
    var zx : Int //Make input 1 zero
    var zy : Int //Make input 2 zero
    var nx : Int //Invert input 1
    var ny : Int //Invert input 2
    var f : Int //If 0, do a bitwise AND operation. If 1, add the inputs
    var no : Int //Invert the output
    public var c : [Int] { //Output
        get {
            //Numbers first go through unary ALUs. These can negate the input (and output the value), return 0, or return the inverse of 0. They then undergo the operation specified by f, either addition or a bitwise AND operation, and are negated if no is 1.
            let ux = UnaryALU(z: zx, n: nx, x: x).c //Unary ALU. See comments for more
            let uy = UnaryALU(z: zy, n: ny, x: y).c
            let fd = select16(s: f, d1: Add16(a: ux, b: uy).c, d0: and16(a: ux, b: uy).c).c //Adds a 16 bit number or does a bitwise AND operation. For more on select16, see the line below.
            let out = select16(s: no, d1: not16(a: fd).c, d0: fd).c //Selects a number. If s is 1, it returns d1. If s is 0, it returns d0. d0 is the value returned by fd, while d1 is the inverse.
            return out
        }
    }
    public init(x:[Int],y:[Int],zx:Int,zy:Int,nx:Int,ny:Int,f:Int,no:Int) {
        self.x = x
        self.y = y
        self.zx = zx
        self.zy = zy
        self.nx = nx
        self.ny = ny
        self.f = f
        self.no = no
    }
}
I use c for the output variable, store values with multiple bits in Int arrays, and store single bits in Int values.
I’m doing this on Swift Playgrounds 3.0 with Swift 5.0 on a 6th generation iPad. I’m storing each component or set of components in a separate file in a module, which is why some variables and all structs are marked public. I would greatly appreciate any help. Thanks in advance.
So, I’ve completely redone my approach and have found a way to bypass the issues I was facing. What I’ve done is make what I call “tracker variables” for each input. When get is called for a variable, it returns the value of the tracker assigned to it. When set is called, it calls an update() function that updates the output of the circuit and also updates the value of the tracker. This essentially creates a ‘copy’ of each variable. I did this to prevent any infinite loops.
Trackers are unfortunately necessary here. I’ll demonstrate why
var variable : Type {
    get {
        return variable //Calls the getter again, resulting in an infinite loop
    }
    set {
        //Do something
    }
}
In order to make a setter, Swift requires a getter to be made as well. In this example, calling variable simply calls get again, resulting in a never-ending cascade of calls to get. Tracker variables are a workaround that use minimal extra code.
Using an update method makes sure the output responds to a change in any input. This also works with a clock signal, due to the architecture of the components themselves. Although it appears to act as the clock, it does not.
For example, in data flip-flops, the clock signal is passed into gates. All a clock signal does is deactivate a component when the signal is off. So, I can implement that within update() while remaining faithful to reality.
Here’s an example of a half adder. Note that the tracker variables I mentioned are marked by an underscore in front of their name. It has two inputs, x and y, which are 1 bit each. It also has two outputs, high and low, also known as carry and sum. The outputs are also one bit.
struct halfAdder {
    private var _x : Bool //Tracker for x
    public var x: Bool { //Input 1
        get {
            return _x //Return the tracker’s value
        }
        set {
            _x = newValue //Store the new value in the tracker
            update() //Update the output
        }
    }
    private var _y : Bool //Tracker for y
    public var y: Bool { //Input 2
        get {
            return _y
        }
        set {
            _y = newValue
            update()
        }
    }
    public var high : Bool //High output, or ‘carry’
    public var low : Bool //Low output, or ‘sum’
    internal mutating func update(){ //Updates the output
        high = x && y //AND gate, sets the high output
        low = (x || y) && !(x && y) //XOR gate, sets the low output
    }
    public init(x:Bool, y:Bool){ //Initializer
        self.high = false //This will change when the variables are set, ensuring a correct output.
        self.low = false //See above
        self._x = x //Setting trackers and variables
        self._y = y
        self.x = x
        self.y = y
    }
}
This is a very clean way, save for the trackers, to accomplish this task. It can trivially be expanded to fit any number of bits by using arrays of Bool instead of a single value. It respects the clock signal, updates the output when the inputs change, and is very similar to how real computers behave.

How does caffe compute gradient when there are multiple branches?

I'm now reading the Caffe source code, and this question occurred to me.
Take caffe/relu_layer.cpp for example. When computing the gradient, from
template <typename Dtype>
void ReLULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    const Dtype* bottom_data = bottom[0]->cpu_data();
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
    const int count = bottom[0]->count();
    Dtype negative_slope = this->layer_param_.relu_param().negative_slope();
    for (int i = 0; i < count; ++i) {
      bottom_diff[i] = top_diff[i] * ((bottom_data[i] > 0)
          + negative_slope * (bottom_data[i] <= 0));
    }
  }
}
we can see that a value is assigned to bottom_diff, which is the gradient with respect to the corresponding bottom blob.
However, when multiple layers take one blob as input, e.g., when multiple ReLU layers are stacked on the same blob, how does Caffe handle the gradient computation? The first ReLU layer writes bottom_diff, and it seems the second ReLU layer just overwrites it instead of adding the two gradients.
I didn't see gradient summation performed anywhere, and I am confused. Please let me know if I have missed something important. Thanks a lot.
Caffe automatically inserts a Split layer whenever a top blob is used as the bottom of multiple layers. This is done inside Net<Dtype>::Init(...) by a call to InsertSplits(...) from caffe/util/insert_splits.cpp. During the backward pass, the Split layer sums the diffs coming from each of its top blobs into the diff of the shared bottom blob, so the gradients from the different branches are accumulated rather than overwritten.
Example:
Original network in NetParameter protobuf object (nodes here are layers):
data ---> conv1 -> conv2 -> ...
      \-> somelayer -> ...
Net Layers in memory after Net::Init():
data -> split ---> conv1 -> conv2 -> ...
               \-> somelayer -> ...
(An interesting detail, by the way: .diff in activation Blobs is assigned to by Backward(), while .diff in layer learnable parameters is added to by Backward().)
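To illustrate why the summation is needed (not from the original answer), here is a small NumPy sketch of the multivariable chain rule that the Split layer's backward pass implements; the two-branch loss is made up:
import numpy as np

# Hypothetical two-branch setup: one blob x feeds two branches whose outputs
# are combined into a scalar loss: loss = sum(relu(x)) + sum(3 * x).
x = np.array([-1.0, 0.5, 2.0])

grad_branch1 = (x > 0).astype(float)   # gradient from the ReLU branch
grad_branch2 = np.full_like(x, 3.0)    # gradient from the linear branch

# The gradient w.r.t. the shared blob is the SUM of the branch gradients;
# this is what the Split layer accumulates during Backward().
grad_x = grad_branch1 + grad_branch2

# Finite-difference check.
def loss(v):
    return np.maximum(v, 0).sum() + (3 * v).sum()

eps = 1e-6
numeric = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(grad_x, numeric))    # True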

Specifying Input/Output dimensions for CoreML 2 model with Flexible Shapes

I managed to create a CoreML 2.0 model with flexible input/output shape sizes.
I can't figure out how to set the size in my Xcode project, however. If I set the input pixel buffer size to 2048x2048, the output pixel buffer is still 1536x1536. If I set it to 768x768, the resulting pixel buffer is still 1536x1536, but is blank outside the 768x768 region.
I examined the automatically generated Swift model class and don't see any clues there.
I can't find a single example anywhere showing how to use the "Flexibility" sizes.
In the WWDC 2018 Session 708 "What's New in Core ML", Part 1 it states:
This means that now you have to ship a single model. You don't have to have any redundant code. And if you need to switch between standard definition and high definition, you can do it much faster because we don't need to reload the model from scratch; we just need to resize it. You have two options to specify the flexibility of the model. You can define a range for its dimension, so you can define a minimal width and height and the maximum width and height. And then at inference pick any value in between. But there is also another way. You can enumerate all the shapes that you are going to use. For example, all different aspect ratios, all different resolutions, and this is better for performance. Core ML knows more about your use case earlier, so it can -- it has the opportunities of performing more optimizations.
They say "we just need to resize it". It's so frustrating because they don't tell you how to just resize it! They also say "And then at inference pick any value in between", but offer no clue how to pick the value in between!
Here is how I added the flexible shape sizes:
import coremltools
from coremltools.models.neural_network import flexible_shape_utils
spec = coremltools.utils.load_spec('mymodel_fxedShape.mlmodel')
img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
img_size_ranges.add_height_range(640, 2048)
img_size_ranges.add_width_range(640, 2048)
flexible_shape_utils.update_image_size_range(spec, feature_name='inputImage', size_range=img_size_ranges)
flexible_shape_utils.update_image_size_range(spec, feature_name='outputImage', size_range=img_size_ranges)
coremltools.utils.save_spec(spec, 'myModel.mlmodel')
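The session quoted above also says you can enumerate the exact shapes instead of giving a range, which is better for performance. For reference (not part of the original question), a sketch of that alternative using coremltools' flexible_shape_utils; the file and feature names follow the snippet above, and the specific sizes are made up:
import coremltools
from coremltools.models.neural_network import flexible_shape_utils

spec = coremltools.utils.load_spec('mymodel_fxedShape.mlmodel')

# Enumerate the exact sizes that will be used at inference time.
sizes = [flexible_shape_utils.NeuralNetworkImageSize(height=h, width=w)
         for (h, w) in [(768, 768), (1536, 1536), (2048, 2048)]]

flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='inputImage', sizes=sizes)
flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='outputImage', sizes=sizes)

coremltools.utils.save_spec(spec, 'myModel_enumerated.mlmodel')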
Here is the description of the model:
description {
  input {
    name: "inputImage"
    shortDescription: "Image to stylize"
    type {
      imageType {
        width: 1536
        height: 1536
        colorSpace: BGR
        imageSizeRange {
          widthRange {
            lowerBound: 640
            upperBound: 2048
          }
          heightRange {
            lowerBound: 640
            upperBound: 2048
          }
        }
      }
    }
  }
  output {
    name: "outputImage"
    shortDescription: "Stylized image"
    type {
      imageType {
        width: 1536
        height: 1536
        colorSpace: BGR
        imageSizeRange {
          widthRange {
            lowerBound: 640
            upperBound: 2048
          }
          heightRange {
            lowerBound: 640
            upperBound: 2048
          }
        }
      }
    }
  }
}
There are two layers using "outputShape":
layers {
  name: "SpatialFullConvolution_63"
  input: "Sequential_53"
  output: "SpatialFullConvolution_63_output"
  convolution {
    outputChannels: 16
    kernelChannels: 32
    nGroups: 1
    kernelSize: 3
    kernelSize: 3
    stride: 2
    stride: 2
    dilationFactor: 1
    dilationFactor: 1
    valid {
      paddingAmounts {
        borderAmounts {
        }
        borderAmounts {
        }
      }
    }
    isDeconvolution: true
    hasBias: true
    weights {
    }
    bias {
    }
    outputShape: 770
    outputShape: 770
  }
}
...relu layer...
layers {
  name: "SpatialFullConvolution_67"
  input: "ReLU_66"
  output: "SpatialFullConvolution_67_output"
  convolution {
    outputChannels: 8
    kernelChannels: 16
    nGroups: 1
    kernelSize: 3
    kernelSize: 3
    stride: 2
    stride: 2
    dilationFactor: 1
    dilationFactor: 1
    valid {
      paddingAmounts {
        borderAmounts {
        }
        borderAmounts {
        }
      }
    }
    isDeconvolution: true
    hasBias: true
    weights {
    }
    bias {
    }
    outputShape: 1538
    outputShape: 1538
  }
}
I am now trying to figure out how to remove the outputShape from those two layers.
>>> layer = spec.neuralNetwork.layers[49]
>>> layer.convolution.outputShape
[1538L, 1538L]
I tried setting it to []:
layer.convolution.outputShape = []
To a Shape:
layer.convolution.outputShape = flexible_shape_utils.Shape(())
Whatever I try, I get the error:
TypeError: Can't set composite field
Do I have to create a new layer and then link it to the layer that is outputting to it and the layer it is outputting to?
The issue in this case was that there were layers present in the model that used a fixed shape for their outputShapes. For example:
>>> layer = spec.neuralNetwork.layers[49]
>>> layer.convolution.outputShape
[1538L, 1538L]
The model in question was indeed fully convolutional, so before converting to CoreML, it worked with any input and output shapes.
I was able to delete the fixed outputShape with this command:
layer = spec.neuralNetwork.layers[49]
del layer.convolution.outputShape[:]
After doing that, the model worked with flexible input and output shapes.
All credit for this answer goes to Matthijs Hollemans.
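For reference, a short sketch (not from the original answer) that puts the fix together: it loads the spec, clears the fixed outputShape from every deconvolution layer rather than just layer 49, and saves the result. The file names are placeholders.
import coremltools

spec = coremltools.utils.load_spec('myModel.mlmodel')  # placeholder path

for layer in spec.neuralNetwork.layers:
    if not layer.HasField('convolution'):
        continue
    conv = layer.convolution
    # Only deconvolution layers carry a fixed outputShape here; clearing the
    # repeated field (instead of assigning to it) avoids the
    # "TypeError: Can't set composite field" error.
    if conv.isDeconvolution and len(conv.outputShape) > 0:
        del conv.outputShape[:]

coremltools.utils.save_spec(spec, 'myModel_flexible.mlmodel')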

How can I calculate the software acceptance filter mask value?

I would like to know how we can calculate the software acceptance filter mask for a set of standard CAN IDs. It would be great if someone could explain this with an example.
Also, please suggest some links/materials for learning about CAN stack software implementation.
Thanks in advance.
Let me explain this with an example:
Suppose the user wants to receive only messages with IDs 0x8Z (where Z = 1, 3, 5, 7). Here is how the values of the mask register and the acceptance register can be calculated:
0x81 = 1000 0001
0x83 = 1000 0011
0x85 = 1000 0101
0x87 = 1000 0111
Mask Register = 1111 1001
Compare bit 0 of all the IDs: if it is the same in every ID, the corresponding mask bit is "1", otherwise it is "0". Then compare bit 1, then bit 2, and so on...
In our case only bits 1 and 2 (counting from the least-significant bit as bit 0) differ across the IDs, which is how we got the "Mask Register" value of 1111 1001.
For the acceptance register value, take any of the allowed message IDs. In our case it could be 0x81, 0x83, 0x85, or 0x87.
While programming, the check can be done like this (note the bitwise &, not the logical &&, and that both sides are masked):
if ((Incoming_ID & Mask_Register) == (Acceptance_Register & Mask_Register))
{
    //Receive Message
}
else
{
    //Discard Message
}
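Not part of the original answer, but here is a short Python sketch of the same calculation, deriving the mask and acceptance values from a list of IDs (variable names are made up):
# Derive the mask and acceptance values for a "1 = bit must match" mask,
# as in the example above (11-bit standard CAN IDs).
wanted_ids = [0x81, 0x83, 0x85, 0x87]

common_ones = wanted_ids[0]      # bits that are 1 in every wanted ID
common_zeros = ~wanted_ids[0]    # bits that are 0 in every wanted ID
for i in wanted_ids[1:]:
    common_ones &= i
    common_zeros &= ~i

mask = (common_ones | common_zeros) & 0x7FF   # identical bits -> must match
acceptance = wanted_ids[0]                    # any allowed ID works

print(f"mask = {mask:011b}, acceptance = {acceptance:011b}")

def accepted(incoming_id):
    # Same check as the C snippet above.
    return (incoming_id & mask) == (acceptance & mask)

assert all(accepted(i) for i in wanted_ids)
assert not accepted(0x89)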
Hope it helps.
Since this filtering is done in hardware, it is fairly primitive. Usually the calculation involves two registers: a mask and a filter. The equivalent logic in C would be:
/* dsPIC style; mask specifies "do care" bits */
if ((arbitrationId & mask) == filter) {
    /* Message accepted; rx interrupt triggered */
}
/* Accept all */
mask = 0;
filter = 0;
/* Accept CANopen default connection set (excluding SYNC and NMT) */
mask = 0x7F;
filter = node_id;
Or
/* SJA 1000 style; mask specifies "do not care" bits */
if ((arbitrationId & ~mask) == filter) {
    /* Message accepted; rx interrupt triggered */
}
/* Accept all */
mask = ~0;
filter = 0;
/* Accept CANopen default connection set (excluding SYNC and NMT) */
mask = ~0x7F;
filter = node_id;
The number of masks, the number of filters, if and how the filters are enabled, and the arrangement of ID bits within registers is all hardware dependent. To give you a more concrete answer would require details about the specific hardware being used.
Basic information on CANbus can be found here:
The repository of all human knowledge
Microcontroller.com tutorial
