I am trying to implement convolution by hand in Julia. I'm not too familiar with image processing or Julia, so maybe I'm biting off more than I can chew.
Anyway, when I apply this method with a 3x3 edge filter edge = [0 -1 0; -1 4 -1; 0 -1 0] as convolve(img, edge), I get an error saying that my values exceed the allowed range of the RGBA type.
Code
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_h, half_kernel_h)
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col])))
        end
    end
    cpy_im
end
Error (original)
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
See the READMEs for FixedPointNumbers and ColorTypes for more information.
I am able to identify a simple case where such an error may occur (a white pixel surrounded by all black pixels, or vice versa). I tried "fixing" this by following the advice here from another Stack Overflow question, but I get more errors, to the effect of Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
Code attempting to apply solution from the other SO question
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ [-half_kernel_h, half_kernel_h]
            from_col, to_col = col .+ [-half_kernel_h, half_kernel_h]
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col] ./ 2 .+ 128)))
        end
    end
    cpy_im
end
Corresponding error
MethodError: no method matching +(::ColorTypes.RGBA{Float32}, ::Int64)
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
Closest candidates are:
+(::Any, ::Any, !Matched::Any, !Matched::Any...) at operators.jl:591
+(!Matched::T, ::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at int.jl:87
+(!Matched::ChainRulesCore.AbstractThunk, ::Any) at ~/.julia/packages/ChainRulesCore/a4mIA/src/tangent_arithmetic.jl:122
Now, I can try using convert etc., but when I look at the big picture, I start to wonder what the idiomatic way of solving this problem in Julia is. And that is my question. If you had to implement convolution by hand from scratch, what would be a good way to do so?
EDIT: Here is an implementation that works, though it may not be idiomatic:
using Images  # provides red, green, blue, alpha, RGBA

function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_h, half_kernel_w) = size(kernel) .÷ 2
    (height, width) = size(img)
    cpy_im = copy(img)
    # println(Dict("width" => width, "height" => height, "half_kernel_w" => half_kernel_w, "half_kernel_h" => half_kernel_h, "row range" => 1+half_kernel_h:(height-half_kernel_h), "col range" => 1+half_kernel_w:(width-half_kernel_w)))
    for row ∈ 1+half_kernel_h:(height-half_kernel_h)
        for col ∈ 1+half_kernel_w:(width-half_kernel_w)
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_w, half_kernel_w)
            vals = Dict()
            # convolve each channel separately, clamping the result to [0, 1]
            for method ∈ [red, green, blue, alpha]
                x = sum((kernel .* method.(img[from_row:to_row, from_col:to_col])))
                if x > 1
                    x = 1
                elseif x < 0
                    x = 0
                end
                vals[method] = x
            end
            cpy_im[row, col] = RGBA(vals[red], vals[green], vals[blue], vals[alpha])
        end
    end
    cpy_im
end
First of all, the error
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
should direct you to the docs of the ColorVectorSpace package, where you will learn that using ColorVectorSpace enables math on RGB types. (The absence of default support is deliberate, because the way the image-processing community treats RGB is colorimetrically wrong. But everyone has agreed not to care, hence the ColorVectorSpace package.)
Second,
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
indicates that you're trying to write negative entries with an element type, N0f8, that can't support such values. Instead of cpy_im = copy(img), consider something like cpy_im = [float(c) for c in img] which will guarantee a floating-point representation that can support negative values.
Third, I would recommend avoiding steps like RGB.(img...) when nothing about your function otherwise addresses whether images are numeric, grayscale, or color. Fundamentally the only operations you need are scalar multiplication and addition, and it's better to write your algorithm generically leveraging only those two properties.
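For concreteness, here is a minimal sketch of that generic approach (my illustration, not part of the original answer; it assumes an odd-sized kernel and, like the code in the question, computes cross-correlation without flipping the kernel):

using ColorVectorSpace  # only needed when the element type is a color

function convolve(img::AbstractMatrix, kernel::AbstractMatrix)
    half_h, half_w = size(kernel) .÷ 2
    out = [float(c) for c in img]   # widen so negative results are representable
    for col in 1+half_w:size(img, 2)-half_w, row in 1+half_h:size(img, 1)-half_h
        acc = zero(eltype(out))
        for j in -half_w:half_w, i in -half_h:half_h
            acc += kernel[i+half_h+1, j+half_w+1] * float(img[row+i, col+j])
        end
        out[row, col] = acc
    end
    return out
end

Nothing here is color-specific: the same code runs on a Matrix{Float64} and, with ColorVectorSpace loaded, on a matrix of RGB values.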
Tim Holy's answer above is correct - keep things simple and avoid relying on third-party packages when you don't need to.
I might point out another option you may not have considered: use a different algorithm. What you are implementing is the naive method, whereas many convolution routines use different algorithms for different sizes, such as im2col and Winograd (you can look these two up; I have a website that covers the idea behind both here).
The im2col routine might be worth doing, as essentially it lets you break the routine into several pieces:
1. Unroll all 'regions' of the image that will be dotted with the filter/kernel, and stack them together into a single matrix.
2. Do a matrix-multiply of the unrolled input with the filter/kernel.
3. Roll the output back into the correct shape.
It might be more complicated overall, but each part is simpler, so you may find this easier to do. A matrix multiply routine is definitely quite easy to implement. For 1x1 (single-pixel) convolutions where the image and filter have the same ordering (i.e. NCHW images and FCHW filter) the first and last steps are trivial as essentially no rolling/unrolling is necessary.
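As a rough sketch of those three pieces (my illustration, assuming a plain single-channel 2-D numeric image rather than the NCHW layout mentioned above, and computing cross-correlation like the naive version):

function im2col_conv(img::AbstractMatrix{<:Number}, kernel::AbstractMatrix{<:Number})
    kh, kw = size(kernel)
    oh, ow = size(img, 1) - kh + 1, size(img, 2) - kw + 1
    # 1. unroll every kh×kw patch into one column of a (kh*kw) × (oh*ow) matrix
    cols = Matrix{Float64}(undef, kh * kw, oh * ow)
    for j in 1:ow, i in 1:oh
        cols[:, (j - 1) * oh + i] = vec(float.(img[i:i+kh-1, j:j+kw-1]))
    end
    # 2. a single matrix multiply performs all the dot products at once
    flat = vec(float.(kernel))' * cols
    # 3. roll the 1 × (oh*ow) result back into the output shape
    return reshape(flat, oh, ow)
end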
A final word of advice: start simpler and add in the code to handle edge cases afterwards; convolutions are definitely fiddly to work with.
Hope this helps!
This has become quite a frustrating question, but I've asked in the Coursera discussions and they won't help. Below is the question:
I've gotten it wrong 6 times now. How do I normalize the feature? Hints are all I'm asking for.
I've gotten it wrong 6 times now. How do I normalize the feature? Hints are all I'm asking for. I'm assuming x_2^(2) is the value 5184, unless I am supposed to add the x_0 column of 1's, which they don't mention here but which he certainly mentions in the lectures when talking about creating the design matrix X; in that case x_2^(2) would be the value 72. Assuming one or the other is right (I'm playing a guessing game), what should I use to normalize it? He talks about three different ways to normalize in the lectures: one using the maximum value, another using the range (max minus min), and another using the standard deviation. They want an answer correct to the hundredths. Which one am I to use? This is so confusing.
...use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.
So for any individual feature f:
f_norm = (f - f_mean) / (f_max - f_min)
e.g. for x2 = (midterm exam)^2 = {7921, 5184, 8836, 4761}
> x2 <- c(7921, 5184, 8836, 4761)
> mean(x2)
6675.5
> max(x2) - min(x2)
4075
> (x2 - mean(x2)) / (max(x2) - min(x2))
0.306 -0.366 0.530 -0.470
Hence norm(5184) = -0.366
(using R language, which is great at vectorizing expressions like this)
I agree it's confusing they used the notation x2 (2) to mean x2 (norm) or x2'
EDIT: in practice everyone calls the builtin scale(...) function, which does the same thing.
It's asking to normalize the second feature under second column using both feature scaling and mean normalization. Therefore,
(5184 - 6675.5) / 4075 = -0.366
Usually we normalize all of them to have zero mean and lie between [-1, 1].
You can do that easily by dividing by the maximum of the absolute value and then removing the mean of the samples.
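A literal sketch of that recipe (my illustration, in Julia, using the (midterm exam)^2 feature from the question):

x = [7921, 5184, 8836, 4761]                          # the (midterm exam)^2 feature
scaled = x ./ maximum(abs.(x))                        # divide by the max absolute value
normalized = scaled .- sum(scaled) / length(scaled)   # then remove the mean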
"I'm assuming x_2^(2) is the value 5184" is this because it's the second item in the list and using the subscript _2? x_2 is just a variable identity in maths, it applies to all rows in the list. Note that the highest raw mid-term exam result (i.e. that which is not squared) goes down on the final test and the lowest raw mid-term result increases the most for the final exam result. Theta is a fixed value, a coefficient, so somewhere your normalisation of x_1 and x_2 values must become (EDIT: not negative, less than 1) in order to allow for this behaviour. That should hopefully give you a starting basis, by identifying where the pivot point is.
I had the same problem. In my case, I was computing the average as the maximum x2 value minus the minimum x2 value divided by two, i.e. (8836 - 4761) / 2, instead of the sum of the x2 values divided by the number of examples.
For the same training set, I got the question as:
Q. What is the normalized feature x^(3)_1?
So it's the 3rd training example and 1st feature, which is 94 in the table above.
The normalized form is
x_norm = (x - mean(x)) / range(x)
The values are:
x = 94
mean = (89 + 72 + 94 + 69) / 4 = 81
range = 94 - 69 = 25
Normalized x = (94 - 81) / 25 = 0.52
I'm taking this course at the moment, and a really trivial mistake I made the first time I answered this question was using a comma instead of a dot in the answer, since I did it by hand and in my country we use a comma to denote decimals (e.g. 0,52 instead of 0.52).
The second time I tried, I used a dot and it worked fine.
I have a set of tags (different from the conventional Name, Place, Object etc.). In my case, they are domain-specific and I call them: Entity, Action, Incident. I want to use these as a seed for extracting more named-entities.
I came across this paper: "Efficient Support Vector Classifiers for Named Entity Recognition" by Isozaki et al. While I like the idea of using Support Vector Machines for doing named-entity recognition, I am stuck on how to encode the feature vector. For their paper, this is what they say:
For instance, the words in “President George Herbert Bush said Clinton is . . . ” are classified as follows: “President” = OTHER, “George” = PERSON-BEGIN, “Herbert” = PERSON-MIDDLE, “Bush” = PERSON-END, “said” = OTHER, “Clinton” = PERSON-SINGLE, “is” = OTHER. In this way, the first word of a person’s name is labeled as PERSON-BEGIN. The last word is labeled as PERSON-END. Other words in the name are PERSON-MIDDLE. If a person’s name is expressed by a single word, it is labeled as PERSON-SINGLE. If a word does not belong to any named entities, it is labeled as OTHER. Since IREX defines eight NE classes, words are classified into 33 categories.
Each sample is represented by 15 features because each word has three features (part-of-speech tag, character type, and the word itself), and two preceding words and two succeeding words are also used for context dependence. Although infrequent features are usually removed to prevent overfitting, we use all features because SVMs are robust. Each sample is represented by a long binary vector, i.e., a sequence of 0 (false) and 1 (true). For instance, “Bush” in the above example is represented by a vector x = x[1] ... x[D] described below. Only 15 elements are 1.
x[1] = 0 // Current word is not ‘Alice’
x[2] = 1 // Current word is ‘Bush’
x[3] = 0 // Current word is not ‘Charlie’
x[15029] = 1 // Current POS is a proper noun
x[15030] = 0 // Current POS is not a verb
x[39181] = 0 // Previous word is not ‘Henry’
x[39182] = 1 // Previous word is ‘Herbert’
I don't really understand how the binary vector here is being constructed. I know I am missing a subtle point but can someone help me understand this?
There is a bag-of-words lexicon-building step that they omit.
Basically you have to build a map from (non-rare) words in the training set to indices. Say you have 20k unique words in your training set; then you'll have a mapping from every word in the training set to [0, 20000).
Then the feature vector is basically a concatenation of a few very sparse vectors that have a 1 corresponding to a particular word and 19,999 0s, then a 1 for a particular POS and 50 other 0s for non-active POS, and so on. This is generally called a one-hot encoding. http://en.wikipedia.org/wiki/One-hot
def encode_word_feature(word, POStag, char_type, word_index_mapping, POS_index_mapping, char_type_index_mapping):
    # a sparsely encoded vector would make more sense than a dense list, but this is clearer
    ret = [0] * (len(word_index_mapping) + len(POS_index_mapping) + len(char_type_index_mapping))
    so_far = 0
    ret[word_index_mapping[word] + so_far] = 1
    so_far += len(word_index_mapping)
    ret[POS_index_mapping[POStag] + so_far] = 1
    so_far += len(POS_index_mapping)
    ret[char_type_index_mapping[char_type] + so_far] = 1
    return ret
def encode_context(context):
    return (encode_word_feature(context.two_words_ago, context.two_pos_ago, context.two_char_types_ago,
                                word_index_mapping, POS_index_mapping, char_type_index_mapping) +
            encode_word_feature(context.one_word_ago, context.one_pos_ago, context.one_char_types_ago,
                                word_index_mapping, POS_index_mapping, char_type_index_mapping) +
            ...)  # pattern continues for the current word and the two following words
So your feature vector is about size 100k with a little extra for POS and char tags, and is almost entirely 0s, except for 15 1s in positions picked according to your feature to index mappings.
I need to write a function that takes 4 bytes as input, performs a reversible linear transformation on this, and returns it as 4 bytes.
But wait, there is more: it also has to be distributive, so changing one byte on the input should affect all 4 output bytes.
The issues:
if I use multiplication it won't be reversible after it is reduced mod 256 by being stored as a byte (and it needs to stay as a byte)
if I use addition it can't be reversible and distributive
One solution:
I could create an array of bytes 256^4 long and fill it in as a one-to-one mapping. This would work, but there are issues: it means I have to search a graph of size 256^8, because I have to search for free numbers for every value (and note the distributivity should be pseudo-random, keyed off a 64*64 array of bytes). This solution also has the MINOR (lol) issue of needing 8GB of RAM, making it nonsense.
The domain of the input is the same as the domain of the output; every input has a unique output, in other words a one-to-one mapping. As I noted under "one solution", this is very possible, and I have used that method when a smaller domain (just 256) was in question. The fact is, as the numbers get big that method becomes extraordinarily inefficient: roughly O(n^5) at best and O(n^8) at worst, with similarly poor memory usage.
I was wondering if there is a clever way to do it. In a nutshell, it's a one-to-one mapping over a domain of 4 bytes (256^4 values). Oh, and such simple things as N+1 can't be used; it has to be keyed off a 64*64 array of byte values that are pseudo-random but recreatable for the reverse transformation.
Balanced Block Mixers are exactly what you're looking for.
Who knew?
Edit! It is not possible, if you indeed want a linear transformation. Here's the mathy solution:
You've got four bytes, a_1, a_2, a_3, a_4, which we'll think of as a vector a with 4 components, each of which is a number mod 256. A linear transformation is just a 4x4 matrix M whose elements are also numbers mod 256. You have two conditions:
From Ma, we can deduce a (this means that M is an invertible matrix).
If a and a' differ in a single coordinate, then Ma and Ma' must differ in every coordinate.
Condition (2) is a little trickier, but here's what it means. Since M is a linear transformation, we know that
M(a - a') = Ma - Ma'
On the left, since a and a' differ in a single coordinate, a - a' has exactly one nonzero coordinate. On the right, since Ma and Ma' must differ in every coordinate, Ma - Ma' must have every coordinate nonzero.
So the matrix M must take a vector with a single nonzero coordinate to one with all nonzero coordinates. So we just need every entry of M to be a non-zero-divisor mod 256, i.e., to be odd.
Going back to condition (1), what does it mean for M to be invertible? Since we're considering it mod 256, we just need its determinant to be invertible mod 256; that is, its determinant must be odd.
So you need a 4x4 matrix with odd entries mod 256 whose determinant is odd. But this is impossible! Why? The determinant is computed by summing various products of entries. For a 4x4 matrix, there are 4! = 24 different summands, and each one, being a product of odd entries, is odd. But the sum of 24 odd numbers is even, so the determinant of such a matrix must be even!
Here are your requirements as I understand them:
Let B be the space of bytes. You want a one-to-one (and thus onto) function f: B^4 -> B^4.
If you change any single input byte, then all output bytes change.
Here's the simplest solution I have thus far. I avoided posting for a while because I kept trying to come up with a better solution, but I haven't thought of anything.
Okay, first of all, we need a function g: B -> B which takes a single byte and returns a single byte. This function must have two properties: g(x) is reversible, and the map x -> x^g(x) is also reversible. [Note: ^ is the XOR operator.] Any such g will do, but I will define a specific one later.
Given such a g, we define f by f(a,b,c,d) = (a^b^c^d, g(a)^b^c^d, a^g(b)^c^d, a^b^g(c)^d). Let's check your requirements:
Reversible: yes. If we XOR the first two output bytes, we get a^g(a), and by the second property of g we can recover a. Similarly for b and c. We can recover d after getting a, b, and c by XORing the first output byte with a^b^c.
Distributive: yes. Suppose b,c, and d are fixed. Then the function takes the form f(a,b,c,d) = (a^const, g(a)^const, a^const, a^const). If a changes, then so will a^const; similarly, if a changes, so will g(a), and thus so will g(a)^const. (The fact that g(a) changes if a does is by the first property of g; if it didn't then g(x) wouldn't be reversible.) The same holds for b and c. For d, it's even easier because then f(a,b,c,d) = (d^const, d^const, d^const, d^const) so if d changes, every byte changes.
Finally, we construct such a function g. Let T be the space of two-bit values, and h : T -> T the function such that h(0) = 0, h(1) = 2, h(2) = 3, and h(3) = 1. This function has the two desired properties of g, namely h(x) is reversible and so is x^h(x). (For the latter, check that 0^h(0) = 0, 1^h(1) = 3, 2^h(2) = 1, and 3^h(3) = 2.) So, finally, to compute g(x), split x into four groups of two bits, and take h of each quarter separately. Because h satisfies the two desired properties, and there's no interaction between the quarters, so does g.
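Here is the whole construction in code (my translation into Julia, where ⊻ is XOR; the inverse of the map x -> x^g(x) is built by table lookup):

h(t) = (0x00, 0x02, 0x03, 0x01)[t + 1]   # the 2-bit permutation defined above

g(x::UInt8) = h(x & 0x03) | h((x >> 2) & 0x03) << 2 |
              h((x >> 4) & 0x03) << 4 | h((x >> 6) & 0x03) << 6

f(a, b, c, d) = (a ⊻ b ⊻ c ⊻ d, g(a) ⊻ b ⊻ c ⊻ d,
                 a ⊻ g(b) ⊻ c ⊻ d, a ⊻ b ⊻ g(c) ⊻ d)

# x ⊻ g(x) is a bijection on bytes, so it can be inverted with a lookup table
const XGX_INV = Dict(x ⊻ g(x) => x for x in 0x00:0xff)

function f_inv(p, q, r, s)
    a = XGX_INV[p ⊻ q]   # p ⊻ q = a ⊻ g(a)
    b = XGX_INV[p ⊻ r]   # p ⊻ r = b ⊻ g(b)
    c = XGX_INV[p ⊻ s]   # p ⊻ s = c ⊻ g(c)
    d = p ⊻ a ⊻ b ⊻ c
    return (a, b, c, d)
end

@assert f_inv(f(0x12, 0x34, 0x56, 0x78)...) == (0x12, 0x34, 0x56, 0x78)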
I'm not sure I understand your question, but I think I get what you're trying to do.
Bitwise Exclusive Or is your friend.
If R = A XOR B, R XOR A gives B and R XOR B gives A back. So it's a reversible transformation, assuming you know the result and one of the inputs.
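A two-line demonstration (my snippet; ⊻ is XOR in Julia):

A, B = 0x5a, 0x3c
R = A ⊻ B
@assert R ⊻ A == B && R ⊻ B == A   # either input recovers the other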
Assuming I understood what you're trying to do, I think any block cipher will do the job.
A block cipher takes a block of bits (say 128) and maps them reversibly to a different block with the same size.
Moreover, if you're using OFB mode you can use a block cipher to generate an infinite stream of pseudo-random bits. XORing these bits with your stream of bits will give you a transformation for any length of data.
I'm going to throw out an idea that may or may not work.
Use a set of linear functions mod 256, with odd prime coefficients.
For example:
b0 = 3 * a0 + 5 * a1 + 7 * a2 + 11 * a3;
b1 = 13 * a0 + 17 * a1 + 19 * a2 + 23 * a3;
If I remember the Chinese Remainder Theorem correctly, and I haven't looked at it in years, the ax are recoverable from the bx. There may even be a quick way to do it.
This is, I believe, a reversible transformation. It's linear, in that a*f(x) mod 256 = f(a*x) and (f(x) + f(y)) mod 256 = f(x + y). Clearly, changing one input byte will change all the output bytes.
So, go look up the Chinese Remainder Theorem and see if this works.
What do you mean by "linear" transformation?
O(n), or a function f with f(c * (a+b)) = c * f(a) + c * f(b)?
An easy approach would be a rotating bitshift (not sure if this fulfils the above math definition). It's reversible, and every byte can be changed. But it does not guarantee that every byte is changed.
EDIT: My solution would be this:
b0 = (a0 ^ a1 ^ a2 ^ a3)
b1 = a1 + b0 ( mod 256)
b2 = a2 + b0 ( mod 256)
b3 = a3 + b0 ( mod 256)
It would be reversible (just subtract the first output byte from the others, then XOR the first byte with the three recovered bytes), and a change in one bit would change every byte (as b0 is derived from all the input bytes and feeds into all the others).
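A sketch of this scheme (my code; arithmetic on UInt8 in Julia wraps around, which supplies the mod 256):

function fwd(a0::UInt8, a1::UInt8, a2::UInt8, a3::UInt8)
    b0 = a0 ⊻ a1 ⊻ a2 ⊻ a3
    return (b0, a1 + b0, a2 + b0, a3 + b0)   # + wraps mod 256 on UInt8
end

function bwd(b0::UInt8, b1::UInt8, b2::UInt8, b3::UInt8)
    a1, a2, a3 = b1 - b0, b2 - b0, b3 - b0   # subtract the first byte back off
    return (b0 ⊻ a1 ⊻ a2 ⊻ a3, a1, a2, a3)   # then XOR recovers a0
end

@assert bwd(fwd(0x01, 0x02, 0x03, 0x04)...) == (0x01, 0x02, 0x03, 0x04)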
Stick all of the bytes into a 32-bit number and then rotate it by one, two or three bits (a rotate rather than a plain shl/shr, since a plain shift drops bits off the end and loses reversibility). Then split it back into bytes (you could use a variant record). This will move bits from each byte into an adjacent byte.
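A sketch of the pack/rotate/unpack idea (my code; bitrotate reflects the assumption above that a rotate, not a bit-dropping shift, is intended):

pack(a, b, c, d) = UInt32(a) << 24 | UInt32(b) << 16 | UInt32(c) << 8 | UInt32(d)
unpack(w) = (UInt8(w >> 24), UInt8((w >> 16) & 0xff), UInt8((w >> 8) & 0xff), UInt8(w & 0xff))

mix(bytes...)   = unpack(bitrotate(pack(bytes...), 3))    # rotate left by 3
unmix(bytes...) = unpack(bitrotate(pack(bytes...), -3))   # rotate back

@assert unmix(mix(0xde, 0xad, 0xbe, 0xef)...) == (0xde, 0xad, 0xbe, 0xef)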
There are a number of good suggestions here (XOR, etc.) I would suggest combining them.
You could remap the bits. Let's use ii for input and oo for output:
oo[0] = (ii[0] & 0xC0) | (ii[1] & 0x30) | (ii[2] & 0x0C) | (ii[3] & 0x03)
oo[1] = (ii[0] & 0x30) | (ii[1] & 0x0C) | (ii[2] & 0x03) | (ii[3] & 0xC0)
oo[2] = (ii[0] & 0x0C) | (ii[1] & 0x03) | (ii[2] & 0xC0) | (ii[3] & 0x30)
oo[3] = (ii[0] & 0x03) | (ii[1] & 0xC0) | (ii[2] & 0x30) | (ii[3] & 0x0C)
It's not linear, but significantly changing one byte of the input will affect all the bytes of the output. I don't think you can have a reversible transformation such that changing one bit of the input affects all four bytes of the output, but I don't have a proof.
I don't understand a passage in the article about the VGGNet. Maybe someone can help.
In my opinion, the number of weights in a convolutional layer is
p = w*h*d*n + n
where w is the width of the filters, h the height of the filters, d the depth of the filters, and n the number of filters.
In the article the following is written:
assuming that both the input and the output of a three-layer 3 × 3 convolution stack has C channels, the stack is parametrised by 3*(3^2*C^2) = 27C^2 weights; at the same time, a single 7 × 7 conv. layer would require 7^2*C^2 = 49C^2 parameters.
I do not understand what is meant by channels here, or why this formula is used.
Can someone explain this to me?
Thanks in advance.
Your intuition is correct; we just need to unpack their explanation a bit. For the first case:
w = 3 # filter width
h = 3 # filter height
d = C # filter depth (the number of channels of the input; e.g. RGB input has C=3)
n = C # number of output filters/channels
This then makes w*h*d*n = 9C^2 parameters per layer. They then stack three of these, so that's 27C^2.
For a single 7x7 layer it's the same computation: 7*7*C*C = 49C^2.
The final difference is the n you add once more at the end in your original post; those are the bias terms, which the VGG count skips (many people skip bias terms; their value is debatable in some settings).
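To make the two counts concrete, here is a quick sketch (mine; C = 64 is just an example value):

C = 64                             # channels in and out (example value)
stack_3x3  = 3 * (3 * 3 * C * C)   # three 3x3 layers, no biases: 27C^2 = 110592
single_7x7 = 7 * 7 * C * C         # one 7x7 layer, no biases:    49C^2 = 200704
@assert stack_3x3 == 27 * C^2 && single_7x7 == 49 * C^2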