Wireshark CAN: meaning of padding and reserved fields

I captured CAN frames with Wireshark, and when I examined the bit sequence, it turned out not to follow the standard.
Specifically, Wireshark shows extra bit sequences labeled "padding" and "reserved".
Where do they come from?
Thanks,


Why Microsoft rescaled their HSL ranges to [0-240]?

I'm studying digital image processing at university, and we're doing an assignment where we have to convert RGB values to HSL so that the result matches Microsoft Paint. When I finished the implementation I saw that it did not match Paint, and after some searching I found that they don't use the usual [0-360] range, but [0-240].
Source
But even after reading it I couldn't work out why they do this.
There are several reasons.
We really prefer ranges with just 256 values, because then the data fits in a single byte. We could use two bytes, but that doubles the memory usage. Or we could use more than 8 bits, but then we would need bit operations on every pixel before doing anything interesting, which is very inefficient. Note: with two bytes we could use half-floats, which are very useful in some contexts, but floating point is also inefficient for various calculations (though it helps for filters and layer composition).
The second reason is stated in the link you included in the question, but it may need some more context: many formats (especially for video and television) use a limited range (not 0 to 255, but 16 to 235 or 16 to 240). There are a few reasons for this. The analog signal could carry blacker-than-black (which may be impossible to display) and whiter-than-white (which is realistic) values; TVs were simply calibrated to define the white and black points, and broadcasters could filter out anything below black. This was useful for several purposes (converting analog data, and whiter-than-white is something we experience a lot). With digitization, TVs needed analog-to-digital converters (and the inverse). These converters cost money (more bits means more cost), and the analog signal already used the range outside the "visible colours" for other, non-colour purposes. So instead of having different converters depending on the kind of data (or more bits per converter), the limited range was kept.
For every colour operation, one should check the range (and the definition of the colour space). Quantization can also be a problem (much professional software uses floating point, so the range is less of a limit; there are also other reasons for floating point, e.g. precision and efficiency when working in linear space, without gamma correction). Note: what matters is only the ratio of a value to its maximum. Angles have various units: 360 degrees, 400 gradians, or 2π radians; in the past you might also see scaled integer degrees (e.g. 3600, 36000, or 21600 [minutes of arc]). User interfaces may simply show a different convention that is more convenient for users.
And to make everything more complex, the usual HSL conversion is not exact: it is just a quick shortcut, not the stricter definitions of H, S, and L.
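To make the byte-sized range concrete, here is a minimal sketch (the function name `rgb_to_windows_hls` is ours, not a Win32 API) that converts RGB to Windows-style H, L, S values, each scaled to [0, 240] so each channel fits comfortably in a byte:

```python
import colorsys

def rgb_to_windows_hls(r, g, b):
    """Convert RGB in [0, 255] to Windows-style H, L, S, each in [0, 240].

    colorsys.rgb_to_hls returns all three channels in [0, 1];
    Windows (e.g. MS Paint) scales them all to the [0, 240] range.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(h * 240), round(l * 240), round(s * 240)
```

For pure green this yields H = 80, which is exactly the 2/3 scaling of the usual 120 degrees.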
Short Answer
The reason for a hue range of 240 is simply the use of a single 8-bit integer for the value, a remnant from a bygone era.
Longer Answer
Regarding the accepted answer: it has nothing at all to do with digital video code values, A-to-D converters, or the other comments about the purpose of video range, etc.
The use of a hue with 240 values instead of 360 dates back to when computers did not have math coprocessors (i.e. possibly before the OP was born, considering he's at university now, but I date myself LOL). Without a math coprocessor, doing anything in floating point was a significant performance bottleneck. Moreover, low CPU speed, limited RAM, etc... back then much was done to be efficient in ways that might not make sense today.
Remember we are talking about decades before it was possible to have a hand-held computer connected to all the world's knowledge that also happened to double as a phone, camera, map with your current location, music player, speech to text interpreter, video production system, and luminance adjustable flashlight...
LOL 😂
Back in the late 80s/early 90s, 8-bit integer math was how almost everything was done, and going out of that 8-bit integer zone was... uphill... both ways!! ....
Reducing 360 to 240 keeps the value within 8 bits, and has the advantage that 240 is exactly 2/3rds of 360.
Since there are three primaries in an RGB monitor, HSL breaks things down accordingly. If you think of sRGB as a "cube" with one corner as black, one corner as white, then you have 6 corners as the max values of full saturation, such as #f00 for red and #0ff for aqua.
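Those six corners land on tidy values in the 240-based scale, since every 60-degree step becomes exactly 40; a quick sketch:

```python
# Hues of the six fully saturated sRGB cube corners, in degrees.
corner_hues_deg = {"red": 0, "yellow": 60, "green": 120,
                   "cyan": 180, "blue": 240, "magenta": 300}

# Scale from [0, 360) to the byte-friendly [0, 240) range: multiply by 2/3.
corner_hues_240 = {name: deg * 2 // 3 for name, deg in corner_hues_deg.items()}
# Every corner remains an exact integer, and every value fits in 8 bits.
```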
HSL is not a perceptually uniform color model — it is just a low-tech way to make color adjustments "intuitive" for the user.
Side Note
Color (hue) does not exist as an "angle". Color is not actually real; there are simply wavelengths of visible light, from roughly 700nm (red) down to roughly 400nm (blue). The sensation of color is just a perception, a function of your neurological system.
Nevertheless, mapping those onto a circle makes them fairly intuitive. So in any application where color values are defined as an angle, it is purely a construct of convenience.
Speaking of convenience, my hand held computer that is connected to all the world's knowledge is beeping at me to remind me it's time to feed my cat. And it's not even a beep like we had in the 80s. It's a stereo harp sound with some tubular bells in the background...
Edit to add:
The "uphill both ways" is an old-person joke that is common here in the US, and that I find myself making more and more often at my age, LOL... I don't know how well it translates to the EU, to be honest...

Max bit errors for a certain Hamming distance

Basically, the code set has a Hamming distance of 6; what is the maximum number of bit errors that can be detected?
I can't find an equation to link the two anywhere. Thanks!
A Hamming distance of 6 means that every pair of valid code words differs in at least 6 bits. So for every code word, changing up to 5 bits leads to an invalid code word. Hence, up to 5 bit errors can be detected.
A related question is the number of bit errors that can be corrected. This number is 2, because for each received word there can only ever be one code word reachable by changing up to two bits. If there were more than one such code word, the two code words would each be within 2 bits of the received word, hence at most 4 bits apart from each other, and thus could not both belong to a code with Hamming distance 6.
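Both rules can be checked mechanically; here is a small sketch (the helper names are ours):

```python
from itertools import combinations

def min_hamming_distance(codewords):
    """Minimum pairwise Hamming distance over equal-length bit strings."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(codewords, 2))

def error_capability(d):
    """For minimum distance d: (detectable errors, correctable errors)."""
    return d - 1, (d - 1) // 2
```

For d = 6 this gives (5, 2), matching the reasoning above.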

Intel 80386 Programmer's Reference Manual: Interpreting bit numbering in example data structure

In illustrations of data structures in memory, smaller addresses appear at the lower-right part of the figure; addresses increase toward the left and upwards. Bit positions are numbered from right to left. Figure 1-1 illustrates this convention.
From my understanding, most modern computers are byte-addressable, meaning each memory address refers to one byte. My question is: how does that translate to the example diagram given above?
I see Fig 1-1 shows 32 bytes in sequence, and as we move toward the top left the address of each byte increases, but the bit offsets confuse me. How are the bit offsets related to the bytes? Especially the numbers along the top: 0, 7, 15, 23, 31.
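A sketch of how I read that figure: on little-endian x86, the byte at the lowest address holds bits 0-7 of a doubleword, the next byte holds bits 8-15, and so on, so the numbers along the top are simply the bit positions at the byte boundaries. This can be verified with a few lines (the value 0x12345678 is an arbitrary example):

```python
value = 0x12345678            # an arbitrary 32-bit doubleword

# Little-endian storage: the lowest address gets the least significant byte.
mem = value.to_bytes(4, "little")      # bytes 0x78, 0x56, 0x34, 0x12

# Byte k (at address base + k) contains bits 8k .. 8k+7 of the doubleword,
# which is why the figure's columns end at bit positions 7, 15, 23, 31.
for k in range(4):
    assert mem[k] == (value >> (8 * k)) & 0xFF
```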

Connected Character segmentation in OpenCV

What is a good method to segment characters that are united as in the following figure, knowing that:
characters have this font, but the font size varies based on the image size
only isolated groups of characters from the image are connected
Also, how can I detect whether a given bounding box contains 2 or more connected letters?
I tried checking for width > height to detect connected characters, but it doesn't work for the blue groups in the image.
I also tried a segmentation method based on:
Article section 3.4
for separating characters but got poor results.
IDEA: if you already have a good OCR, you can try applying it to each of these connected components (or contours). If the OCR can't detect a letter, then it is not one letter; there are 2 or more.
IDEA: check the convexity defects of these connected components; the closest defect points are where the bridges are.
IDEA: use a kernel with small width and large height for erosion + dilation (morphological opening)
IDEA: take y-derivative of the image. The smallest contours (or lines) left will be your bridges. Mark them and erase those pixels from the original image.
IDEA: a search-based approach: take 2 letters of the alphabet (in this font), connect them horizontally with some tool, and use OpenCV's matchShapes method (moment matching) to check whether that shape matches your connected component. Or try to implement autocorrelation.
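The opening-with-a-tall-narrow-kernel idea can be sketched in pure Python to show the effect on a toy image; in a real pipeline you would use cv2.getStructuringElement and cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel) instead:

```python
def erode_v(img, k):
    """Vertical erosion: a pixel survives only if the k-tall column centred on it is all 1s."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[1 if all(0 <= y + d < h and img[y + d][x] for d in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate_v(img, k):
    """Vertical dilation: a pixel turns on if any pixel in the k-tall column is 1."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[1 if any(0 <= y + d < h and img[y + d][x] for d in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def open_v(img, k):
    """Morphological opening (erosion then dilation) with a k x 1 tall, narrow kernel."""
    return dilate_v(erode_v(img, k), k)

# Toy image: two tall "characters" (columns 2 and 9) joined by a
# 1-pixel-high horizontal bridge on row 4.
img = [[0] * 12 for _ in range(10)]
for y in range(1, 9):
    img[y][2] = img[y][9] = 1
for x in range(3, 9):
    img[4][x] = 1

opened = open_v(img, 3)   # the thin bridge is removed; the tall bars survive
```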
good luck.

Ideal Input In Neural Network For The Game Checkers

I'm designing a feed forward neural network learning how to play the game checkers.
For the input, the board has to be given and the output should give the probability of winning versus losing. But what is the ideal transformation of the checkers board to a row of numbers for input? There are 32 possible squares and 5 different possibilities (king or piece of white or black player and free position) on each square. If I provide an input unit for each possible value for each square, it will be 32 * 5. Another option is that:
Free Position: 0 0
Piece of white: 0 0.5 && King Piece of white: 0 1
Piece of black: 0.5 1 && King Piece of black: 1 0
In this case, the input length will be just 64, but I'm not sure which one will give a better result. Could anyone give any insight on this?
In case anyone is still interested in this topic: I suggest encoding the Checkers board with a 32-dimensional vector. I recently trained a CNN on an expert Checkers database and was able to achieve a surprisingly high level of play with no search, somewhat similar (I suspect) to the supervised learning step that DeepMind used to pretrain AlphaGo. I represented my input as an 8x4 grid, with entries in the set [-3, -1, 0, 1, 3] corresponding to an opposing king, opposing checker, empty square, own checker, and own king, respectively. Thus, rather than encoding the board with a 160-dimensional vector where each dimension corresponds to a location-piece combination, the input space can be reduced to a 32-dimensional vector where each board location is represented by a unique dimension, and the piece at that location is encoded by a set of real numbers; this is done without any loss of information.
The more interesting question, at least in my mind, is which output encoding is most conducive to learning. One option is to encode it in the same way as the input. I would advise against this, having found that simplifying the output encoding to a location (of the piece to move) and a direction (along which to move said piece) is much more advantageous for learning. While the reasons for this are likely more subtle, I suspect it is due to the enormous state space of checkers (something like 5 x 10^20 board positions). Considering that the goal of our predictive model is to accept an input from an enormous number of possible states and produce one output (i.e., move) from (at most) 48 possibilities (12 pieces times 4 possible directions, excluding jumps), a top priority in architecting a neural network should be matching the complexity of its input and output space to that of the actual game.

With this in mind, I chose to encode the output as a 32 x 4 matrix, with each row representing a board location and each column representing a direction. During training I simply unraveled this into a 128-dimensional, one-hot encoded vector (using argmax of softmax activations). Note that this output encoding lends itself to many invalid moves for a given board (e.g., moves off the board from edges and corners, moves to occupied locations, etc.); we hope that the neural network can learn valid play given a large enough training set. I found that the CNN did a remarkable job of learning valid moves.
I’ve written more about this project at http://www.chrislarson.io/checkers-p1.
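A minimal sketch of that 32-dimensional input encoding (the piece codes 'K', 'c', '.', 'C', 'k' are our own hypothetical notation, not from the project above):

```python
# Scalar value per piece, from the side-to-move's perspective.
PIECE_VALUE = {"K": -3,   # opposing king
               "c": -1,   # opposing checker
               ".":  0,   # empty playable square
               "C":  1,   # own checker
               "k":  3}   # own king

def encode_board(squares):
    """32 piece codes for the playable squares -> 32-dimensional input vector."""
    assert len(squares) == 32
    return [PIECE_VALUE[s] for s in squares]
```

The flat vector can then be reshaped to the 8x4 grid mentioned above before being fed to a CNN.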
I've done this sort of thing with Tic-Tac-Toe. There are several ways to represent this. One of the most common for TTT is to have input and output layers that represent the entire board. In TTT this becomes 9 x hidden x 9, with an input of -1 for X, 0 for none, and 1 for O. The input to the neural network is then the current state of the board, and the output is the desired move: whichever output neuron has the highest activation is the move.
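That TTT scheme can be sketched as follows (the helper names are ours):

```python
def encode_ttt(board):
    """board: 9 characters 'X', 'O' or ' ' -> network inputs -1, 1, 0."""
    return [{"X": -1, "O": 1, " ": 0}[c] for c in board]

def pick_move(board, activations):
    """Choose the empty square whose output neuron has the highest activation."""
    empty = [i for i, c in enumerate(board) if c == " "]
    return max(empty, key=lambda i: activations[i])
```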
Propagation training will not work too well here because you will not have a finite training set. Something like Simulated Annealing, PSO, or anything with a score function would be ideal. Pitting the networks against each other for the scoring function would be great.
This worked somewhat well for TTT. I am not sure how it would work for Checkers. Chess would likely destroy it. For Go it would likely be useless.
The problem is that the neural network will learn patterns only at fixed locations. For example, jumping an opponent in the top-left corner would be a totally different situation from jumping someone in the bottom-left corner. These would have to be learned separately.
Perhaps better is to represent the exact state of the board in a position-independent way. This would require some thought. For instance, you might communicate what "jump" opportunities exist, what move-toward-king-square opportunities exist, etc., and allow the net to learn to prioritize these.
I've tried all the possibilities and, intuitively, I can say that the best idea is separating all the possibilities for each square. Concretely:
0 0 0: free
1 0 0: white piece
0 0 1: black piece
1 1 0: white king
0 1 1: black king
It is also possible to add other features describing the game situation, such as the number of pieces under threat or the number of possible jumps.
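That per-square scheme can be written out as a small sketch:

```python
# 3-bit code per square, exactly as listed above.
CODES = {
    "free":       (0, 0, 0),
    "white":      (1, 0, 0),
    "black":      (0, 0, 1),
    "white_king": (1, 1, 0),
    "black_king": (0, 1, 1),
}

def encode(squares):
    """32 square states -> flat input vector of 32 * 3 = 96 values."""
    vec = []
    for s in squares:
        vec.extend(CODES[s])
    return vec
```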
Please see this thesis
Blondie24, page 46; there is a description of the input for the neural network there.
