What is Nominal bit rate? - can-bus

I am new to CAN protocol. I am trying to understand the bit timing concept. Wiki states "All nodes on the CAN network must operate at the same nominal bit rate".
Are bit rate and nominal bit rate same or different?

How about reading the Bosch CAN specification?
NOMINAL BIT RATE
The Nominal Bit Rate is the number of bits per second transmitted in the absence of resynchronization by an ideal transmitter.
Next question: "What is resynchronization?"
See same document.
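To make the definition concrete, here is a minimal sketch of how the nominal bit rate falls out of the CAN clock, the prescaler, and the bit-time segments the Bosch spec defines. The 16 MHz clock and the segment lengths below are just illustrative values, not taken from any particular controller.

```python
# Hedged sketch: nominal bit rate from an assumed clock/prescaler and
# the bit-time segments (Sync_Seg, Prop_Seg, Phase_Seg1, Phase_Seg2).
CAN_CLOCK_HZ = 16_000_000   # controller input clock (assumed)
PRESCALER    = 2            # baud-rate prescaler (assumed)

SYNC_SEG   = 1              # Sync_Seg is always 1 time quantum
PROP_SEG   = 5
PHASE_SEG1 = 5
PHASE_SEG2 = 5

time_quantum_s   = PRESCALER / CAN_CLOCK_HZ                       # 125 ns
bit_time_tq      = SYNC_SEG + PROP_SEG + PHASE_SEG1 + PHASE_SEG2  # 16 tq
nominal_bit_time = bit_time_tq * time_quantum_s                   # 2 us
nominal_bit_rate = 1 / nominal_bit_time                           # 500 kbit/s

print(f"nominal bit rate = {nominal_bit_rate:.0f} bit/s")
```

Resynchronization then only stretches or shrinks Phase_Seg1/Phase_Seg2 on individual bits to track other nodes' edges; the nominal bit rate above is what every node on the bus must be configured for.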

Related

Number of backprops as performance metric for neural networks

I have been reading an article about SRCNN and found that they use the "number of backprops" to evaluate how well the network is performing, i.e. what the network is able to learn after x backprops (as I understand it). I would like to know what the number of backprops actually means. Is this just the number of training samples that were used during training? Or maybe the number of mini-batches? Maybe it is one of those numbers multiplied by the number of learnable parameters in the network? Or something completely different? Maybe there is some other, more common name for this that I could look up and read more about, because I was not able to find anything useful by searching for "number of backprops" or "number of backpropagations".
Bonus question: how widely is this metric used, and how good is it?
I read their paper from 2016:
C. Dong, C. C. Loy, K. He and X. Tang, "Image Super-Resolution Using Deep Convolutional Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence.
Since they don't even mention batches, I assume they do a backpropagation to update their weights after each sample / image.
In other words, their batch size (mini-batch size) is equal to 1 sample.
So the number of backpropagations means the number of batches after all, which is quite a common metric, e.g. in the paper PSNR (loss) over the number of batches (or, more usually, loss over epochs).
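To illustrate that reading, here is a hedged sketch (not the authors' actual training code) of counting backpropagations when the batch size is 1, so the count simply equals the number of samples seen:

```python
# Sketch only: with batch size 1, one backward pass (and one weight
# update) happens per training sample, so "number of backprops" is just
# the number of samples processed. Model and data are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                       # stand-in for SRCNN
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

samples = [(torch.randn(1, 8), torch.randn(1, 1)) for _ in range(100)]

num_backprops = 0
for x, y in samples:                          # mini-batch of 1 sample
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                           # one backpropagation ...
    optimizer.step()                          # ... one weight update
    num_backprops += 1

print(num_backprops)                          # == len(samples) here
```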
Bonus answer: I come to the conclusion that they just didn't stick to the common vocabulary of machine learning / deep learning.
Bonus bonus answer: they use the metric of loss after n batches to showcase how much the different network architectures could learn on training datasets of different sizes.
I would assume it means how much the network has learned after back-propagating n times. It is more or less interchangeable with "after training over n samples...".
This may be a bit different if they are using a recurrent network, as they could run more samples through the forward pass than through the backward pass. (For whatever reason I can't get the link to the paper to load, so I am unsure.)
Based on your number of questions I think you might be overthinking this :)
Number of backprops is not a commonly used metric. Perhaps they use it here to showcase the speed of training under whatever optimization methods they are using. But in most common instances it is not a relevant metric.

SPSS two way repeated measures ANOVA

I am fairly new to statistics.
I ran an experiment and used a two-way ANOVA with repeated measures. The calculation was done in SPSS. In most papers I have seen, the F-value and the degrees of freedom were reported as well. Is it normal to report those values too? If so, which values do I take from the SPSS output?
How do I interpret these values? What do they mean?
When does the F-value support a significant result and when does it not?
What are good values for the F-value and the degrees of freedom?
In some articles I also read about critical F-values; how do I get this value?
Most articles describe how to calculate those values but do not explain their meaning for the experiment.
Some clarification on these issues would be greatly appreciated.
My English is not very good, but I will try to answer your question.
The main purpose of ANOVA is to get statistical evidence of whether the measured groups have the same mean or not. So we set up a null hypothesis and an alternative hypothesis, then apply a test statistic to the data. You can use ANOVA if the groups have the same variance (squared standard deviation).
You need to test this first. This is a hypothesis test too: the null hypothesis is that the groups have the same variance, the alternative hypothesis is that they don't.
You make that decision from the Sig. value: if the value is higher than 0.05, we usually accept the null hypothesis. If the variances are equal, we can use ANOVA. (I assume the data follows the normal distribution.)

For the ANOVA itself, the null hypothesis is that the groups have equal means; the alternative hypothesis is that at least one group has a different mean. You can make your decision from the Sig. value, as I said before: if the value is higher than 0.05, we accept the null hypothesis.

The F-critical value is not important if you are calculating on a computer. You can build an acceptance interval from the lower and upper F-critical values, and if the F-value falls inside that interval you accept the null hypothesis, but I only used this method in statistics class. You don't strictly need the F-value and the df in the report, because they don't explain anything on their own.
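As a small illustration of how F, the degrees of freedom, the p-value (the "Sig." column in SPSS) and the critical F fit together, here is a hedged sketch using scipy; the F-value and degrees of freedom below are made up, and you would read yours off the SPSS "Tests of Within-Subjects Effects" table.

```python
# Illustrative numbers only -- substitute the F and df values from your
# own SPSS output.
from scipy.stats import f

F_value   = 5.21    # F reported by SPSS (made up here)
df_effect = 2       # numerator df (the effect)
df_error  = 28      # denominator df (the error term)
alpha     = 0.05

p_value    = f.sf(F_value, df_effect, df_error)        # the "Sig." value
F_critical = f.ppf(1 - alpha, df_effect, df_error)     # cut-off at alpha

print(f"p = {p_value:.4f}, critical F = {F_critical:.2f}")
# The result is significant when p < alpha, which is the same thing as
# F_value > F_critical; it is usually reported as F(2, 28) = 5.21, p = ...
```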

Why does the learning rate change with iterations in SOM?

Normally the learning rate is a value that we choose at the beginning, and it doesn't change with the number of iterations.
But in a SOM the learning rate changes with the iteration; what is the idea behind that?
As I understand it, the learning rate should decrease with the number of iterations. Why is that?
The reason is quite simple. SOMs are ill-designed in terms of mathematical models, and one needs to decrease the learning rate in order to ensure convergence. In other words, if you do not change this value, the learning procedure might not stop at all. This issue is somewhat addressed by more mathematical models called "principal curves" and "principal manifolds", which are much less popular but introduce a valid mathematical approach for learning SOM-like representations.
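For concreteness, one common (though not the only) choice is an exponentially decaying schedule for both the learning rate and the neighbourhood radius; the constants in this sketch are purely illustrative.

```python
# Minimal sketch of a typical SOM schedule: learning rate and
# neighbourhood radius both shrink with the iteration count so the map
# settles down instead of moving forever. Constants are illustrative.
import math

lr0          = 0.5       # initial learning rate (assumed)
sigma0       = 3.0       # initial neighbourhood radius (assumed)
n_iterations = 1000
tau          = n_iterations / math.log(sigma0)   # one common time constant

def learning_rate(t: int) -> float:
    return lr0 * math.exp(-t / n_iterations)

def neighbourhood_radius(t: int) -> float:
    return sigma0 * math.exp(-t / tau)

for t in (0, 100, 500, 999):
    print(t, round(learning_rate(t), 3), round(neighbourhood_radius(t), 3))
```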

Best 8-bit supplemental checksum for CRC8-protected packet

I'm looking at designing a low-level radio communications protocol, and am trying to decide what sort of checksum/CRC to use. The hardware provides a CRC-8; each packet has 6 bytes of overhead in addition to the data payload. One of the design goals is to minimize transmission overhead. For some types of data the CRC-8 should be adequate, but for other types it would be necessary to supplement it to avoid accepting erroneous data.
If I go with a single-byte supplement, what would be the pros and cons of using a CRC8 with a different polynomial from the hardware CRC-8, versus an arithmetic checksum, versus something else? What about for a two-byte supplement? Would a CRC-16 be a good choice, or given the existence of a CRC-8, would something else be better?
In 2004 Philip Koopman from CMU published a paper on choosing the most appropriate CRC: http://www.ece.cmu.edu/~koopman/crc/index.html
This paper describes a polynomial selection process for embedded network applications and proposes a set of good general-purpose polynomials. A set of 35 new polynomials in addition to 13 previously published polynomials provides good performance for 3- to 16-bit CRCs for data word lengths up to 2048 bits.
That paper should help you analyze how effective that 8-bit CRC actually is, and how much more protection you'll get from another 8 bits. A while back it helped me decide on a 4-bit CRC and 4-bit packet header in a custom protocol between FPGAs.
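If you do end up adding a supplemental software CRC-8 with a different polynomial, a plain bitwise implementation is only a few lines. This is a hedged sketch: 0x07 (x^8 + x^2 + x + 1) is just the common default polynomial, and the actual polynomial for your data word length should come from Koopman's tables rather than from this example.

```python
# MSB-first (non-reflected) CRC-8 with a selectable polynomial, init 0,
# no final XOR. Sketch only -- pick poly/init to match your protocol.
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

payload = b"\x10\x20\x30\x40"          # example payload bytes
print(hex(crc8(payload)))              # supplemental CRC byte to append
```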

When to use geometric vs arithmetic mean?

So I guess this isn't technically a code question, but it's something that I'm sure will come up for other folks as well as myself while writing code, so hopefully it's still a good one to post on SO.
The Google has directed me to plenty of nice lengthy explanations of when to use one or the other as regards financial numbers, and things like that.
But my particular context doesn't fit in, and I'm wondering if anyone here has some insight. I need to take a whole bunch of individual users' votes on how "good" a particular item is. I.e., some number of users each give a particular item a score between 0 and 10, and I want to report on what the 'typical' score is. What would be the intuitive reasons to report the geometric and/or arithmetic mean as the typical response?
Or, for that matter, would I be better off reporting the median instead?
I imagine there's some psychology involved in what the "best" method might be...
Anyway, there you have it.
Thanks!
Generally speaking, the arithmetic mean will suffice. It is much less computationally intensive than the geometric mean (which involves taking an n-th root).
As for the psychology involved, the geometric mean is never greater than the arithmetic mean, so arithmetic is the best choice if you'd prefer higher scores in general.
The median is most useful when the data set is relatively small and the chance of a massive outlier relatively high. Depending on how much precision these votes can take, the median can sometimes end up being a bit arbitrary.
If you really, really want the most accurate answer possible, you could calculate the arithmetic-geometric mean. However, this involves calculating both arithmetic and geometric means repeatedly, so it is very computationally intensive in comparison.
You want the arithmetic mean, since you aren't averaging rates of change or anything like that.
Arithmetic mean is correct.
Your scale is artificial:
It is bounded, from 0 to 10
8.5 is intuitively halfway between 8 and 9
But for other scales, you would need to consider the correct mean to use.
Some other examples:
In counting money, it has been argued that wealth has logarithmic utility. So the log-scale midpoint (the geometric mean) between Bill Gates' wealth and that of a bum in the inner city would be a moderately successful business person. (The arithmetic average would give you Larry Page.)
In measuring sound level, the decibel scale is already logarithmic, so you can take the arithmetic average of decibel values.
But if you are measuring volume in watts, then use the quadratic mean (RMS).
The answer depends on the context and your purpose. Percent changes were mentioned as a good case for the geometric mean. I use the geometric mean when calculating antennas and frequencies, since the percentage change matters more than the average or middle of the frequency range or the average size of the antenna.

If you have wildly varying numbers, especially if most are similar but one or two are "flyers" (far from the range of the others), the geometric mean will "smooth" the results (it doesn't let the outliers pull the result more than they should). This method is used to calculate bullet group sizes (the "flyer" was probably human error, not the equipment, so the plain average is "unfair" in that case).

Another related measure is the root mean square: you square the numbers, take the mean of those squares, and then take the square root of the result (this weights the larger values more heavily rather than smoothing them). It is often used in electrical calculations, and most electrical meters read "RMS" (root mean square) rather than average values.

Hope this helps a little. Here is a web site that explains it pretty well: standardwisdom.com
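As a quick way to see how the candidates differ in practice, here is a hedged sketch comparing them on a made-up set of 0-10 votes with Python's statistics module. It also shows one practical wrinkle: statistics.geometric_mean rejects zeros, so a single 0 vote would break it, which is one more argument for the arithmetic mean or the median on this kind of bounded scale.

```python
# Illustrative votes only. Note that the geometric mean needs strictly
# positive values, so a 0-10 voting scale with real zeros would need
# special handling (or a different average).
import math
import statistics

votes = [7, 8, 9, 6, 8, 10, 2, 7]      # made-up scores, no zeros

arithmetic = statistics.mean(votes)
geometric  = statistics.geometric_mean(votes)   # Python 3.8+
median     = statistics.median(votes)
rms        = math.sqrt(statistics.mean(v * v for v in votes))

print(f"arithmetic {arithmetic:.2f}  geometric {geometric:.2f}  "
      f"median {median}  RMS {rms:.2f}")
```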
