How many nodes does a typical CAN/LIN/MOST network contain? - can-bus

I would like to know how many nodes a typical network (CAN/LIN/MOST) contains, and on what basis we decide this.

There's no fixed number; it depends on multiple factors:
Baud rate: the lower the baud rate, the more nodes you can attach. A signal takes time to propagate along the bus, and a higher baud rate leaves less room for that delay.
Wiring: every node adds capacitance to the bus, so your wiring scheme also limits the node count.
Signal strength: the signal weakens as bus length and node count increase, so repeaters may be required.
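The baud-rate/length trade-off can be sketched numerically. This is a rough back-of-the-envelope model, not taken from any standard: the propagation speed (~5 ns/m) and the fraction of the bit time budgeted for the round trip (50%) are illustrative assumptions.

```python
# Rough estimate of maximum CAN bus length vs. bit rate.
# Assumptions (illustrative, not from any standard):
#   - propagation delay of ~5 ns per metre of twisted pair
#   - the round-trip delay must fit in ~50% of the bit time so that
#     in-bit arbitration and ACK sampling still work
PROP_DELAY_NS_PER_M = 5.0
ROUND_TRIP_BUDGET = 0.5  # fraction of the bit time available for propagation

def max_bus_length_m(bitrate_bps: float) -> float:
    """Estimate the maximum bus length in metres for a given bit rate."""
    bit_time_ns = 1e9 / bitrate_bps
    # The signal must travel to the far end and back within the budget.
    return bit_time_ns * ROUND_TRIP_BUDGET / (2 * PROP_DELAY_NS_PER_M)

for rate in (1_000_000, 500_000, 125_000):
    print(f"{rate:>9} bit/s -> ~{max_bus_length_m(rate):.0f} m")
```

The numbers this produces (~50 m at 1 Mbit/s, ~400 m at 125 kbit/s) are in the same ballpark as common CAN rules of thumb, which supports the "lower baud rate, longer bus, more nodes" point above.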

Related

Why is the BER for 16QAM better than that of 32QAM?

I am a little confused about BER. I found that the BER of 16QAM is better than that of 32QAM. Is this right? If so, why do we go to higher-order QAM (32, 64, etc.)?
Thank you in advance.
If you were targeting the best BER, you wouldn't even go up to 16QAM; you would stick with 4QAM/QPSK. You would get a robust transmission, with the downside of low spectral efficiency.
16QAM can achieve a spectral efficiency of 4 bit/s/Hz, whereas 64QAM already reaches 6 bit/s/Hz. That means you can increase the bit rate by 50% compared to the previous setting. This is especially important when resources such as channels or bandwidth are limited. In wireless transmission you have a bandwidth of a few MHz and no parallel channel for other users, so spectral efficiency is the key to increasing data throughput. (In fact there is something like a parallel channel, called MIMO, but you get the idea.)
See the table here for an overview of wireless transmission systems and their spectral efficiency: Spectral Efficiency
Even for a more robust transmission (in terms of BER) you can pick a relatively high modulation order and use the extra bits for redundant information. In case of a bit error, the receiver can then repair the original content. This is called Forward Error Correction.
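The BER-vs-order trade-off can be checked numerically with the standard nearest-neighbour approximation for Gray-coded square M-QAM over AWGN (valid for M = 4, 16, 64; 32QAM is a cross constellation and follows a slightly different formula). This is a sketch for illustration, not a vendor-exact model:

```python
import math

def qfunc(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ber(M: int, ebn0_db: float) -> float:
    """Approximate BER of Gray-coded square M-QAM over AWGN,
    using the standard nearest-neighbour approximation."""
    k = math.log2(M)                 # bits per symbol
    ebn0 = 10 ** (ebn0_db / 10)
    esn0 = k * ebn0                  # symbol SNR
    p_sym = 4 * (1 - 1 / math.sqrt(M)) * qfunc(math.sqrt(3 * esn0 / (M - 1)))
    return p_sym / k                 # Gray coding: ~1 bit error per symbol error

for M in (4, 16, 64):
    print(f"{M:>2}-QAM @ 10 dB Eb/N0: BER ~ {qam_ber(M, 10):.2e}")
```

At any fixed Eb/N0 the BER grows with M, which is exactly the questioner's observation: you trade error rate for spectral efficiency.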

Feedforward Algorithm in NEAT (Neural Evolution of Augmenting Topologies)

I don't understand how the NEAT algorithm takes inputs and produces output numbers based on the connection genes. I am familiar with using matrices in fixed-topology neural networks to feed inputs forward, but since each node in NEAT has its own number of connections and isn't necessarily connected to every other node, I don't see how it works, and after much searching I can't find an answer on how NEAT produces outputs from its inputs.
Could someone explain how it works?
That was also a question I struggled with while implementing my own version of the algorithm.
You can find the answer on the NEAT Users Page: https://www.cs.ucf.edu/~kstanley/neat.html where the author says:
How are networks with arbitrary topologies activated?
The activation function, bool Network::activate(), gives the specifics. The
implementation is of course considerably different than for a simple layered
feedforward network. Each node adds up the activation from all incoming nodes
from the previous timestep. (The function also handles a special "time delayed"
connection, but that is not used by the current version of NEAT in any
experiments that we have published.) Another way to understand it is to realize
that activation does not travel all the way from the input layer to the output
layer in a single timestep. In a single timestep, activation only travels from
one neuron to the next. So it takes several timesteps for activation to get from
the inputs to the outputs. If you think about it, this is the way it works in a
real brain, where it takes time for a signal hitting your eyes to get to the
cortex because it travels over several neural connections.
So, if one of the evolved networks is not feedforward, the outputs of the network will change over successive timesteps. This is particularly useful in continuous control problems, where the environment is not static, but problematic in classification problems. The author also answers:
How do I ensure that a network stabilizes before taking its output(s) for a
classification problem?
The cheap and dirty way to do this is just to activate n times in a row where
n>1, and hope there are not too many loops or long pathways of hidden nodes.
The proper (and quite nice) way to do it is to check every hidden node and output
node from one timestep to the next, and see if nothing has changed, or at least
not changed within some delta. Once this criterion is met, the output must be
stable.
Note that output may not always stabilize in some cases. Also, for continuous
control problems, do not check for stabilization as the network never "settles"
but rather continuously reacts to a changing environment. Generally,
stabilization is used in classification problems, or in board games.
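The "activate repeatedly until nothing changes" approach from the FAQ can be sketched on a tiny hand-made recurrent network. The network below (node numbering, weights, sigmoid activation) is entirely hypothetical and only illustrates the stabilization check, not the NEAT genome format:

```python
import math

# Hypothetical network: node -> list of (source_node, weight).
# Node 0 is the input, node 2 is the output; nodes 1 and 2 form a cycle.
connections = {1: [(0, 0.7), (2, -0.4)], 2: [(1, 1.1)]}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def activate_until_stable(x_in, delta=1e-6, max_steps=100):
    """One activation per timestep; stop when no node's activation
    changes by more than `delta` between consecutive timesteps."""
    act = {0: x_in, 1: 0.0, 2: 0.0}
    for step in range(max_steps):
        new = {0: x_in}
        for node, incoming in connections.items():
            # Each node sums activation from the *previous* timestep.
            new[node] = sigmoid(sum(act[src] * w for src, w in incoming))
        if all(abs(new[n] - act[n]) < delta for n in act):
            return new[2], step
        act = new
    return act[2], max_steps

out, steps = activate_until_stable(1.0)
print(f"stabilized output {out:.4f} after {steps} timesteps")
```

Note the loop returns after at most `max_steps` even if the network never settles, which matches the FAQ's warning that some networks don't stabilize.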
When I was dealing with this I researched loop detection using matrix methods:
https://en.wikipedia.org/wiki/Adjacency_matrix#Matrix_powers
But I found the best way to feed inputs forward and get outputs was loop detection using a timeout propagation delay at each node:
A feedforward implementation is simple, and I started from there:
Wait until all incoming connections to a node have a signal, then sum, squash (activate) and send the result to all outgoing connections of that node. Start from the input nodes, which already have a signal from the input vector. Manually 'shunt' the output nodes with a final sum-squash operation once there are no more nodes to process, to get the output vector.
For circularity (the traditional NEAT implementation) I did the same as feedforward with one more feature:
Calculate the 'maximum possible loop size' of the network. An easy upper bound is ~2 × (total number of nodes): no walk from an input to any node in the network is longer than this without cycling, so a node MUST receive its signals within this many timesteps unless it is part of a cycle.
Then wait until all input connection signals arrive at a node OR a timeout occurs (a signal has not arrived at a connection within maximum-loop-size steps). If a timeout occurs, label the input connections that don't have signals as recurrent.
Once a connection is labelled recurrent, restart all timers on all nodes (to prevent a node later in the detected cycle from being labelled recurrent due to propagation latency).
Now forward propagation is the same as in a feedforward network, except: don't wait for connections that are recurrent; sum-squash as soon as all non-recurrent connections have arrived (using 0 for recurrent connections that don't yet have a signal). This ensures that the first node reached in a cycle is the one marked recurrent, making the labelling deterministic for any given topology, and recurrent connections pass their data on to the next propagation timestep.
This has some first-time overhead but is concise and produces the same results for a given topology each time it is run. Note that this only works when all nodes have a path to an output, so you can't necessarily disable split connections (connections that were made by node-addition operations) or prune randomly during evolution without making allowances for that.
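The overall idea of "label recurrent connections once, then propagate feedforward with recurrent links reading the previous timestep" can be sketched compactly. Note this sketch swaps the timeout mechanism described above for an explicit reachability check (a connection added to the graph is recurrent if its destination can already reach its source), and the genome, node numbering and tanh squashing are all hypothetical:

```python
import math

conns = [(0, 2, 0.5), (1, 2, 0.8), (2, 3, 1.2), (3, 2, -0.3)]  # (src, dst, weight)

def closes_cycle(src, dst, edges):
    """True if dst can already reach src, i.e. adding src->dst makes a loop."""
    stack, seen = [dst], set()
    while stack:
        n = stack.pop()
        if n == src:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(d for s, d, _ in edges if s == n)
    return False

# Label connections in genome order, so exactly one edge per cycle is recurrent.
edges, recurrent = [], set()
for s, d, w in conns:
    if closes_cycle(s, d, edges):
        recurrent.add((s, d))
    edges.append((s, d, w))

def step(inputs, prev):
    """One forward pass; recurrent links read `prev` (last timestep's values)."""
    act = dict(inputs)
    for node in (2, 3):  # topological order of the non-recurrent part
        total = 0.0
        for s, d, w in conns:
            if d == node:
                src_val = prev.get(s, 0.0) if (s, d) in recurrent else act.get(s, 0.0)
                total += src_val * w
        act[node] = math.tanh(total)
    return act

state = {}
for t in range(3):
    state = step({0: 1.0, 1: 0.5}, state)
    print(t, round(state[3], 4))
```

As in the answer above, the labelling is deterministic for a given topology, and the result is effectively a residual-recurrent network that could be re-expressed as masked matrix operations.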
(P.S. This also creates a traditional residual-recurrent network that in theory could be implemented trivially as matrix operations. If I had large networks I would first 'express' the network by running forward propagation once to get the recurrent connections, then create a 'tensor per layer' representation for matrix-multiplication operations using the recurrent, weight, and signal connection attributes, with the recurrent attribute as a sparse binary mask. I actually started writing a TensorFlow implementation that performed all mutation/augmentation operations with tf.sparse_matrix operations and didn't use any tree objects, but I had to use dense operations and the n^2 space consumed is too much for what I need. Still, this allowed the use of the aforementioned adjacency-matrix-powers trick, since everything is in matrix form! At least one other person on GitHub has done NEAT in TensorFlow, but I'm unsure of their implementation. Also I found this interesting: https://neat-python.readthedocs.io/en/latest/neat_overview.html)
Happy Hacking!

How to calculate the interference weight between 20 MHz, 40 MHz and 11ac 80 MHz APs?

I want to know how to calculate the interference weight for a combination of APs running on different channel frequencies.
Let's say I have 10 APs running in different modes, like 11a, 11na and 11ac.
Suppose the 11a APs run a 20 MHz channel (36), the 11na devices run 40 MHz (36 and 40), and the 11ac devices run 80 MHz (36, 40, 44, 48).
How do these frequencies interfere with each other, and how do I calculate the interference weight among them?
First of all you should read the 802.11-2012 standard and the 802.11ac amendment to understand the PHY differences between the three modes. More generally, I think a more precise definition of "interference weight", or at least of how you would use this measure, is needed to help further.
In practice, interference depends on many variables, is highly dynamic, and has many elements of randomness. The standard lets you define a quantity called RSSI for the signal you are measuring, but the actual measurement method is proprietary and no two vendors will be the same. Moreover, different hardware/firmware/drivers will measure signal and SNR differently at the exact same location and time.
IMO, all measures of signal quality are by definition averages of some kind. Interference can be defined and measured more precisely on a per-symbol basis, but with millions of symbols per second this is of limited use.
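Whatever weighting scheme is chosen, a first ingredient is how much spectrum the three configurations in the question actually share. In the 5 GHz band the centre frequency is 5000 + 5 × channel MHz, so the overlap widths can be computed directly (the "interference weight" itself remains undefined; this only quantifies spectral overlap):

```python
# Spectral overlap between the example channel assignments in the question.
# 5 GHz band: centre frequency in MHz = 5000 + 5 * channel number.
def span(center_channel, width_mhz):
    """Return the (low, high) edge frequencies of a channel in MHz."""
    center = 5000 + 5 * center_channel
    return center - width_mhz / 2, center + width_mhz / 2

# 20 MHz on ch 36; 40 MHz on 36+40 (centre channel 38);
# 80 MHz on 36-48 (centre channel 42).
aps = {
    "11a  20MHz": span(36, 20),
    "11na 40MHz": span(38, 40),
    "11ac 80MHz": span(42, 80),
}

def overlap_mhz(a, b):
    """Width of the spectrum shared by two channel spans."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

names = list(aps)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:
        print(f"{n1} vs {n2}: {overlap_mhz(aps[n1], aps[n2]):.0f} MHz shared")
```

Here the 20 MHz channel lies entirely inside both wider channels, so every pair overlaps; how heavily that overlap degrades throughput still depends on the dynamic factors described above.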

What stops the data rate through a medium from increasing indefinitely?

We know data rate is bits per second. It can also be expressed as the baud rate (symbols per second) times the number of bits per symbol. So to increase the data rate, we can increase the baud rate or increase the number of bits per symbol. Why can't we keep increasing these two? Can someone explain what happens in each of these two cases separately?
This is essentially a physics question. We can play all sorts of games with how to physically represent a signal (hence, getting more bits per baud), but at the end of the day you can only physically convey so much information for any given rate of change of a signal. If you want to communicate faster, you have to up the frequency, which means having signals that change faster in time -- and nature ultimately limits how fast you can change the signal.
See:
http://en.wikipedia.org/wiki/Nyquist_rate
This gets even worse when you add noise:
http://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem
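The Shannon-Hartley theorem linked above puts a number on the combined limit: C = B · log2(1 + S/N). A quick calculation shows why piling more bits onto each symbol runs out of steam, since each extra bit/s/Hz costs roughly 3 dB of SNR at high SNR:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Capacity of a 1 MHz channel: each +10 dB of SNR buys only ~3.3 more
# bit/s/Hz, so denser constellations demand exponentially more SNR.
for snr_db in (0, 10, 20, 30):
    c = shannon_capacity_bps(1e6, snr_db)
    print(f"SNR {snr_db:>2} dB -> capacity ~ {c / 1e6:.2f} Mbit/s")
```

Increasing the baud rate corresponds to increasing B, which the physical channel bounds; increasing bits per symbol corresponds to pushing toward the log term, which noise bounds.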

Wavelet packet decomposition vs bandpass filters

If I am right, wavelet packet decomposition (WPT) breaks a signal into sub-bands using a tree of filter banks.
The same thing can be done using many band-pass filters.
My aim is to find the energy content of a signal with a high sampling rate (2000 Hz) in various frequency bands, like 1-200, 200-400 and 400-600 Hz.
What are the advantages and disadvantages of using a WPT over band-pass filters?
With WPT (or the DWT, for that matter) you have quadrature mirror filters, which ensure that if you add up all the reconstructed signals at the last level (the leaves) of the WPT tree you get exactly the original signal, up to the processor's finite-word-length approximations. The algorithm is pretty fast.
Moreover, if your signal is non-stationary you gain time-frequency localization, although this decreases drastically as you go down the (inverted) tree.
The other aspect is that if you are lucky enough to pick a wavelet that correlates well with the non-stationary components of your signal, the transform will map those components more efficiently.
For your application, first see how many levels you have to go down in the WPT tree to get from your sampling frequency to the desired frequency intervals. You may not get exactly 200-400, 400-600 etc.; the deeper you go in the tree, the finer the frequency limits, and you may have to join nodes to get your bands.
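The band-edge mismatch is easy to see from the idealized tree geometry: at depth n a full wavelet packet tree splits the 0 to fs/2 range into 2^n equal sub-bands (real filters are not ideal brick walls, and actual WPT leaf ordering is permuted, so this is only the idealized picture):

```python
# Idealized band edges of a full wavelet packet tree for fs = 2000 Hz.
# Level n splits 0..fs/2 into 2**n equal sub-bands, so target edges at
# 200, 400, 600 Hz never align exactly with the tree's dyadic grid.
fs = 2000.0
nyquist = fs / 2

def wpt_bands(level):
    """Idealized frequency intervals of the leaves at a given tree depth."""
    width = nyquist / 2 ** level
    return [(i * width, (i + 1) * width) for i in range(2 ** level)]

for level in (2, 3):
    edges = ", ".join(f"{lo:.0f}-{hi:.0f}" for lo, hi in wpt_bands(level))
    print(f"level {level}: {edges} Hz")

# Approximating 0-200 Hz means joining leaves: at level 3 the 0-125 and
# 125-250 Hz leaves combine to 0-250 Hz, which is still not exactly 200.
```

This is the concrete version of the advice above: pick the depth whose dyadic bands best approximate your targets, then merge adjacent leaves.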
