Not exactly sure how the BFS algorithm is working for my graph - graph-algorithm

Basically took this BFS algorithm from Khan Academy:
https://repl.it/#Stylebender/BFS-Graph#index.js
In the third iteration of bfsInfo, in which we iterate over vertex 2 and its sub-array of [3, 4, 5], we skip 3 since its existing distance of 0 does not === null, but why does vertex 4 have a distance of 2? Shouldn't its distance be 1, since 4 has never been iterated over?
In addition, why is the information for vertex 5 null with respect to both distance and predecessor? Shouldn't it be distance: 1, predecessor: 2?
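For reference, here is a minimal Python sketch of the standard BFS bookkeeping (this is not the repl.it code, and the graph at the bottom is made up for illustration, not the one in the link): a vertex's distance is always its discoverer's distance plus one, so a vertex first discovered from a vertex that sits at distance 1 ends up at distance 2.

from collections import deque

def bfs_info(adj_list, source):
    # distance/predecessor bookkeeping, one entry per vertex
    info = [{"distance": None, "predecessor": None} for _ in adj_list]
    info[source]["distance"] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj_list[u]:
            if info[v]["distance"] is None:                    # only the first discovery counts
                info[v]["distance"] = info[u]["distance"] + 1  # one hop farther than u
                info[v]["predecessor"] = u
                queue.append(v)
    return info

# Hypothetical 6-vertex graph (NOT the graph from the repl.it link):
# vertex 2 is reached at distance 1, so anything it discovers (4 and 5) gets distance 2.
adj_list = [[1, 2], [0, 3], [0, 3, 4, 5], [1, 2], [2], [2]]
print(bfs_info(adj_list, 0))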

Related

Shape function for B21 (Timoshenko beam) element in Abaqus

I am wondering what the appropriate shape/interpolation functions for the B21 element are, since it has 3 DoF per node but is stated to be a linear interpolation element.
Update: (as per duffymo's comment)
I know there is a distinction between nodal DoF and interpolation order and I am looking for the relation between the two.
For example, the standard Euler-Bernoulli beam element (B23) has a 3rd-order polynomial interpolation and uses the four nodal DoF (2 displacements and 2 rotations) to determine the displacement field. This interpolation is still linear in the coefficients, but cubic along the element length. How is the interpolation kept linear along the length for B21? Does it have separate first-order polynomials for each DoF?
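For comparison, this is how a textbook linear Timoshenko element is usually written (not necessarily exactly what Abaqus implements internally for B21): each nodal field is interpolated independently with the same first-order shape functions,
$$
N_1(\xi) = \tfrac{1}{2}(1-\xi), \qquad N_2(\xi) = \tfrac{1}{2}(1+\xi),
$$
$$
u(\xi) = N_1 u_1 + N_2 u_2, \qquad w(\xi) = N_1 w_1 + N_2 w_2, \qquad \theta(\xi) = N_1 \theta_1 + N_2 \theta_2 ,
$$
so the rotation is an independent field rather than the derivative of the deflection, which is what keeps every interpolated quantity linear along the length.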
My end goal here is to calculate stresses from displacements, obtained by my own solver.
Any help is appreciated.

Filters for FFT signals for analysis and MFCC

Below is the code I have been writing to try to create a mel triangular filter bank.
I start with the 300 to 8000 Hz range, convert the frequencies to mels, and then convert the mels back into frequencies to get the FFT bin numbers.
clear all;
g = [300 8000];                             % lowest frequency and fs/2 for the highest frequency
freq2mel = 1125*log(1+(g/700));             % converting the frequency endpoints to the mel scale
% answer: [401.25 2834.99]
f = linspace(freq2mel(1), freq2mel(2), 12); % for 10 filter banks we use the two endpoints and put 10 points between them
% answer: [401.25 622.50 843.75 1065.00 1286.25 1507.50 1728.74 1949.99 2171.24 2392.49 2613.74 2834.99]
mel2freq = 700*(exp(f/1125)-1);             % converting the mels back into frequency
% answer: [300 517.33 781.90 1103.97 1496.04 1973.32 2554.33 3261.62 4122.63 5170.76 6446.70 8000]
fft_bins = floor((mel2freq/16000)*512);     % mapping each frequency to its FFT bin (fs = 16000, NFFT = 512)
% answer: [9 16 25 35 47 63 81 104 132 165 206 256]
My issue is that I am stuck after this. I keep seeing the filter bank piecewise function below come up, but I do not understand what $k$ is in this function. Is $k$ the array of $|\mathrm{FFT}|^2$ values from the Hamming window? How do I get the actual triangular filters with a magnitude of 1, so that I can pass the $|\mathrm{FFT}|^2$ values through them to get my MFCCs? Can someone please help me out.
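For reference, the piecewise definition that usually appears (assuming it is the one meant here) is
$$
H_m(k) =
\begin{cases}
0 & k < f(m-1) \\[4pt]
\dfrac{k - f(m-1)}{f(m) - f(m-1)} & f(m-1) \le k \le f(m) \\[4pt]
\dfrac{f(m+1) - k}{f(m+1) - f(m)} & f(m) \le k \le f(m+1) \\[4pt]
0 & k > f(m+1)
\end{cases}
$$
where $m$ indexes the filters and $f(m)$ are the band-edge FFT bin numbers computed above.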
Normally when you do this kind of filtering, you have your spectrum in a 1D array and your Mel filter bank in a 2D matrix, with one dimension matching the FFT bins of your spectrum array and the other dimension being your target Mel bands. You multiply them and get your 1D Mel spectrum.
The H_m function really just describes a triangle centered around m, where m is the center Mel band and k is the frequency from 0 to Fs/2. In theory, the k parameter should be continuous. You can assume that k is an FFT bin and it will kind of work, but you will not get great results at low frequencies, where your entire Mel band covers only 1 or 2 FFT bins. If you need better resolution than that, you will have to consider how much of the triangle a particular FFT bin contains.
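To make that concrete, here is a minimal Python/NumPy sketch (only an illustration, under the question's assumptions of fs = 16000, a 512-point FFT, 10 bands and the 300-8000 Hz range) that builds the triangular filter bank as a matrix and shows how it would be applied to a power spectrum:

import numpy as np

def mel_filterbank(fs=16000, nfft=512, n_bands=10, fmin=300.0, fmax=8000.0):
    # returns an (n_bands x nfft//2+1) matrix of triangular Mel filters
    hz_to_mel = lambda f: 1125.0 * np.log(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (np.exp(m / 1125.0) - 1.0)

    # n_bands triangles need n_bands + 2 edge points spaced evenly on the Mel scale
    mel_edges = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_bands + 2)
    bin_edges = np.floor((nfft + 1) * mel_to_hz(mel_edges) / fs).astype(int)

    fbank = np.zeros((n_bands, nfft // 2 + 1))
    for m in range(1, n_bands + 1):
        left, center, right = bin_edges[m - 1], bin_edges[m], bin_edges[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / (center - left)    # rising edge of the triangle
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / (right - center)  # falling edge of the triangle
    return fbank

# Usage: multiply the power spectrum (|FFT|^2 of one Hamming-windowed frame) by the bank;
# the MFCCs would then come from the DCT of the log of this Mel spectrum.
# fbank = mel_filterbank()
# mel_spectrum = fbank @ power_spectrum     # power_spectrum has nfft//2 + 1 bins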

Anomaly detection with machine learning without labels

I am tracing multiple signals for a certain period of time and associating them with a timestamp like the following:
t0 1 10 2 0 1 0 ...
t1 1 10 2 0 1 0 ...
t2 3 0 9 7 1 1 ... // pressed a button to change the mode
t3 3 0 9 7 1 1 ...
t4 3 0 8 7 1 1 ... // pressed button to adjust a certain characteristic like temperature (signal 3)
where t0 is the time stamp, 1 is the value for signal 1, 10 the value for signal 2, and so on.
The data captured during that period of time should be considered the normal case. Now significant deviations from the normal case should be detected. By significant deviation I do NOT mean that one signal value simply changes to a value that has not been seen during the tracing phase, but rather that a lot of values change in a combination that has not yet been seen together. I do not want to hardcode rules, since in the future more signals might be added or removed, and other "modes" with other signal values might be implemented.
Can this be achieved with a certain machine learning algorithm? If a small deviation occurs, I want the algorithm to first see it as a minor change to the training set, and if it occurs multiple times in the future it should be "learned". The major goal is to detect the bigger changes / anomalies.
I hope I have explained my problem in enough detail. Thanks in advance.
You could just calculate the nearest neighbor in your feature space and set a threshold for how far it is allowed to be from your test point before the test point counts as an anomaly.
Let's say you have 100 values in your "certain period of time",
so you use a 100-dimensional feature space for your training data (which doesn't contain anomalies).
If you get a new dataset you want to test, you calculate the (k) nearest neighbor(s) and compute the (e.g. Euclidean) distance in your feature space.
If that distance is larger than a certain threshold, it's an anomaly.
What you have to do in order to optimize is find a good k and a good threshold, e.g. by grid search.
(1) Note that something like this probably only works well if your data has a fixed starting and ending point. Otherwise you would need a huge amount of data, and even then it will not perform as well.
(2) Note that it could be worth trying to create a separate detector for every "mode" you mentioned in your question.
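A minimal sketch of this nearest-neighbor idea, assuming scikit-learn is available; the training rows below reuse the small example from the question, while k and the threshold are arbitrary placeholders that you would tune:

import numpy as np
from sklearn.neighbors import NearestNeighbors

# one row per timestamp, one column per signal, recorded during the "normal" phase only
X_train = np.array([
    [1, 10, 2, 0, 1, 0],
    [1, 10, 2, 0, 1, 0],
    [3,  0, 9, 7, 1, 1],
    [3,  0, 9, 7, 1, 1],
    [3,  0, 8, 7, 1, 1],
], dtype=float)

k = 1               # number of neighbors to look at (tune via grid search)
threshold = 3.0     # maximum allowed distance before a sample counts as an anomaly (tune as well)

nn = NearestNeighbors(n_neighbors=k).fit(X_train)

def is_anomaly(sample):
    # mean Euclidean distance to the k nearest "normal" samples
    dist, _ = nn.kneighbors(np.asarray(sample, dtype=float).reshape(1, -1))
    return dist.mean() > threshold

print(is_anomaly([1, 10, 2, 0, 1, 0]))  # False: identical to a known normal row
print(is_anomaly([9,  9, 9, 9, 9, 9]))  # True: far away from everything seen so far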

Analysis of Prim's Algorithm

Can anyone explain why we use the key array (i.e. key[]), or what its importance is, in Prim's algorithm, which deals with the minimum spanning tree problem?
PRIM_MST(G, W, R)   // G -> graph, W -> weight matrix, R -> root vertex
-------------------------
for each v in V[G]
    key[v] <- infinity
    pred[v] <- NIL            // pred[] --> predecessor array
key[R] <- 0
Q <- V[G]                     // Q --> min-priority queue keyed on key[]
while Q != EMPTY
    u <- EXTRACT_MIN(Q)
    for each v in adj[u]      // adj[] --> adjacency list
        if v belongs to Q && w(u,v) < key[v]
            pred[v] <- u, key[v] <- w(u,v)
key[v] is basically the weight of the edge that led to that particular vertex during the construction of the MST.
On arriving at a vertex during the algorithm, it checks for the minimum-weight edge connecting set A (the set of vertices already traversed) and set B (the set of vertices not yet traversed). It follows this minimum edge and sets the key of the newly arrived vertex (the one reached after following this minimum edge) to the weight of this minimum edge.
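To make the role of key[] concrete, here is a minimal Python sketch (my own illustration, not the pseudocode above transcribed literally) that uses a heap as the priority queue; the small adjacency list at the bottom is a made-up example graph:

import heapq

def prim_mst(adj, root=0):
    # adj[u] is a list of (v, weight) pairs; returns (pred, key)
    n = len(adj)
    key = [float('inf')] * n   # key[v] = cheapest known edge weight connecting v to the tree
    pred = [None] * n          # pred[v] = tree parent of v via that cheapest edge
    key[root] = 0
    in_mst = [False] * n
    pq = [(0, root)]           # (key, vertex) pairs; stale entries are skipped below

    while pq:
        k, u = heapq.heappop(pq)
        if in_mst[u]:
            continue           # already extracted with a smaller key
        in_mst[u] = True
        for v, w in adj[u]:
            if not in_mst[v] and w < key[v]:
                key[v] = w     # found a cheaper edge into the not-yet-traversed set
                pred[v] = u
                heapq.heappush(pq, (w, v))
    return pred, key

# Usage on a small triangle graph: 0-1 (weight 2), 0-2 (weight 4), 1-2 (weight 1)
adj = [[(1, 2), (2, 4)], [(0, 2), (2, 1)], [(0, 4), (1, 1)]]
print(prim_mst(adj))   # pred = [None, 0, 1], key = [0, 2, 1]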

Feedback on algorithm for Steiner Tree with restrictions

For an assignment, I have to create a Steiner Tree. However, this is not a typical Steiner Tree, as the graph structure we're required to use does not allow insertion of new vertices. Rather, the test cases define a graph structure of N vertices and M edges while specifically marking X vertices as target nodes. These are the nodes we have to span while using some, none or all of the unmarked vertices in the graph.
My solution to this problem is:
Implement Dijkstra's Algorithm to find the shortest path between all the target vertices
For each of the shortest paths 1:n
Extract all current selected path vertices into a set
Extract all remaining vertices into a set
For all vertices of the current selected path 1:m
Execute Dijkstra to find shortest path between current vertex and other path's vertices
If this creates a spanning tree, save path and length in priority queue sorted by length value
Pop top of priority queue and return path
My issue is that this is an exhaustive search that uses the initial application of Dijkstra to create a reduced set of possible start-end vertices for a shorter path than a minimum spanning tree.
Is there a heuristic or other algorithm that may solve this problem?
With some help, I worked out this answer for a similar problem that I had. Rather than adding new vertices as in a spatial Steiner tree problem, the new Steiner points in this graph are the vertices that lie along the paths between the marked nodes. For a graph with N vertices, M edges, X required vertices, and S found vertices (vertices along our path):
Compute All Pairs Shortest Paths (Floyd-Warshall, Johnson's, whatever)
for k in X
    remove k from X, insert k into S
    for v in (X ∪ S)              // both sets
        find the shortest distance from k to v - keep the closest v and its path P
    for u in P (all vertices on the path)
        insert u into S
        if u exists in X, remove u from X
Now for the wall of text as to what this algorithm does. We pick a vertex k in X, and then find the minimum distance to the nearest other vertex in the target set X, or in the result set S, and call it v. Then we follow the path of nodes from {k, v}, inserting them into our result set. Finally, we double check and make sure that any vertices of X that were on the path (this shouldn't happen) are removed from X.
Any new vertex c that you want to add will have a minimum distance to some node already in your result set S. Since the nodes already in S are connected by minimum-distance paths, it follows that c is attached to S by the minimum-distance path from S to c. For example, if you have three nodes, A, B, and C, and A and B are already found to be a minimum distance apart, then adding C fulfills the requirement that it is the minimum distance from B, and the minimum-distance path from A to C goes through B.
I did some research on the discrete Steiner tree problem (which is what this is), and this is the best brute-force solution that I found. The main cost is the O(n^3) time it takes to do all-pairs shortest paths, but after that the construction of the minimum tree should be straightforward and quick, since you just need to look up distance information. The implementation I wound up working with is outlined nicely on Wikipedia.
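A minimal Python sketch of this shortest-path-based approach (my own illustration, not the exact implementation described above or the one on Wikipedia); the 5-vertex graph at the bottom is made up:

import itertools

def steiner_tree_approx(n, edges, terminals):
    # Floyd-Warshall APSP, then greedily attach the closest remaining terminal
    # to the partial tree via its shortest path.
    # edges: list of (u, v, weight); terminals: set of required vertices.
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]           # next hop for path reconstruction
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        if w < dist[u][v]:
            dist[u][v] = dist[v][u] = w
            nxt[u][v], nxt[v][u] = v, u
    for k in range(n):                              # Floyd-Warshall, O(n^3)
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]

    def path(u, v):                                 # vertices on a shortest u-v path
        p = [u]
        while u != v:
            u = nxt[u][v]
            p.append(u)
        return p

    X = set(terminals)
    S = {X.pop()}                                   # result set, seeded with one terminal
    while X:
        # closest (terminal, tree vertex) pair
        k, s = min(itertools.product(X, S), key=lambda pair: dist[pair[0]][pair[1]])
        for u in path(k, s):                        # pull the whole path into the tree
            S.add(u)
            X.discard(u)
    return S

# Usage on a hypothetical 5-vertex graph with terminals {0, 3, 4}
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (1, 4, 2), (0, 3, 10)]
print(steiner_tree_approx(5, edges, {0, 3, 4}))     # {0, 1, 2, 3, 4}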
