How to prove every free tree T has a separator S consisting of a single vertex, the removal of which partitions T into two or more connected components, each of size at most n/2?
Obviously, any non-leaf vertex of a tree is a separator. Finding a vertex that partitions the tree into "small" components is also quite straightforward:

1. Take any non-leaf vertex 'v' (trees with 1 or 2 vertices are trivial).
2. If 'v' is not "good", then there exists exactly one "large" component 'C' (with more than n/2 vertices) incident with 'v'. Moreover, there is exactly one vertex 'w' in 'C' adjacent to 'v'.
3. Take 'w' as the next candidate and repeat step 2.
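The walk above can be sketched in Python (the function name and edge-list input format are my own choices; subtree sizes are computed once from an arbitrary root, so every component size at a vertex can be read off in O(1)):

```python
from collections import defaultdict

def find_centroid(n, edges):
    """Find a vertex whose removal leaves components of size <= n/2."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Compute subtree sizes with an iterative DFS rooted at vertex 0.
    size = [1] * n
    parent = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    for u in reversed(order):
        if parent[u] != -1:
            size[parent[u]] += size[u]

    # Walk toward the unique "large" component until none exceeds n/2.
    v = 0
    while True:
        heavy = -1
        for w in adj[v]:
            # Component toward a child w has size[w] vertices; the
            # component toward the parent has n - size[v] vertices.
            comp = size[w] if parent[w] == v else n - size[v]
            if 2 * comp > n:
                heavy = w
        if heavy == -1:
            return v
        v = heavy
```

For a path 0-1-2-3-4 this returns the middle vertex 2.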
P.S. Try to do your homework yourself next time.
I am given a matrix of 0s and 1s and have to find the islands formed by the 1s. I found a reference:
https://www.careercup.com/question?id=14948781
about how to compute the number of islands, but I don't know how to adapt the algorithm so that, at the end, I obtain the list of islands, each given as the list of matrix cells belonging to it.
This problem is essentially asking you to find the connected components of an undirected graph. Here, a connected component is a group of 1s surrounded by 0s, where none of the 1s in the group is connected to another, separate group of 1s.
To solve this, you can perform a Depth First Search (DFS) over the elements of the 2D matrix:

1. If the algorithm encounters an unvisited 1, increment the count of islands.
2. Recursively perform a DFS on all the adjacent cells (up, down, left, right).
3. Keep track of the visited 1s so that they are not visited again (can be done using a set, or by marking each node with a boolean visited flag).
4. As soon as a 0 is found during this recursive DFS, stop that branch of the DFS.
5. Continue the algorithm by moving on to the next element in the 2D matrix and repeating steps 1-4.
6. When you reach the end of the 2D matrix, return the count of islands.
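A sketch of those steps in Python, adapted to return the islands themselves (as lists of cell coordinates) rather than just the count; an explicit stack replaces recursion to avoid recursion-depth limits on large grids:

```python
def find_islands(grid):
    """Return a list of islands; each island is a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    islands = []

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not visited[r][c]:
                # Iterative DFS collecting every cell of this island.
                cells = []
                stack = [(r, c)]
                visited[r][c] = True
                while stack:
                    i, j = stack.pop()
                    cells.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] == 1
                                and not visited[ni][nj]):
                            visited[ni][nj] = True
                            stack.append((ni, nj))
                islands.append(cells)
    return islands
```

`len(find_islands(grid))` then gives the island count from the linked question as a by-product.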
I have 1024 bit long binary representation of three handwritten digits: 0, 1, 8.
Basically, in 32x32 bitmap of a digit, rows are concatenated to form a binary vector.
There are 50 binary vectors for each digit.
With nearest-neighbour classification we can use the Hamming distance (or some other metric) to differentiate between the vectors.
Now I want to use another technique where, instead of looking at every bit of a vector, I would analyse fewer bits when comparing the vectors.
For example, I know that when comparing the 1024-bit bitmaps of the digits '8' and '0', the vector for '8' must have 1s in the middle, since the digit 8 visually appears as two zeros stacked in a column.
So the algorithm would look for the intersection of the two zeros (which would be the middle of the digit).
That's the way I want to work: I want to convert the low-level representation (the 1024-bit bitmap vector) into a high-level representation (consisting of a few properties extracted from the bitmap).
Any suggestions? I hope the question is somewhat clear to the audience.
Idea 1: Flood fill
This idea does not use the 50 patterns you have per digit: it is based on the idea that usually a "1" has all 0-bits connected around that "1" shape, while a "0" separates the 0-bits inside it from those outside it, and an "8" has two such enclosed areas. So counting connected areas of 0-bits would identify which of the three it is.
So you could use a flood-fill algorithm: start at any 0-bit in the vector, and set all connected 0-bits to 1. In a one-dimensional array you need to take care to correctly identify connected bits (either horizontally: 1 position apart, but not crossing a 32 boundary; or vertically: 32 positions apart). Of course, this flood-filling destroys the image, so make sure to use a copy. If after one such flood-fill there are still 0-bits (which were therefore not connected to those you turned into 1s), choose one of them and start a second flood-fill there. Repeat if necessary.
When all bits have been set to 1 in that way, use the number of flood-fills you had to perform, as follows:
One flood-fill? It's a "1", because all 0-bits are connected.
Two flood-fills? It's a "0", because the shape of a zero separates two areas (inside/outside)
Three flood-fills? It's an "8", because this shape separates three areas of connected 0-bits.
Of course, this process assumes that these handwritten digits are well-formed. For example, if an 8-shape would have a small gap, like here:
...then it will not be identified as an "8" but as a "0". This particular problem could be resolved by identifying "loose ends" of 1-bits (a "line" that stops): when you find two of those a short distance apart, increase the number you got from flood-fill counting by 1 (as if those two ends were connected).
Similarly, if a "0" accidentally has a small second loop, like here:
...it will be identified as an "8" instead of a "0". You could prevent this particular problem by requiring that each flood-fill finds a minimum number of 0-bits (like at least 10 0-bits) to count as one.
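Under those assumptions, the counting itself can be sketched in Python (pure Python, 4-connectivity on the flattened vector; the function name is mine). One region means "1", two means "0", three means "8":

```python
def count_zero_regions(bits, width=32):
    """Count 4-connected regions of 0-bits in a flattened row-major bitmap.
    For well-formed digits: 1 region -> "1", 2 -> "0", 3 -> "8"."""
    grid = list(bits)  # work on a copy; flood-filling destroys the image
    height = len(grid) // width
    fills = 0
    for start in range(len(grid)):
        if grid[start] == 0:
            fills += 1
            # Flood-fill this region of 0-bits, turning them into 1s.
            stack = [start]
            grid[start] = 1
            while stack:
                p = stack.pop()
                r, c = divmod(p, width)
                # Horizontal neighbours must not cross a row boundary,
                # which divmod/bounds checking handles for us.
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < height and 0 <= nc < width:
                        q = nr * width + nc
                        if grid[q] == 0:
                            grid[q] = 1
                            stack.append(q)
    return fills
```

The minimum-size filter against spurious tiny loops would go inside the fill loop (count the region's bits and ignore regions below the threshold).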
Idea 2: probability vector
For each digit, add up the 50 example vectors you have, so that for each position you have a count between 0 and 50. You would have one such "probability" vector per digit: prob0, prob1 and prob8. If prob8[501] = 45, it means that it is highly probable (45/50) that an "8" vector will have a 1-bit at index 501.
Now transform these 3 probability vectors as follows: instead of storing a count per position, store the positions in order of decreasing count (probability). So if prob8[513] has the highest value (like 49), then that new array should start like [513, ...]. Let's call these new vectors A0, A1 and A8 (for the corresponding digits).
Finally, when you need to match a given input vector, simultaneously go through A0, A1 and A8 (always looking at the same index in the three vectors) and keep 3 scores. When the input vector has a 1 at the position specified in A0[i], then add 1 to score0. If it also has a 1 at the position specified in A1[i] (same i), then add 1 to score1. Same thing for score8. Increment i, and repeat. Stop this iteration as soon as you have a clear winner, i.e. when the highest score among score0, score1 and score8 has crossed a threshold difference with the second highest score among them. At that point you know which digit is being represented.
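A sketch of this scheme in pure Python (`margin`, the early-exit score gap, is an assumed tuning knob, and ties in the ordering are broken arbitrarily):

```python
def build_rank_vector(examples):
    """examples: list of 0/1 vectors (one per training sample) for one
    digit. Return bit positions sorted by decreasing 1-count."""
    n = len(examples[0])
    counts = [sum(v[i] for v in examples) for i in range(n)]
    return sorted(range(n), key=lambda i: -counts[i])

def classify(vec, ranks, margin=2):
    """ranks: dict digit -> rank vector. Walk the rank vectors in
    parallel, scoring a hit whenever `vec` has a 1 at the listed
    position; stop once the leader is `margin` ahead of the runner-up."""
    scores = dict.fromkeys(ranks, 0)
    for i in range(len(vec)):
        for d, a in ranks.items():
            scores[d] += vec[a[i]]
        ordered = sorted(scores.values(), reverse=True)
        if ordered[0] - ordered[1] >= margin:
            break  # clear winner: no need to scan remaining positions
    return max(scores, key=scores.get)
```

With the real data you would build one rank vector per digit from its 50 examples and pick `margin` empirically.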
What method would you use to compute a distance that represents the number of "jumps" one has to do to go from one area to another area in a given 2D map?
Let's take the following map for instance:
The end result of the computation would be a triangle like this:
A B C D E F
A
B 1
C 2 1
D 2 1 1
E . . . .
F 3 2 2 1 .
Which means that going from A to D, it takes 2 jumps.
However, to go from anywhere to E, it's impossible because the "gap" is too big, and so the value is "infinite", represented here as a dot for simplification.
As you can see on the example, the polygons may share points, but most often they are simply close together and so a maximum gap should be allowed to consider two polygons to be adjacent.
This, obviously, is a simplified example; in the real case I'm faced with about 60,000 polygons and am only interested in jump values up to 4.
As input data, I have the polygon vertices as an array of coordinates, from which I already know how to calculate the centroid.
My initial approach would be to "paint" the polygons on a white background canvas, each with its own color, and then walk the line between the centroids of two candidate polygons. Counting the colors I encounter would give me the number of jumps.
However, this is not really reliable as it does not take into account concave arrangements where one has to walk around the "notch" to go from one polygon to the other as can be seen when going from A to F.
I have tried looking for reference material on this subject but could not find any because I have a hard time figuring what the proper terms are for describing this kind of problem.
My target language is Delphi XE2, but any example would be most welcome.
You can create an inflated polygon with a small offset for every initial polygon, then check for intersections with the neighbouring (inflated) polygons. Offsetting compensates for the small gaps between polygons.
Both the inflating and the intersection problems can be solved with the Clipper library.
How to find the potential neighbours depends on the real conditions. For example, a simple method: divide the plane into square cells, and look for neighbours among polygons that have vertices in the same cell or in the nearest cells.
Every pair of intersecting polygons gives an edge in an (unweighted, undirected) graph. You want to find all paths of length <= 4: just run a depth-limited BFS from every vertex (polygon), assuming the graph is sparse.
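The depth-limited BFS can be sketched in Python (the adjacency lists would come from the Clipper intersection tests; names are my own):

```python
from collections import deque

def jump_distances(adj, max_jumps=4):
    """adj: dict mapping each polygon to the polygons its inflated
    outline intersects. Return a dict (a, b) -> number of jumps for
    all pairs within max_jumps; absent pairs are "infinite"."""
    dist = {}
    for start in adj:
        seen = {start: 0}
        q = deque([start])
        while q:
            u = q.popleft()
            if seen[u] == max_jumps:
                continue  # depth limit: do not expand further
            for w in adj[u]:
                if w not in seen:
                    seen[w] = seen[u] + 1
                    q.append(w)
        for node, d in seen.items():
            if node != start:
                dist[(start, node)] = d
    return dist
```

On the example map (A-B, B-C, B-D, C-D, D-F adjacent, E isolated) this reproduces the triangle: A to D is 2 jumps, F to A is 3, and no entry exists for E.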
You can try single-link clustering or Voronoi diagrams. You can also brute-force it, or try density-based spatial clustering of applications with noise (DBSCAN) or k-means clustering.
I would try this:
1) Do a Delaunay triangulation of all the points of all polygons.
2) Remove from the Delaunay graph all triangles that have their 3 points in the same polygon.
Two polygons are neighbours by point if at least one triangle has a point in each of them (or, obviously, if the polygons share a common point).
Two polygons are neighbours by side if each polygon has at least two adjacent points in the same quad, i.e. two adjacent triangles (or, obviously, two common, adjacent points).
Once the gaps are filled with new polygons (triangles, possibly combined), use Dijkstra's algorithm, weighted by the distance between nearest points (or polygon centroids), to compute the paths.
I have a table that has frame numbers in one column and corresponding color moments in the other column. I found them using openCV.
Some of the frames have extremely high values and the rest very low. How can I extract the frames with very high peaks?
This is the plot of the distribution, I tried to use Gaussian smoothing and then thresholding on the plot below.
I got this result.
Now how should I proceed ?
Basically you are looking for a peak finder. MATLAB has a peakfinder function for exactly this.
I did not find any ready-made API in OpenCV for it, so I implemented MATLAB's peakfinder myself. The algorithm goes this way:
Initial assumptions (prior knowledge) can be: a) you can have 'n' peaks in your distribution; b) your peaks are separated by a minimum window 'w', i.e. no two peaks are closer than 'w'.
I can tell you the window implementation:
1. Start at a data point and mark its position as the current index.
2. Check whether the left and right neighbourhoods of length 'w' contain a value larger than the value at the current index.
3. If yes, move to that point, make it the current index, and repeat step 2.
4. If no, the current index is a local maximum. Move the current index forward by 'w' and repeat step 2 until you reach the end of the data set.
Try to implement this and check the MATLAB help for peakfinder. If no luck, I can post the code.
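A slightly simplified sketch of that window idea in Python (not MATLAB's actual peakfinder; here a point counts as a peak when nothing larger lies within 'w' positions on either side, and the scan then skips past the window):

```python
def find_peaks(data, w):
    """Return indices of local maxima: points that are the maximum of
    their neighbourhood of w samples on each side. After a peak the
    scan skips w samples, so no two peaks are closer than w."""
    peaks = []
    i = 0
    n = len(data)
    while i < n:
        lo, hi = max(0, i - w), min(n, i + w + 1)
        if all(data[i] >= data[k] for k in range(lo, hi)):
            peaks.append(i)
            i += w + 1  # enforce the minimum separation 'w'
        else:
            i += 1
    return peaks
```

In your case you would run it on the smoothed color-moment values and keep the frames at the returned indices.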
EDIT: after seeing your edited graph, it seems the graph has well-defined maximum peaks, so you can track the sign of dy/dx along the graph. Maximum peaks are points where the sign of dy/dx changes from positive to negative. In code language:
vector<double> array_of_max_peak;
if (sign(x(n) - x(n-1)) > 0 && sign(x(n+1) - x(n)) < 0)
    array_of_max_peak.push_back(x(n));
I want to create an effect like Photoshop's "Magic Wand" for the central pixel of the screen with GLSL shaders in my iPhone app (capturing the image from the camera). Right now I do this by getting the array of pixels and applying a flood-fill algorithm to the central pixel (all in Objective-C). This runs on the CPU and is a bit too slow for me, so I want to try doing it with GLSL shaders.
Actually, all I need is to rewrite the flood fill in a fragment shader; more precisely, to know whether the current fragment's color is near the threshold color and whether the current fragment is a neighbor of previously detected fragments in the area. That sounds too confusing to me, and I cannot tell whether it is even possible.
The algorithm for flood-fill is (pseudocode):
Flood-fill (node, target-color, replacement-color):
 1. Set Q to the empty queue.
 2. If the color of node is not equal to target-color, return.
 3. Add node to Q.
 4. For each element n of Q:
 5.     If the color of n is equal to target-color:
 6.         Set w and e equal to n.
 7.         Move w to the west until the color of the node to the west of w no longer matches target-color.
 8.         Move e to the east until the color of the node to the east of e no longer matches target-color.
 9.         Set the color of nodes between w and e to replacement-color.
10.         For each node n between w and e:
11.             If the color of the node to the north of n is target-color, add that node to Q.
12.             If the color of the node to the south of n is target-color, add that node to Q.
13. Continue looping until Q is exhausted.
14. Return.
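For reference, the pseudocode above translates almost line for line to Python (operating in place on a list-of-lists image; this is the CPU version you would be replacing):

```python
from collections import deque

def scanline_flood_fill(img, node, target, replacement):
    """Scanline flood fill: expand each queued pixel into its full
    horizontal run, recolor the run, then queue matching pixels
    directly north and south of it."""
    if img[node[0]][node[1]] != target or target == replacement:
        return
    h, w = len(img), len(img[0])
    q = deque([node])
    while q:
        r, c = q.popleft()
        if img[r][c] != target:
            continue  # already recolored via an earlier run
        west = c
        while west > 0 and img[r][west - 1] == target:
            west -= 1
        east = c
        while east < w - 1 and img[r][east + 1] == target:
            east += 1
        for x in range(west, east + 1):
            img[r][x] = replacement
            for nr in (r - 1, r + 1):  # north and south neighbours
                if 0 <= nr < h and img[nr][x] == target:
                    q.append((nr, x))
```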
The question: is it possible to do that in shader, and if yes, how can I do it?
Thanks!
No, shaders do not work that way. In a shader you can only read or write a given buffer, never both at the same time. If you look back at your algorithm, it reads and writes the same data.
You could try a ping-pong scheme, but I doubt it would be fast:

for (some number of iterations)
    for (every pixel in dest)
        if the source pixel has a filled neighbour (up, down, left, right) and is within the threshold, write "filled"
        else write the source value
    flip source and dest

This advances the fill front by at most one pixel per iteration, and you only have an upper bound on when it is done (the image size).
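The ping-pong scheme can be sketched on the CPU in Python (grayscale image as a list of lists; `threshold` compares each pixel against the seed color and is an assumed parameter; on the GPU the two boolean grids would be the two textures being flipped):

```python
def pingpong_fill(image, seed, threshold):
    """Ping-pong fill: each pass, a pixel becomes filled if one of its
    4-neighbours was filled in the previous pass and the pixel is
    within `threshold` of the seed color. The region grows by at most
    one pixel per pass, so the pass count is bounded by the image size."""
    h, w = len(image), len(image[0])
    target = image[seed[0]][seed[1]]
    filled = [[False] * w for _ in range(h)]
    filled[seed[0]][seed[1]] = True
    changed = True
    while changed:
        changed = False
        nxt = [row[:] for row in filled]  # "dest"; `filled` is "source"
        for r in range(h):
            for c in range(w):
                if filled[r][c] or abs(image[r][c] - target) > threshold:
                    continue
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w and filled[nr][nc]:
                        nxt[r][c] = True
                        changed = True
                        break
        filled = nxt  # flip source and dest
    return filled
```

Each iteration of the outer loop corresponds to one full-screen shader pass, which is exactly why the approach is slow for large regions.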
You could get even more clever and try a pyramid scheme: run at 2x lower resolution first and use that to determine filled areas. But this is really just not an algorithm that works well on the GPU. I recommend doing a hand-optimized assembly CPU version instead.