Can anybody please explain: what is the vertex cover of a disconnected undirected graph? - vertex-cover

Say we have a graph where the vertices in V1 = {A, B, C} are connected to each other, and the vertices in V2 = {E, F, G} are connected to each other, but V1 and V2 are not connected. What will be the vertex cover of this graph?
In my opinion the vertex cover will be 2, but I am confused by this problem.
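For what it's worth, the minimum vertex cover of a disconnected graph is just the union of minimum vertex covers of its components, so small cases can be checked by brute force. A sketch, assuming "connected" means each triple forms a path (A-B-C and E-F-G); if each triple were a triangle instead, the answer would be 4:

```python
from itertools import combinations

# One reading of the question: each triple forms a path (A-B-C, E-F-G).
edges = [("A", "B"), ("B", "C"), ("E", "F"), ("F", "G")]
vertices = sorted({v for e in edges for v in e})

def min_vertex_cover_size(vertices, edges):
    # Brute force: the smallest subset of vertices touching every edge.
    # A cover of a disconnected graph is the union of covers of its components.
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for (u, v) in edges):
                return k

print(min_vertex_cover_size(vertices, edges))  # 2: {B} covers one path, {F} the other
```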

Related

Minimum cost to connect components of a disconnected graph?

We are initially given a fully connected graph by means of an adjacency matrix. Then, some edges are removed such that the graph becomes disconnected and we now have multiple components of this disconnected graph. What is the minimum cost needed to connect all the components?
Let G = (V, E_1 ∪ E_2) be the original (weighted, fully connected) graph and G' = (V, E_1) the graph obtained by removing the edges in the set E_2.
Consider the graph G'' that is obtained by contracting the connected components of G' (i.e., each connected component becomes a single vertex), where two vertices of G'' are neighbours if and only if the corresponding connected components in G' were connected by an edge in E_2. Essentially, this means that the edges of G'' are the edges in the set E_2 (the edges that were removed from the original graph).
Observe that adding a subset of the edges from E_2 to G' restores (full) connectivity of G' if and only if these edges connect all vertices from G''. The cheapest way to do this is by selecting a min-cost spanning tree on G'' (with respect to the weights of the edges). From your comments, I assume you know what a minimum spanning tree is and how it can be computed.
One-sentence summary:
A cost-minimal set of edges that is needed to restore connectivity can be found by computing a minimum (cost) spanning tree on the graph that is obtained by contracting each of the connected components into a single vertex and that contains, as its edge set, the edges that were removed from the original graph.
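A minimal sketch of this approach (the function names and the toy graph are my own; edges are given as (weight, u, v) tuples): contract the components of G' with a union-find, then run Kruskal over the removed edges E_2, keeping only edges that join distinct components.

```python
# Sketch: reconnect the components of G' at minimum cost using Kruskal
# over the removed edges E_2. Vertices are 0..n-1.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def min_reconnect_cost(n, e1, e2):
    parent = list(range(n))
    # Contract the connected components of G' = (V, E_1).
    for _, u, v in e1:
        parent[find(parent, u)] = find(parent, v)
    cost = 0
    # Kruskal on E_2: take the cheapest edges that join distinct components.
    for w, u, v in sorted(e2):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            cost += w
    return cost

# Two components {0,1} and {2,3}; removed edges with weights 5 and 2.
print(min_reconnect_cost(4, [(1, 0, 1), (1, 2, 3)], [(5, 0, 2), (2, 1, 3)]))  # 2
```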

Merging depth maps for trinocular stereo

I have a parallel trinocular setup where all 3 cameras are aligned collinearly, as depicted below.
Left-Camera------------Centre-Camera---------------------------------Right-Camera
The baseline (distance between cameras) between left and centre camera is the shortest and the baseline between left and right camera is the longest.
In theory I can obtain 3 sets of disparity images using different camera combinations (L-R, L-C and C-R). I can generate depth maps (3D points) for each disparity map using triangulation. I now have 3 depth maps.
The L-C combination has higher depth accuracy (the measured distance is more accurate) for objects that are near (since the baseline is short), whereas the L-R combination has higher depth accuracy for objects that are far (since the baseline is long). Similarly, the C-R combination is accurate for objects at medium distance.
In stereo setups, normally we define the left (RGB) image as the reference image. In my project, by thresholding the depth values, I obtain an ROI on the reference image. For example I find all the pixels that have a depth value between 10-20m and find their respective pixel locations. In this case, I have a relationship between 3D points and their corresponding pixel location.
Since in normal stereo setups, we can have higher depth accuracy only for one of the two regions depending upon the baseline (near and far), I plan on using 3 cameras. This helps me to generate 3D points of higher accuracy for three regions (near, medium and far).
I now want to merge the 3 depth maps to obtain a global map. My problems are as follows -
How do I merge the three depth maps?
After merging, how do I know which depth value corresponds to which pixel location in the reference (left RGB) image?
Your help will be much appreciated :)
1) I think that simple "merging" of depth maps (as matrices of values) is not possible, if you are thinking of a global 2D depth map as an image or a matrix of depth values. You can instead merge the 3 sets of 3D points using some similarity criterion, like the distance (refining your point cloud). If two points are too close, delete one of them (runnable version of the pseudocode):
removed = set()
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        if j not in removed and distance(points[i], points[j]) < threshold:
            removed.add(j)  # j is a near-duplicate of i
points = [p for k, p in enumerate(points) if k not in removed]
or delete the 2 points and add a point that has the average coordinates.
2) Given point one, this question becomes "how to connect a 3D point to the related pixel in the left image" (that is the only interpretation).
The answer is simply: use the projection equation. If you have K (intrinsic matrix), R (rotation matrix) and t (translation vector) from the calibration of the left camera, join R and t into a 3x4 matrix
[R|t]
and then project the 3D point M, in 4-dimensional homogeneous coordinates (X, Y, Z, 1), to a point m = (u, v, w):
m = K*[R|t]*M
Divide m by its third coordinate w and you obtain
m = (u', v', 1)
u' and v' are the pixel coordinates in the left image.
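A minimal sketch of this projection in plain Python (the focal length, principal point, and identity pose below are illustrative values, not from any real calibration):

```python
# Pinhole projection of a 3D point: m = K * [R|t] * M, then divide by w.
f, cx, cy = 800.0, 320.0, 240.0          # illustrative intrinsics
K = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]
Rt = [[1.0, 0.0, 0.0, 0.0],              # [R|t] as a 3x4 matrix
      [0.0, 1.0, 0.0, 0.0],              # (here R = identity, t = 0)
      [0.0, 0.0, 1.0, 0.0]]

def project(K, Rt, X, Y, Z):
    Mh = [X, Y, Z, 1.0]                  # homogeneous 3D point
    cam = [sum(Rt[r][c] * Mh[c] for c in range(4)) for r in range(3)]
    m = [sum(K[r][c] * cam[c] for c in range(3)) for r in range(3)]
    return m[0] / m[2], m[1] / m[2]      # divide by w -> pixel (u', v')

print(project(K, Rt, 0.0, 0.0, 5.0))  # point on the optical axis -> (320.0, 240.0)
```

A point on the optical axis projects to the principal point, which is a quick sanity check for the calibration matrices.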

Calculating the neighborhood distance

What method would you use to compute a distance that represents the number of "jumps" one has to do to go from one area to another area in a given 2D map?
Let's take the following map for instance:
(source: free.fr)
The end result of the computation would be a triangle like this:
A B C D E F
A
B 1
C 2 1
D 2 1 1
E . . . .
F 3 2 2 1 .
Which means that going from A to D, it takes 2 jumps.
However, to go from anywhere to E, it's impossible because the "gap" is too big, and so the value is "infinite", represented here as a dot for simplification.
As you can see on the example, the polygons may share points, but most often they are simply close together and so a maximum gap should be allowed to consider two polygons to be adjacent.
This, obviously, is a simplified example, but in the real case I'm faced with about 60000 polygons and am only interested by jump values up to 4.
As input data, I have the polygon vertices as an array of coordinates, from which I already know how to calculate the centroid.
My initial approach would be to "paint" the polygons on a white background canvas, each with its own color, and then walk the line between the centroids of two candidate polygons. Counting the colors I encounter could give me the number of jumps.
However, this is not really reliable as it does not take into account concave arrangements where one has to walk around the "notch" to go from one polygon to the other as can be seen when going from A to F.
I have tried looking for reference material on this subject but could not find any because I have a hard time figuring what the proper terms are for describing this kind of problem.
My target language is Delphi XE2, but any example would be most welcome.
You can create an inflated polygon with a small offset for every initial polygon, then check for intersection with the neighbouring (inflated) polygons. Offsetting is useful to compensate for small gaps between polygons.
Both the inflating and the intersection problems can be solved with the Clipper library.
The solution of the potential-neighbours problem depends on the real conditions. For example, a simple method: divide the plane into square cells, and check for neighbours that have vertices in the same cell and in the nearest cells.
Every pair of intersecting polygons gives an edge in an (unweighted, undirected) graph. You want to find all the paths with length <= 4 - just execute a depth-limited BFS from every vertex (polygon), assuming that the graph is sparse.
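The depth-limited BFS from the last step might look like this (the toy adjacency below mirrors the example map, where E is isolated and therefore unreachable):

```python
from collections import deque

def jumps_within(adj, start, max_depth=4):
    # Depth-limited BFS: number of "jumps" from `start` to every polygon
    # reachable in at most `max_depth` steps. `adj` maps node -> neighbours.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == max_depth:
            continue  # do not expand beyond the depth limit
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy adjacency matching the example map; E is isolated, so it never appears.
adj = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"],
       "D": ["B", "C", "F"], "F": ["D"], "E": []}
print(jumps_within(adj, "A"))  # {'A': 0, 'B': 1, 'C': 2, 'D': 2, 'F': 3}
```

Running this from every polygon fills in one row of the distance triangle; unreachable polygons (like E) simply never show up in the result, which matches the "infinite" dots.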
You can try single-link clustering or Voronoi diagrams. You can also brute-force it, or try density-based spatial clustering of applications with noise (DBSCAN) or k-means clustering.
I would try this:
1) Do a Delaunay triangulation of all the points of all polygons.
2) Remove from the Delaunay graph all triangles that have their 3 points in the same polygon.
Two polygons are neighbours by point if at least one triangle has at least one point in both polygons (or, obviously, if the polygons have a common point).
Two polygons are neighbours by side if each polygon has at least two adjacent points in the same quad, i.e. two adjacent triangles (or, obviously, two common and adjacent points).
Once the gaps are filled with new polygons (triangles, eventually combined), use Dijkstra's algorithm, weighted with the distance between nearest points (or polygon centroids), to compute the paths.
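The Dijkstra step can be sketched like this (a minimal version, assuming the polygon-adjacency graph has already been built; the node names and centroid-distance weights below are illustrative):

```python
import heapq

def dijkstra(graph, source):
    # graph: node -> list of (neighbour, weight); returns shortest distances.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative centroid-distance weights between neighbouring polygons.
graph = {"A": [("B", 1.5)], "B": [("A", 1.5), ("C", 2.0)], "C": [("B", 2.0)]}
print(dijkstra(graph, "A"))  # {'A': 0.0, 'B': 1.5, 'C': 3.5}
```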

Finding a Directed Graph from Image

The problem I'm facing is that I have an image. The image will contain shapes of triangle, square and cross. Based on that I need to find a directed graph which connects the shapes together.
If the shape is a square, it means that it is directed to the next shape (or node) in the direction of traversal.
If the shape is a triangle, it means it is connected to all the nodes/shapes in the direction its vertices are pointing to.
If the shape is a cross, it means that it is connected to all the nodes/shapes pointed out by its sides.
Example:
My problem statement is to find the shortest path of traversal based upon these shapes.
Please help. Thanks in advance!

XNA texture coordinates on merged textures

I have a problem with texture coordinates. First I would like to describe what I want to do, then I will ask the question.
I want a mesh with multiple textures that uses only one big texture. The big texture merges all the textures the mesh is using. I made a routine that merges textures; that is no problem. But I still have to modify the texture coordinates, so that the mesh, which now uses only one texture instead of many, still has everything placed well.
See the picture:
In the upper left corner I have one of the textures (let's call it A) that I merged into a big texture, the right one (B). A's top left is (0,0) and its bottom right is (1,1). For ease of use, let's say that B.width = A.width * 2, and the same for the height. So on B, the mini texture (M, which is the original A) has its bottom right at (0.5, 0.5).
I have had no problems understanding this so far, and I hope I understood it correctly. But the problem here is that there are texture coordinates that are:
above 1
negative
on the original A. What should these be on M?
Let's say A has (-0.1, 0) - is that (-0.05, 0) on M inside B?
What about those numbers that are outside the 0..1 region? Is (-3.2, 0) on A (-1.6, 0) or (-3.1, 0) on B? Do I clip off the part that is mod 1 and divide by 2 (because I stated above that the width is double), or should I divide the whole number by 2? As far as I understand, numbers outside this region are about mirroring (or repeating) the texture. How do I manage this, so the output does not contain the orange texture from B?
If my question is not clear enough (I am not much skilled in English), please ask and I will edit/answer, just help me clear my confusion :)
Thanks in advance:
Péter
A single texture has coordinates in the [0-1, 0-1] range.
The new texture also has coordinates in the [0-1, 0-1] range.
In your new texture, composed of four single textures, your algorithm has to translate the texture coordinates this way:
The blue single square texture will have new coordinates in the [0-0.5, 0-0.5] range.
The orange single square texture will have new coordinates in the [0.5-1, 0-0.5] range.
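A small sketch of this translation that also handles out-of-range coordinates, assuming the sampler uses wrap (repeat) addressing, so the coordinate is first reduced modulo 1 and then scaled into the sub-rectangle of the atlas (the function and parameter names are my own):

```python
def remap_uv(u, v, offset_u, offset_v, scale_u, scale_v):
    # Wrap (repeat) addressing: reduce the coordinate into [0, 1) first,
    # then scale/translate it into the sub-rectangle of the atlas.
    # Python's % already maps negative values into [0, 1).
    u = u % 1.0
    v = v % 1.0
    return offset_u + u * scale_u, offset_v + v * scale_v

# A occupies the top-left quarter of B: offset (0, 0), scale (0.5, 0.5).
print(remap_uv(-0.1, 0.0, 0.0, 0.0, 0.5, 0.5))   # (0.45, 0.0): -0.1 wraps to 0.9
print(remap_uv(-3.2, 0.0, 0.0, 0.0, 0.5, 0.5))   # -3.2 wraps to ~0.8, then * 0.5
```

Note that this only reproduces wrap addressing; if the original material relied on mirror addressing, the coordinate would have to be reflected on every other repeat instead of simply wrapped.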
