In Blender, you can see and access each face of a 3D model like this one: https://poly.google.com/view/6mRHqTCZHxw
Is it possible in SceneKit to do the same, that is access each face of the model?
This question is similar and implies it is impossible, but does not confirm if SceneKit lets you programmatically access all faces of a model. (It focuses on identifying the face touched.)
Two questions:
1) Can you programmatically access each face?
2) Can you filter and only access faces that are visible (i.e., ignore faces that are "inside" the model or occluded by other faces)?
An implementation of Xartec's answer to your first question (#1), also based on the Apple documentation, in Swift 5.3:
extension SCNGeometryElement {
var faces: [[Int]] {
func arrayFromData<Integer: BinaryInteger>(_ type: Integer.Type, startIndex: Int = 0, size: Int) -> [Int] {
assert(self.bytesPerIndex == MemoryLayout<Integer>.size)
return [Integer](unsafeUninitializedCapacity: size) { arrayBuffer, capacity in
self.data.copyBytes(to: arrayBuffer, from: startIndex..<startIndex + size * MemoryLayout<Integer>.size)
capacity = size
}
.map { Int($0) }
}
func integersFromData(startIndex: Int = 0, size: Int = self.primitiveCount) -> [Int] {
switch self.bytesPerIndex {
case 1:
return arrayFromData(UInt8.self, startIndex: startIndex, size: size)
case 2:
return arrayFromData(UInt16.self, startIndex: startIndex, size: size)
case 4:
return arrayFromData(UInt32.self, startIndex: startIndex, size: size)
case 8:
return arrayFromData(UInt64.self, startIndex: startIndex, size: size)
default:
return []
}
}
func vertices(primitiveSize: Int) -> [[Int]] {
integersFromData(size: self.primitiveCount * primitiveSize)
.chunked(into: primitiveSize)
}
switch self.primitiveType {
case .point:
return vertices(primitiveSize: 1)
case .line:
return vertices(primitiveSize: 2)
case .triangles:
return vertices(primitiveSize: 3)
case .triangleStrip:
let vertices = integersFromData(size: self.primitiveCount + 2)
return (0..<vertices.count - 2).map { index in
Array(vertices[(index..<(index + 3))])
}
case .polygon:
let polygonSizes = integersFromData()
let allPolygonsVertices = integersFromData(startIndex: polygonSizes.count * self.bytesPerIndex, size: polygonSizes.reduce(into: 0, +=))
var current = 0
return polygonSizes.map { count in
defer {
current += count
}
return Array(allPolygonsVertices[current..<current + count])
}
@unknown default:
return []
}
}
}
The result is an array of faces, each face containing a list of vertex indices.
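For example, a quick way to dump the faces of a node's first geometry element (node here is a placeholder for whatever SCNNode you are inspecting):
let faces = node.geometry?.elements.first?.faces ?? []
print(faces)   // e.g. [[0, 1, 2], [2, 1, 3], ...] for a .triangles element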
An answer explaining how to extract the vertices from an SCNGeometrySource can be found at https://stackoverflow.com/a/66748865/3605958, and it can be adapted to extract colors instead.
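As a minimal sketch of what that linked answer does (assuming the source stores 32-bit float components with at least three per vector, the common layout for the .vertex, .normal and .color semantics), you can read an SCNGeometrySource into SCNVector3 values like this:
import SceneKit
extension SCNGeometrySource {
    // Sketch: read this source's vectors as SCNVector3, assuming float components.
    var vectors: [SCNVector3] {
        guard usesFloatComponents,
              componentsPerVector >= 3,
              bytesPerComponent == MemoryLayout<Float>.size else { return [] }
        return (0..<vectorCount).map { index in
            let start = dataOffset + index * dataStride
            let floats = data
                .subdata(in: start..<start + 3 * bytesPerComponent)
                .withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
            return SCNVector3(floats[0], floats[1], floats[2])
        }
    }
}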
You will need this extension that implements the chunked(into:) method used above:
extension Collection {
func chunked(into size: Index.Stride) -> [[Element]] where Index: Strideable {
precondition(size > 0, "Chunk size should be at least 1")
return stride(from: self.startIndex, to: self.endIndex, by: size).map {
Array(self[$0..<Swift.min($0.advanced(by: size), self.endIndex)])
}
}
}
For #2, I don't believe there's a way.
You can, but there is no convenient way built into SceneKit that lets you do it, so you would have to build that yourself.
Yes, if you define what a face is and map that to the vertices in the model. For example, you could read the SCNGeometry's SCNGeometrySources into your own arrays of face objects, in the same order. Using the faceIndex you can then get the index into your array of faces. To update them, you would have to construct an SCNGeometry from SCNGeometrySources programmatically, based on your own data from the faces array.
Note that the faceIndex refers to the triangle rendered and not the quad/polygon, so you have to convert it (very doable if everything is quads).
I’m working on a SceneKit based app that is basically a mini Blender for iPad Pros. It uses a half-edge data structure with objects for vertices, edges and faces. This allows access to those elements, though in reality it allows access to the half-edge data structure mapped to the model, which forms the basis for the geometry that replaces the one rendered.
Not directly. If you have the geometry mapped to a data model it is of course possible to calculate it before rendering, but unfortunately SceneKit doesn’t provide a convenient way to know which faces weren’t rendered.
That all said, a face is merely a collection of vertices and indices, which are stored in the SCNGeometrySources. It may be easier to provide a better answer if you explain why you want to access the faces and what you want to do with their vertices.
EDIT: based on your comment "if they tap on face, for instance, the face should turn blue."
As I mentioned above, a face is merely a collection of vertices and indices; a face itself does not have a color, it is the vertices that can have a color. An SCNNode has an SCNGeometry that has several SCNGeometrySources holding the information about the vertices and how they are used to render faces. So what you want to do is go from faceIndex to the corresponding vertex indices in the SCNGeometrySource. You then need to read the latter into an array of vectors, update them as desired, and then create a new SCNGeometrySource based on your own array of vectors.
As I mentioned, the faceIndex merely provides an index of what was rendered, and not necessarily what you fed it (the SCNGeometrySource), so this requires mapping the model to a data structure.
If your model consists of all triangles, has unique vertices (as opposed to shared ones), and does not interleave the vertex data, then faceIndex 0 corresponds to vertices 0, 1 and 2, and faceIndex 1 to vertices 3, 4 and 5 in the SCNGeometrySource. With quads and other polygons, or interleaved vertex data, it becomes significantly more complicated.
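Under exactly those assumptions the mapping is pure arithmetic. A minimal sketch, using the faceIndex from an SCNHitTestResult (hit is a placeholder name):
import SceneKit
// Sketch: all-triangle geometry, unique non-interleaved vertices in render order.
// Each face occupies three consecutive entries, so faceIndex n maps to
// source indices 3n, 3n + 1 and 3n + 2.
func vertexIndices(for hit: SCNHitTestResult) -> [Int] {
    let first = hit.faceIndex * 3
    return [first, first + 1, first + 2]
}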
In short, there is no direct access to face entities in SceneKit, but it is possible to modify the SCNGeometrySources (with vertex positions, colors, normals, UV coordinates) programmatically.
EDIT 2: based on further comments:
The primitiveType tells SceneKit how the model is constructed; it does not actually convert it. So it would still require the model to be triangulated already. But then, if it is all triangles, AND if the model uses unique vertices (as opposed to sharing vertices with adjacent faces; Model I/O provides a function to split shared vertices into unique ones if necessary), AND if all the vertices in the SCNGeometrySource are actually rendered (which is usually the case if the model is properly constructed), then yes. It is possible to do the same with polygons, see https://developer.apple.com/documentation/scenekit/scngeometryprimitivetype/scngeometryprimitivetypepolygon
Polygon 5, 3, 4, 3 would correspond to face index 0, 1, 2, 3 only if they were all triangles which they are obviously not. Based on the number of vertices per polygon however you can determine how many triangles will be rendered for the polygon. Based on that it is possible to get the index of the corresponding verts.
For example, the first polygon corresponds to face index 0, 1 and 2 (takes 3 triangles to create that polygon with 5 verts), the second polygon is face index 3, the third polygon is faceIndex 4 and 5.
In practice that means looping through the polygons in the element and adding to a faceCounter variable (increment by 1 for each vertex beyond the first 2) until you reach the same value as faceIndex. I do this same basic conversion on my own data structure and it works quite well.
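A sketch of that counting loop, assuming you have already read the per-polygon vertex counts (the first primitiveCount entries of the element data) into a polygonSizes array:
// Sketch: map a rendered triangle's faceIndex back to the polygon it belongs to.
// A polygon with n vertices is rendered as (n - 2) triangles.
func polygonIndex(forFaceIndex faceIndex: Int, polygonSizes: [Int]) -> Int? {
    var trianglesSoFar = 0
    for (polygon, size) in polygonSizes.enumerated() {
        trianglesSoFar += max(size - 2, 0)
        if faceIndex < trianglesSoFar {
            return polygon
        }
    }
    return nil   // faceIndex is beyond the last polygon
}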
EDIT3: in practical steps:
Convert the SCNGeometryElement to an array of ints.
Convert the SCNGeometrySource with the color semantic to an array of vectors. It is possible there is no SCNGeometrySource with the color semantic in which case you will have to create it.
If the polygon primitive is used, loop through the first portion of the array you created from the SCNGeometryElement (up to the number of primitives, in this case polygons) and keep a counter to which you add 1 for every vertex beyond the first 2. So if the polygon has 3 vertices, increment the counter by 1; if the polygon has 4 vertices, increment by 2. Every time you increment the counter, thus for every polygon, check whether faceIndex has been reached. Once you get to the polygon that contains the tapped face, you can get the corresponding vertex indices from the second part of the SCNGeometryElement, using the polygon layout described in the Apple documentation linked above. If you add a second variable and increment it with the vertex count of each polygon while looping through them, you already know where that polygon's vertex indices are stored in the element.
If all the polygons are quads the conversion is easier: faceIndex 0 and 1 correspond to polygon 0, faceIndex 2 and 3 to polygon 1, and so on.
Once you have the vertex indices from the SCNGeometryElement, you can modify the vertices at those indices in the array you created from (and for) the SCNGeometrySource. Then recreate the SCNGeometrySource and update the SCNGeometry.
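A sketch of those last two steps, assuming you keep one RGBA value per vertex in a colors array (the function name and the four-floats-per-vertex layout are assumptions, not something SceneKit prescribes):
import SceneKit
import simd
// Sketch: recolor the tapped face's vertices and rebuild the geometry with a
// fresh color source. `colors` holds one RGBA value per vertex, in source order.
func recolor(geometry: SCNGeometry, vertexIndices: [Int], color: simd_float4,
             colors: inout [simd_float4]) -> SCNGeometry {
    for index in vertexIndices {
        colors[index] = color
    }
    let colorData = colors.withUnsafeBufferPointer { Data(buffer: $0) }
    let colorSource = SCNGeometrySource(data: colorData,
                                        semantic: .color,
                                        vectorCount: colors.count,
                                        usesFloatComponents: true,
                                        componentsPerVector: 4,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: 0,
                                        dataStride: MemoryLayout<simd_float4>.stride)
    // Keep every non-color source, swap in the new color source, reuse the elements.
    let otherSources = geometry.sources.filter { $0.semantic != .color }
    let newGeometry = SCNGeometry(sources: otherSources + [colorSource],
                                  elements: geometry.elements)
    newGeometry.materials = geometry.materials
    return newGeometry
}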
Last but not least, unless you use a custom shader, the vertex colors you provide through the SCNGeometrySource will only show up correctly if the assigned material has a white diffuse color (so you may have to make the base texture white too).
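For example (assuming a UIKit context; on macOS you would use NSColor instead):
import SceneKit
import UIKit
// Vertex colors are multiplied with the diffuse, so keep the diffuse plain white.
func prepareForVertexColors(_ material: SCNMaterial) {
    material.diffuse.contents = UIColor.white
}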
I'm trying to make a simple 3D modeling tool.
I need to move a vertex (or vertices) to transform the model.
I used a dynamic vertex buffer because I thought it would need frequent updates,
but performance is too low on a high-polygon model, even though I change just one vertex.
Are there other methods, or am I doing this the wrong way?
here is my D3D11_BUFFER_DESC
Usage = D3D11_USAGE_DYNAMIC;
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
and my update function
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < vIndice.size(); ++i)
{
pBuffer[vIndice[i]].xfPosition.x = pVerticesInfo[vIndice[i]].xfPosition.x;
pBuffer[vIndice[i]].xfPosition.y = pVerticesInfo[vIndice[i]].xfPosition.y;
pBuffer[vIndice[i]].xfPosition.z = pVerticesInfo[vIndice[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
As mentioned in the previous answer, you are updating your whole buffer every time, which will be slow depending on model size.
The solution is indeed to implement partial updates. There are two possibilities: you want to update a single vertex, or you want to update
arbitrary indices (for example, you want to move N vertices in one go, at different locations, like vertices 1, 20 and 23).
The first solution is rather simple, first create your buffer with the following description :
Usage = D3D11_USAGE_DEFAULT;
CPUAccessFlags = 0;
BindFlags = D3D11_BIND_VERTEX_BUFFER;
ByteWidth = sizeof(ST_Vertex) * _nVertexCount;
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = pVerticesInfo;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pVertexBuffer);
This makes sure your vertex buffer is GPU-visible only.
Next create a second dynamic buffer which has the size of a single vertex (you do not need any bind flags in that case, as it will be used only for copies)
_pCopyVertexBuffer
Usage = D3D11_USAGE_DYNAMIC; //Staging works as well
CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BindFlags = 0;
ByteWidth = sizeof(ST_Vertex);
D3D11_SUBRESOURCE_DATA d3dBufferData;
d3dBufferData.pSysMem = NULL;
hr = pd3dDevice->CreateBuffer(&descBuffer, &d3dBufferData, &_pCopyVertexBuffer);
when you move a vertex, copy the changed vertex in the copy buffer :
ST_Vertex changedVertex;
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pCopyVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &d3dMappedResource);
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
pBuffer->xfPosition.x = changedVertex.xfPosition.x;
pBuffer->xfPosition.y = changedVertex.xfPosition.y;
pBuffer->xfPosition.z = changedVertex.xfPosition.z;
pImmediateContext->Unmap(_pCopyVertexBuffer, 0);
Since you use D3D11_MAP_WRITE_DISCARD, make sure to write all attributes there (not only position).
Now once you are done, you can use ID3D11DeviceContext::CopySubresourceRegion to copy only the modified vertex into its current location:
I assume that vertexID is the index of the modified vertex :
pd3DeviceContext->CopySubresourceRegion(_pVertexBuffer,
0, //must be 0
vertexID * sizeof(ST_Vertex), //location of the vertex in you gpu vertex buffer
0, //must be 0
0, //must be 0
_pCopyVertexBuffer,
0, //must be 0
NULL //in this case we copy the full content of _pCopyVertexBuffer, so we can set to null
);
Now if you want to update a list of vertices, things get more complicated and you have several options :
- First, you apply this single-vertex technique in a loop; this will work quite well if your changeset is small.
- If your changeset is very big (close to the full vertex count), you can probably just rewrite the whole buffer instead.
- An intermediate technique is to use a compute shader to perform the updates (that's the one I normally use, as it's the most flexible version).
Posting all c++ binding code would be way too long, but here is the concept :
your vertex buffer must have BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_UNORDERED_ACCESS; //this allows writing with a compute shader
you need to create an ID3D11UnorderedAccessView for this buffer (so shader can write to it)
you need the following misc flag: D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS //this allows writing to it as a RWByteAddressBuffer
you then create two dynamic structured buffers (I prefer those over byte-address buffers, but a buffer cannot be both a vertex buffer and a structured buffer in DX11, so the one you write to needs to be raw instead)
the first structured buffer has a stride of ST_Vertex (this is your changeset)
the second structured buffer has a stride of 4 (uint; these are the indices)
both structured buffers get an arbitrary element count (normally I use 1024 or 2048), so that will be the maximum number of vertices you can update in a single pass
for both structured buffers you need an ID3D11ShaderResourceView (shader-visible, read-only)
Then update process is the following :
write the modified vertices and their locations into the structured buffers (using map discard; it's fine if you copy less than the full size)
attach both structured buffers for read
attach ID3D11UnorderedAccessView for write
set your compute shader
call dispatch
detach ID3D11UnorderedAccessView for write (this is VERY important)
This is a sample compute shader (I assume your vertex is position only, for simplicity)
cbuffer cbUpdateCount : register(b0)
{
uint updateCount;
};
RWByteAddressBuffer RWVertexPositionBuffer : register(u0);
StructuredBuffer<float3> ModifiedVertexBuffer : register(t0);
StructuredBuffer<uint> ModifiedVertexIndicesBuffer : register(t1); //note: a different slot from the vertex buffer above
//this is the stride of your vertex buffer, since here we use float3 it is 12 bytes
#define WRITE_STRIDE 12
[numthreads(64, 1, 1)]
void CS( uint3 tid : SV_DispatchThreadID )
{
//make sure you do not go past the element count, as we run 64 threads at a time
if (tid.x >= updateCount) { return; }
uint readIndex = tid.x;
uint writeIndex = ModifiedVertexIndicesBuffer[readIndex];
float3 vertex = ModifiedVertexBuffer[readIndex];
//byte address buffers do not understand float, asuint is a binary cast.
RWVertexPositionBuffer.Store3(writeIndex * WRITE_STRIDE, asuint(vertex));
}
For the purposes of this question I'm going to assume you already have a mechanism for selecting a vertex from a list of vertices based upon ray casting or some other picking method and a mechanism for creating a displacement vector detailing how the vertex was moved in model space.
The method you have for updating the buffer is sufficient for anything less than a few hundred vertices, but on large scale models it becomes extremely slow. This is because you're updating everything, rather than the individual vertices you modified.
To fix this, you should only update the vertices you have changed, and to do that you need to create a change set.
In concept, a change set is nothing more than a set of changes made to the data - a list of the vertices that need to be updated. Since we already know which vertices were modified (otherwise we couldn't have manipulated them), we can map the GPU buffer, go to those vertices specifically, and copy just those vertices into the GPU buffer.
In your vertex modification method, record the index of the vertex that was modified by the user:
//Modify the vertex coordinates based on mouse displacement
pVerticesInfo[SelectedVertexIndex].xfPosition.x += DisplacementVector.x;
pVerticesInfo[SelectedVertexIndex].xfPosition.y += DisplacementVector.y;
pVerticesInfo[SelectedVertexIndex].xfPosition.z += DisplacementVector.z;
//Add the changed vertex to the list of changes.
changedVertices.add(SelectedVertexIndex);
//And update the GPU buffer
UpdateD3DBuffer();
In UpdateD3DBuffer(), do the following:
D3D11_MAPPED_SUBRESOURCE d3dMappedResource;
pImmediateContext->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_NO_OVERWRITE, 0, &d3dMappedResource); // plain D3D11_MAP_WRITE is not valid on a D3D11_USAGE_DYNAMIC buffer
ST_Vertex* pBuffer = (ST_Vertex*)d3dMappedResource.pData;
for (int i = 0; i < changedVertices.size(); ++i)
{
pBuffer[changedVertices[i]].xfPosition.x = pVerticesInfo[changedVertices[i]].xfPosition.x;
pBuffer[changedVertices[i]].xfPosition.y = pVerticesInfo[changedVertices[i]].xfPosition.y;
pBuffer[changedVertices[i]].xfPosition.z = pVerticesInfo[changedVertices[i]].xfPosition.z;
}
pImmediateContext->Unmap(_pVertexBuffer, 0);
changedVertices.clear();
This has the effect of only updating the vertices that have changed, rather than all vertices in the model.
This also allows for some more complex manipulations. You can select multiple vertices and move them all as a group, select a whole face and move all the connected vertices, or move entire regions of the model relatively easily, assuming your picking method is capable of handling this.
In addition, if you record the change sets with enough information (the affected vertices and the displacement index), you can fairly easily implement an undo function by simply reversing the displacement vector and reapplying the selected change set.
How do I use the values from the OpenCV matchShapes output? We implemented the OpenCV matchShapes function to compare two images, particularly their shapes, but now that we have obtained the results we are confused about how to use these values.
The code is
- (bool) someMethod:(UIImage *)image :(UIImage *)temp {
RNG rng(12345);
cv::Mat src_base, hsv_base;
cv::Mat src_test1, hsv_test1;
src_base = [self cvMatWithImage:image];
src_test1 = [self cvMatWithImage:temp];
int thresh=150;
double ans=0, result=0;
Mat imageresult1, imageresult2;
cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);
std::vector<std::vector<cv::Point>>contours1, contours2;
std::vector<Vec4i>hierarchy1, hierarchy2;
Canny(hsv_base, imageresult1, thresh, thresh*2);
Canny(hsv_test1, imageresult2, thresh, thresh*2);
findContours(imageresult1,contours1,hierarchy1,CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
for(int i=0;i<contours1.size();i++)
{
//cout<<contours1[i]<<endl;
Scalar color=Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
drawContours(imageresult1,contours1,i,color,1,8,hierarchy1,0,cv::Point());
}
findContours(imageresult2,contours2,hierarchy2,CV_RETR_TREE,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
for(int i=0;i<contours2.size();i++)
{
Scalar color=Scalar(rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255));
drawContours(imageresult2,contours2,i,color,1,8,hierarchy2,0,cv::Point());
}
for(int i=0;i<contours1.size();i++)
{
ans = matchShapes(contours1[i],contours2[i],CV_CONTOURS_MATCH_I1,0);
cout<<" "<<ans<<endl;
}
std::cout<<"The answer is "<<ans<<endl;
if (ans<=20) {
return true;
}
return false;
}
The output values are
0.225069
0.234417
0
7.63599
0
7.06392
0.335966
0.211358
0.327552
0.842969
0.761659
0.614039
The image is
See my comment on imoutidi's answer. Here is a visual explanation:
The first column shows the two original images, the second the Canny edges. The third column is an arbitrary selection of detected shapes with the same index in both images. As you see, it is not even guaranteed that they correspond to the same image parts as a human would see them. What you end up comparing are different triangles in this case, which says little about the overall shape similarity. The two shape arrays are not even of the same size, since there are more structures in the bottom drawing, for example (like small shapes between a thick line). The fourth column shows the last shape in each array. This is the best bet you can make to compare the images. In this example, I get a value of 0.0920794532771 for their similarity.
If I understand your question correctly, you want to know what the return value of matchShapes() stands for.
In your case given the two contours (shapes) the function returns a similarity metric (value). A small value indicates that the two shapes are similar and a big value that they are not.
A good explanation is here: http://docs.opencv.org/3.1.0/d5/d45/tutorial_py_contours_more_functions.html (check the third paragraph).
Also check out the documentation: http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
I need the ability to verify that a user has drawn a shape correctly, starting with simple shapes like circle, triangle and more advanced shapes like the letter A.
I need to be able to calculate correctness in real time, for example if the user is supposed to draw a circle but is drawing a rectangle, my hope is to be able to detect that while the drawing takes place.
There are a few different approaches to shape recognition, unfortunately I don't have the experience or time to try them all and see what works.
Which approach would you recommend for this specific task?
Your help is appreciated.
We may define "recognition" as the ability to detect features/characteristics in elements and compare them with features of known elements seen in our experience. Objects with similar features probably are similar objects. The higher the amount and complexity of the features, the greater is our power to discriminate similar objects.
In the case of shapes, we can use their geometric properties such as number of angles, the angles values, number of sides, sides sizes and so forth. Therefore, in order to accomplish your task you should employ image processing algorithms to extract such features from the drawings.
Below I present a very simple approach that shows this concept in practice. We are going to recognize different shapes using the number of corners. As I said: "The higher the amount and complexity of the features, the greater is our power to discriminate similar objects." Since we are using just one feature, the number of corners, we can differentiate only a few different kinds of shapes. Shapes with the same number of corners will not be discriminated. Therefore, in order to improve the approach you might add new features.
UPDATE:
In order to accomplish this task in real time you might extract the features in real time. If the object to be drawn is a triangle and the user is drawing the fourth side of some other figure, you know that he or she is not drawing a triangle. As for the level of correctness, you might calculate the distance between the feature vector of the desired object and that of the drawn one.
Input:
The Algorithm
Scale down the input image, since the desired features can be detected at a lower resolution.
Segment each object to be processed independently.
For each object, extract its features, in this case, just the number of corners.
Using the features, classify the object shape.
The Software:
The software presented below was developed in Java using the Marvin Image Processing Framework. However, you might use any programming language and tools.
import static marvin.MarvinPluginCollection.floodfillSegmentation;
import static marvin.MarvinPluginCollection.moravec;
import static marvin.MarvinPluginCollection.scale;
// Supporting imports (the Marvin package paths may vary by framework version)
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;
import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;
import marvin.util.MarvinAttributes;
public class ShapesExample {
public ShapesExample(){
// Scale down the image since the desired features can be extracted
// in a lower resolution.
MarvinImage image = MarvinImageIO.loadImage("./res/shapes.png");
scale(image.clone(), image, 269);
// segment each object
MarvinSegment[] objs = floodfillSegmentation(image);
MarvinSegment seg;
// For each object...
// Skip position 0 which is just the background
for(int i=1; i<objs.length; i++){
seg = objs[i];
MarvinImage imgSeg = image.subimage(seg.x1-5, seg.y1-5, seg.width+10, seg.height+10);
MarvinAttributes output = new MarvinAttributes();
output = moravec(imgSeg, null, 18, 1000000);
System.out.println("figure "+(i-1)+":" + getShapeName(getNumberOfCorners(output)));
}
}
public String getShapeName(int corners){
switch(corners){
case 3: return "Triangle";
case 4: return "Rectangle";
case 5: return "Pentagon";
}
return null;
}
private static int getNumberOfCorners(MarvinAttributes attr){
int[][] cornernessMap = (int[][]) attr.get("cornernessMap");
int corners=0;
List<Point> points = new ArrayList<Point>();
for(int x=0; x<cornernessMap.length; x++){
for(int y=0; y<cornernessMap[0].length; y++){
// Is it a corner?
if(cornernessMap[x][y] > 0){
// This part of the algorithm avoid inexistent corners
// detected almost in the same position due to noise.
Point newPoint = new Point(x,y);
if(points.size() == 0){
points.add(newPoint); corners++;
}else {
boolean valid=true;
for(Point p:points){
if(newPoint.distance(p) < 10){
valid=false;
}
}
if(valid){
points.add(newPoint); corners++;
}
}
}
}
}
return corners;
}
public static void main(String[] args) {
new ShapesExample();
}
}
The software output:
figure 0:Rectangle
figure 1:Triangle
figure 2:Pentagon
Another way is to approach this problem with math, using the average of the shortest distances from each point of your shape to the points of the shape you're comparing it against.
First you must resize the shape to match the ones in your library of shapes, and then:
function shortestDistanceSum( subject, test_subject ) {
var sum = 0;
operate( subject, function( shape ){
var smallest_distance = 9999;
operate( test_subject, function( test_shape ){
var distance = dist( shape.x, shape.y, test_shape.x, test_shape.y );
smallest_distance = Math.min( smallest_distance, distance );
});
sum += smallest_distance;
});
var average = sum/subject.length;
return average;
}
function operate( array, callback ) {
$.each(array, function(){
callback( this );
});
}
function dist( x, y, x1, y1 ) {
return Math.sqrt( Math.pow( x1 - x, 2) + Math.pow( y1 - y, 2) );
}
var square_shape = []; // collection of vertices in a square shape
var triangle_shape = []; // collection of vertices in a triangle
var unknown_shape = []; // collection of vertices in the shape you're comparing against
square_sum = shortestDistanceSum( square_shape, unknown_shape );
triangle_sum = shortestDistanceSum( triangle_shape, unknown_shape );
Where the lowest sum is the closest shape.
You have two inputs - the initial image and the user input - and you are looking for a boolean outcome.
Ideally you would convert all your input data to a comparable format. Alternatively, you could parameterize both types of input and use a supervised machine learning algorithm (Nearest Neighbor comes to mind for closed shapes).
The trick is in finding the right parameters. If your input is a flat image file, this could be a binary conversion. If user input is a swiping motion or pen stroke, I'm sure there are ways to capture and map this as binary but the algorithm would probably be more robust if it used data closest to the original input.
My issue started when I was doing the texture-to-vertices example (https://gamedev.stackexchange.com/questions/30050/building-a-shape-out-of-an-texture-with-farseer). Then I wondered whether it is possible to pass these Farseer vertices to vertex data that can be used in DrawUserIndexedPrimitives, in order to have the vertices ready for modification on alpha textures.
Example:
You draw your texture (with transparency in some places) over the triangle strip vertex data so you can manipulate the points in order to distort the image like this:
http://www.tutsps.com/images/Water_Design_avec_Photoshop/Water_Design_avec_Photoshop_20.jpg
As you can see, the letter A was just a normal image in a PNG file, but after the conversion I'm hoping it can be used to distort the image.
Please, any solution, some code, or a link to a tutorial that can help me figure this out...
Thanks all!!
P.S. I think the main issue is how to build the indexData and the texture coordinates from just the vertices that PolygonTools.CreatePolygon makes.
TexturedFixture polygon = fixture.UserData as TexturedFixture;
effect.Texture = polygon.Texture;
effect.CurrentTechnique.Passes[0].Apply();
VertexPositionColorTexture[] points;
int vertexCount;
int[] indices;
int triangleCount;
polygon.Polygon.GetTriangleList(fixture.Body.Position, fixture.Body.Rotation, out points, out vertexCount, out indices, out triangleCount);
GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicClamp;
GraphicsDevice.RasterizerState = new RasterizerState() { FillMode = FillMode.Solid, CullMode = CullMode.None, };
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, points, 0, vertexCount, indices, 0, triangleCount);
This will do the trick
Can anyone please help.
I have a cube which I have made in 3DS Max, but I don't know its dimensions. Is there a way to get the vertices of each of the triangles of the cube's faces? I am trying to get the normal of one of the faces of the cube to determine which way it is pointing. If I can determine the vertices, I can compute the normal: given 3 vertices V1, V2 and V3 ordered counterclockwise, the direction of the normal is (V2 - V1) x (V3 - V1), where x is the cross product of the two vectors.
I have looked in my models .fbx file and I can see a number of values there:
Vertices: *24 {
a: -15,-12.5,0,15,-12.5,0,-15,12.5,0,15,12.5,0,-15,-12.5,0.5,15,-12.5,0.5,-15,12.5,0.5,15,12.5,0.5}
PolygonVertexIndex: *36 {
a: 0,2,-4,3,1,-1,4,5,-8,7,6,-5,0,1,-6,5,4,-1,1,3,-8,7,5,-2,3,2,-7,6,7,-4,2,0,-5,4,6,-3}
Are these my model's vertices?
Also, I would assume that Vertices: *24 would be my list of vertices, but why are there only 24 values? Shouldn't a cube have 36 vertices? And finally, the values under PolygonVertexIndex: *36 just seem off to me when I imagine the cube in my head with those dimensions.
Or alternatively, is there an automatic way to get the vertices of a cube without having to manually enter all the values for each vertex? I might have a couple of models to
Any help would be greatly appreciated
I can't figure out why you need that... because when you load a model the normals are calculated; internally each vertex will have its normal.
Anyway, it is easy to calculate...
The first three indices define the first triangle of a face, the next three the other triangle of that face. (In the FBX data above, a negative value marks the last vertex of a polygon; the actual index is -(value) - 1.)
You need only one triangle to calculate the normal...
So with the three indices, access the vertex array and get three points: A, B and C.
Now your normal is the result of the cross product of two vectors formed from those vertices.
Vector3 Normal = Vector3.Cross(B-A, C-B);
Whether the normal points backward or forward depends on the A, B, C order, which can be counterclockwise or clockwise, but every triangle of the model will be ordered the same way. So you will have to try it and fix it.
You can write an XNA program which reads your normals without much hassle.
If you still want to calculate them, however, use this C# code, taken from FFWD, as a guide. Check the URL for a more detailed discussion of the pros and cons. Personally, I'm not too happy with the result, but for the time being it works. Of course, since this code is FFWD-related (an implementation of Unity's API for XNA), it does not match XNA exactly, but the mathematics remain the same.
/// <summary>
/// Recalculates the normals.
/// Implementation adapted from http://devmaster.net/forums/topic/1065-calculating-normals-of-a-mesh/
/// </summary>
public void RecalculateNormals()
{
Vector3[] newNormals = new Vector3[_vertices.Length];
// _triangles is a list of vertex indices,
// with each triplet referencing the three vertices of the corresponding triangle
for (int i = 0; i < _triangles.Length; i = i + 3)
{
Vector3[] v = new Vector3[]
{
_vertices[_triangles[i]],
_vertices[_triangles[i + 1]],
_vertices[_triangles[i + 2]]
};
Vector3 normal = Vector3.Cross(v[1] - v[0], v[2] - v[0]);
for (int j = 0; j < 3; ++j)
{
Vector3 a = v[(j+1) % 3] - v[j];
Vector3 b = v[(j+2) % 3] - v[j];
float weight = (float)Math.Acos(Vector3.Dot(a, b) / (a.magnitude * b.magnitude));
newNormals[_triangles[i + j]] += weight * normal;
}
}
// Vector3 is a value type, so normalize the stored elements rather than foreach copies
for (int i = 0; i < newNormals.Length; i++)
{
newNormals[i].Normalize();
}
normals = newNormals;
}