DX11 vertex picking - DirectX

My task is to load an .obj file, show it to the user, and let them choose specific vertices via mouse click, then report information about the chosen vertex.
I've got loading and showing the 3D object working, but I don't know how to show the vertices instead of a solid object and, more importantly, how to distinguish the clicked vertex from the others.
I'm using code from: http://www.braynzarsoft.net/index.php?p=D3D11OBJMODEL
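To render the vertices themselves, the vertex buffer can be drawn with D3D11_PRIMITIVE_TOPOLOGY_POINTLIST instead of a triangle list. For picking, a common approach is to run every vertex through the same world-view-projection transform used for rendering and select the one nearest the mouse click in screen space. The math is API-agnostic; here is a minimal sketch in plain JavaScript (all names are illustrative, not from the linked loader):

```javascript
// mvp is a 4x4 world-view-projection matrix, row-major, flattened to 16 floats.
// Returns the pixel position of a 3D vertex v = [x, y, z].
function projectToScreen(v, mvp, width, height) {
  const x = mvp[0] * v[0] + mvp[1] * v[1] + mvp[2] * v[2] + mvp[3];
  const y = mvp[4] * v[0] + mvp[5] * v[1] + mvp[6] * v[2] + mvp[7];
  const w = mvp[12] * v[0] + mvp[13] * v[1] + mvp[14] * v[2] + mvp[15];
  // Perspective divide, then NDC [-1, 1] -> pixel coordinates (screen y grows down).
  return [(x / w * 0.5 + 0.5) * width,
          (1 - (y / w * 0.5 + 0.5)) * height];
}

// Returns the index of the vertex nearest the click,
// or -1 if none is within pickRadius pixels.
function pickVertex(vertices, mvp, mouseX, mouseY, width, height, pickRadius) {
  let best = -1, bestDist = pickRadius;
  vertices.forEach((v, i) => {
    const [sx, sy] = projectToScreen(v, mvp, width, height);
    const d = Math.hypot(sx - mouseX, sy - mouseY);
    if (d < bestDist) { bestDist = d; best = i; }
  });
  return best;
}
```

Once you have the index, you can highlight that vertex (e.g. re-draw it in a different color) and print its data.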

Related

8th Wall tap to place example not showing model replacement

I've replaced the tree.glb model in the ThreeJS placeground example (https://github.com/8thwall/web/tree/master/examples/threejs/placeground), but it's not showing. It works fine when using tree.glb.
To debug, I've also tried replacing it with the jellyfish-model.glb available in the examples, but it also doesn't show when tapping on the floor plane.
Is there something wrong with my code, or with the .glb models I'm replacing tree.glb with?
const modelFile = 'tree.glb' // 3D model to spawn at tap
to
const modelFile = 'jellyfish-model.glb' // 3D model to spawn at tap
File structure on github: 8thwall-3js-test-github
Ideally, I'd like to replicate what I've done using Unity+Vuforia in this example (which basically places a .png onto a floor plane): https://www.youtube.com/watch?v=poWvXVB4044
I'd start by looking at the scale of the 3d model. The tree model in the link you provided is quite large, so it's being scaled down in size. See https://github.com/8thwall/web/blob/master/examples/threejs/placeground/index.js#L7-L8
Prove to yourself that the model is being loaded by adding a console.log('model added!') type statement into animateIn() (as that is the model loaded handler)
My guess is that your jellyfish-model.glb is there, just very small. Try adjusting startScale and endScale to larger values and see if that helps.
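If hand-tuning startScale/endScale per model gets tedious, one option is to derive the scale from the model's bounding box so every model appears at roughly the same size. A minimal sketch, assuming a plain min/max bounding box (in three.js, Box3's setFromObject gives you these numbers):

```javascript
// Given a model's axis-aligned bounding box (min/max corners as [x, y, z])
// and a target world-space size, return a uniform scale factor that makes
// the model's largest dimension equal targetSize.
function normalizedScale(bboxMin, bboxMax, targetSize) {
  const extent = Math.max(
    bboxMax[0] - bboxMin[0],
    bboxMax[1] - bboxMin[1],
    bboxMax[2] - bboxMin[2]);
  return extent > 0 ? targetSize / extent : 1;  // degenerate box: leave as-is
}
```

The computed factor can then be used in place of a hard-coded endScale, so a tiny jellyfish and a huge tree both end up at the same on-screen size.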

Moving point with mouse

I drew a lot of points in my program with WebGL. Now I want to pick any point and move it to a new position. The problem is I don't know how to select a point. Am I supposed to add an actionlistener to each point?
WebGL is a rasterization library. It has no concept of movable or clickable points. It just draws pixels where you ask it to.
If you want to move things it's up to you to make your own data, use that data to decide if the mouse was clicked on something, update the data to reflect how the mouse changed it, and finally use WebGL to re-render something based on the data.
Notice none of those steps except the last one involve WebGL. WebGL has no concept of an actionlistener since WebGL has no actions you could listen to. It just draws pixels based on what you ask it to do. That's it. Everything else is up to you and outside the scope of WebGL.
Maybe you're using some library like three.js or X3D or Unity3d but in that case your question would be about that specific library as all input/mouse/object position related issues would be specific to that library (because again, WebGL just draws pixels)
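The steps above can be sketched without any WebGL at all: keep your own point data, hit-test it against the mouse, mutate it, and only then redraw (function and variable names here are illustrative):

```javascript
// Your own data: WebGL knows nothing about these points.
const points = [{ x: 50, y: 50 }, { x: 120, y: 80 }];

// Returns the index of the first point within radius pixels of (mx, my),
// or -1 if the click hit nothing.
function hitTest(points, mx, my, radius) {
  return points.findIndex(p => Math.hypot(p.x - mx, p.y - my) <= radius);
}

// On mousedown: remember which point was grabbed.
// On mousemove: move it, then re-render from the updated data.
let grabbed = -1;
function onMouseDown(mx, my) { grabbed = hitTest(points, mx, my, 8); }
function onMouseMove(mx, my) {
  if (grabbed >= 0) {
    points[grabbed].x = mx;
    points[grabbed].y = my;
    // drawScene(points);  // only here does WebGL get involved:
    //                     // re-upload the vertex buffer and redraw
  }
}
```

Note that only the commented-out redraw step touches WebGL; everything else is ordinary application state.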

Advice for library with GeoSpatial Mapping that allows users to place moving objects on a 2D map

I'm looking for a library/framework/toolkit that will allow me to render a 2D map from real GeoSpatial data and draw objects on the 2D map.
Requirements:
Map Tiling (when I zoom into the map, I want a more detailed image)
Pan (ability to use the mouse to move around the map)
Read various Geospatial images (satellite, street, etc)
Ability to draw objects onto the map (based on lat/longs) and have them move. For example, I want to be able to put an image of a bird on the map and have it move and rotate correctly.
Primitive shapes. It would be nice if it had built in ability to draw lines, circles, etc.
Complex drawing. For example, I want to draw a compass that shows the current heading of the bird.
Mouse input. I want to be able to right-click on the map and have a context menu appear, and to click and hold a shape I've drawn on the map and drag it easily.
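Whichever library handles the tiling requirement, it will almost certainly use the standard Web Mercator "slippy map" scheme: at zoom level z the world is a 2^z by 2^z grid of tiles, and the lat/long-to-tile math is the same everywhere. A sketch in JavaScript:

```javascript
// Convert a WGS84 latitude/longitude (degrees) to slippy-map tile
// coordinates at a given zoom level (the scheme used by OSM-style tiles).
function latLonToTile(lat, lon, zoom) {
  const n = Math.pow(2, zoom);                       // tiles per axis
  const x = Math.floor((lon + 180) / 360 * n);       // longitude is linear
  const latRad = lat * Math.PI / 180;                // latitude uses Mercator
  const y = Math.floor(
    (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
  return { x, y };
}
```

Knowing this formula makes it easier to evaluate candidate libraries, since any of them can be fed standard XYZ tile sources.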
What I have looked at:
OpenSceneGraph with osgEarth. It's great, and fulfills my reqs, but is really slow and I had to do a lot of weird things to get things to work (especially with dragging objects on the map).
Cesium: looks promising, but somewhat slow, and I need it to work as a desktop application. I've seen online that some have managed to use Cesium inside Qt's Webkit, but I'm not sure I would want to take that risk.
EDIT:
I really want to stay away from a web-based framework if possible.
http://imgur.com/52DaJtQ
Here is a primitive picture of what I want to achieve. The aircraft icon should move, with the degree circle following it. I want to be able to drag the green waypoints and have the lines redraw as I move a waypoint. The red sensor footprint should adjust to what the aircraft can see.
I've also looked at Google Maps, Open Street Map, and Bing Maps, but those are web-based.
I use OpenSceneGraph/osgEarth extensively and have not been dissatisfied with its performance.
What kind of weird things did you need to do?
If you want, you can contact me privately to troubleshoot your situation. My website is AlphaPixel.com and there's a contact form there.

Storing game data for 2D game like Star Ocean Second Story and Legend of Mana

I'm going for a 2D game like Legend of Mana and Star Ocean Second Story rather than a tile-based 2D game.
OVERVIEW
Currently, the way I'm going about building my game is like so:
I have geometry files and texture files. Geometry files store width, height, XY position, Texture ID number to use and texture name to use.
The way I'm trying to do it is:
I will have a scene manager object that loads "scene" objects. Each scene object stores a list of all geometry objects in that scene as well as texture and sound objects (to be implemented).
Each scene is rendered using a vertex array: the scene manager object calls a method like "get scene vertices by scene name", which returns a pointer to an array of GLfloats (GLfloat *), and this pointer gets used in OpenGL's glVertexPointer() function.
When I want to update each character's position (like the hero for example), my aim is to use the "hero" game objects in the current scene and call a function like:
Hero.Move(newXPosition, newYPosition);
which will actually alter the hero geometry object's vertex data, translating it by the given offsets. With 4 vertices stored as interleaved X,Y pairs, that is:
for (int i = 0; i < 8; i += 2)
{
    vertexList[i]     += newXPosition;  // X component of this vertex
    vertexList[i + 1] += newYPosition;  // Y component of this vertex
}
This way, when I go to render the scene in the render frame, it will render the entire scene again with the updated vertex coordinates.
Each object in the game will just be a quadrilateral with a texture mapped to it.
THE PROBLEM
I'm using Objective C for my programming and OpenGL. This is for iOS platform.
I have been successful thus far using this method to render 1 object.
The problem I have is that I'm using an NSMutableDictionary, a key-value data structure, to store the geometry instance objects in the scene object class. Dictionaries in Objective-C don't return their entries in the same order every time the code runs; enumeration order is arbitrary.
Because of this, I am having trouble combining the vertex array data from each geometry object in the scene object into one single vertex pointer to GLfloats.
Each geometry object stores its own array of 8 vertex values (4 pairs of X,Y coordinate values). I would like each geometry object to manage its own vertices (so I can use Move, Rotate, etc. as mentioned earlier), and at the same time I would like my scene object to be able to output a single pointer to the vertex data of all geometry objects in the current scene for use in OpenGL's glVertexPointer() function.
I am trying to avoid calling OpenGL's glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) multiple times (draw hero, draw map, draw AI agents, draw BG objects), which would not be very efficient. Minimizing the number of GL draw calls as much as possible (to one, preferably), especially on limited hardware like the iPhone, is what was suggested in what I've read about OpenGL game development.
SUGGESTIONS?
What is the best practice way of going about doing what I'm trying to do?
Should I use an SQL database to store my game data, like geometry vertex data, and load just one scene at a time into iPhone memory from the database file on the iPhone's disk?
You can use a list of lists to keep track of draw layer and order, and use the dictionary solely for fast lookup.
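In plain-JS terms (a stand-in for the NSMutableArray/NSMutableDictionary pair; the names below are illustrative), the suggestion looks like this: an ordered array fixes draw order, the dictionary only serves name lookup, and the combined buffer is built by walking the array:

```javascript
const drawOrder = [];  // geometry objects, in stable draw order
const byName = {};     // name -> geometry object, for fast lookup

function addGeometry(name, geometry) {
  drawOrder.push(geometry);   // order is fixed by insertion, not by the dict
  byName[name] = geometry;
}

// Concatenate each object's vertices in draw order, yielding one flat
// array suitable for a single glVertexPointer/glDrawArrays call.
function sceneVertices() {
  const out = [];
  for (const g of drawOrder) out.push(...g.vertices);
  return out;
}
```

Each geometry object still owns (and mutates) its own vertices for Move/Rotate; only the concatenation step produces the single buffer, and it produces it in the same order every frame.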
What I don't understand is why you don't use Cocos2D, that happens to be built on the scene manager paradigm. Think of all the development time you will save...
I have worked in a company that did wonderful games. It eventually died because they kept putting out buggy games, due to the lack of development time for debugging. They did however find time to create a graphics rendering engine.
I thought, and still think, they had the wrong priorities. It seems to me you are making the same mistake: are you trying to make an iOS game, or are you trying to learn how to build a 2D game engine for iOS?
Note: Cocos2D is Open Source, you can therefore read it, now that you have thought about the process to create such an engine.

Three.js custom textured meshes

Is it possible to add a textured material to an object with a custom mesh in Three.js?
Whenever I try exporting an object from Blender to Three.js with a texture on it, the object disappears. Looking through the three.js examples, it seems they've carefully avoided putting textures on anything other than the built-in geometries, and forcing a texture onto such a mesh causes it to disappear as well.
For example, if I edit scene_test.js (a scene file loaded by webgl_scene_test.html) and apply the "textured_bg" material to the "walt" head, it disappears.
It looks like the missing piece of the puzzle was that you have to apply the set of UV coordinates to the mesh of the object in question.
First, select your texture, and under "Mapping" make sure the "Coordinates" dropdown is set to "UV".
Then, click on the "object data" button, and in the UV Texture list, click the plus icon. This seems to automatically add the UV data to the mesh.
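If a mesh arrives without UVs, they can also be generated in code rather than in Blender: a simple planar projection maps each vertex's (x, y) into [0, 1] texture space based on the mesh's bounds. A sketch in plain JavaScript (three.js stores the equivalent numbers in the geometry's uv data; this helper and its input layout are an assumption, not three.js API):

```javascript
// positions: flat array [x0, y0, z0, x1, y1, z1, ...]
// Returns flat UVs [u0, v0, u1, v1, ...] via planar projection onto XY.
function planarUVs(positions) {
  let minX = Infinity, maxX = -Infinity, minY = Infinity, maxY = -Infinity;
  for (let i = 0; i < positions.length; i += 3) {
    minX = Math.min(minX, positions[i]);     maxX = Math.max(maxX, positions[i]);
    minY = Math.min(minY, positions[i + 1]); maxY = Math.max(maxY, positions[i + 1]);
  }
  const uvs = [];
  for (let i = 0; i < positions.length; i += 3) {
    uvs.push((positions[i] - minX) / (maxX - minX),       // u in [0, 1]
             (positions[i + 1] - minY) / (maxY - minY));  // v in [0, 1]
  }
  return uvs;
}
```

A planar projection only looks right for roughly flat meshes; for something like the "walt" head, the UVs unwrapped in Blender will give a much better result.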
