I'm thinking about creating an iOS app that transforms a 3D sphere into a 2D image using the azimuthal equidistant projection. Here is a nice example of this type of projection:
Azimuthal Map, Anywhere
I'm new to 3D programming, so I'd like to ask for advice on which framework or tool is a good fit for this. These are the options I know of:
Unity (+ OpenGL?)
SceneKit (+ CoreGraphics?)
Processing + Processing.js (inside WebView)
Please tell me if there are other solutions. I'd also be glad if you could point me to any sample code or an open-source library for this projection.
Thanks in advance!
This can easily be done using shaders and does not require external libraries.
See http://rogerallen.github.io/webgl/2014/01/27/azimuthal-equidistant-projection/
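For reference, the forward projection itself is only a few lines of math, which is essentially what the shader in that post evaluates per fragment. A minimal sketch in C++ (unit sphere, angles in radians; the function and struct names are mine):

#include <cmath>

// Forward azimuthal equidistant projection on the unit sphere.
// (lat0, lon0) is the projection centre; all angles in radians.
// The antipode of the centre maps to a circle of radius pi.
struct XY { double x, y; };

XY azimuthalEquidistant(double lat, double lon, double lat0, double lon0) {
    double dlon = lon - lon0;
    double cosc = std::sin(lat0) * std::sin(lat)
                + std::cos(lat0) * std::cos(lat) * std::cos(dlon);
    cosc = std::fmax(-1.0, std::fmin(1.0, cosc));   // clamp rounding error
    double c = std::acos(cosc);                     // angular distance to centre
    double k = (c < 1e-9) ? 1.0 : c / std::sin(c);  // radial scale factor
    return { k * std::cos(lat) * std::sin(dlon),
             k * (std::cos(lat0) * std::sin(lat)
                - std::sin(lat0) * std::cos(lat) * std::cos(dlon)) };
}

On iOS you could evaluate this per pixel in an OpenGL ES or SceneKit fragment shader, much as the linked WebGL demo does.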
I would highly recommend using C++ 3D libraries such as GXmap and VES/VTK.
GXmap is a small virtual globe and map program. Apart from showing an ordinary globe view of the earth, it can also generate Azimuthal equidistant projection maps suitable for amateur radio use.
VES is the VTK OpenGL ES Rendering Toolkit, a C++ library for mobile devices using OpenGL ES 2.0.
Related
I wonder which normal-map format is the correct one to use within SceneKit content for iOS, as referenced here: DirectX vs. OpenGL normal maps.
OpenGL or DirectX? Or does it not matter?
I had to figure it out by testing the OpenGL and DirectX normal map types side by side on planes. This gave me the following results:
This means that if you have the choice between an OpenGL and a DirectX normal map, you are better off choosing OpenGL.
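If an asset only ships with a DirectX-style normal map, you don't have to discard it: the two conventions differ only in the direction of the green (Y) channel, so inverting that channel converts one style into the other. A minimal sketch in C++ (the raw 8-bit RGBA buffer layout is my assumption):

#include <cstddef>
#include <cstdint>

// Convert a DirectX-style normal map to OpenGL-style in place by
// inverting the green (Y) channel; red (X) and blue (Z) stay unchanged.
void directxToOpenglNormals(std::uint8_t* rgba, std::size_t width, std::size_t height) {
    for (std::size_t i = 0; i < width * height; ++i)
        rgba[i * 4 + 1] = 255 - rgba[i * 4 + 1];   // G channel of pixel i
}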
I'm working on a C++ project using a ToF camera. The camera is inside a room and has to detect walls, doors, and other large planar surfaces. I'm currently using OpenCV, but answers using other C++ libraries are also okay. What is a good algorithm to detect the surfaces, even when they are rotated and aren't facing the camera directly? I've heard of approaches like building a point cloud and using RANSAC. If you suggest doing that, please explain it in detail or provide a resource, because I don't know much about this topic (I'm a beginner in computer vision).
Thanks for your responses.
Are you familiar with PCL?
This tutorial shows how to find planar segments in a point-cloud using PCL.
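For a sense of what that tutorial boils down to, here is a condensed sketch using the PCL segmentation API (the 2 cm distance threshold is an assumption you would tune to your sensor's noise):

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant plane in a cloud with RANSAC: repeatedly sample three
// points, fit a plane, and keep the model with the most inliers.
void detectPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud) {
    pcl::ModelCoefficients coefficients;   // plane as ax + by + cz + d = 0
    pcl::PointIndices inliers;             // indices of points on that plane

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.02);        // 2 cm inlier tolerance
    seg.setInputCloud(cloud);
    seg.segment(inliers, coefficients);
}

To find several walls and doors, remove each plane's inliers from the cloud and run the segmentation again until too few points remain. This handles rotated surfaces naturally, since RANSAC makes no assumption about the plane's orientation.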
We want to make a 3D game for the Apple iPad, and we are thinking about a way to import 3D models from Blender into Xcode.
Is there a way to do that?
We want to use OpenGL ES 2.0.
Xcode isn't a game engine or a 3D SDK, so you can't 'import' Blender files into it. What you can do is take your 3D assets and use them within your app, either directly through OpenGL (rather low level) or via a 3D engine such as Unity (easier).
Either way, there are a number of questions already on Stack Overflow that you might find useful:
Opengl ES how to import a 3D model and map textures to it on runtime
Iphone + OpenGL ES + Blender Model: Rotation by Touch
Choosing 3D Engine for iOS in C++
...I highly recommend you take a look at what options are out there, decide on the best way to implement your 3D game (be it raw OpenGL or an engine), and go from there.
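If you do go the raw OpenGL route, a common workflow is to export from Blender to a simple text format such as Wavefront OBJ and parse it when the app loads. A minimal sketch, assuming a triangulated OBJ containing only plain v and f records (no texture coordinates or normals):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Mesh {
    std::vector<float>    vertices;   // x, y, z triples
    std::vector<unsigned> indices;    // three per triangle
};

// Parse a triangulated Wavefront OBJ; note that OBJ indices are 1-based.
Mesh loadObj(const std::string& path) {
    Mesh mesh;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {                 // vertex position record
            float x, y, z;
            ss >> x >> y >> z;
            mesh.vertices.insert(mesh.vertices.end(), {x, y, z});
        } else if (tag == "f") {          // triangle face record
            unsigned a, b, c;
            ss >> a >> b >> c;
            mesh.indices.insert(mesh.indices.end(), {a - 1, b - 1, c - 1});
        }
    }
    return mesh;
}

The two arrays can then be uploaded directly as OpenGL ES 2.0 vertex and index buffers.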
I am just starting to dive into the world of 3D objects and perspectives, especially in Flash.
My goal is to have a 3D coordinate system with grids, plus the option to define the x, y, z values of a vector to be plotted.
Simple example:
It would also be great to use the mouse to rotate the coordinate system, e.g. like in this video or here.
Does anybody know of a tool or library that provides such a 3D coordinate system? I would prefer Flash AS2/AS3, but this is not a must. The only requirement: the tool must run within the browser, with no use of software such as Blender or SketchUp.
Maybe somebody has already written a program like that?
Thank you.
PS: I know that there are web services like WolframAlpha that can plot in 3D, but I need an interactive tool.
Papervision3D is an open-source ActionScript 3 library that can handle all sorts of 3D rendering: http://code.google.com/p/papervision3d/. I've used it for programs that give you a full 3D environment with the typical 6 degrees of freedom.
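Whatever library you choose, the core of such a plotter is a rotate-then-project step applied to every 3D point, with mouse drags simply changing the two rotation angles. A minimal sketch of that math (C++ here for illustration, though it ports directly to AS3; all names are mine):

#include <cmath>

struct Point2D { double x, y; };

// Rotate (x, y, z) by yaw (around the Y axis) then pitch (around the X
// axis), and apply a simple perspective divide; 'dist' is the camera
// distance from the origin.
Point2D project(double x, double y, double z,
                double yaw, double pitch, double dist = 10.0) {
    double x1 =  std::cos(yaw) * x + std::sin(yaw) * z;
    double z1 = -std::sin(yaw) * x + std::cos(yaw) * z;
    double y2 =  std::cos(pitch) * y - std::sin(pitch) * z1;
    double z2 =  std::sin(pitch) * y + std::cos(pitch) * z1;
    double s = dist / (dist + z2);        // perspective scale
    return { x1 * s, y2 * s };
}

Drawing the grid is then just a matter of projecting its line endpoints with the same function.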
I am a new Kinect developer, and I am going to build a face-tracking application using the Kinect SDK v1.5 and the XNA Framework in C#.
I can successfully get the face points and rectangle points to display on the screen using the Kinect SDK and XNA's BasicEffect 3D drawing.
However, what I want is to get back exactly the color pixels of the user's face, so that I can map the user's real face onto a model.
Is there anybody that can help to answer my question?
Thank you very much!
One of the ways you can achieve this would be to use the RGB (colour) video stream and capture a still. You can then use C# to enumerate over the X/Y axes of this image to read the colour at each pixel if required.
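For the enumeration step, the colour frame arrives as a flat byte array, so reading a pixel is plain index arithmetic. A sketch of that arithmetic (shown in C++ for brevity, but it is identical in C#; it assumes the Kinect v1 colour stream's 32-bit, 4-bytes-per-pixel BGRA layout):

#include <cstddef>
#include <cstdint>

struct Bgr { std::uint8_t b, g, r; };

// Read pixel (x, y) from a colour still stored as a flat row-major
// buffer with 4 bytes per pixel in B, G, R, A order.
Bgr pixelAt(const std::uint8_t* frame, std::size_t width,
            std::size_t x, std::size_t y) {
    const std::size_t i = (y * width + x) * 4;   // byte offset of the pixel
    return { frame[i], frame[i + 1], frame[i + 2] };
}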
The more efficient way, however, would be to use this still as a texture and "wrap" it around the 3D model you are creating. There is a sample provided with the Kinect SDK that does something similar, called Face Tracking 3D - WPF. I would encourage you to use it as your base, port it to XNA, and work from there.