Creating a 3D coordinate system plus 3D vectors in ActionScript?

I am just starting to dive into the world of 3D objects and perspectives, especially in Flash.
My goal is to have a 3D coordinate system with grids, plus the option to define the x, y, z values of a vector to be plotted.
Simple example:
It would also be great to be able to use the mouse to rotate the coordinate system, e.g. like in this video or here.
Does anybody know of a tool or library that provides such a 3D coordinate system? I would prefer Flash AS2/AS3, but this is not a must. The only requirement: the tool must run within the browser, with no use of desktop software such as Blender or SketchUp.
Maybe somebody has already written a program like that?
Thank you.
PS: I know that there are web services like Wolfram Alpha that can plot in 3D, but I need an interactive tool.

PaperVision3D is an open-source ActionScript 3 library that can handle all sorts of 3D rendering: http://code.google.com/p/papervision3d/. I've used it for programs that give you a full 3D environment with the typical six degrees of freedom.
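Whatever library you choose, the core of such a plotter boils down to rotating 3D points and perspective-projecting them onto the 2D screen. Here is a minimal sketch of that math in Python/NumPy (the focal length, viewer distance, and screen-center values are arbitrary illustration choices); porting it to AS3 lineTo() drawing calls is mechanical:

    import numpy as np

    def rotation_y(theta):
        # 3x3 rotation matrix about the Y axis (theta in radians)
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])

    def project(points, focal=500.0, viewer_z=4.0, cx=275.0, cy=200.0):
        # perspective-project Nx3 points onto 2D screen coordinates
        x, y, z = points[:, 0], points[:, 1], points[:, 2] + viewer_z
        return np.stack([cx + focal * x / z, cy - focal * y / z], axis=1)

    # the three coordinate axes plus one user-defined vector to plot
    axes = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    vector = np.array([[0.5, 0.8, 0.3]])

    angle = 0.6                               # would be driven by a mouse drag
    rotated = (rotation_y(angle) @ np.vstack([axes, vector]).T).T
    print(project(rotated))                   # screen endpoints for the line segments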

Related

Augmented Reality – Lighting Real-World objects with Virtual light

Is it possible to import a virtual lamp object into the AR scene that projects a light cone which illuminates the surrounding space in the room and the real objects in it, e.g. a table, floor, and walls?
For ARKit, I found this SO post.
For ARCore, there is an example of a relighting technique, and this source code.
It has also been suggested to me that post-processing can be used to brighten the whole scene.
However, these examples are from a while ago, and perhaps there is a newer or more straightforward solution to this problem?
At the low level, RealityKit is only responsible for rendering virtual objects and overlaying them on top of the camera frame.
If you want to illuminate the real scene, you need to post-process the camera frame.
Here are some tutorials on how to do post-processing:
Tutorial 1
Tutorial 2
If all you need is an effect like this, then all you need to do is add a CGImage-based post-processing effect to the virtual objects (the lights).
More specifically, add a bloom filter to the rendered image (you can also simulate a bloom filter with a Gaussian blur).
In this way, the code revolves entirely around UIImage and CGImage, so it's pretty simple 😎
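To make the idea concrete, here is a minimal sketch of that bloom-via-Gaussian-blur trick, written with Python/OpenCV rather than CGImage (the CoreImage version is analogous); the threshold, blur sigma, blend strength, and file names are arbitrary illustration values:

    import cv2
    import numpy as np

    def bloom(frame_bgr, threshold=200, sigma=15, strength=0.8):
        # naive bloom: isolate bright pixels, blur them, add the halo back
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        bright = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
        halo = cv2.GaussianBlur(bright, (0, 0), sigma)      # kernel size derived from sigma
        return cv2.addWeighted(frame_bgr, 1.0, halo, strength, 0)

    frame = cv2.imread("rendered_frame.png")   # hypothetical rendered RGB frame
    cv2.imwrite("bloomed.png", bloom(frame))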
If you want to be more realistic, consider using the depth map provided by the LiDAR scanner to work out which areas can be illuminated, for more finely detailed brightness.
Or, if you're a true explorer, you can use Metal to create a real-time point-cloud digital twin of the real world to simulate the occlusion of light.
As of 2021, there's nothing new in relighting techniques based on 3D compositing principles. At the moment, when you're working with RealityKit or SceneKit, you have to implement the relighting functionality yourself with the help of two additional render passes (an RGB pass is always needed anyway): a Normals pass and a Point Position pass. Both AOVs must be 32-bit.
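As a sketch of what those two AOVs buy you, here is the per-pixel Lambert relighting they enable, in Python/NumPy (the light position, light color, and the tiny dummy passes are illustration values only):

    import numpy as np

    def relight(rgb, normals, positions, light_pos, light_rgb, gain=1.0):
        # additive Lambert light pass from a Normals AOV and a Point Position AOV
        to_light = light_pos - positions                       # per-pixel vector to the light
        dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
        to_light = to_light / np.maximum(dist, 1e-6)
        ndotl = np.clip(np.sum(normals * to_light, axis=-1, keepdims=True), 0.0, 1.0)
        falloff = 1.0 / np.maximum(dist * dist, 1e-6)          # inverse-square falloff
        return rgb + gain * ndotl * falloff * light_rgb

    h, w = 4, 4                                                # dummy 32-bit passes
    rgb_pass = np.zeros((h, w, 3), np.float32)
    normals_pass = np.tile(np.float32([0, 1, 0]), (h, w, 1))   # flat floor facing up
    position_pass = np.zeros((h, w, 3), np.float32)
    out = relight(rgb_pass, normals_pass, position_pass,
                  light_pos=np.float32([0, 1, 0]),             # warm light 1 m above origin
                  light_rgb=np.float32([1.0, 0.8, 0.6]))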
However, in the near future, when Apple engineers finally implement texture capturing in Scene Reconstruction, any inexperienced AR developer will be able to apply a relighting procedure.
Watch this Vimeo video to find out how relighting can be achieved in The Foundry NUKE.
A crucial point here, when implementing the relighting effect, is the presence of a LiDAR scanner (or an iToF sensor if you're using ARCore). In other words, today's relighting solution for iOS is Metal + RealityKit.

iOS: Which Augmented Reality SDK should be used for a virtual try-on room?

I am working on an iOS Augmented Reality project where I need to integrate a virtual dressing concept.
I tried OpenCV; it worked as desired for me in the face-detection scenario only, but when I did the upper-body portion it didn't work as desired.
I used UPPER_BODY_HAAR_CASCADE, but it didn't work as desired either.
It came out as something like this,
but my desired output is something like this.
If someone has achieved this functionality on iOS, please reply.
Not exactly the answer you are looking for, but: you make your app dependent on the SDK you choose. Most of them are quite expensive to use and may suffer from changes in usage policy. Additionally, you drag all the extensive functionality you don't need into your app, so at the end of the day your app is 60-100 MB in size.
If I were you (and I was in a similar situation), I would develop my own little SDK with just the functionality you need. If you know how to do it, it takes a couple of days to get the basics working. Add OpenCV and you are in good shape.
PS. @Tommy asked an interesting question: how could one approach implementing something like what's in this video: youtube.com/watch?v=IBE11ROpxHE
Adding some info which is too long for a comment.
@Tommy Nice video. It seems to have everything we need to proceed. First of all, for any AR application you need your camera's (mobile phone camera's) calibration info. In the simple case, it contains two matrices: the camera matrix and the distortion matrix.
The camera matrix is then used for creating the OpenGL projection matrix (how the 3D model is projected onto the flat 2D screen: field of view, clipping planes, etc.). The distortion matrix is used, for example, for warping parts of your input frame when something is detected.
In the example with the watch, we need to detect the belt and the watch body in order to place the 3D model in that position. Given that the paper watch is not seen at an ideal perspective, at a 90-degree angle to the eye, it needs to be transformed to that view.
In other words, your paper watch looks like this:
  /---/
 /   /
/---/
And for the analysis and for detecting the model name, you need it to look like this:
 ---
|   |
|   |
 ---
This is where the distortion matrix is used in order to get a precise transformation. And different cameras have their own distortions.
Most applications use so-called offline calibration: a chessboard pattern is fed into OpenCV functions that detect its cells across a series of frames taken from different perspectives, and the matrices are built based on how the cells are shaped.
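For reference, that offline chessboard calibration is a few calls in OpenCV's Python API (the board size and frame paths below are placeholders):

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                  # inner corners of the printed chessboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib_frames/*.png"):      # frames from different perspectives
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:                                     # cells detected on this frame
            obj_points.append(objp)
            img_points.append(corners)

    # yields the camera matrix (intrinsics) and the distortion coefficients
    ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)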
In your case, the belt of your watch may be designed in a way that contains everything needed for online calibration. In your video it has a special pattern; I'm pretty sure it's done exactly for this purpose. You may do the same and use a chessboard pattern for simplicity.
Then you could use, say, the first 25 frames for online calibration, and once you have all the matrices you go on to detect the paper watch, build the projection matrix, and replace it with your 3D model. If all is done right, your paper watch will have coordinate (0, 0, 0) in 3D space, and you can easily place something else at that position.
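The "build the projection matrix" step then amounts to estimating the pose of the detected watch face, which OpenCV exposes as cv2.solvePnP; a sketch with placeholder corner coordinates and intrinsics:

    import cv2
    import numpy as np

    camera_matrix = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # from calibration
    dist_coeffs = np.zeros(5, np.float32)

    # known corners of the paper watch face in its own coordinates (cm), Z = 0 plane
    object_corners = np.float32([[0, 0, 0], [4, 0, 0], [4, 4, 0], [0, 4, 0]])
    # where those corners were detected in the current frame (placeholder pixels)
    image_corners = np.float32([[320, 240], [410, 235], [415, 330], [318, 335]])

    ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the pose
    # K @ [R | t] projects the 3D model so that (0, 0, 0) lands on the paper watch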

Infrared LED tracking: Using OpenCV to track x, y, z positions

I am looking for a way to approach a computer vision problem I'm having.
I have a working tracking system:
4-8 cameras
Gives the (x, y, z) position of an infrared LED
Each LED transmits a unique 8-bit signal
The tracking system is expensive, and the interface is too hard for our users to work with. I want to replace it, possibly with my own OpenCV implementation.
My current approach, which seems to require a lot of development of what seem to be common problems:
Calibrate the cameras to establish a 3D space: the cameras need to know where they are in space and in relation to each other.
Given that two or more cameras see a unique LED, use the pixel in each gray-scale image to calculate the 3D position (x, y, z) of that LED.
Right now I am attempting to write my own custom algorithms for both tasks, and it's proving to be a lot of work. Is it possible to approach this with OpenCV to help with the heavy lifting?
Take a look at FreeTrack: http://www.free-track.net/english/ (you can download the sources there).
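For what it's worth, OpenCV does cover both steps from the question: cv2.calibrateCamera / cv2.stereoCalibrate handle the calibration, and once each camera has a projection matrix, recovering the LED's (x, y, z) is a single triangulation call. A minimal sketch, with placeholder matrices and pixel positions:

    import cv2
    import numpy as np

    # 3x4 projection matrices of two calibrated cameras (placeholders; in practice
    # K @ [R | t] obtained from calibrateCamera / stereoCalibrate)
    P1 = np.float32([[800, 0, 320, 0], [0, 800, 240, 0], [0, 0, 1, 0]])
    P2 = np.float32([[800, 0, 320, -80], [0, 800, 240, 0], [0, 0, 1, 0]])

    # pixel position of the same LED blob as seen by each camera
    led_cam1 = np.float32([[310.0], [242.5]])
    led_cam2 = np.float32([[295.2], [242.7]])

    point_h = cv2.triangulatePoints(P1, P2, led_cam1, led_cam2)  # 4x1 homogeneous
    x, y, z = (point_h[:3] / point_h[3]).ravel()                 # back to (x, y, z)
    print(x, y, z)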

Simple Delphi 3D functions

Could anyone help me with examples of some bare-bones, old-school 3D methods in Delphi? Not using OpenGL or FireMonkey or any external library (vanilla canvas coding). What I want to do is be able to rotate X number of points around a common origin. From what I remember from the old days, you subtract the offsets from the 3D points so that the origin is always 0,0, then perform the calculations, and finally add the left/top pixel offset to get the actual screen positions.
What I'm looking for is a set of small, ad-hoc routines, à la:
RotateX(aValue:T3dpoint; degr:float):T3dPoint;
RotateY(--/--)
RotateZ(--/--)
Using these functions it should be fairly easy to create the old "rotating 3D cube" (8 points).
Also, are there functions for figuring out the visible "faces"? If I want a filled vector cube, then I guess I need to extract the visible regions (based on distance/overlap?), which in turn are drawn as X number of filled polygons? And these must no doubt be sorted by depth so they don't come out a mess.
for instance:
PointsToFaces(const a3dObject:T3dPointArray):TPolyFaceArray;
SortFaces(Const aFaces:TPolyFaceArray):TPolyFaceArray;
Any help is welcome!
Here are some nice old-school resources for Delphi math from efg's Reference.
There you can find a list of graphics projects.
2D/3D Lab Vector graphics: translation, rotation, scaling, view transform, homogeneous coordinates, clipping, projections, vectors, matrices etc...
I wrote a simple 3D rendering "engine" a few years ago, using only naïve linear algebra. It might not be the most efficient one, though; a few thousand points is the limit if you want movement to stay reasonably smooth. Sample EXE. You can get the code if you like, but it might not be that pretty.
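For the rotation routines themselves, the math is just the classic per-axis rotation formulas. Here is a minimal sketch in Python with the same shape as the signatures above (degrees in, point out); transliterating it to Delphi records and functions is mechanical:

    from math import cos, sin, radians

    def rotate_x(p, degr):
        # rotate (x, y, z) around the X axis by degr degrees
        x, y, z = p
        a = radians(degr)
        return (x, y * cos(a) - z * sin(a), y * sin(a) + z * cos(a))

    def rotate_y(p, degr):
        x, y, z = p
        a = radians(degr)
        return (x * cos(a) + z * sin(a), y, -x * sin(a) + z * cos(a))

    def rotate_z(p, degr):
        x, y, z = p
        a = radians(degr)
        return (x * cos(a) - y * sin(a), x * sin(a) + y * cos(a), z)

    def to_screen(p, cx=160, cy=100, viewer=4.0, scale=120.0):
        # perspective projection plus the left/top offset mentioned above
        x, y, z = p
        return (cx + scale * x / (z + viewer), cy - scale * y / (z + viewer))

    # the classic rotating cube: 8 corners, rotated and then projected
    cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
    for corner in cube:
        print(to_screen(rotate_y(rotate_x(corner, 30), 45)))

As for the visible faces: the usual old-school tricks are back-face culling (skip any face whose outward normal points away from the viewer after rotation) and the painter's algorithm (sort the remaining faces by average depth and draw them back to front), which together give you the filled cube without a mess.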

Satellite Map Analysis for Building Generation

Has anyone ever heard of a program which analyses a satellite map and attempts to generate three-dimensional buildings that roughly match the length/width of their real-life counterparts?
The use in programs like Google Earth or FlightGear would be phenomenal.
Anybody heard of something like this already existing?
EDIT:
Any references to related work would be great as well!
This can be achieved using photogrammetry from stereo imagery (airborne or high-resolution satellite). Stereo imagery consists of a pair of registered images taken from slightly different angles or from different positions, and it can be used to calculate elevations very precisely. You can also derive height information from building shadows if you know the exact date and time the image was taken and have information on the sensor and image geometry.
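As a rough sketch of the underlying computation, here is how a disparity (parallax) map from a registered stereo pair converts to depth with OpenCV; the matcher parameters, file names, and sensor geometry below are placeholder values:

    import cv2
    import numpy as np

    left = cv2.imread("stereo_left.png", cv2.IMREAD_GRAYSCALE)    # registered pair
    right = cv2.imread("stereo_right.png", cv2.IMREAD_GRAYSCALE)

    # semi-global block matching yields a per-pixel disparity (parallax) map
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

    # classic stereo relation: depth = focal_length * baseline / disparity
    focal_px, baseline_m = 8000.0, 150.0      # placeholder sensor geometry
    depth = focal_px * baseline_m / np.maximum(disparity, 1e-3)
    # building heights then fall out as roof-vs-ground depth differences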
Two other options would be 1) to use LIDAR (expensive, not readily available), or 2) to obtain shapefiles with building footprints and heights (sometimes available from local governments or other sources).
Stereo imagery can be a powerful resource to create 3D models. C3 Technologies developed a really interesting app for hitta.se:
Go to http://www.hitta.se/LargeMap.aspx
Click on 3D
Go to Stockholm
Zoom in, zoom in... it takes a while to load
Really beautiful 3D models from stereo imagery
