Papervision3D instancing - ActionScript

Has anyone actually managed to implement 3D object instancing with Papervision3D?
And does it perform well enough to instance many copies of a single model (trees, for example)?
So far I have failed to display the instances correctly, and I doubt the performance benefit of doing this, since Papervision3D uses a software renderer.
What I did, basically, is share the master's geometry:
new_do3d.geometry = master_do3d.geometry; // both DisplayObject3Ds now reference one mesh

You'll get better performance with Away3D or Alternativa3D.

Related

Creating Meshes from Pointclouds of Urban Scenes

I want to create high-fidelity meshes of urban street scenes from point-cloud data. The data consists of point clouds captured with an HDL-64E, and the scenes are very similar to those in the KITTI dataset.
Currently I'm only able to use the 'raw' point clouds and the odometry of the car. Previous work already applied the LeGO-LOAM algorithm to create a monolithic map and better odometry estimates.
Available Data:
Point clouds at 10 Hz
Odometry estimates at higher frequency (LOAM output)
Monolithic map of the scene (LOAM output, ~1,500,000 points)
I already did some research and came to the conclusion that I can either
use the monolithic map with algorithms like Poisson reconstruction, advancing front, etc. (using CGAL), or
go the robotics route and use packages like Voxgraph (which uses Marching Cubes internally).
As we may want to integrate image data at a later stage, the second option would be preferred.
Questions:
Is there a State-of-the-Art way to go?
Is it possible to get a mesh that preserves small features like curbs and sign posts? (I know there is a feasible limit on how fine the mesh can be.)
I am very interested in feedback and a discussion of how to tackle this problem 'the right way'.
Thank you in advance for your suggestions/answers!

How to track Fast Moving Objects?

I'm trying to create an application that can track rapidly moving objects in a video/camera feed, but I have not found any CV/DL solution that is good enough. Can you recommend a computer vision solution for tracking fast-moving objects on a regular laptop with a webcam? A demo app would be ideal.
For example see this video where the tracking is done in hardware (I'm looking for software solution) : https://www.youtube.com/watch?v=qn5YQVvW-hQ
Target tracking is a very difficult problem. In target tracking you will face two main issues: the motion uncertainty problem and the origin uncertainty problem. The first refers to the way you model object motion so you can predict its future state; the second refers to data association (which measurement corresponds to which track; the literature is filled with principled ways to approach this issue).
Before you can come up with a solution, you will have to answer some questions about the tracking problem you want to solve. For example: what are the values you want to track (this defines your state vector), how are those values related to one another, are you performing single-object or multi-object tracking, how do the objects move (do they have relatively constant acceleration or velocity or not), do they make turns, can they be occluded, and so on.
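To make the origin-uncertainty (data association) part concrete, here is a minimal, hypothetical sketch of gated greedy nearest-neighbour association in plain Python; the function name and the Euclidean gate are my own choices for the sketch, not from any library:

```python
import math

def associate(predictions, detections, gate=50.0):
    """Greedy gated nearest-neighbour association.

    predictions: {track_id: (x, y)} predicted track positions
    detections:  list of (x, y) measurements for the current frame
    Returns (matches, unmatched_detection_indices); matches maps
    track_id -> detection index. A detection farther than `gate`
    from every prediction stays unmatched (origin uncertainty).
    """
    pairs = []
    for tid, (px, py) in predictions.items():
        for j, (dx, dy) in enumerate(detections):
            d = math.hypot(px - dx, py - dy)
            if d <= gate:                       # gating: prune implausible pairs
                pairs.append((d, tid, j))
    pairs.sort()                                # closest pairs first
    matches, used_t, used_d = {}, set(), set()
    for d, tid, j in pairs:
        if tid not in used_t and j not in used_d:
            matches[tid] = j
            used_t.add(tid)
            used_d.add(j)
    unmatched = [j for j in range(len(detections)) if j not in used_d]
    return matches, unmatched
```

Real multi-target trackers replace the greedy loop with the Hungarian algorithm or probabilistic data association, but the gating idea is the same.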
The Kalman filter is a good solution for predicting the next state of your system (once you have identified your process model). A deep learning alternative is the so-called Deep Kalman Filter, which is essentially used to do the same thing. If your process or measurement models are not linear, you will have to linearize them before predicting the next state. Solutions that handle non-linear process or measurement models include the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF).
Now, regarding fast-moving objects: one idea is to use a larger covariance matrix, since fast objects can move much farther between frames, so the search space for the correct association has to be somewhat larger. Additionally, you can run multiple motion models in case a single model cannot capture the motion.
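As a rough illustration of the predict/update cycle, and of the process-noise knob that inflates the covariance for fast movers, here is a toy constant-velocity Kalman filter for a scalar position measurement, written from scratch in plain Python; the class and parameter names are made up for this sketch:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for a scalar position measurement.

    State is [position, velocity]; `q` is the process-noise scale --
    raising it inflates the covariance, widening the association gate
    for fast, erratically moving objects.
    """
    def __init__(self, x0, v0=0.0, q=1.0, r=1.0):
        self.x = [x0, v0]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance, 2x2
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]            # F = [[1, dt], [0, 1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q, with a simple diagonal Q scaled by q*dt
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q * dt,
             p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q * dt],
        ]
        return self.x[0]

    def update(self, z):
        # H = [1, 0]: we observe position only
        y = z - self.x[0]                   # innovation
        s = self.P[0][0] + self.r           # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01],
        ]
        return self.x[0]
```

Fed position measurements from a constant-velocity track, the filter learns the velocity and tracks the position with shrinking error; in practice you would use a 2D state (x, y and their velocities) and a library implementation.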
For occlusions, I will leave you this Stack Overflow thread, where I have given an answer covering occlusion handling in tracking in more detail, with some references for you to read. You will have to provide more details in your question if you would like more specific information about a solution (for example, you should define 'fast-moving' with respect to the camera frame rate).
I personally do not think there is a silver bullet solution for the tracking problem, I prefer to tailor a solution to the problem I am trying to solve.
The tracking problem is complicated. It also lies more in the realm of control systems than computer vision. It would also help to know more about your situation, as the performance of the chosen method depends heavily on your problem constraints. Are you interested in real-time tracking? Are you trying to reconstruct an existing trajectory? Are there multiple targets, or just one? Are the physical properties of the targets (i.e. velocity, direction, acceleration) constant?
One of the most basic tracking methods is a Linear Dynamic System (LDS) description, concretely a discrete implementation, since we're working with discrete frames of information. This method is purely based on physics, and its prediction is very sensitive to noise. Depending on your application, the error rate may or may not be acceptable.
A more robust solution is the Kalman filter, which is pretty much the go-to answer when tracking is needed. It makes predictions based on all the measurements obtained so far during the model's lifetime. It mainly works with constant-motion models (constant velocity or acceleration), although it can be extended to handle non-constant models. If your targets won't exhibit drastic changes in velocity, this is (probably) what you should implement.
I'm sorry I can't provide you with more, but the topic is pretty extensive and, admittedly, the details are beyond my area of expertise. Hopefully, this info should give you a little bit of context for finding a solution.
The problem of tracking fast-moving objects (FMO) is a known research topic in computer vision. FMOs are defined as objects which move over a distance larger than their size in one video frame. The solutions which have been proposed use classical image processing and energy minimization to establish their trajectories and sharp appearance.
If you need a demo app, I would suggest this GitHub repository: https://github.com/rozumden/fmo-cpp-demo. The demo is written in OpenCV/C++ and runs in real time. The authors also provide a mobile app version, which is still in testing. With this demo you can track any fast-moving object in real time without even providing an object model; if you also provide the object's size in real-world units, the app can estimate its speed.
A more sophisticated algorithm is open-sourced here: https://github.com/rozumden/deblatting_python, written in Python and PyTorch for speed. The repository solves the deblatting (simultaneous deblurring and matting) problem, which is exactly what happens when a fast-moving object appears in front of a camera.
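For intuition only, the classical cue behind FMO detection (an object that occupies different pixels in every frame) can be sketched with three-frame differencing; this is a toy stand-in written for this answer, not the algorithm from the repositories above:

```python
def fmo_candidates(prev, curr, nxt, thresh=30):
    """Three-frame differencing: flag pixels that differ from BOTH the
    previous and the next frame -- a crude cue for a fast-moving object,
    whose blurred streak occupies different pixels in every frame.

    Frames are equally sized 2D lists of grayscale values. Returns the
    bounding box (rmin, cmin, rmax, cmax) of the changed region, or
    None if nothing moved.
    """
    rows, cols = len(curr), len(curr[0])
    hits = [
        (r, c)
        for r in range(rows)
        for c in range(cols)
        if abs(curr[r][c] - prev[r][c]) > thresh
        and abs(curr[r][c] - nxt[r][c]) > thresh
    ]
    if not hits:
        return None
    rs = [r for r, _ in hits]
    cs = [c for _, c in hits]
    return (min(rs), min(cs), max(rs), max(cs))
```

The published FMO methods add trajectory fitting and appearance estimation on top of this kind of motion cue; with real images you would use NumPy/OpenCV arrays instead of nested lists.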

Object tracking in real time with OpenCV MultiTracker

I am using OpenCV's (Python) default trackers in an object detection and tracking task. As far as I know, we need to initialize some bounding boxes to make the trackers work, so a detection algorithm handles that for us. The problem is that if one (or more) objects are not detected in the initialization phase, the trackers will never pick them up. The simplest solution is to re-run the detection algorithm every few frames to make sure all objects are included, which is probably very computationally expensive.
Is there an effective way to deal with this problem?
Most people do it manually on the first frame using the cv2.selectROI() method, which returns a bounding box. This is described in some detail here:
https://www.learnopencv.com/multitracker-multiple-object-tracking-using-opencv-c-python/
For cases where that is not practical (e.g., thousands of objects), you can automate the object detection part and just make sure you find a good starting frame in which everything is detected correctly.
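One way to automate that, assuming you re-run the detector every N frames: keep only the detections that do not overlap any currently tracked box, and spawn new trackers for those. A minimal sketch of the merge step in plain Python (function names are mine; the actual detector and tracker calls are omitted):

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def new_boxes(tracked, detected, iou_thresh=0.3):
    """Return detections that overlap no currently tracked box --
    candidates for spawning new trackers after a periodic re-detection."""
    return [d for d in detected
            if all(iou(d, t) < iou_thresh for t in tracked)]
```

In the main loop you would call the detector every N frames, pass its output and the trackers' current boxes to `new_boxes`, and add one tracker per returned box; this keeps the expensive detection infrequent while still catching objects missed at initialization.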

Is there a difference in printing quality between polys vs. NURBS for Maya 3D models?

I've been reading a lot about the many differences, pros and cons between NURBS and polys, but is there a difference when it comes to 3D printing?
The printed model is typically polygonized before printing: it's easier to do things like watertightness checks using triangle meshes. A NURBS model can be polygonized at various resolutions, so it should be possible to get a higher-quality, smoother-looking print by starting with a NURBS model and using a very generous tessellation at print time. The tessellation may not always produce a watertight mesh; depending on the printing software, that might cause problems which need to be fixed up by hand.
So, overall, the main advantage of a NURBS model in this context is that you can work with a more efficient, lightweight representation of the data up until it's time to print: the final printed mesh may be impractically dense for most ordinary applications (millions of triangles).
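The watertightness check mentioned above can be sketched as an edge count: in a closed triangle mesh, every undirected edge is shared by exactly two faces (a necessary condition, though not sufficient on its own; this helper is illustrative, not from any printing package):

```python
def is_watertight(triangles):
    """A closed (watertight) triangle mesh has every undirected edge
    shared by exactly two faces. `triangles` is a list of (i, j, k)
    vertex-index triples, as in an indexed STL/OBJ mesh."""
    counts = {}
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            e = (min(a, b), max(a, b))          # undirected edge key
            counts[e] = counts.get(e, 0) + 1
    return all(c == 2 for c in counts.values())
```

A tetrahedron passes; remove one face and the three boundary edges each appear only once, so the check fails.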
To add to @theodox's answer: the other reason is that CAD/CAE applications do not really like polygon models, and treat them as second-class citizens at best. So if you need to run some analysis on the model, perform extra operations, or send it to an engineer, the NURBS model is MUCH better. It lets the engineer optimize production paths, so if they are using high-end printers or CNC machines it allows them to do a much better job. If you do not provide a NURBS model, the engineer will most likely just reverse-engineer your model and throw your data away.
Maya, on the other hand, is not a very conducive application for engineering work. But as an upside, you can use subdivision surfaces and get both a NURBS-convertible model and the benefits of polygon modeling.
PS: For an engineering application, making the model watertight is no problem whatsoever as long as the gaps are not too big.
Depends what you mean by polys. Most of the time, what people mean is that you model a poly mesh and then smooth it (by hitting '3' or turning it into a subdiv).
If you're doing that, NURBS have absolutely no advantage over subdivs for 3D printing in terms of smoothness.
NURBS surfaces created with Class A technical surfacing may be considered "airtight" surface meshes. B-spline mathematical surfaces can capture physical compression/tension surface characteristics as "structurally loaded" system architectures. G-code file formats now support b-spline data for vector-based tool paths in manufacturing. Raster-based polygon smoothing works against accurate modeling of functional engineered prototypes that demand zero-tolerance accuracy: the smooth function produces an unpredictable, undefinable approximation of the mesh. Professional 3D-print solutions employ NURBS geometry in G-code directly and do NOT create the polygon-tessellated meshes seen in common STL files. The future of 3D modeling and additive manufacturing is clearly vector-based b-spline NURBS surface architecture.

Learning WebGL and three.js [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
The community reviewed whether to reopen this question 1 year ago and left it closed:
Original close reason(s) were not resolved
I'm new to 3D computer graphics in web browsers and just starting to learn. I'm interested in making 3D games in a browser. For anyone who has learned both WebGL and three.js:
Is knowledge of WebGL required to use three.js?
What are the advantages of using three.js vs. WebGL?
Since you have big ambitions, you have to invest the time to learn the fundamentals. It is not a matter of what you learn first -- you can learn them simultaneously if you want to. (That's what I did.)
This means that you need to understand:
WebGL concepts
Three.js
The underlying mathematical concepts
Three.js. Three.js does an excellent job of abstracting away many of the details of WebGL, so personally, I'd suggest using Three.js for your project. But remember, Three.js is in alpha, and it is changing frequently, so you have to be prepared for that. Most people learn Three.js by studying the examples. Avoid outdated books and tutorials, and avoid examples from the net that link to old versions of the library.
WebGL. If you use Three.js, you don't need to know how to program in WebGL; you just need to understand the WebGL concepts. That means you need to be able to read someone else's WebGL code and understand what you read. That is a lot easier than writing a WebGL program yourself from scratch. You can learn the WebGL concepts well enough with any of the tutorials on the net, such as the beginner's tutorials at WebGLFundamentals.org and Learning WebGL.
Math. Again, you at least need to understand the concepts. Three good books are:
3D Math Primer for Graphics and Game Development by Fletcher Dunn and Ian Parberry
Essential Mathematics for Games and Interactive Applications: A Programmer’s Guide by James M. Van Verth and Lars M. Bishop
Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel
There is a very good online course on Three.js, Interactive 3D Graphics, at https://www.udacity.com/course/cs291. The course includes assignments, so you get hands-on experience, and it covers all the basic concepts of Three.js and computer graphics.
My personal thoughts are the following:
If you have plenty of time, you could learn both, but note that WebGL is much lower-level than Three.js.
For a first 3D project, experts suggest using a library like Three.js in order to get used to the terminology and the general 3D model.
Whichever direction you choose, I suggest you learn or polish up your linear algebra skills, then learn or polish up your understanding of the MVP transformations (Model, View, Projection). Three.js can abstract much of that away, but I think it's key to understand those concepts well before getting serious about any 3D development.
I wrote an introductory article about MVP when I was first learning 3D programming with OpenGL. I realized that until I was able to explain what those transformation matrices are, and how they relate to the various dimensions/spaces, I really didn't know any 3D programming at all, though I could render objects to the screen.
Since your goal is to create games, I think you'll benefit much from learning some raw WebGL first, even if you end up using a framework like Three.js to help you write your code later.
"WebGL is a 2D API and not a 3D API"
http://webglfundamentals.org/webgl/lessons/webgl-2d-vs-3d-library.html
This article describes the fundamental differences between WebGL & 3d libraries like three.js.
Which made my choice between WebGL or Three.js a no-brainer.
I came from a Unity3D background, as well as Papervision3D back in the day, so I had a good understanding of how to deal with 3D space. Three.js is the way to go for your initial jump into learning how to handle WebGL projects. The API is very good and very powerful, and if you're coming from another 3D technology, you'll be up and running in very little time.
I spent a lot of time with Threejs.org's examples: there are a ton of them, and they're very good at getting you off and running in the right direction. The docs are decent enough, especially compared to other WebGL 3D APIs out there.
You might also consider getting the free version of Unity3D and the free collada (was free when I got it) exporter from their app store (Window>App store). I found it easy enough to setup my scene in Unity and export it to Collada for use with Three.js.
Also, I posted this class that I use with Three.js called neo ( http://rockonflash.com/webGL/three/neo.js ). Just add it to your project, then call Neo.JackIntoThree() and it will add the methods/properties to Object3D for use in your project. Things like DrawAllAxis() are invaluable when debugging your scene etc.
Hands down though, Three.js is a great way to go - it's flexible enough to let you write your own shaders/objects etc, and powerful enough right out of the box to help you accomplish your goals.
I picked up three.js, but also jumped into GLSL and experimented a lot with three.js's ShaderMaterial. One way of going about it: three.js still abstracts much of the work for you, but it also gives you very clean, low-level access to all the rendering (projection, animation) capabilities.
This way, you can follow even something like this awesome OpenGL tutorial. You don't have to set up the matrices and typed arrays, because three.js already sets them up for you, updating them when needed. The shaders, though, you can write from scratch: a simple color render is two lines of GLSL. There is also a post-processing plug-in for three.js that sets up the buffers, full-screen quads, and everything else you need for effects, but the shaders can be very simple to begin with.
Since programmable shaders are the essence of modern 3D graphics, I hope my answer is not missing the point :) Sooner or later, anyone who does this needs to at least understand what goes on under the hood; it's the nature of the beast. Understanding the 4th coordinate in homogeneous space is probably important as well.
This book is good for WebGL.
I just learned a little of both, and I feel that understanding the basics of WebGL is enough: an introduction to WebGL is sufficient, and then jump into three.js. It will be pretty easy once you understand the underlying concepts of WebGL.
Useful links:
Best Intro I have read:
http://dev.opera.com/articles/view/an-introduction-to-webgl/
Comprehensive tutorials:
http://www.johannes-raida.de/tutorials.htm
