I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
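In other words, both snippets are trying to build the single combined matrix

    M_clip = M_world x M_view x M_proj

(or that same product read in the opposite order, depending on whether the environment assumes row vectors or column vectors; I am not certain which convention each side uses), so that one m44 instruction in the Vertex Shader carries a vertex from Object Local Space all the way to Clipping Space.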
My problem comes down to my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because prepending one matrix to another is not, and is never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized this way: because I have no control over the calling sequence the framework issues, i.e., I must live with an append operation, I can only try to fix things on the side where I prepare the matrix that is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it meets the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers that would let me modify a viewProjection matrix that was designed to be prepended, so that it turns out right when it is, instead, appended to the object's world-space transform?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (A x B) = transpose (B) x transpose (A)
That comes close to solving my problem, which, to restate, arises from the fact that I cannot control the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow that runs from local object coordinates to the coordinates required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' yields exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
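To spell out how that formula maps onto my workflow, write W for the object's world transform and VP for the viewProjection matrix (which of the two the framework's append actually puts on the left is its own business; assume for concreteness that the append path yields the product in the order W x VP while the tutorial's prepend path yields VP x W):

    VP x W = transpose ( transpose (W) x transpose (VP) )

So if the matrices that travel through the append path are the transposes, one final transpose of the resulting product, the 'un-transform' mentioned above, recovers the prepend-style combination the tutorial's camera expects. That single final transpose is the step I still have to carry out on the AGAL side.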
I have also not yet tested whether these extra calculations are processing-intensive enough to reduce my application's frame rate below what is acceptable, but I am pleased at least to be able to confirm that the computations yield the same result.
I want to simulate a 2D heat transfer process in the subsurface on a region which is infinite in the r-direction. The very basic way to model this is to draw a geometry that is very long in the r-direction. I have done this, and the results I obtain are correct, as they match the analytical solution. As you may know, there is a capability in Comsol called Infinite Element Domain which is meant to serve exactly this purpose: we define a bounded geometry on which we want to solve the PDE and also draw a small domain acting as the Infinite Element Domain. However, in this case the results are not correct, because they do not match the analytical solution. Is there anything I am missing to correctly use the Infinite Element Domain in Comsol?
Any help or comment would be appreciated.
Edit:
I edited the post to be more specific.
Please consider the following figure, where a fluid at high temperature is being injected into a region at lower temperature:
https://i.stack.imgur.com/BQycC.png
The equation to solve is:
https://i.stack.imgur.com/qrZcK.png
With the following initial and boundary conditions (note that the upper and lower boundary conditions are no-flux):
https://i.stack.imgur.com/l7pHo.png
We want to obtain the temperature profile over rw < r < 140 m (rw is very small, equal to 0.005 m here) at different times. One way to model this numerically in Comsol is to draw a rectangle that is 2000 m long in the r-direction, and extract results only over the span r = [rw, 140] m:
https://i.stack.imgur.com/BKCOi.png
The results in this case are fine, because they match the analytical solution well.
Another way to model this is to replace the above geometry with a bounded one that spans [rw, 140] m in the r-direction and then augment it with an Infinite Element Domain meshed with a mapped mesh, as follows:
https://i.stack.imgur.com/m9ksm.png
Here, I have set the thickness of the Infinite Element Domain to 10 m in the r-direction. However, the results in this case do not match the analytical solution (or the case above where no Infinite Element Domain was used). Is there anything I am missing in Comsol? I have also changed some of the Infinite Element settings in Comsol, such as the physical width or the distance, but I did not see any change in the results.
BTW, here are the results:
https://i.stack.imgur.com/cdaPH.png
I'm just starting out with procedural generation and I've made a program that generates lines using a D0L-system by following Paul Bourke's website. For the first two simple examples it works great, but when I input the rules of the "Leaf" L-system, my results are incorrect, as can be seen in this image.
Could any of you more experienced people point out where I might be going wrong? I'm pretty sure I'm misunderstanding something about the usage of the length factor. In my case, lengthFactor is a static float that is set once before the generation starts and is used to multiply/divide the line length in the current drawing state; lengthFactor itself won't change during the generation.
I'm using OpenGL for rendering and programming in C++.
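For reference, here is a stripped-down sketch of how I interpret the generated string (simplified from my actual code; drawLine is a stand-in for my OpenGL rendering, and the symbol meanings follow my reading of Bourke's notation, where '>' multiplies and '<' divides the current line length by lengthFactor, and '[' / ']' push and pop the full turtle state, including the length):

    #include <cmath>
    #include <cstdio>
    #include <stack>
    #include <string>

    struct State {
        float x = 0.0f, y = 0.0f;  // turtle position
        float heading = 90.0f;     // direction in degrees
        float length = 1.0f;       // current line length
    };

    // Stand-in for my actual OpenGL line rendering.
    void drawLine(float x0, float y0, float x1, float y1) {
        std::printf("line (%f,%f) -> (%f,%f)\n", x0, y0, x1, y1);
    }

    void interpret(const std::string& word, float angle, float lengthFactor)
    {
        State s;
        std::stack<State> saved;
        for (char c : word) {
            switch (c) {
            case 'F': {  // draw forward using the *current* length
                float rad = s.heading * 3.14159265f / 180.0f;
                float nx = s.x + s.length * std::cos(rad);
                float ny = s.y + s.length * std::sin(rad);
                drawLine(s.x, s.y, nx, ny);
                s.x = nx; s.y = ny;
                break;
            }
            case '+': s.heading += angle; break;            // turn one way
            case '-': s.heading -= angle; break;            // turn the other way
            case '>': s.length *= lengthFactor; break;      // scale current length up
            case '<': s.length /= lengthFactor; break;      // scale current length down
            case '[': saved.push(s); break;                 // save state (incl. length)
            case ']': s = saved.top(); saved.pop(); break;  // restore it
            default:  break;  // other symbols only drive the rewriting step
            }
        }
    }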
What I'm doing is GPGPU on WebGL, and I don't know whether the access pattern I'll be talking about applies to general graphics and gaming programs. In our code we frequently come across data which needs to be summarized or reduced per output texel. A very simple example is matrix multiplication, during which, for every output texel, you return a value which is the dot product of a row of one input and a column of the other input.
This has been the sore point of our performance, not so much because of the computation but because of the repeated data accesses. So I've been trying to find a pattern of reads or a data layout which would expedite this operation, and I have been completely unsuccessful.
I will be describing some assumptions and some schemes below. The sample code for all of these is under https://github.com/jeffsaremi/webgl-experiments
Unfortunately, due to size, I wasn't able to use the 'snippet' feature of StackOverflow. NOTE: all examples write to the console, not the HTML page.
Base matmul implementation: Example: [2,3]x[3,4]->[2,4]. In its simplest form this uses 2 input textures of (w:3,h:2) and (w:4,h:3). For each output texel I will be reading along the X axis of the left texture but going along the Y axis of the right texture. (see webgl-matmul.html)
Assuming that the GPU accesses data similarly to a CPU -- that is, block by block -- if I read along the width of the texture I should be hitting the cache pretty often.
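To make that assumption concrete, here is the CPU-side analogue of the comparison I am trying to reproduce on the GPU (plain C++, not part of the repo; just an illustration of traversing a row-major array along the width versus along the height):

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int w = 2048, h = 2048;
        std::vector<float> m(static_cast<size_t>(w) * h, 1.0f);  // row-major "texture"

        auto t0 = std::chrono::steady_clock::now();
        float rowSum = 0.0f;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                rowSum += m[static_cast<size_t>(y) * w + x];  // consecutive addresses

        auto t1 = std::chrono::steady_clock::now();
        float colSum = 0.0f;
        for (int x = 0; x < w; ++x)
            for (int y = 0; y < h; ++y)
                colSum += m[static_cast<size_t>(y) * w + x];  // stride of w floats

        auto t2 = std::chrono::steady_clock::now();
        using ms = std::chrono::duration<double, std::milli>;
        std::printf("along width: %.2f ms (sum %.0f)\nalong height: %.2f ms (sum %.0f)\n",
                    ms(t1 - t0).count(), rowSum, ms(t2 - t1).count(), colSum);
        return 0;
    }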
Based on that assumption, I'd lay out both textures in a way that I'd be doing dot products of corresponding rows (along the texture width) only. Example: [2,3]x[4,3]->[2,4]. Note that the data for the right texture is now transposed, so that for each output texel I'd be doing a dot product of one row from the left and one row from the right. (see webgl-matmul-shared-alongX.html)
To check that the above assumption actually holds, I created a negative test as well. In this test I read along the Y axis of both the left and right textures, which should have the worst possible performance. The data is pre-transposed so that the results make sense. Example: [3,2]x[3,4]->[2,4]. (see webgl-matmul-shared-alongY.html)
So I ran these -- and I hope you can run them as well to see for yourselves -- and I found no evidence to support the existence or non-existence of such caching behavior. You need to run each example a few times to get consistent results for comparison.
Then I came across this paper http://fileadmin.cs.lth.se/cs/Personal/Michael_Doggett/pubs/doggett12-tc.pdf which, in short, claims that the GPU caches data in blocks (or tiles, as I call them).
Based on this promising lead I created a version of matmul (or dot product) which uses blocks of 2x2 to do its calculation. Prior to using this I of course had to rearrange my inputs into such a layout. The cost of that rearrangement is not included in my comparison; let's say I could do it once and run my matmul many times afterwards. Even this scheme did not contribute anything to the performance, if it did not take something away. (see webgl-dotprod-tiled.html)
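For completeness, this is what the rearrangement step looks like in CPU terms (again plain C++ as an illustration, not the code from webgl-dotprod-tiled.html): each matrix is repacked so that every 2x2 block becomes contiguous, and the dot product then walks the data block by block.

    #include <cstddef>
    #include <vector>

    // Repack a row-major w x h matrix into 2x2 tiles stored contiguously,
    // tiles ordered left-to-right, top-to-bottom (w and h assumed even).
    std::vector<float> toTiles2x2(const std::vector<float>& m, int w, int h) {
        std::vector<float> out(m.size());
        std::size_t i = 0;
        for (int ty = 0; ty < h; ty += 2) {
            for (int tx = 0; tx < w; tx += 2) {
                out[i++] = m[static_cast<std::size_t>(ty) * w + tx];          // top-left
                out[i++] = m[static_cast<std::size_t>(ty) * w + tx + 1];      // top-right
                out[i++] = m[static_cast<std::size_t>(ty + 1) * w + tx];      // bottom-left
                out[i++] = m[static_cast<std::size_t>(ty + 1) * w + tx + 1];  // bottom-right
            }
        }
        return out;
    }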
At this point I am completely out of ideas, and any hints would be appreciated.
thanks
I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before running with -dbc.filter, but I would like to see the original data records, not the normalized ones, in the output.
It seems that there is some flag called -normUndo, which can be set if using the command-line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but it has effectively been removed (for now), for several reasons:
1) Only a few normalizations ever supported this; most would fail.
2) There is no longer a well-defined "end" of the visualization. Some users will want to visualize the normalized data, others not.
3) It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
4) Due to the numerical imprecision of floating-point math, you would frequently not get out exactly the same values you put in.
5) Keeping the original data in memory may be too expensive for some use cases, so we would need to add another parameter, "keep non-normalized data"; furthermore, you would need to choose which version (normalized or non-normalized) to use for analysis, and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (This is easy to do with Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest way is this: add a (non-numerical) label column, and you can then identify the original objects in your original data by this label.
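For example (hypothetical values, assuming the default whitespace-separated input format, where, as far as I recall, a trailing non-numeric token is read as a label rather than as part of the vector):

    0.23 0.71 0.05 record_17
    0.81 0.12 0.40 record_18

The numeric columns get normalized and used by LOF, while the label passes through unchanged in the output and lets you look the record up in your original file.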
I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map
proj_x, proj_y <--> cam_x, cam_y for scattered point pairs
My original plan was to regularize this map using the MathScript function griddata. This would work fine in MATLAB, as follows:
[pgridx, pgridy] = meshgrid(allprojxpts, allprojypts)
fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);
and the reverse for the camera to projector mapping
Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which is apparently too much for LabVIEW to handle).
I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick.
So my question is one of the following:
1) How can I interpolate from scattered points to a grid in OpenCV? (i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?)
2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.
Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation"
So this ends up being a specific fix for bugs in LabVIEW 8.5. Nevertheless, since they're poorly documented and I've spent a day of pain on them, I figure I'll post them so someone else googling this problem will come across them.
1) Meshgrid bombs. I don't know when this was fixed, but it is definitely a bug in 8.5. Solution: use the meshgrid-like function on the interpolation & extrapolation palette instead, or upgrade to LV2009, which apparently works (thanks Underflow).
2) Griddata is defective in 8.5. This is badly documented. The 8.6 upgrade notes mention a problem with griddata and the "cubic" setting, but it is in fact also a problem with the default "linear" setting. Solutions, in descending order of kludginess: 1) pass the 'v4' flag, which does some kind of spline interpolation but does not have the bugs; 2) upgrade to at least version 8.6; 3) beat the NI engineers with reeds until they document bugs properly.
3) I was able to use the OpenCV remap function to do the actual transformation from one image to another. I tried just using the built-in MATLAB-style interp2 VI, but it choked on large arrays and gave me out-of-memory errors. On the other hand, it is fairly straightforward to map an IMAQ image to an IPL image, so this isn't that bad, except for the addition of the outside library.
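For reference, the remap call itself is roughly the following (OpenCV 1.0 C API; a minimal sketch with placeholder file names, and with the identity map standing in for the regularized projector-to-camera map produced by griddata):

    #include <cv.h>
    #include <highgui.h>

    int main() {
        // Source image (e.g. the camera picture) and a destination of the same size.
        IplImage* src = cvLoadImage("camera.png", 1 /* load as color */);
        if (!src) return 1;
        IplImage* dst = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);

        // One 32-bit float map per axis: for each destination pixel (x, y),
        // mapx/mapy give the source coordinates to sample from.
        IplImage* mapx = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);
        IplImage* mapy = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);

        for (int y = 0; y < src->height; ++y) {
            for (int x = 0; x < src->width; ++x) {
                // Placeholder: identity map. In my case these values come from
                // the griddata fit (fitcx/fitcy) computed in LabVIEW.
                cvSetReal2D(mapx, y, x, (double)x);
                cvSetReal2D(mapy, y, x, (double)y);
            }
        }

        cvRemap(src, dst, mapx, mapy,
                CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

        cvSaveImage("remapped.png", dst);

        cvReleaseImage(&src);
        cvReleaseImage(&dst);
        cvReleaseImage(&mapx);
        cvReleaseImage(&mapy);
        return 0;
    }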