Could anyone help me with examples of some bare-bones, old-school 3D methods in Delphi? No OpenGL, FireMonkey or any external library (vanilla canvas coding). What I want to do is rotate X number of points around a common origin. From what I remember from the old days, you subtract an offset from the 3D points so that the origin is at 0,0, then perform the calculations, and finally add the left/top pixel offset back to get the actual screen positions.
What I'm looking for is a set of small, ad-hoc routines, à la:
RotateX(aValue:T3dpoint; degr:float):T3dPoint;
RotateY(--/--)
RotateZ(--/--)
Using these functions it should be fairly easy to create the old "rotating 3d cube" (8 points).
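For illustration, here is a minimal sketch of what one of those routines could look like in plain Object Pascal (the T3dPoint record and the exact signature are just placeholders, not an existing API):

uses
  Math; // for DegToRad

type
  T3dPoint = record
    X, Y, Z: Double;
  end;

// Rotate around the Y axis by "degr" degrees. The point is assumed to be
// translated already so that the rotation origin sits at (0,0,0); translate
// back (and add the left/top screen offset) after rotating.
function RotateY(const aValue: T3dPoint; degr: Double): T3dPoint;
var
  rad, s, c: Double;
begin
  rad := DegToRad(degr);
  s := Sin(rad);
  c := Cos(rad);
  Result.X :=  aValue.X * c + aValue.Z * s;
  Result.Y :=  aValue.Y;
  Result.Z := -aValue.X * s + aValue.Z * c;
end;

// RotateX and RotateZ follow the same pattern with the axes swapped:
//   X axis: Y' := Y*c - Z*s;  Z' := Y*s + Z*c
//   Z axis: X' := X*c - Y*s;  Y' := X*s + Y*c

Applying the three rotations to the 8 corner points each frame, then projecting and drawing lines on the canvas, gives the classic rotating cube.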
Also, are there functions for figuring out the visible "faces"? If I want a filled vector cube, then I guess I need to extract the visible regions (based on distance/overlapping?), which are then drawn as X number of filled polygons? And these no doubt have to be sorted by depth so they don't come out a mess.
for instance:
PointsToFaces(const a3dObject:T3dPointArray):TPolyFaceArray;
SortFaces(Const aFaces:TPolyFaceArray):TPolyFaceArray;
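For a convex object such as a cube you do not even need full hidden-surface removal: a face is visible when its normal points towards the viewer (back-face culling), and for drawing order the painter's algorithm (sort faces back to front by average depth) is usually enough. A rough sketch of both, reusing the T3dPoint record from the sketch above (the face layout and names are made up for illustration):

type
  TFace = record
    Idx: array[0..3] of Integer; // indices into the point array
    AvgZ: Double;                // average depth, used for sorting
  end;
  TFaceArray = array of TFace;

// Back-face culling: with an orthographic view down the Z axis, a face is
// visible when the 2D cross product of two projected edges has the right
// sign (which sign depends on your winding order).
function FaceIsVisible(const Pts: array of T3dPoint; const F: TFace): Boolean;
var
  ax, ay, bx, by: Double;
begin
  ax := Pts[F.Idx[1]].X - Pts[F.Idx[0]].X;
  ay := Pts[F.Idx[1]].Y - Pts[F.Idx[0]].Y;
  bx := Pts[F.Idx[2]].X - Pts[F.Idx[0]].X;
  by := Pts[F.Idx[2]].Y - Pts[F.Idx[0]].Y;
  Result := (ax * by - ay * bx) < 0;
end;

// Painter's algorithm: compute each face's average Z and draw the farthest
// faces first (here assuming +Z points away from the viewer).
procedure SortFacesByDepth(var Faces: TFaceArray; const Pts: array of T3dPoint);
var
  i, j, k: Integer;
  Tmp: TFace;
begin
  for i := 0 to High(Faces) do
  begin
    Faces[i].AvgZ := 0;
    for k := 0 to 3 do
      Faces[i].AvgZ := Faces[i].AvgZ + Pts[Faces[i].Idx[k]].Z / 4;
  end;
  for i := 1 to High(Faces) do // simple insertion sort, largest AvgZ first
  begin
    Tmp := Faces[i];
    j := i - 1;
    while (j >= 0) and (Faces[j].AvgZ < Tmp.AvgZ) do
    begin
      Faces[j + 1] := Faces[j];
      Dec(j);
    end;
    Faces[j + 1] := Tmp;
  end;
end;

For a single convex cube the culling alone is enough; the depth sort only starts to matter once you draw several objects or a concave mesh.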
Any help is welcome!
Here are some nice, good old resources for Delphi math from efg's Reference.
You can find a list of graphics projects there.
2D/3D Lab Vector graphics: translation, rotation, scaling, view transform, homogeneous coordinates, clipping, projections, vectors, matrices etc...
I did write a simple 3D rendering 'engine' a few years ago, using only naïve linear algebra. It might not be the most efficient one, though; a few thousand points is the limit if you want movement to stay reasonably smooth. Sample EXE. You can get the code if you like, but it might not be that pretty.
Currently, I am desperately trying to detect an object (a robot) based on 2D laser scans (from another robot). In the following two pictures, the blue arrow corresponds to the pose of the laser scanner and points towards the object that I would like to detect.
one side of the object
two sides of the object
Since it is basically a 2D picture, my first approach was to look for some OpenCV implementations such as HoughLinesP or LSDDetector in order to detect the lines. Unfortunately, since the focus of OpenCV is more on "real" images with "real" lines, this approach does not really work with point clouds, as far as I understand it. Another famous library is the Point Cloud Library, which on the other hand focuses more on 3D point clouds.
My current approach would be to segment the laser scans and then use some iterative closest point (ICP) C++ implementation to find a 2D point cloud template in the laser scans. Since I am not that familiar with object detection and all that nice stuff, I am quite sure that there are some more sophisticated solutions...
Do you have any suggestions?
Many thanks in advance :)
To get lines from points, you could try RANSAC.
You would iteratively fit a line to the points, then remove the points corresponding to that line and repeat until there are not enough points left or the support is too low, something like that.
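Not a finished detector, just to make the idea concrete (sketched in Object Pascal like the other snippets on this page; the tolerance, iteration count and point type are arbitrary choices):

uses
  Math;

type
  TPoint2D = record
    X, Y: Double;
  end;
  TPoint2DArray = array of TPoint2D;
  TIntArray = array of Integer;

// Perpendicular distance from point P to the infinite line through A and B.
function DistToLine(const A, B, P: TPoint2D): Double;
var
  dx, dy, len: Double;
begin
  dx := B.X - A.X;
  dy := B.Y - A.Y;
  len := Sqrt(dx * dx + dy * dy);
  if len = 0 then
    Result := Sqrt(Sqr(P.X - A.X) + Sqr(P.Y - A.Y))
  else
    Result := Abs(dx * (A.Y - P.Y) - dy * (A.X - P.X)) / len;
end;

// One RANSAC line fit: repeatedly pick two random points, count the points
// that lie within Tolerance of the line through them, and keep the best
// hypothesis. Returns the inlier indices; A and B define the winning line.
function RansacLine(const Pts: TPoint2DArray; Tolerance: Double;
  MaxIter: Integer; out A, B: TPoint2D): TIntArray;
var
  iter, i, i1, i2: Integer;
  Inliers, Best: TIntArray;
begin
  Result := nil;
  if Length(Pts) < 2 then
    Exit;
  Best := nil;
  for iter := 1 to MaxIter do
  begin
    i1 := Random(Length(Pts));
    repeat
      i2 := Random(Length(Pts));
    until i2 <> i1;
    SetLength(Inliers, 0);
    for i := 0 to High(Pts) do
      if DistToLine(Pts[i1], Pts[i2], Pts[i]) <= Tolerance then
      begin
        SetLength(Inliers, Length(Inliers) + 1);
        Inliers[High(Inliers)] := i;
      end;
    if Length(Inliers) > Length(Best) then
    begin
      Best := Inliers;
      A := Pts[i1];
      B := Pts[i2];
    end;
  end;
  Result := Best;
end;

To extract several lines (e.g. the two visible sides of the robot), run this once, remove the returned inliers from the scan and run it again, as described above. Call Randomize once at startup.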
Hope it helps.
I am making a math drawing app on the iPad. Users can manipulate the objects on the screen, like quadratic curves or sine curves. To update all objects on the screen, I need to redraw the whole screen at 60 fps, which costs a lot of time. I currently implement drawing with Quartz2D, but the performance is bad when there are many objects on the screen. I heard that directly using OpenGL ES is better, because it uses the GPU to draw. But I am wondering how to draw cubic or quadratic curves with OpenGL ES. Or is there another, better choice to improve the drawing?
OpenGL should be quite good at doing this, but curves do not come out of the box. Assuming you find some sample, or create one from scratch, that gets you to the point where you can draw some shapes, there are quite a few procedures for accomplishing this.
The most direct one would be to create lines from points at a specific resolution. That means, when you have a function f(x), you simply iterate x at that resolution. Say you are viewing the curve on the interval [a, b] and want a resolution of R intervals; that produces the loop for(float x=a; x<=b; x+=(b-a)/R). Now just draw this as a line strip. In most cases the resolution can be quite high and you will get a nice result, but there are cases where this will not work: steep functions or some oscillating functions will be missing points. A problematic kind is, for instance, sin(x*some_large_factor).
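As a concrete illustration of that sampling loop (written in Pascal here to match the other snippets on this page, but it translates directly to C or Objective-C; the names are made up), you would build a vertex array and hand it to the line-strip draw call:

type
  TRealFunc = function(x: Single): Single;
  TPointF = record
    X, Y: Single;
  end;
  TPointFArray = array of TPointF;

// Sample f on [a, b] at R intervals, producing R+1 vertices suitable for a
// line strip (GL_LINE_STRIP or the platform equivalent). Using a + i*Step
// instead of accumulating x avoids floating-point drift at the endpoint.
function SampleCurve(f: TRealFunc; a, b: Single; R: Integer): TPointFArray;
var
  i: Integer;
  Step: Single;
begin
  SetLength(Result, R + 1);
  Step := (b - a) / R;
  for i := 0 to R do
  begin
    Result[i].X := a + i * Step;
    Result[i].Y := f(Result[i].X);
  end;
end;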
The same procedure might work better if you could reparameterize the function by curve length instead of x. That would also let you draw curves such as circles.
Another way is injecting the function into the shader and checking whether each point is near enough to the curve. You receive the x and y positions, which you can plug into your function and test the distance, something like if(abs(y-f(x)) < lineWidth) // do draw. This procedure is completely accurate, but the line width is now measured along Y, which makes the steep parts of the curve appear thinner. If you are able to find the true distance to the curve (to the nearest point on the curve), this would work perfectly...
Have you tried finding a library or some open-source code that draws curves in OpenGL, though?
My question may be a bit too broad, but I am going for the concept. How can I create a surface like they did in the "Cham Cham" app:
https://itunes.apple.com/il/app/cham-cham/id760567889?mt=8.
I have got most of the stuff in the app done, but the surface changing with user touch is quite different. You can change its altitude, and it grows and shrinks. How can this be done using Sprite Kit? What is the concept behind it? Can anyone explain it a bit?
Thanks
Here comes the answer from the Cham Cham developers :)
Let me split the explanation into different parts:
Note: As the project started quite a while ago, it is implemented using pure OpenGL. The SpriteKit implementation might differ, but you just need to map the idea over to it.
Defining the ground
The ground is represented by a set of points which are interpolated using a Hermite spline. Basically, the game uses a bunch of control points defining the surface, plus a set of interpolated points between each pair of control points, like below:
The red dots are control points, and everything in between is computed using the mentioned Hermite interpolation. The green points in the middle have nothing to do with it, but make the whole thing look like boobs :)
You can choose an arbitrary number of interpolation steps to make the shape look as smooth as possible, but this is mostly a performance trade-off.
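This is not Cham Cham's actual code, but a generic sketch of one such Hermite segment, with the tangents chosen Catmull-Rom style from the neighbouring control points (the names and the tangent choice are illustrative):

type
  TVec2 = record
    X, Y: Single;
  end;

// Cubic Hermite interpolation between control points P1 and P2 at parameter
// t in [0, 1]; P0 and P3 are the neighbouring control points used to derive
// the tangents (Catmull-Rom style).
function HermitePoint(const P0, P1, P2, P3: TVec2; t: Single): TVec2;
var
  t2, t3, h00, h10, h01, h11, m1x, m1y, m2x, m2y: Single;
begin
  m1x := (P2.X - P0.X) * 0.5;  m1y := (P2.Y - P0.Y) * 0.5;
  m2x := (P3.X - P1.X) * 0.5;  m2y := (P3.Y - P1.Y) * 0.5;

  t2 := t * t;
  t3 := t2 * t;
  h00 :=  2 * t3 - 3 * t2 + 1;  // weight of P1
  h10 :=      t3 - 2 * t2 + t;  // weight of tangent at P1
  h01 := -2 * t3 + 3 * t2;      // weight of P2
  h11 :=      t3 -     t2;      // weight of tangent at P2

  Result.X := h00 * P1.X + h10 * m1x + h01 * P2.X + h11 * m2x;
  Result.Y := h00 * P1.Y + h10 * m1y + h01 * P2.Y + h11 * m2y;
end;

Evaluating this at t = 0, 1/N, 2/N, ..., 1 for every pair of control points gives the in-between points of the surface.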
Controlling the shape
All you need to do is allow the user to move the control points (or some of them, as in Cham Cham; you can define the range each point is allowed to move in, etc). Recomputing the interpolated values will yield a changed shape, which stays smooth at all times (given you have picked enough intermediate points).
Texturing the thing
Again, it is up to you how you apply the texture. In Cham Cham, we use one big texture to hold the background image and recompute the texture coordinates on every shape change. You could try a more sophisticated algorithm, like squeezing the texture or whatever you find appropriate.
As for the surface texture (the one that covers the ground – grass, ice, sand, etc.) – you can just use triangle strips, with "bottom" vertices sitting at every interpolated point of the surface and "top" vertices raised above them (offset from the "bottom" ones along the normal at that point).
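A sketch of how such a strip could be built (again illustrative Pascal, reusing the TVec2 record from the Hermite sketch; the Thickness parameter and the normal estimate are assumptions, not Cham Cham's actual values):

uses
  Math; // for Max/Min

type
  TVec2Array = array of TVec2;

// Emit two vertices per ground point: the point itself ("bottom") and the
// point offset along the local 2D normal ("top"). Rendered as a triangle
// strip, this forms the grass/ice/sand band along the surface.
function BuildSurfaceStrip(const Ground: TVec2Array;
  Thickness: Single): TVec2Array;
var
  i, prev, next: Integer;
  tx, ty, len, nx, ny: Single;
begin
  SetLength(Result, Length(Ground) * 2);
  for i := 0 to High(Ground) do
  begin
    // approximate the tangent from the neighbouring points
    prev := Max(i - 1, 0);
    next := Min(i + 1, High(Ground));
    tx := Ground[next].X - Ground[prev].X;
    ty := Ground[next].Y - Ground[prev].Y;
    len := Sqrt(tx * tx + ty * ty);
    if len = 0 then
      len := 1;
    nx := -ty / len; // 2D normal = tangent rotated by 90 degrees
    ny :=  tx / len;
    Result[i * 2] := Ground[i];                          // "bottom" vertex
    Result[i * 2 + 1].X := Ground[i].X + nx * Thickness; // "top" vertex
    Result[i * 2 + 1].Y := Ground[i].Y + ny * Thickness;
  end;
end;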
Rendering it
The easiest way is to use a tessellation library, like libtess. What it will do is convert your boundary line (composed of the interpolated points) into a set of triangles. It preserves texture coordinates, so you can just feed these triangles to the renderer.
SpriteKit note
Unfortunately, I am not really familiar with the SpriteKit engine, so I cannot guarantee you will be able to copy the idea over one-to-one, but please feel free to comment on the challenging aspects of the implementation and I will try to help.
I need to paint a square image, mapped or transformed to an unknown-at-compile-time four-sided polygon. How can I do this?
Longer explanation
The specific problem is rendering a map tile with a non-rectangular map projection. Suppose I have the following tile:
and I know the four corner points need to be here:
Given that, I would like to get the following output:
The square tile may be:
Rotated; and/or
Be narrower at one end than at the other.
I think the second item means this requires a non-affine transformation.
Random extra notes
Four-sided? It is plausible that, to be completely correct, the tile should be mapped to a polygon with more than four points, but for our purposes and at the scale it is drawn, a square -> other four-cornered-polygon transformation should be enough.
Why preferably GDI only? All rendering so far is done using GDI, and I want to keep the code (a) fast and (b) requiring as few extra libraries as possible. I am aware of some support for transformations in GDI and have been experimenting with it today, but I'm not sure it is flexible enough for this purpose. If it is, I haven't managed to figure it out, so I'd really appreciate some sample code. GDI+ is also OK since we use it elsewhere, but I know it can be slow, and speed is important here.
One other alternative is anything Delphi- / C++Builder-specific; this program is written mostly in C++ using the VCL, and the graphics in question are currently painted to a TCanvas with a mix of TCanvas methods and raw WinAPI/GDI calls.
Overlaying images: One final caveat is that one colour in the tile may be used for colour-key transparency: that is, all the white (say) squares in the above tile should be transparent when drawn over whatever is underneath. Currently, tiles are drawn to square or axis-aligned rectangular targets using TransparentBlt.
I'm sorry for all the extra caveats that make this question more complicated than 'what algorithm should I use?', but I will happily accept answers with only algorithmic information too.
You might also want to have a look at Graphics32.
The screenshot below shows what the transform demo in GR32 looks like:
Take a look at 3D Lab Vector graphics (especially the "Football field" in the demo).
Another cool resource is AggPas, with full source included (download).
AggPas is an open-source, free-of-charge 2D vector graphics library. It is a native Object Pascal port of the Anti-Grain Geometry library (AGG), originally written by Maxim Shemanarev in C++. AggPas doesn't depend on any graphics API or technology. Basically, you can think of AggPas as a rendering engine that produces pixel images in memory from vectorial data.
Here is what the perspective demo looks like:
After transformation:
The general technique is described in George Wolberg's "Digital Image Warping". It looks like this abstract contains the relevant math, as does this paper. You need to create a perspective matrix that maps from one quad to another. The above links show how to create the matrix. Once you have the matrix, you can scan your output buffer, perform the transformation (or possibly the inverse - depending on which they give you), and that will give you points in the original image that you can copy from.
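To make the "create the matrix" step concrete, here is a sketch of the standard unit-square-to-quad mapping; the coefficient formulas follow Heckbert's projective-mapping derivation. It is written as Delphi-style Pascal since the project is VCL-based, and all names are made up. For filling the output you would normally invert the resulting 3x3 matrix and scan the destination quad instead:

type
  TPointD = record
    X, Y: Double;
  end;
  // Coefficients of  x' = (a*u + b*v + c) / (g*u + h*v + 1)
  //                  y' = (d*u + e*v + f) / (g*u + h*v + 1)
  TProjectiveMap = record
    a, b, c, d, e, f, g, h: Double;
  end;

// Map the unit square (0,0)-(1,0)-(1,1)-(0,1) onto the quad Q0..Q3 given in
// the same order. Degenerate quads are not handled here.
function SquareToQuad(const Q0, Q1, Q2, Q3: TPointD): TProjectiveMap;
var
  dx1, dx2, dy1, dy2, sx, sy, det: Double;
begin
  sx := Q0.X - Q1.X + Q2.X - Q3.X;
  sy := Q0.Y - Q1.Y + Q2.Y - Q3.Y;
  if (sx = 0) and (sy = 0) then
  begin
    // the quad is a parallelogram, so the mapping is affine
    Result.a := Q1.X - Q0.X;  Result.b := Q2.X - Q1.X;  Result.c := Q0.X;
    Result.d := Q1.Y - Q0.Y;  Result.e := Q2.Y - Q1.Y;  Result.f := Q0.Y;
    Result.g := 0;            Result.h := 0;
  end
  else
  begin
    dx1 := Q1.X - Q2.X;  dx2 := Q3.X - Q2.X;
    dy1 := Q1.Y - Q2.Y;  dy2 := Q3.Y - Q2.Y;
    det := dx1 * dy2 - dx2 * dy1;
    Result.g := (sx * dy2 - dx2 * sy) / det;
    Result.h := (dx1 * sy - sx * dy1) / det;
    Result.a := Q1.X - Q0.X + Result.g * Q1.X;
    Result.b := Q3.X - Q0.X + Result.h * Q3.X;
    Result.c := Q0.X;
    Result.d := Q1.Y - Q0.Y + Result.g * Q1.Y;
    Result.e := Q3.Y - Q0.Y + Result.h * Q3.Y;
    Result.f := Q0.Y;
  end;
end;

// Apply the mapping to normalised tile coordinates (u, v) in [0, 1].
function MapPoint(const M: TProjectiveMap; u, v: Double): TPointD;
var
  w: Double;
begin
  w := M.g * u + M.h * v + 1;
  Result.X := (M.a * u + M.b * v + M.c) / w;
  Result.Y := (M.d * u + M.e * v + M.f) / w;
end;

When scanning the output buffer, you would skip any source pixel that matches the colour key, which keeps the TransparentBlt behaviour described in the question.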
It might be easier to use OpenGL to draw a textured quad between the 4 points, but that doesn't use GDI like you wanted.
Given an image that can contain any variety of solid-color shapes, what is the best method for parsing the image at a given point and then determining the slope (or vector, if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have Googled this issue for a while now.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
Example
What you are looking for is the tangent, which is 90 degrees to the normal (which is more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pair of pixels, you can get rise:run gradients between two horizontally adjacent pixels. Usually you just break them into categories: flat (1:1), 45 degrees (2:1) or too steep (>3:1). With a more complicated algorithm that looks outwards across more columns, you can get better resolution.
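A sketch of that adjacent-column trick, written in the same Pascal style as the other snippets on this page rather than XNA/C# (the IsSolid callback stands in for whatever test you run against the texture's Color array, e.g. alpha > 0):

type
  TSolidTest = function(X, Y: Integer): Boolean;

// Topmost solid pixel of a column (y grows downwards), or -1 if the column
// is empty.
function SurfaceHeight(IsSolid: TSolidTest; Col, Height: Integer): Integer;
var
  y: Integer;
begin
  Result := -1;
  for y := 0 to Height - 1 do
    if IsSolid(Col, y) then
    begin
      Result := y;
      Break;
    end;
end;

// Slope at the boundary between column X and column X+1, as rise over a
// one-pixel run; positive means the ground rises from left to right.
function GroundSlope(IsSolid: TSolidTest; X, Height: Integer): Integer;
var
  hLeft, hRight: Integer;
begin
  hLeft  := SurfaceHeight(IsSolid, X, Height);
  hRight := SurfaceHeight(IsSolid, X + 1, Height);
  if (hLeft < 0) or (hRight < 0) then
    Result := 0 // no ground in one of the columns
  else
    Result := hLeft - hRight;
end;

Bucketing that rise into your flat / 45 degrees / too steep categories, or averaging it over a few neighbouring columns, gives the coarser or finer resolution described above.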