WebGL - Building objects with blocks

I'm trying to build some text using blocks, which I intend to customize later on. The attached image is a mockup of what I intend to do.
I was thinking of using WebGL, since I want to do it in 3D and I can't do any Flash, but I'm not sure how to construct the structure of cubes from the letters. Can anyone give me a suggestion or a technique to map text to a series of points so that, seen from afar, they draw that same text?

First, you need a font — a table of shapes for the characters — in a format you can read from your code. Do you already have one? If it's just a few letters, you could manually create polygons for each character.
Then, use a rasterization algorithm to convert the character shape into an array of present-or-absent points/cubes. If you have perfectly flat text, then use a 2D array; if your “customizations” will create depth effects then you will want a 3D array instead (“extruding” the shape by writing it identically into multiple planes of the array).
An alternative to the previous two steps, which is appropriate if your text does not vary at runtime, is to first create an image with your desired text on it, then use the pixels of the image as the abovementioned 2D array. In the browser, you can do this by using the 2D Canvas feature to draw an image onto a canvas and then reading the pixels out from it.
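A minimal sketch of that image-based route, using Python's Pillow library as a stand-in for the browser's 2D canvas (in the browser you would use fillText and getImageData instead; the font path and threshold here are assumptions):

    from PIL import Image, ImageDraw, ImageFont

    def text_to_grid(text, font_path="DejaVuSans.ttf", size=24, threshold=128):
        # Render white text on a black background, sized to the text's bounding box.
        font = ImageFont.truetype(font_path, size)
        left, top, right, bottom = font.getbbox(text)
        img = Image.new("L", (right - left, bottom - top), 0)
        ImageDraw.Draw(img).text((-left, -top), text, fill=255, font=font)
        # Threshold the pixels into the present-or-absent 2D array described above.
        px = img.load()
        return [[px[x, y] > threshold for x in range(img.width)]
                for y in range(img.height)]

    for row in text_to_grid("HI"):
        print("".join("#" if cell else "." for cell in row))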
Then to produce a 3D shape from this voxel array, construct a polygon face for every place in the array where a “present” point meets an “absent” point. If you do this based on pairs of neighbors, you get a chunky pixel look (like Minecraft). If you want smooth slopes (like your example image), then you need a more complex technique; the traditional way to produce a smooth surface is marching cubes (but just doing marching cubes will round off all your corners).
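For the chunky neighbor-pair look, a minimal sketch, assuming grid is the 2D array from the previous step extruded depth cells along z (it records each exposed face as a cell plus an outward normal; turning those into actual quad vertices for WebGL is straightforward):

    def voxel_faces(grid, depth=3):
        h, w = len(grid), len(grid[0])
        filled = lambda x, y, z: (0 <= x < w and 0 <= y < h and
                                  0 <= z < depth and grid[y][x])
        faces = []
        for y in range(h):
            for x in range(w):
                if not grid[y][x]:
                    continue
                for z in range(depth):
                    # Check all six neighbours; an absent neighbour exposes a face.
                    for n in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                              (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                        if not filled(x + n[0], y + n[1], z + n[2]):
                            faces.append(((x, y, z), n))  # cell + outward normal
        return faces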

Related

Finding simple shapes in 2D point clouds

I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes I was hoping that there might be a simpler approach.
By saying you don't want to do any training, I am guessing you mean you don't want to do any feature matching; feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and, along with RANSAC, would be applicable to your problem for guessing and verifying good hypotheses about object pose.
The simplest approach is template matching, but it may be too computationally expensive (this depends on your use case). In template matching you simply loop over the possible locations of the object, its possible orientations, and its possible scales, and check how well the template (a cloud that looks like an L or a T at that location, orientation, and scale) matches (or you sample possible locations, orientations, and scales randomly). The checking of the template can be made fairly fast if your points are organised (or you organise them, e.g. by converting them into pixels).
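As an illustration only (not tied to your data), a brute-force sketch that assumes the cloud and the L/T template have already been rasterised into binary uint8 images, and loops over orientations while OpenCV's matchTemplate scores every location (a scale loop would wrap around this the same way):

    import cv2
    import numpy as np

    def best_pose(scene, template, angle_step=10):
        best = (-1.0, None, None)                  # (score, location, angle)
        h, w = template.shape
        for angle in range(0, 360, angle_step):
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            rotated = cv2.warpAffine(template, M, (w, h))
            res = cv2.matchTemplate(scene, rotated, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)  # best location for this angle
            if score > best[0]:
                best = (score, loc, angle)
        return best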
If this is too slow, there are many methods for making template matching faster, and I would recommend the Generalised Hough Transform.
Here, before starting the search for templates, you loop over the boundary of the shape you are looking for (T or L). For each point on its boundary you look at the gradient direction, then at the angle at that point between the gradient direction and the origin of the object template, and the distance to the origin. You add that to a table (let us call it Table A) for each boundary point, and you end up with a table that maps from gradient direction to the set of possible locations of the origin of the object. Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B), where each pixel contains a number representing the number of votes for the object in that location. Then for each point in the target image (point cloud) you check the gradient, find the set of possible object locations from Table A corresponding to that gradient, and add one vote to all the corresponding object locations in Table B (the Hough space).
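A compressed sketch of those two tables, assuming edge maps and Sobel gradients (edges, gx, gy for the template, and their target-image counterparts) have already been computed, with gradient directions quantised into bins:

    import numpy as np

    def angle_bin(gx, gy, x, y, bins=64):
        return int(((np.arctan2(gy[y, x], gx[y, x]) + np.pi) /
                    (2 * np.pi)) * bins) % bins

    def build_table_a(edges, gx, gy, origin, bins=64):
        table = {}                         # gradient-direction bin -> offsets to origin
        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            b = angle_bin(gx, gy, x, y, bins)
            table.setdefault(b, []).append((origin[0] - x, origin[1] - y))
        return table

    def vote(edges_t, gx_t, gy_t, table, shape, bins=64):
        votes = np.zeros(shape, np.int32)  # Table B: the 2D voting space
        ys, xs = np.nonzero(edges_t)
        for x, y in zip(xs, ys):
            for dx, dy in table.get(angle_bin(gx_t, gy_t, x, y, bins), []):
                ox, oy = x + dx, y + dy
                if 0 <= oy < shape[0] and 0 <= ox < shape[1]:
                    votes[oy, ox] += 1     # the peak marks the likely object origin
        return votes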
This is a very terse explanation, but knowing to look for Template Matching and the Generalised Hough Transform, you will be able to find better explanations on the web, e.g. the Wikipedia pages for Template Matching and Hough Transform.
You may need to:
1- extract some features from the image inside which you are looking for the object.
2- extract another set of features from the image of the object.
3- match the features (this is possible using methods like SIFT).
4- when you find a match, apply the RANSAC algorithm. It provides you with a transformation matrix (including translation and rotation information), as sketched below.
To use SIFT, start from here; it is actually one of the best source codes written for SIFT. It includes the RANSAC algorithm, so you do not need to implement it yourself.
You can read about RANSAC here.
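A minimal sketch of steps 1-4 with OpenCV's built-in SIFT and RANSAC, assuming a recent opencv-python build (where SIFT is available as cv2.SIFT_create):

    import cv2
    import numpy as np

    def find_object(scene_img, object_img):
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(object_img, None)   # object features
        kp2, des2 = sift.detectAndCompute(scene_img, None)    # scene features
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        if len(good) < 4:
            return None                                       # not enough matches
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC fits the transformation while rejecting bad matches as outliers.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H             # 3x3 matrix carrying the translation and rotation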
Two common ways of detecting the shapes (L, T, ...) in your 2D point cloud data would be using OpenCV or the Point Cloud Library. I'll explain the steps you may take for detecting those shapes in OpenCV. In order to do that, you can use the following 3 methods; the selection of the right method depends on the shape (size, area of the shape, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step would be converting your points to a grayscale Mat object; by doing that, you basically make an image of your 2D point cloud data, so you can use the other OpenCV functions. Then you may smooth the image in order to reduce the noise; the result would be a somewhat blurry image that still contains the real edges. If your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
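A sketch of those first two steps, assuming points is an (N, 2) NumPy array of coordinates already scaled to pixel units:

    import cv2
    import numpy as np

    def cloud_to_image(points, shape=(512, 512)):
        img = np.zeros(shape, dtype=np.uint8)                 # the grayscale Mat
        pts = np.clip(np.round(points).astype(int),
                      0, [shape[1] - 1, shape[0] - 1])
        img[pts[:, 1], pts[:, 0]] = 255                       # one white pixel per point
        # Edge-preserving smoothing: reduces noise without washing out real edges.
        return cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)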
The next step would be choosing the method. If the shape is just some set of orthogonal lines (such as L or T), you can use the Hough Line Transformation to detect the lines; after detection, you can loop over the lines and calculate the dot product of their direction vectors (since the lines are orthogonal, the result should be 0). You can find more information about the Hough Line Transformation here.
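For instance, a sketch of that check on the smoothed image img from the previous step (the Canny and Hough thresholds are assumptions you would tune):

    import cv2
    import numpy as np

    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)
    dirs = []
    for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
        v = np.array([x2 - x1, y2 - y1], dtype=float)
        dirs.append(v / np.linalg.norm(v))            # unit direction vector
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            if abs(np.dot(dirs[i], dirs[j])) < 0.05:  # dot product near 0: orthogonal
                print("possible L/T pair:", i, j)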
Another way would be detecting your shape using Template Matching. Basically, you make a template of your shape (L or T) and use it in the matchTemplate function. You should consider that the size of the template should be on the order of your image's size; otherwise you may need to resize your image. More information about the algorithm can be found here.
If the shapes enclose areas, you can find the contours of the shape using findContours; it will give you the polygons that surround the shape you want to detect. For instance, if your shape is an L, its polygon would have roughly 6 sides. Also, you can use some other filters along with findContours, such as calculating the area of the shape.
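A sketch of that contour route (the area threshold and the 0.02 approximation factor are assumptions):

    import cv2

    binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        # Simplify the contour to a polygon and count its sides.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 6 and cv2.contourArea(c) > 100:
            print("L-shaped candidate at", cv2.boundingRect(c)[:2])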

How to connect points after identifying them in cvGoodFeaturesToTrack

I want to identify an object and draw a shape around it ...
I previously used color identification, but it wasn't a good option since colors change dramatically from place to place... so I thought, why not identify objects by features such as edges... and I did that using this function in OpenCV:
cvGoodFeaturesToTrack
It returns the (x,y) coordinates of the points... now I want to connect those points... well, not all of them, but the ones that are close to each other, to draw a shape around the different objects. Any ideas?
I don't think there is a free lunch in this case. You are trying to reconstruct a polygon when you only know its corner points. There is no unique solution to this problem: you can draw all sorts of polygons through the corners. If you are certain the shape you are after is convex, then you can construct the convex hull of the corner points, but the result will be horrible if you include any corners that were not part of the original object.
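If you do know the shape is convex, a minimal sketch of that hull construction (the file name and detector parameters are placeholders):

    import cv2
    import numpy as np

    gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)
    hull = cv2.convexHull(corners.astype(np.int32))   # outer boundary of the corners
    vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.polylines(vis, [hull], isClosed=True, color=(0, 255, 0), thickness=2)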
It seems to me that detecting corners is not the way to segment an object that is more or less delimited by lines. You probably want to try an edge detector instead, or a proper segmentation technique such as watershed.

How can I render a square bitmap to an arbitrary four-sided polygon using GDI?

I need to paint a square image, mapped or transformed to an unknown-at-compile-time four-sided polygon. How can I do this?
Longer explanation
The specific problem is rendering a map tile with a non-rectangular map projection. Suppose I have the following tile:
and I know the four corner points need to be here:
Given that, I would like to get the following output:
The square tile may be:
Rotated; and/or
Be narrower at one end than at the other.
I think the second item means this requires a non-affine transformation.
Random extra notes
Four-sided? It is plausible that, to be completely correct, the tile should be mapped to a polygon with more than four points, but for our purposes and at the scale it is drawn, a square -> other four-cornered-polygon transformation should be enough.
Why preferably GDI only? All rendering so far is done using GDI, and I want to keep the code (a) fast and (b) requiring as few extra libraries as possible. I am aware of some support for transformations in GDI and have been experimenting with them today, but even after experimenting with them I'm not sure if they're flexible enough for this purpose. If they are, I haven't managed to figure it out, and so I'd really appreciate some sample code.
GDI+ is also OK since we use it elsewhere, but I know it can be slow, and speed is important here.
One other alternative is anything Delphi- / C++Builder-specific; this program is written mostly in C++ using the VCL, and the graphics in question are currently painted to a TCanvas with a mix of TCanvas methods and raw WinAPI/GDI calls.
Overlaying images: One final caveat is that one colour in the tile may be used for color-key transparency: that is, all the white (say) squares in the above tile should be transparent when drawn over whatever is underneath. Currently, tiles are drawn to square or axis-aligned rectangular targets using TransparentBlt.
I'm sorry for all the extra caveats that make this question more complicated than 'what algorithm should I use?' But I will happily accept answers with only algorithmic information too.
You might also want to have a look at Graphics32.
The screenshot below shows what the transform demo in GR32 looks like.
Take a look at 3D Lab Vector graphics (especially "Football field" in the demo).
Another cool resource is AggPas with full source included (download)
AggPas is an open-source, free-of-charge 2D vector graphics library. It is an Object Pascal native port of the Anti-Grain Geometry library (AGG), originally written by Maxim Shemanarev in C++. AggPas doesn't depend on any graphics API or technology. Basically, you can think of AggPas as a rendering engine that produces pixel images in memory from some vectorial data.
Here is what the perspective demo looks like:
After transformation:
The general technique is described in George Wolberg's "Digital Image Warping". It looks like this abstract contains the relevant math, as does this paper. You need to create a perspective matrix that maps from one quad to another. The above links show how to create the matrix. Once you have the matrix, you can scan your output buffer, perform the transformation (or possibly the inverse - depending on which they give you), and that will give you points in the original image that you can copy from.
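To get a concrete feel for the math (with OpenCV standing in for hand-rolled GDI code; the corner coordinates below are made up):

    import cv2
    import numpy as np

    tile = cv2.imread("tile.png")                      # placeholder square tile
    src = np.float32([[0, 0], [255, 0], [255, 255], [0, 255]])
    dst = np.float32([[30, 10], [220, 40], [250, 240], [10, 200]])
    M = cv2.getPerspectiveTransform(src, dst)          # the quad-to-quad 3x3 matrix
    out = cv2.warpPerspective(tile, M, (260, 260))     # inverse-mapped copy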
It might be easier to use OpenGL to draw a textured quad between the 4 points, but that doesn't use GDI like you wanted.

Sprite pixel parsing to determine Vector

Given an image that can contain any variety of solid color images, what is the best method for parsing the image at a given point and then determining the slope (or Vector if you prefer) of that area?
Being new to XNA development, I feel there must be an established method for doing this sort of thing, but I have Googled this issue for a while now.
By way of example, I have mocked up a quick image to demonstrate what I am trying to do. The white portion of the image (where the labels are shown) would be transparent pixels. The "ground" would be a RenderTarget2D or Texture2D object that will provide the Color array of pixels.
Example
What you are looking for is the tangent, which is 90 degrees to the normal (which is more commonly used). These two terms should assist you in your searching.
This is trivial if you've got the polygon outline data. If all you have is an image, then you have to come up with a way to convert it into a polygon.
It may not be entirely suitable for your problem, but the first place I would go is the Farseer Physics Engine, which has a "texture to polygon" feature you could possibly reuse.
If you are using the terrain as some kind of "ground", you can possibly cheat a bit by looking at the adjacent column of pixels and using that to determine the ground slope at that exact point. Kind of like what Lemmings and Worms do.
If you make that determination at the boundary between each pixel, you can get gradients of rise:run between two pixels horizontally. Usually you just break it into categories: flat (0:1), 45 degrees (1:1), or too steep (2:1 or more). With a more complicated algorithm that looks outwards to more columns, you can get better resolution.
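A sketch of that adjacent-column trick, assuming solid is a 2D boolean array (True where a ground pixel is opaque, with row 0 at the top) built from your Color array:

    import numpy as np

    def ground_slope(solid, x):
        def top(col):                          # row of the first solid pixel from the top
            hits = np.nonzero(solid[:, col])[0]
            return hits[0] if hits.size else solid.shape[0]
        rise = top(x) - top(x + 1)             # positive when the ground climbs rightward
        return rise                            # rise per 1 pixel of run; bucket as above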

rendering a photoshop style brush in openGL

I have lines that are programmatically defined by my program. What I want to do is render a brush stroke along them.
The way I think the type of brush I want works is: it simply has a texture, mostly transparent, and what you do is render this texture centered on EVERY PIXEL in the path, and they blend together to create the stroke.
Now, assuming this even works, I'm going to bet that it will be WAY too expensive (I'm targeting the iPad and other mobile chips, which HATE fillrate and alpha blending).
So, what other options are there?
If it could be done in real time (that is, with the path spline updating every frame), that would be ideal. But if not, within a fraction of a second on the iPad would be good too (the splines connect nodes, and the user can drag nodes around, thus transforming the spline; it would be acceptable to revert to a simpler fill for the spline while it is being moved around, then recalculate the brush once they release it).
For those wondering, I'm trying to make the thick lines look like they have been made with a pencil. It should look as true to real life as possible.
I considered just rendering the brushed spline to a texture, but as the spline can be any length, in any direction, dedicating a WHOLE rectangular texture to encompass the whole spline would be way too costly...
The spline is inevitably broken up into quads for rendering, so I thought of initially rendering the brush to a texture, then generating an optimized texture with each of the quads separated and packed as neatly as possible into the texture.
But two renders to texture... an algorithm to create the optimized texture... making it so the quads still blend seamlessly with each other... it sounds like a nightmare, and that's not even making it realtime.
So yeah, any ideas on how to draw thick, pencil-like lines that follow a spline, in real time, on the iPad, in OpenGL?
From my point of view, what you want is to render a line that:
is textured
has the edges fade off (i.e. no sharp edge to it)
follows a spline
To achieve these goals I would first of all break the spline up into a series of line segments that closely approximate the curve (you can make it more or less fine-grained depending on how accurate you want it to be versus how fast you want it to render).
Once you have these, you will need to make each segment into 3 quads, one that goes over the middle of the line segment that serves as the fully opaque part of the line and one on each edge of the line that will fade out to be totally transparent.
You will need to use a little bit of math to make sure that you extrude the quads along a vector that bisects two segments equally (i.e. so that the angles between each segment and the extrusion vector are equal). This will ensure that you don't have gaps in the obtuse part of the join, or overlaps in the acute parts.
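A sketch of that bisecting-extrusion math for one interior point of the flattened spline (NumPy for brevity; p_prev, p, p_next are consecutive 2D points, and the degenerate 180-degree turn is not handled):

    import numpy as np

    def extrude(p_prev, p, p_next, half_width):
        d1 = (p - p_prev) / np.linalg.norm(p - p_prev)  # direction into the joint
        d2 = (p_next - p) / np.linalg.norm(p_next - p)  # direction out of the joint
        t = (d1 + d2) / np.linalg.norm(d1 + d2)         # averaged tangent
        m = np.array([-t[1], t[0]])                     # bisecting extrusion vector
        # Lengthen it so the joint keeps constant thickness at sharper angles.
        scale = half_width / np.dot(m, np.array([-d1[1], d1[0]]))
        return p + m * scale, p - m * scale             # the two extruded vertices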
After all of this, you just need to use the vertex positions as the UV co-ordinates (probably scaled though) and allow the texture to wrap around.
Using this method, you should end up with a mesh that has a solid thick line running through the middle of your spline, with "fins" that taper off into complete transparency. This should approximate the effect you want quite closely, while only rendering the relevant pixels (i.e. no giant areas of completely transparent pixels) and with very little memory overhead.
I've been a little vague here, as it's kind of hard to explain with text alone without writing an in-depth tutorial. If you need more info, just comment on what you're stuck on and I'll elaborate further.
