I am new to WebGL. I have WebGL code that maps different images as textures onto the different faces of a cube, but the images appear blurry. I have tried various images at different resolutions, but none of them seems to work, and I am not good at image processing either. Can anyone help me with this issue? Can we programmatically maintain the clarity of an image while projecting it onto the faces of a cube?
Thanks
If you have a single power-of-two width/height texture (say 2048) and you want to blit out scaled and translated subsets of it (say 64x92-sized tiles scaled down) as quickly as possible onto another texture (as a buffer so it can be cached when not dirty), then draw that texture onto a webgl canvas, and you have no more requirements - what is the fastest strategy?
Is it first loading the source texture, binding an empty texture to a framebuffer, rendering the source with drawElementsInstancedANGLE to the framebuffer, then unbinding the framebuffer and rendering to the canvas?
I don't know much about WebGL and I'm trying to write a non-stateful version of https://github.com/kutuluk/js13k-2d (that just uses draw() calls instead of sprites that maintain state, since I would have millions of sprites). Before I get too far into the weeds, I'm hoping for some feedback.
There is no generic fastest way. The fastest approach differs from GPU to GPU and also depends on the specifics of what you're drawing.
Are you drawing lots of things the same size?
Are the parts of the texture atlas the same size?
Will you be rotating or scaling each instance?
Can their movement be based on time alone?
Will their drawing order change?
Do the textures have transparency?
Is that transparency all-or-nothing (0 or 1), or does it take values in between?
I'm sure there are tons of other considerations. For each consideration I might choose a different approach.
In general your idea of using drawElementsInstancedANGLE seems fine, but without knowing exactly what you're trying to do and on which device, it's hard to say.
Here are some tests of drawing lots of stuff; a minimal sketch of the instanced path follows below.
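For what it's worth, drawElementsInstancedANGLE from the ANGLE_instanced_arrays extension mirrors the instanced-drawing API of OpenGL ES 3 / desktop GL, so a minimal C-style sketch of that path looks like the following. It assumes a current GL context, an already-compiled program, and already-filled buffers; every name here is a placeholder, not something from your code.

```cpp
#include <GLES3/gl3.h>   // or the desktop GL loader of your choice

// Issues one instanced draw call for `spriteCount` quads.
// `quadVbo`/`quadIbo` hold a unit quad; `instanceVbo` holds one vec4 per sprite
// (e.g. x, y, scale, rotation) that the vertex shader expands into a transform.
void drawSprites(GLuint program, GLuint quadVbo, GLuint quadIbo,
                 GLuint instanceVbo, GLsizei spriteCount) {
    glUseProgram(program);

    // Per-vertex attribute 0: quad corner positions.
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Per-instance attribute 1: one vec4 per sprite.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);   // the key call: advance per instance, not per vertex

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIbo);
    glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr, spriteCount);
}
```

Whether this beats batching everything into one big vertex buffer depends on the answers to the questions above, so treat it as one option to benchmark, not the answer.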
I have a set of images collected from a drone that I now want to stitch together. The approach I started going towards is to rotate all the images to the proper orientation, then try to stitch those together. However since the rotated images are no longer rectangular, I have large empty areas with no image data. As expected, these dark areas have caused poor stitching results.
I have seen the API for stitch:
Status stitch (InputArrayOfArrays images, const std::vector< std::vector< Rect > > &rois, OutputArray pano)
I like that there is an "ROI" parameter; however, the data type is "Rect" rather than something like "RotatedRect" or a mask, so this looks like it won't work either.
The only other approach I can think of is to further crop the image to remove the no-data areas of the image (which would require more images to make up for the lost data).
I am not an expert in OpenCV nor in image stitching, so I'm looking for some awesome ideas. Is there a better way to approach this?
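One thing worth checking: newer OpenCV releases (around 3.4 and later) replaced the Rect-based rois overload with one that takes per-image masks, which would let you tell the feature finder to ignore the empty corners created by the rotation. A rough sketch under that assumption follows; verify that your OpenCV version actually has the masks overload before relying on it.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> images;   // the rotated drone images, loaded elsewhere
    // ... load images ...

    // Build a mask per image that marks only pixels carrying real data,
    // so the black borders introduced by rotation are ignored.
    std::vector<cv::Mat> masks;
    for (const cv::Mat& img : images) {
        cv::Mat gray, mask;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY);   // non-black -> 255
        // Shrink the mask slightly so features are not detected on the jagged edge.
        cv::erode(mask, mask, cv::Mat(), cv::Point(-1, -1), 5);
        masks.push_back(mask);
    }

    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    cv::Mat pano;
    // stitch(images, masks, pano) exists in OpenCV >= 3.4; older versions
    // only offer stitch(images, pano) or the Rect-based rois overload.
    cv::Stitcher::Status status = stitcher->stitch(images, masks, pano);
    if (status == cv::Stitcher::OK)
        cv::imwrite("pano.jpg", pano);
    return 0;
}
```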
I have a unicolor image and I need to resize some parts of it at different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code, or links for further reading will be appreciated, thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and corresponding points in a sample destination image. Then you calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL. Map your original image as a texture over this grid and deform the grid by displacing some of its points. Then simply read back the resulting image with glReadPixels.
I do not know of any CIFilter that implements displacement-field mapping of this kind.
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code from the link above on a target device.
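If you go the OpenCV route instead of OpenGL, cv::remap applies exactly such a dense displacement field on the CPU. A minimal sketch that magnifies a circular region of the image follows; the center, radius, scale, and file names are just illustrative.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::Mat src = cv::imread("input.png");

    // Dense displacement field: for every destination pixel, where to sample in the source.
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);

    const cv::Point2f center(src.cols * 0.5f, src.rows * 0.5f);
    const float radius = 100.0f;   // region to enlarge
    const float scale  = 1.5f;     // magnification inside that region

    for (int y = 0; y < src.rows; ++y) {
        for (int x = 0; x < src.cols; ++x) {
            cv::Point2f d(x - center.x, y - center.y);
            float r = std::sqrt(d.x * d.x + d.y * d.y);
            if (r < radius) {
                // Inside the region, sample closer to the center -> content appears enlarged.
                float t = r / radius;                                     // 0 at center, 1 at edge
                float factor = 1.0f / scale + (1.0f - 1.0f / scale) * t;  // blends back to identity at the edge
                d *= factor;
            }
            mapX.at<float>(y, x) = center.x + d.x;
            mapY.at<float>(y, x) = center.y + d.y;
        }
    }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    cv::imwrite("warped.png", dst);
    return 0;
}
```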
I am not sure, but I would suggest that if you want to do this type of work, you crop the relevant part of the image, apply your resize functionality to that cropped part, and then put it back in its original position. This is just my opinion; I am not sure whether it fits your case.
I will also give you a link to a question that might be helpful in your case (a minimal sketch of the crop-resize-paste idea follows after the link):
How to scale only specific parts of image in iPhone app?
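A minimal sketch of that crop-resize-paste idea, written with OpenCV purely for brevity; the file name, region, and scale factor are made up, and the same steps can be done with CoreGraphics on iOS.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");

    // Region we want to enlarge (coordinates are made up for illustration).
    cv::Rect roi(100, 100, 200, 200);
    cv::Mat part = img(roi).clone();

    // Scale just that part.
    cv::Mat enlarged;
    cv::resize(part, enlarged, cv::Size(), 1.5, 1.5, cv::INTER_LINEAR);

    // Paste it back centred on the original region.
    // For simplicity this assumes the enlarged patch still fits inside the image.
    int x = roi.x - (enlarged.cols - roi.width) / 2;
    int y = roi.y - (enlarged.rows - roi.height) / 2;
    enlarged.copyTo(img(cv::Rect(x, y, enlarged.cols, enlarged.rows)));

    cv::imwrite("output.png", img);
    return 0;
}
```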
A few ideas:
Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), move the parts around, scale them individually, and put them back together.
Use CIFilter effects on parts of your image, masking the parts you want to scale. (Core Image Programming Guide)
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
You might want to try this example using thin plate splines and OpenCV. This, in my opinion, is the easiest-to-try solution that is online.
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't find great performance in doing this outside of the GPU either.
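As a starting point, here is a hedged sketch of what such a fragment shader and its compilation might look like under GLES 2.0. The uniform/varying names and the distortion formula are just illustrative, and the rest of the pipeline (textured quad, vertex shader, render target) is assumed to exist already.

```cpp
#include <GLES2/gl2.h>
#include <cstdio>

// Fragment shader applying a simple radial ("bulge") distortion to the sampled texture.
static const char* kDistortFrag = R"glsl(
    precision mediump float;
    uniform sampler2D u_texture;
    uniform float u_strength;   // 0.0 = no distortion
    varying vec2 v_uv;
    void main() {
        vec2 centered = v_uv - 0.5;
        float r = length(centered);
        // Push the lookup coordinate outward the further it is from the center.
        vec2 uv = 0.5 + centered * (1.0 + u_strength * r * r);
        gl_FragColor = texture2D(u_texture, clamp(uv, 0.0, 1.0));
    }
)glsl";

// Compiles one shader stage and prints the log on failure.
GLuint compileShader(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::printf("shader compile failed: %s\n", log);
    }
    return shader;
}
```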
I'm working on an OpenGL app that uses one particularly large texture, 2250x1000. Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048. When I try to draw my texture, it appears black. I need a way to load and draw the texture in two segments (left, right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the maximum texture size depends on the device; I believe the iPad 3 supports 4096x4096, but don't count on that. On most devices there is no way to push all of that data as-is into one texture.
First ask yourself whether you really need such a large texture: would resampling it down to 2048 wide really make a visible difference? If downsampling is not acceptable, you will need to break the texture up at some point. You could cut it in half along its width and append the cut part to the bottom, giving a 1125x2000 texture, or simply create two or more textures and upload the relevant parts of the image to each (a sketch of the first option follows below).
In either case you may have trouble with texture coordinates, but all of this depends heavily on what you are trying to do and what is on that texture (a single image or parts of a sophisticated model; a color map or data you cannot interpolate; created at load time or modified as the app runs...). Some more information could help us address your situation more specifically.
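A minimal sketch of the "cut in half and stack" option, done as an offline preprocessing step with OpenCV; file names are placeholders, and you could equally do the same row copying at load time before calling glTexImage2D.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // 2250x1000 source image, too wide for a 2048 texture limit.
    cv::Mat img = cv::imread("big_texture.png");

    // Cut it in half along the width and stack the right half below the left,
    // giving a 1125x2000 image that fits within 2048x2048.
    int halfW = img.cols / 2;                      // 1125
    cv::Mat left  = img(cv::Rect(0, 0, halfW, img.rows));
    cv::Mat right = img(cv::Rect(halfW, 0, img.cols - halfW, img.rows));

    cv::Mat stacked;
    cv::vconcat(left, right, stacked);             // 1125 x 2000

    cv::imwrite("texture_1125x2000.png", stacked);

    // When sampling this texture, the left half of the original lives in v = [0, 0.5)
    // and the right half in v = [0.5, 1.0), so texture coordinates must be remapped.
    return 0;
}
```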
I know it is possible, and a lot faster than using GDI+. However, I haven't found any good example of using DirectX to resize an image and save it to disk. I have implemented this over and over in GDI+; that's not difficult. However, GDI+ does not use any hardware acceleration, and I was hoping to get better performance by tapping into the graphics card.
You can load the image as a texture, texture-map it onto a quad, and draw that quad at any size on the screen. That will do the scaling. Afterwards you can grab the pixel data from the screen, store it in a file, or process it further.
It's easy. The basic texturing DirectX examples that come with the SDK can be adjusted to do just this.
However, it is slow. Not the rendering itself, but the transfer of pixel data from the screen to a memory buffer.
Imho it would be much simpler and faster to just write a little code that resizes an image using bilinear scaling from one buffer to another.
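For reference, a minimal sketch of such a CPU-side bilinear resize over a packed RGBA buffer; the buffer layout and function name are assumptions, not anything from the original post.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Bilinear resize of a tightly packed 32-bit RGBA buffer.
std::vector<uint8_t> resizeBilinear(const std::vector<uint8_t>& src,
                                    int srcW, int srcH, int dstW, int dstH) {
    std::vector<uint8_t> dst(static_cast<size_t>(dstW) * dstH * 4);

    for (int y = 0; y < dstH; ++y) {
        // Map the destination row back into source coordinates (pixel centers).
        float fy = std::max(0.0f, (y + 0.5f) * srcH / dstH - 0.5f);
        int   y0 = std::min(static_cast<int>(fy), srcH - 1);
        int   y1 = std::min(y0 + 1, srcH - 1);
        float wy = fy - y0;

        for (int x = 0; x < dstW; ++x) {
            float fx = std::max(0.0f, (x + 0.5f) * srcW / dstW - 0.5f);
            int   x0 = std::min(static_cast<int>(fx), srcW - 1);
            int   x1 = std::min(x0 + 1, srcW - 1);
            float wx = fx - x0;

            // Blend the four neighbouring source pixels, channel by channel.
            for (int c = 0; c < 4; ++c) {
                float top = src[(y0 * srcW + x0) * 4 + c] * (1.0f - wx) +
                            src[(y0 * srcW + x1) * 4 + c] * wx;
                float bot = src[(y1 * srcW + x0) * 4 + c] * (1.0f - wx) +
                            src[(y1 * srcW + x1) * 4 + c] * wx;
                dst[(static_cast<size_t>(y) * dstW + x) * 4 + c] =
                    static_cast<uint8_t>(top * (1.0f - wy) + bot * wy + 0.5f);
            }
        }
    }
    return dst;
}
```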
Do you really need to use DirectX? GDI+ does the job well for resizing images. In DirectX you don't really resize images, as most likely you'll be displaying them as textures. Since textures can only be applied to 3D geometry (triangles/polygons/meshes), the size of the 3D object and the viewport determines the actual displayed image size. If you need to scale the texture within the 3D object, just adjust the texture coordinates or the texture matrix.
To manipulate the texture, you can use alpha blending, masking, and all sorts of texture-manipulation techniques, if that's what you're looking for. To manipulate individual pixels as with GDI+, I still think GDI+ is the way to go; DirectX was never meant for image manipulation.
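For completeness, a small sketch of resizing and saving with GDI+ as suggested above (Windows only; file names and the target size are placeholders).

```cpp
// Link with gdiplus.lib.
#include <windows.h>
#include <gdiplus.h>
#include <cwchar>
#include <vector>

// Looks up the CLSID of an image encoder by MIME type (standard GDI+ helper pattern).
static bool GetEncoderClsid(const WCHAR* format, CLSID* clsid) {
    UINT num = 0, size = 0;
    Gdiplus::GetImageEncodersSize(&num, &size);
    if (size == 0) return false;

    std::vector<BYTE> buffer(size);
    auto* codecs = reinterpret_cast<Gdiplus::ImageCodecInfo*>(buffer.data());
    Gdiplus::GetImageEncoders(num, size, codecs);

    for (UINT i = 0; i < num; ++i) {
        if (wcscmp(codecs[i].MimeType, format) == 0) {
            *clsid = codecs[i].Clsid;
            return true;
        }
    }
    return false;
}

int main() {
    Gdiplus::GdiplusStartupInput startupInput;
    ULONG_PTR token = 0;
    Gdiplus::GdiplusStartup(&token, &startupInput, nullptr);
    {
        Gdiplus::Bitmap src(L"input.jpg");
        const int newW = 800, newH = 600;

        // Draw the source into a new bitmap at the target size with high-quality filtering.
        Gdiplus::Bitmap dst(newW, newH, PixelFormat32bppARGB);
        Gdiplus::Graphics g(&dst);
        g.SetInterpolationMode(Gdiplus::InterpolationModeHighQualityBicubic);
        g.DrawImage(&src, 0, 0, newW, newH);

        CLSID pngClsid;
        if (GetEncoderClsid(L"image/png", &pngClsid))
            dst.Save(L"output.png", &pngClsid, nullptr);
    }
    Gdiplus::GdiplusShutdown(token);
    return 0;
}
```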