I am trying to get a point cloud from a 32-bit depth image captured by a HoloLens, but I am having a hard time because I cannot find much information about it. Do I need the camera parameters to get a point cloud from the depth image? Is there a way to do the conversion with PCL or OpenCV?
Update: I added some comments and a picture. I was finally able to get the point cloud from the HoloLens depth image. However, after converting the 32-bit depth image to grayscale, it looks like the raw sensor output has a lot of distortion. To compensate for this, I think I need a way to undistort and filter the depth image.
Do you have any other information about this?
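For reference, back-projecting a depth image into a point cloud does require the camera intrinsics (focal lengths fx, fy and principal point cx, cy). Below is a minimal sketch of the standard pinhole back-projection in Swift; the intrinsic values are placeholders that would have to come from the device calibration, and the depth buffer is assumed to be in metres.

import simd

// Back-project a depth map into 3D points using the pinhole camera model.
// The intrinsics are placeholders; real values come from calibration.
struct Intrinsics {
    let fx: Float, fy: Float   // focal lengths in pixels
    let cx: Float, cy: Float   // principal point in pixels
}

func pointCloud(depth: [Float], width: Int, height: Int,
                k: Intrinsics) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            guard z > 0 else { continue }   // skip invalid/empty pixels
            let x = (Float(u) - k.cx) * z / k.fx
            let y = (Float(v) - k.cy) * z / k.fy
            points.append(SIMD3<Float>(x, y, z))
        }
    }
    return points
}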
I have 3 CIImage objects that are grayscale 8-bpp images, meant to be the 8-bit R, G, and B channels of a new image. Aside from low-level pixel-data operations, is there a way to construct the combined CIImage (with filters or some other easier way)?
I realize I can do this by looping through the pixels of a new RGB image and setting each one from the gray channels I have; I was wondering whether there is a more idiomatic way to work with channels.
For example, in Pillow for Python it's Image.merge("RGB", (rChannel, gChannel, bChannel)). I know how to code the pixel-access way if there is no built-in way.
The book Core Image for Swift covers how to do this and provides the code for it here:
https://github.com/FlexMonkey/Filterpedia/blob/master/Filterpedia/customFilters/RGBChannelCompositing.swift
The basic idea is that you need to provide a color kernel function, written in the GPU shader language, and wrap it in a CIFilter subclass.
NOTE: The code is not copied here because it's under GPL, which is an incompatible license with StackOverflow answers. You can follow the link if you want to see how it's done, and use it if it's compatible with your license.
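For a flavour of the technique, here is a minimal independent sketch (written from scratch, not the GPL code above) using the older string-based kernel API: the kernel reads the red component of each grayscale input and packs them into one RGBA output.

import CoreImage

// A color kernel that packs the red component of three grayscale inputs
// into the R, G, and B channels of the output.
let rgbMergeKernel = CIColorKernel(source:
    "kernel vec4 rgbMerge(__sample r, __sample g, __sample b)" +
    "{ return vec4(r.r, g.r, b.r, 1.0); }")

func merged(r: CIImage, g: CIImage, b: CIImage) -> CIImage? {
    // The three inputs are assumed to share the same extent.
    return rgbMergeKernel?.apply(extent: r.extent, arguments: [r, g, b])
}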
I have a unicolor image, and I need to resize some parts of it at different scales. The desired result is shown in the image.
I've looked at applying a grid mesh in OpenGL ES, but I could not find any sample code or a more detailed tutorial.
I've also looked at imgwrap, but as far as I can see that library requires the Qt framework. Any ideas, sample code, or links for further reading would be appreciated. Thanks.
The problem you are facing is called "image warping" in computer graphics. First you have to define some control points in the original image and the corresponding points in a sample destination image. Then you have to calculate a dense displacement field (in this application also called a warping grid) and simply apply this field to the original image.
More practically: your best bet on iOS will be to create a 2D grid of vertices in OpenGL. Map your original image as a texture over this grid and deform it by displacing some of its points. Then you simply read the resulting image back with glReadPixels.
I do not know of any CIFilter that implements displacement-field mapping of this kind.
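To make the displacement-field idea concrete, here is a CPU sketch in Swift (the OpenGL grid above is the fast path; this is for illustration only). The field closure maps each destination pixel to the source pixel to sample, nearest-neighbour, with no filtering.

import UIKit

func warp(_ image: UIImage, field: (Int, Int) -> (Int, Int)) -> UIImage? {
    guard let cg = image.cgImage else { return nil }
    let w = cg.width, h = cg.height, bpr = w * 4
    let space = CGColorSpaceCreateDeviceRGB()
    let info = CGImageAlphaInfo.premultipliedLast.rawValue

    // Draw the source image into a raw RGBA byte buffer.
    var src = [UInt8](repeating: 0, count: h * bpr)
    src.withUnsafeMutableBytes { buf in
        let ctx = CGContext(data: buf.baseAddress, width: w, height: h,
                            bitsPerComponent: 8, bytesPerRow: bpr,
                            space: space, bitmapInfo: info)
        ctx?.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
    }

    // For every destination pixel, copy the displaced source pixel.
    var dst = [UInt8](repeating: 0, count: h * bpr)
    for y in 0..<h {
        for x in 0..<w {
            let (sx, sy) = field(x, y)
            guard (0..<w).contains(sx), (0..<h).contains(sy) else { continue }
            let d = (y * w + x) * 4, s = (sy * w + sx) * 4
            dst[d..<d+4] = src[s..<s+4]
        }
    }

    // Wrap the warped bytes back up as a UIImage.
    return dst.withUnsafeMutableBytes { buf -> UIImage? in
        let ctx = CGContext(data: buf.baseAddress, width: w, height: h,
                            bitsPerComponent: 8, bytesPerRow: bpr,
                            space: space, bitmapInfo: info)
        return ctx?.makeImage().map { UIImage(cgImage: $0) }
    }
}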
UPDATE: I also found example code that uses 8 control points to morph images with OpenCV: http://engineeering.blogspot.it/2008/07/image-morphing-with-opencv.html
OpenCV has working ports to iOS, so you could simply experiment with the code at the link above on a target device as well.
I am not sure, but for this type of work I suggest cropping some part of the image, applying your resize functionality to that cropped part, and then putting it back in its original position. This is just my opinion; I am not sure whether it applies to your case.
Here is also a link to a related question; please read it, as it might be helpful in your case:
How to scale only specific parts of image in iPhone app?
A few ideas:
Copy parts of the UIImage into different CGContexts using CGBitmapContextCreateImage(), move the parts around, scale them individually, and put them back together (a rough sketch follows after these ideas).
Use CIFilter effects on parts of your image, masking the parts you want to scale (see the Core Image Programming Guide).
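A rough sketch of the first idea, assuming a scale-1 UIImage so that points and pixels coincide (region and scale are made-up parameters for illustration):

import UIKit

// Crop `region` out of the image, scale it about its centre, and draw it
// back over the original.
func scaleRegion(of image: UIImage, region: CGRect, scale: CGFloat) -> UIImage? {
    guard let cropped = image.cgImage?.cropping(to: region) else { return nil }
    let part = UIImage(cgImage: cropped)
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)                        // original as the backdrop
        let scaledRect = CGRect(x: region.midX - region.width * scale / 2,
                                y: region.midY - region.height * scale / 2,
                                width: region.width * scale,
                                height: region.height * scale)
        part.draw(in: scaledRect)                    // enlarged part on top
    }
}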
I suggest you check out Brad Larson's GPUImage project on GitHub. Under Visual effects you will find filters such as GPUImageBulgeDistortionFilter, which you should be able to adapt to your needs.
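If it helps, the still-image pattern for a GPUImage filter looks roughly like this (reconstructed from memory of the project's README; double-check the property names against the source):

import UIKit
import GPUImage

// Run a bulge distortion over a still image; parameter values are arbitrary.
let sourceImage = UIImage(named: "input")!       // hypothetical asset name
let picture = GPUImagePicture(image: sourceImage)
let bulge = GPUImageBulgeDistortionFilter()
bulge.center = CGPoint(x: 0.5, y: 0.5)           // normalised image coordinates
bulge.radius = 0.25
bulge.scale = 0.5                                // > 0 bulges outward
picture?.addTarget(bulge)
bulge.useNextFrameForImageCapture()
picture?.processImage()
let output = bulge.imageFromCurrentFramebuffer() // UIImage with the effect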
You might want to try this example using thin plate splines and OpenCV. In my opinion, it is the easiest solution to try that is available online.
You'll probably want to look at OpenGL shaders. What I'd do is load the image in as a texture, apply a fragment shader (a small program that lets you distort the image), render the result back to a texture, and either display that texture or save it as a bitmap.
It's not going to be simple, but there is some sample code out there for other distortions. Here's a swirl in shaders:
http://www.geeks3d.com/20110428/shader-library-swirl-post-processing-filter-in-glsl/
I don't think there is an easier way to do this without involving OpenGL, and you probably wouldn't get good performance doing this outside the GPU either.
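To give a flavour of the shader stage described above, here is a simple bulge distortion written as a GLSL ES fragment shader in a Swift string (the texture and framebuffer setup around it is omitted; the names and values are made up for illustration):

// Fragment shader that magnifies the area around uCenter, for use with
// glShaderSource; vTexCoord comes from a pass-through vertex shader.
let bulgeFragmentShader = """
varying highp vec2 vTexCoord;
uniform sampler2D uTexture;
uniform highp vec2 uCenter;    // distortion centre in texture coordinates
uniform highp float uRadius;   // radius of the affected area
uniform highp float uScale;    // > 0 magnifies, < 0 pinches
void main() {
    highp vec2 d = vTexCoord - uCenter;
    highp float r = length(d);
    if (r < uRadius) {
        highp float p = r / uRadius;
        d *= 1.0 - uScale * (1.0 - p * p);  // pull samples toward the centre
    }
    gl_FragColor = texture2D(uTexture, uCenter + d);
}
"""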
I'm looking for a way to convert raster images to vector data using OpenCV. I found the function cv::findContours(), which seems a bit primitive (or, more likely, I have not fully understood it):
It seems to work on black-and-white images only (no greyscale and no colour images) and does not seem to accept any filtering/error-suppression parameters that would be helpful with noisy images, e.g. to avoid very short vector lines or uneven polylines where a single straight line would be the better result.
So my question: does OpenCV offer a way to vectorise colour raster images such that the colour information is assigned to the resulting polylines afterwards? And how can I apply noise reduction and error suppression to such an algorithm?
Thanks!
If you want to vectorise the raster image by colour, I recommend clustering the image into a small group of colours (i.e. quantising it), then extracting the contours of each colour and converting them to the format you need. There are no ready-made vectorising methods in OpenCV.
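OpenCV's cv::kmeans is one way to do the clustering; as an illustration of the simpler uniform-quantisation alternative, here is a small sketch that snaps each channel to a fixed number of levels. Extracting contours from the mask of each resulting palette colour (e.g. with cv::findContours) would be the next step.

// Snap a channel value to one of `levels` evenly spaced values, so the
// image collapses to a small, flat colour palette.
func quantise(_ v: UInt8, levels: Int) -> UInt8 {
    let step = 256 / levels                           // width of one bucket
    let bucket = Int(v) / step                        // which bucket v falls in
    return UInt8(min(255, bucket * step + step / 2))  // bucket centre
}

// Apply the same quantisation to all three channels of a pixel.
func quantisePixel(r: UInt8, g: UInt8, b: UInt8, levels: Int) -> (UInt8, UInt8, UInt8) {
    (quantise(r, levels: levels), quantise(g, levels: levels), quantise(b, levels: levels))
}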
I have a program that loads an image from the hard disk. The program is written using Emgu CV, and the image is a Bgr image. I want to allow the user to increase/decrease the brightness/contrast of the image. How can I do this? Some sample code would be appreciated (I am still a newbie). Thanks.
It depends on your image-adjustment requirements.
You can start with some basic techniques already wrapped in Emgu CV, such as histogram equalization and gamma correction. You can also combine them to achieve a better result.
// Load the image from disk (the path is just an example).
Image<Bgr, byte> inputImage = new Image<Bgr, byte>("myImage.jpg");
// Spread the intensity histogram across the full range (contrast).
inputImage._EqualizeHist();
// Apply gamma correction to adjust the overall brightness.
inputImage._GammaCorrect(1.8d);