I am using Android Vision Camera in a Xamarin.Android app. Is there any way to set the size of the image taken by the camera (say 32 x 32 pixels), as opposed to the preview size?
Use CameraSource.PictureCallback and convert the byte array to a bitmap using BitmapFactory.DecodeByteArray() in combination with BitmapFactory.Options; you can then scale the decoded bitmap down to 32 x 32.
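For example, a minimal sketch in C# (Xamarin.Android). The callback class name is illustrative and the exact binding names (Android.Gms.Vision, CameraSource.IPictureCallback) may differ slightly with your NuGet package version, but the DecodeByteArray + CreateScaledBitmap combination is the core idea:

using Android.Gms.Vision;
using Android.Graphics;

// Illustrative callback: decode the full-size JPEG the camera delivers,
// then scale it down to the 32 x 32 bitmap you actually need.
class SmallPictureCallback : Java.Lang.Object, CameraSource.IPictureCallback
{
    public void OnPictureTaken(byte[] data)
    {
        var options = new BitmapFactory.Options
        {
            // Decode a subsampled image first to keep memory usage low;
            // choose the value based on your sensor resolution.
            InSampleSize = 8
        };

        using (var full = BitmapFactory.DecodeByteArray(data, 0, data.Length, options))
        using (var small = Bitmap.CreateScaledBitmap(full, 32, 32, true))
        {
            // Use the 32 x 32 bitmap here (save it, feed it to your model, ...).
        }
    }
}

Pass an instance of this callback to CameraSource.TakePicture when you capture the image.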
I'm trying to display 10-bit grayscale images on an HDR10 monitor.
A Windows app was implemented based on DirectXTK: Using HDR rendering (which in turn is based on Direct3D 11). To compare HDR and SDR, I also duplicated the same app but with HDR disabled.
I ran a test displaying a grayscale gradient image with 26 steps, and found that the middle-to-high steps in the HDR app were "whiter" (lighter) than those in the SDR app:
Grayscale gradient: HDR vs. SDR. This makes my real images look blurred in some cases, when the pixel values in a region all fall within those steps.
I was expecting the middle step (the 12th or 13th) to be nearly mid-gray in both the HDR and SDR apps, but in my test it wasn't in HDR. A similar result can also be seen in the Microsoft D3D12HDR sample. Is my concept of HDR rendering wrong?
I currently use an Intel RealSense D435 camera.
I want to align the left-infrared camera and the color camera.
The align function provided by the RealSense library can only align depth and color.
I have heard that on the RealSense camera the left-infrared stream and the depth stream are already aligned by design.
However, I cannot map the infrared image to the color image with this information alone. The depth image can be aligned to the color image through the align function, but I would like to know how to align the color image with the left-infrared image, which shares the viewpoint of the original (unaligned) depth image.
----------------------------------------
[Realsense Customer Engineering Team Comment]
#Panepo
The align class used in the librealsense demos maps between depth and some other stream, and vice versa. We do not offer other forms of stream alignment.
But here is one suggestion you could try. Basically, the mapping is a triangulation technique: we go through the intersection point of a pixel in 3D space to find its origin in another frame, and this method works properly when the source data is depth (Z16 format). One possible way to map between two non-depth streams is to play three streams (Depth + IR + RGB), calculate the UV map for Depth to Color, and then use this UV map to remap the IR frame (remember that Depth and left IR are aligned by design).
Hope the suggestion gives you some ideas.
----------------------------------------
This is the method suggested by Intel Corporation.
Can you explain what it means to solve the problem by creating a UV map from the depth and color images? And does the RealSense2 library have a UV map function?
I would really appreciate your help.
Yes, Intel RealSense SDK 2.0 provides the PointCloud class.
So, you:
- configure the sensors
- start streaming
- obtain the color & depth frames
- get the UV map as follows (C#):
var pointCloud = new PointCloud();
pointCloud.MapTexture(colorFrame);

var points = pointCloud.Calculate(depthFrame);

// One vertex and one texture coordinate per depth pixel
var vertices = new Points.Vertex[depthFrame.Width * depthFrame.Height];
var uvMap = new Points.TextureCoordinate[depthFrame.Width * depthFrame.Height];

points.CopyTo(vertices);
points.CopyTo(uvMap);
The uvMap you get is a normalized depth-to-color mapping: each entry holds the texture coordinates (in the 0..1 range) of the corresponding depth pixel within the color image.
NOTE: if depth is aligned to color, the size of the vertices and uvMap arrays is calculated from the color frame's width and height instead.
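Following Intel's suggestion above, you can then use this UV map to bring the (depth-aligned) left IR frame into the color frame's pixel grid. A rough sketch, assuming 8-bit (Y8) IR data and ignoring holes and occlusion; MapIrToColor is just an illustrative helper, not part of the SDK:

static byte[] MapIrToColor(VideoFrame irFrame, Points.TextureCoordinate[] uvMap,
                           int colorWidth, int colorHeight)
{
    // Depth and left IR share the same resolution and viewpoint by design,
    // so entry i of the UV map also tells us where IR pixel i lands in the color image.
    var ir = new byte[irFrame.Width * irFrame.Height];
    irFrame.CopyTo(ir);

    var mapped = new byte[colorWidth * colorHeight];
    for (int i = 0; i < uvMap.Length; i++)
    {
        // Texture coordinates are normalized to [0, 1]; convert to color pixel indices.
        int cx = (int)(uvMap[i].u * colorWidth);
        int cy = (int)(uvMap[i].v * colorHeight);
        if (cx < 0 || cx >= colorWidth || cy < 0 || cy >= colorHeight)
            continue; // no valid projection into the color image (e.g. zero depth)

        mapped[cy * colorWidth + cx] = ir[i];
    }
    return mapped;
}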
I am using the pylon software (Basler camera) in C#/.NET. I grab frames from the cameras in "bitmap" format, but I need the images in "gray" format. How can I grab pylon images in "gray" format?
Thanks
Short answer:
call this code in C# after you open the camera and before you start grabbing:
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);
Long answer:
Images from cameras are always "bitmaps".
Basler cameras can provide different "Pixel Formats".
For example, you can set up the camera "acA2040-120uc" to provide one of the following pixel formats:
Mono 8
Bayer RG8
Bayer RG12
Bayer RG12 Packed
RGB 8
BGR 8
YCbCr422_8
The pylon API gives you access to the camera settings.
You can set the "gray" format in C# using this code:
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);
(Mono8 means monochrome 8-bit)
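For completeness, here is a minimal grab loop with the pixel format set before grabbing. This is a sketch based on the pylon .NET API; check the exact parameter names for your camera model in the pylon Viewer feature tree:

using Basler.Pylon;

using (var camera = new Camera())
{
    camera.Open();

    // Ask the camera for 8-bit monochrome ("gray") images.
    camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);

    camera.StreamGrabber.Start();
    using (IGrabResult result = camera.StreamGrabber.RetrieveResult(5000, TimeoutHandling.ThrowException))
    {
        if (result.GrabSucceeded)
        {
            // One byte per pixel, row-major, Width x Height.
            byte[] gray = result.PixelData as byte[];
            int width = result.Width;
            int height = result.Height;
        }
    }
    camera.StreamGrabber.Stop();
    camera.Close();
}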
I need to rectify an image with texture projection on the GPU (GLSL/shaders); do you have any resources, tutorials, or insights to share? I have the 3D pose of the camera that created the image, and the image itself, as input.
My images are 640x480, and from what I understand the buffer memory on the iPhone 4S (one of the target devices) is smaller than that.
OK, so the size is not a problem. As for the rectification: once you have the homography that provides it, use it in the vertex shader to multiply all of the initial 2D homogeneous coordinates.
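In other words, each pixel position is lifted to homogeneous coordinates, multiplied by the 3x3 homography H, and divided by the last component (written in LaTeX for reference):

\begin{pmatrix} x' \\ y' \\ w' \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad (u, v) = \left( \frac{x'}{w'}, \frac{y'}{w'} \right)

The division by w' is the only non-linear step; the rest is a single matrix multiply per vertex, which is exactly what a vertex shader is good at.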
I'm currently creating an iOS app and I'm having trouble understanding the relationship between taking pixels from an ofGrabber and drawing them using an ofTexture.
My current code:
In setup():
//Set iOS orientation
ofSetOrientation(OF_ORIENTATION_90_LEFT);
//Inits the camera to specified dimensions and sets up texture to display on screen
grabber.initGrabber(640, 480, OF_PIXELS_BGRA); //Options: 1280x720, 640x480
//Allocate opengl texture
tex.allocate(grabber.width, grabber.height, GL_RGB);
//Create pix array large enough to store an rgb value for each pixel
//pix is a global that I use to do pixel manipulation before drawing
pix = new unsigned char[grabber.width * grabber.height * 3];
In update():
//Loads the new pixels into the opengl texture
tex.loadData(pix, grabber.width, grabber.height, GL_RGB);
In draw():
CGRect screenBounds = [[UIScreen mainScreen] bounds];
CGSize screenSize = CGSizeMake(screenBounds.size.width, screenBounds.size.height);
//Draws the texture we generated onto the screen
//On 1st generation iPad mini: width = 1024, height = 768
tex.draw(0, 0, screenSize.height, screenSize.width); //Reversed for 90 degree rotation
What I'm wondering:
1) Why do the ofGrabber and the ofTexture use seemingly different pixel formats? (These are the same formats used in the VideoGrabberExample.)
2) What exactly is the texture doing with the resolution? I'm loading the pix array into the texture; the pix array represents a 640x480 image, while the ofTexture draws a 1024x768 (768x1024 when rotated) image to the screen. How does it do this? Does it just scale everything up, since the aspect ratio is basically the same?
3) Is there a list anywhere that describes the OpenGL and OpenFrameworks pixel formats? I've searched for this but haven't found much. For example, why is it OF_PIXELS_BGRA instead of OF_PIXELS_RGBA? For that matter, why does my code even work if I'm capturing BGRA-formatted data (which I assume includes an alpha channel) yet I'm only drawing RGB (and you can see that my pix array is sized for RGB data)?
I might also mention that in main() I have:
ofSetupOpenGL(screenSize.height, screenSize.width, OF_FULLSCREEN);
However, changing the width/height values above seems to have no effect whatsoever on my app.
ofGrabber is CPU-based, and it uses OF_PIXELS_BGRA by the programmer's choice. It is common for cameras to deliver pixels in BGRA order, so this simply saves the grabber from performing a costly memcpy/conversion when grabbing from the source. ofTexture maps GPU memory, so it maps to what you'll see on screen (RGB). Note that GL_RGB is an OpenGL definition.
ofTexture scales to whatever size you tell it to. This is done on the GPU, so it's quite cheap. It does not need to have the same aspect ratio.
That is largely up to the programmer and your requirements. Some cameras provide BGRA streams, while other cameras or files provide RGB directly, or even YUV-I420; color formats are very heterogeneous. OpenFrameworks handles the conversions in most cases; look into ofPixels to see where and how it's used. In a nutshell:
OF_PIXELS_XXX: used by ofPixels, basically a RAM-mapped bitmap
OF_IMAGE_XXX: used by ofImage, which wraps ofPixels and makes it simpler to use
GL_XXX: used by OpenGL and ofTexture, low-level GPU-mapped memory