I am using the Pylon software (Basler camera) in C#/.NET. I grab frames from the cameras in "bitmap" format, but I need images in "gray" format. How can I grab Pylon images in "gray" format?
Thanks
Short answer:
Call this code in C# after you open the camera and before you start grabbing:
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);
Long answer:
Images from cameras are always delivered as a "bitmap".
Basler cameras can provide different "Pixel Format" settings.
For example, you can set up the "acA2040-120uc" camera to provide one of the following pixel formats:
Mono 8
Bayer RG8
Bayer RG12
Bayer RG12 Packed
RGB 8
BGR 8
YCbCr422_8
The Pylon API gives you access to the camera settings.
You can set the "gray" format in C# using this code:
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);
(Mono8 means monochrome 8-bit)
I have bought a FLIR Boson camera and tested it with a Jetson Xavier; it works when streaming with Python & OpenCV. The issue is that I am getting a grayscale image, while I am looking for video with RGB colour like the Ironbow palette. This is the code that I am using with Python on the NVIDIA board:
import cv2
print(cv2.__version__)
dispW=640
dispH=480
flip=2
cam=cv2.VideoCapture(0)
while True:
    ret, frame = cam.read()
    cv2.imshow('nanoCam', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()
Kindly looking for your support with the conversion.
import cv2
# im_gray is the "WHITE HOT" picture from FLIR's web site, loaded as an 8-bit single-channel image
colorized = cv2.applyColorMap(im_gray, cv2.COLORMAP_PLASMA)
here's the result:
compare to FLIR's Ironbow:
I think OpenCV's color map is somewhat comparable, but it's not as saturated. If you need to match FLIR's color map, there are ways to replicate it even more faithfully (see the sketch below).
Read all about colormaps:
https://docs.opencv.org/4.x/d3/d50/group__imgproc__colormap.html
FLIR pictures (white hot + ironbow) pirated from:
https://www.flir.com/discover/ots/outdoor/your-perfect-palette/
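If the built-in maps are not close enough, newer OpenCV builds (3.3 and later) let applyColorMap take a user-defined 256-entry LUT, so you can approximate Ironbow yourself. A minimal sketch; the anchor colours and file names below are my own guesses, not FLIR's actual (proprietary) palette:

import cv2
import numpy as np

# Hypothetical anchor colours for an Ironbow-like ramp (BGR order).
# These are rough guesses to tweak by eye; FLIR's real palette is proprietary.
anchors = np.array([
    [  0,   0,   0],   # black
    [ 80,   0,  40],   # deep purple
    [160,   0, 120],   # magenta
    [ 60,  40, 230],   # red
    [  0, 160, 255],   # orange
    [  0, 255, 255],   # yellow
    [230, 255, 255],   # near white
], dtype=np.float64)

# Interpolate the anchors into a full 256-entry BGR lookup table.
xp = np.linspace(0, 255, len(anchors))
lut = np.stack([np.interp(np.arange(256), xp, anchors[:, c]) for c in range(3)], axis=1)
lut = lut.astype(np.uint8).reshape(256, 1, 3)

im_gray = cv2.imread('white_hot.png', cv2.IMREAD_GRAYSCALE)  # placeholder path to the white-hot frame
ironbow_like = cv2.applyColorMap(im_gray, lut)                # user-LUT overload (OpenCV 3.3+)
cv2.imwrite('ironbow_like.png', ironbow_like)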
I am using the Android Vision Camera in a Xamarin.Android app. Is there any way to set the size of the image taken by the camera, say 32 x 32 pixels, rather than the preview size?
Use the CameraSource.PictureCallback and convert the byte array to a bitmap using BitmapFactory.DecodeByteArray() in combination with BitmapFactory.Options (e.g. its InSampleSize property) to get the size you want.
For example, I can give one of the following YUV chroma subsampling formats:
400
411
420
422
444
Selecting each format gives a different PSNR value for the video sequence.
Or, is there any way I can determine the YUV data format of my input YUV video sequence?
According to Wikipedia, PSNR is reported against each channel of the color space:
Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL.
See: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
For computing PSNR of video, you must have the source video, and the same video after some kind of processing stage.
PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs.
In case color sub-sampling (e.g. converting YUV 444 to YUV 420) is part of the lossy compression pipeline, it's recommended to include the sub-sampling in the PSNR computation.
Note: there is no strict answer; it depends on what you need to measure.
Example:
Assume the input video is YUV 444, an H.264 codec was used for lossy compression, and the pre-processing stage converts YUV 444 to YUV 420.
Video Compression: YUV444 --> YUV420 --> H264 Encoder.
You need to reverse the process, and then compute PSNR.
Video Reconstruction: H264 Decoder --> YUV420 --> YUV444.
Now you have the input video in YUV 444 format and the reconstructed video in YUV 444 format; apply the PSNR computation to the two videos.
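As a minimal numpy sketch of that last step, assuming both sequences have already been decoded into lists of 8-bit YUV 444 frames (the decoding itself is omitted here):

import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    # PSNR in dB between two 8-bit frames of identical shape (e.g. HxWx3 YUV 444).
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')   # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Hypothetical usage: ref_frames / rec_frames are the original frames and the
# H.264-decoded, 420-to-444 upsampled frames respectively.
# per_frame = [psnr(r, d) for r, d in zip(ref_frames, rec_frames)]
# mean_psnr = sum(per_frame) / len(per_frame)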
Determine YUV video data format of input YUV video:
I recommend using ffprobe tool.
You can download it from here: https://ffmpeg.org/download.html (select "Static Linking").
I found the solution here: https://trac.ffmpeg.org/wiki/FFprobeTips.
You can use the following example:
ffprobe -v error -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 input.mp4
Y-PSNR: you can simply extract the Y component of the original and the reference images, and calculate the PSNR value for each image/video frame.
For video: you then take the mean of all the per-frame PSNR values, as in the sketch below.
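A rough sketch of that for raw planar YUV 4:2:0 input; the file names and the frame size are assumptions to replace with your own (each 4:2:0 frame occupies width*height*3/2 bytes, of which the first width*height bytes are the Y plane):

import numpy as np

width, height = 1920, 1080              # assumed frame size of the raw .yuv files
frame_size = width * height * 3 // 2    # bytes per YUV 4:2:0 frame
y_size = width * height                 # bytes in the Y plane

def y_planes(path):
    # Yield the Y plane of each frame in a raw planar YUV 4:2:0 file.
    with open(path, 'rb') as f:
        while True:
            frame = f.read(frame_size)
            if len(frame) < frame_size:
                break
            yield np.frombuffer(frame[:y_size], dtype=np.uint8).reshape(height, width)

def mean_y_psnr(original_path, reference_path):
    # Average Y-PSNR (in dB) over all frames of two raw YUV 4:2:0 sequences.
    values = []
    for y1, y2 in zip(y_planes(original_path), y_planes(reference_path)):
        mse = np.mean((y1.astype(np.float64) - y2.astype(np.float64)) ** 2)
        values.append(float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse))
    return sum(values) / len(values)

# Example with placeholder file names:
# print(mean_y_psnr('original.yuv', 'reconstructed.yuv'))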
How do I blend two images - thermal(80x60) and RGB(640x480) efficiently?
If I scale the thermal to 640x480 it doesn't scale up evenly or doesn't have enough quality to do any processing on it. Any ideas would be really helpful.
RGB image - http://postimg.org/image/66f9hnaj1/
Thermal image - http://postimg.org/image/6g1oxbm5n/
If you scale the resolution of the thermal image up by a factor of 8 and use bilinear interpolation, you should get a smoother, less blocky result.
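For example, with OpenCV in Python (file names are placeholders):

import cv2

thermal = cv2.imread('thermal.png')   # the 80x60 thermal image
# Scale up by a factor of 8 (80x60 -> 640x480) using bilinear interpolation.
upscaled = cv2.resize(thermal, (640, 480), interpolation=cv2.INTER_LINEAR)
cv2.imwrite('thermal_640x480.png', upscaled)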
When combining satellite images of different resolutions (I talk about satellite imagery because that is my speciality), you would normally use the highest-resolution imagery as the Lightness, or L, channel to give you apparent resolution and detail in the shapes, because the human eye is good at detecting contrast. You then use the lower-resolution imagery to fill in the Hue and Saturation, or a and b, channels to give you the colour graduations you are hoping to see.
So, in concrete terms, I would consider converting the RGB to Lab or HSL colourspace and retaining the L channel. Then take the thermal image, up-res it by 8 using bilinear interpolation, and use the result as the a, or b, or H, or S channel, and maybe fill in the remaining channel with whichever one from the RGB has the most variance. Then convert the result back to RGB for a false-colour image. It is hard to tell without seeing the images or knowing what you are hoping to find in them, but in general terms that would be my approach. HTH.
Note: Given that a of Lab colourspace controls the red/green relationship, I would probably try putting the thermal data in that channel so it tends to show more red the "hotter" the thermal channel is.
Updated Answer
Ok, now I can see your images and you have a couple more problems... firstly the images are not aligned, or registered, with each other which is not going to help - try using a tripod ;-) Secondly, your RGB image is very poorly exposed so it is not really going to contribute that much detail - especially in the shadows - to the combined image.
So, firstly, I used ImageMagick at the commandline to up-size the thermal image like this:
convert thermal.png -resize 640x480 thermal.png
Then, I used Photoshop to do a crude alignment/registration. If you want to try this, the easiest way is to put the two images into separate layers of the same document and set the Blending mode of the upper layer to Difference. Then use the Move Tool (shortcut v) to move the upper image around till the screen goes black which means that the details are on top of each other and when subtracted they come to zero, i.e. black. Then crop so the images are aligned and turn off one layer and save, then turn that layer back on and the other layer off and save again.
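If you would rather script that registration step than nudge layers by hand, one alternative is OpenCV's ECC alignment; a rough sketch, with placeholder file names and assuming the thermal image has already been resized to 640x480 as above:

import cv2
import numpy as np

rgb = cv2.imread('rgb.png')            # placeholder: the 640x480 RGB shot
thermal = cv2.imread('thermal.png')    # placeholder: the upscaled 640x480 thermal shot

rgb_gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
thermal_gray = cv2.cvtColor(thermal, cv2.COLOR_BGR2GRAY)

# Estimate a Euclidean (rotation + translation) warp mapping the thermal onto the RGB.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(rgb_gray, thermal_gray, warp, cv2.MOTION_EUCLIDEAN, criteria)

# Resample the thermal image into the RGB image's frame and save it.
aligned = cv2.warpAffine(thermal, warp, (rgb.shape[1], rgb.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
cv2.imwrite('bigthermalaligned.png', aligned)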
Now, I used ImageMagick again to separate the two images into Lab layers:
convert bigthermalaligned.png -colorspace Lab -separate thermal.png
convert rgbaligned.png -colorspace Lab -separate rgb.png
which gives me
thermal-0.png => L channel
thermal-1.png => a channel
thermal-2.png => b channel
rgb-0.png => L channel
rgb-1.png => a channel
rgb-2.png => b channel
Now I can take the L channel of the RGB image and the a and b channels of the thermal image and put them together:
convert rgb-0.png thermal-1.png thermal-2.png -normalize -set colorspace Lab -combine result.png
And you get this monstrosity! Obviously you can play around with the channels and colourspaces, and a tripod and proper exposures, but you should be able to see that some of the details of the RGB image - especially the curtains on the left, the lights, the camera on the cellphone and the label on the water bottle - have come through into the final image.
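If you would rather do that last combination step in Python/OpenCV than ImageMagick, a rough equivalent (using the aligned file names from above) is:

import cv2

rgb = cv2.imread('rgbaligned.png')
thermal = cv2.imread('bigthermalaligned.png')

# L (lightness/detail) from the RGB shot; a and b (colour) from the thermal shot.
L, _, _ = cv2.split(cv2.cvtColor(rgb, cv2.COLOR_BGR2LAB))
_, a, b = cv2.split(cv2.cvtColor(thermal, cv2.COLOR_BGR2LAB))

# Stretch each channel to the full range, a rough analogue of ImageMagick's -normalize.
channels = [cv2.normalize(c, None, 0, 255, cv2.NORM_MINMAX) for c in (L, a, b)]

result = cv2.cvtColor(cv2.merge(channels), cv2.COLOR_LAB2BGR)
cv2.imwrite('result.png', result)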
Assuming that the images were not captured using a single camera, note that the two cameras may have different parameters. Also, since there are two cameras, they are probably not located at the same world position (there is an offset between them).
In order to resolve this, you need to get the intrinsic calibration matrix of each of the cameras, and find the offset between them.
Then, you can find a transformation between a pixel in one camera and the other. Unfortunately, if you don't have any depth information about the scene, the most you can do with the calibration matrix is get a ray direction from the camera position to the world.
The easy approach would be to ignore the offset (assuming the scene is not too close to the camera), and just transform the pixel.
p2=K2*(K1^-1 * p1)
Using this you can construct a new image that is a composite of both.
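A minimal numpy/OpenCV sketch of that easy approach; the intrinsic matrices and file names here are made-up placeholders. With no rotation between the cameras and the offset ignored, K2*K1^-1 acts as a 3x3 homography you can feed to warpPerspective:

import cv2
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) for the thermal camera (K1) and the RGB camera (K2).
K1 = np.array([[100.0,   0.0,  40.0],
               [  0.0, 100.0,  30.0],
               [  0.0,   0.0,   1.0]])
K2 = np.array([[600.0,   0.0, 320.0],
               [  0.0, 600.0, 240.0],
               [  0.0,   0.0,   1.0]])

H = K2 @ np.linalg.inv(K1)   # p2 = K2 * (K1^-1 * p1) for every pixel p1

thermal = cv2.imread('thermal.png')                    # placeholder path
rgb = cv2.imread('rgb.png')                            # placeholder path
warped = cv2.warpPerspective(thermal, H, (rgb.shape[1], rgb.shape[0]))
composite = cv2.addWeighted(rgb, 0.5, warped, 0.5, 0)  # simple 50/50 blend
cv2.imwrite('composite.png', composite)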
The more difficult approach would be to reconstruct the 3D structure of the scene by finding features that you can match between both images, and then triangulate the point with both rays.
I have downloaded a program in which there are multiple classes. One of the functions receives an image as a parameter. How can I check whether the image received by that function is in YUV format or RGB format, using OpenCV?
You can't. Mat does not have such information. All you can get is depth and number of channels.
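The same holds in the Python bindings, where an image is just a numpy array; all you can inspect is its shape (number of channels) and dtype (bit depth), for example:

import cv2

img = cv2.imread('some_image.png')   # placeholder path
print(img.shape)   # e.g. (480, 640, 3): three channels, but no hint whether BGR, RGB or YUV
print(img.dtype)   # e.g. uint8: 8-bit depth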