OpenCV IP Camera Image Deterioration

I have successfully connected to an IP camera using OpenCV. If I just show the image with imshow, everything is fine. But if I do some CPU-heavy processing on each frame (I equalize the image and run a face detector), the image starts to deteriorate, and I keep getting "ac-tex damaged" errors in the console. The picture blurs more and more, and I don't know why this is happening. I can confirm this does not happen when grabbing images from my iSight camera (I am running on an iMac). Besides that, I am having a really weird time with OpenCV: the face detection doesn't seem to work when I run the app in Release mode. I am on Windows 8 and using VS 2010.
Can someone shed some light at these problems?

I suggest you break the problem into smaller parts. Some questions I have for you:
Does image capture and display work without any processing?
Does the face detector work with a local camera?
You mention Release mode; does that mean everything works in the Debug build?
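On the IP-camera corruption specifically, one plausible cause (an assumption, not confirmed by the question) is that the per-frame processing runs slower than the stream's frame rate, so the decoder's buffer backs up and frames are dropped mid-stream, producing MPEG decode errors such as "ac-tex damaged". A common workaround is to grab frames on a dedicated thread and always process only the newest one. Below is a minimal pure-Python sketch of that pattern; with OpenCV, the grab thread would loop on cap.read() and call put() with each frame:

```python
import threading

class LatestFrame:
    """Holds only the most recent frame; older frames are dropped.

    With an IP camera, a grab thread calling cap.read() in a tight loop
    would call put(); the processing loop calls get(). (The cv2 names are
    illustrative; this sketch uses plain Python objects instead of images.)
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame   # overwrite: stale frames are never queued

    def get(self):
        with self._lock:
            return self._frame

buf = LatestFrame()
for i in range(5):                # simulated grab loop outpacing processing
    buf.put(i)
print(buf.get())                  # the processor only ever sees the newest frame
```

Because the grab thread drains the stream at full rate, the decoder never starves, and the slow face detector simply skips frames instead of corrupting them.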

Related

How do we fix our flickering depth image when using an Orbbec Astra camera and Rviz?

We try to set up the Orbbec Astra Embedded S camera with ROS and our goal is to detect objects by reconstructing a 3D point cloud from the camera images. We are using ROS Noetic and the ROS-Package "astra-camera" (https://github.com/orbbec/ros_astra_camera) as well as Rviz to visualize the images and the 3D point cloud.
Here are the rostopics:
/camera/color/camera_info
/camera/color/image_raw
/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/ir/camera_info
/camera/ir/image_raw
First issue:
The color (/camera/color/image_raw) and IR (/camera/ir/image_raw) image streams seem to work fine, but the depth stream (/camera/depth/image_raw) is the big issue: it flickers very fast and does not seem to detect anything.
Second Issue:
When launching the camera by running "roslaunch astra_camera astra_pro.launch", we get the following output, including two warnings:
Publishing dynamic camera transforms (/tf) at 10 Hz
Camera calibration file /home/astra/.ros/camera_info/rgb_camera.yaml not found.
Camera calibration file /home/astra/.ros/camera_info/ir_camera.yaml not found.
By calibrating the color camera with a checkerboard, we were able to resolve the second warning, as calibration generated the rgb_image.yaml file containing the intrinsic parameters. We tried calibrating the IR camera as well, but the ir_camera.yaml file was never generated. We have not yet resolved the first warning.
Even though we are unsure if this is related to the issue regarding the flickering depth image stream, we believe it is worth mentioning.
We are ROS beginners and would be grateful for any feedback that could help us find a solution. If you need any further information, please let us know.
Thanks!
(A GIF demonstrating the flickering depth stream was attached to the original post.)
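One thing worth checking (an assumption on our part, not something the post confirms) is whether the flicker is partly a visualization artifact: viewers that normalize a 16-bit depth image per frame can make brightness jump wildly whenever invalid zero-depth pixels appear and disappear. Converting the raw depth with a fixed range instead makes genuine sensor changes easier to see. A rough numpy sketch, assuming the depth topic delivers 16UC1 depth in millimetres:

```python
import numpy as np

def depth_to_u8(depth_mm, max_mm=4000):
    """Map 16-bit depth (mm) to 8-bit using a FIXED range, so displayed
    brightness doesn't jump from frame to frame the way per-frame
    min/max normalization does."""
    d = depth_mm.astype(np.float32)
    d[depth_mm == 0] = 0                       # 0 = invalid / no return, stays black
    u8 = np.clip(d / max_mm, 0.0, 1.0) * 255.0
    return u8.astype(np.uint8)

frame = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)
print(depth_to_u8(frame))
```

In a ROS node you would apply this to the array obtained from the /camera/depth/image_raw message (e.g. via cv_bridge) before displaying; if the fixed-range view is stable while Rviz still flickers, the problem is display normalization rather than the sensor.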

OpenCV Colour Detection Error

I am writing a script on the Raspberry Pi to detect the majority colour featured in a frame from a webcam, and I seem to be having an issue. In the attached image I am holding up my phone showing a blank red screen, yet I detect an orange colour instead.
When I angle the phone, I do in fact get the expected red.
I am not sure why this is the case.
I am using a Logitech C920 webcam, which emits a blue light when activated, and I also have the monitor on. I wonder whether the light from these two is causing the issue: when I angle the phone, those lights no longer hit the screen head-on and so no longer distort the colour.
I am not very experienced in this area yet, so I would appreciate explanations and possible workarounds for my problem.
Thanks
There are a few things that can throw this off:
As you already mention, the light from the monitor and the camera.
The iPhone screen is a display, so flicker and refresh-rate sync can also come into play.
Reflection from the iPhone screen.
If your camera has automatic exposure and colour-balance control, the picture can change as you move around.
I suggest using a coloured piece of non-glossy paper so that you can rule out the iPhone display's effects.
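For the majority-colour measurement itself, averaging raw RGB values is fragile: glare and screen glow pull red toward orange. A hue histogram over sufficiently saturated pixels is usually more robust. Here is a small sketch without OpenCV, using Python's stdlib colorsys (the colour-name boundaries are my own rough choices; with OpenCV you would convert with cv2.cvtColor to HSV and histogram the H channel with cv2.calcHist instead):

```python
import colorsys
from collections import Counter

# Rough hue boundaries in degrees, purely illustrative
HUE_NAMES = [(15, "red"), (45, "orange"), (75, "yellow"),
             (165, "green"), (255, "blue"), (315, "magenta"), (360, "red")]

def name_hue(h_deg):
    for upper, name in HUE_NAMES:
        if h_deg < upper:
            return name
    return "red"

def majority_colour(pixels):
    """pixels: iterable of (r, g, b) tuples, each channel 0-255."""
    counts = Counter()
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s < 0.2 or v < 0.2:        # skip near-grey / near-black pixels
            continue
        counts[name_hue(h * 360)] += 1
    return counts.most_common(1)[0][0] if counts else "none"

# 90 reddish pixels vs 10 bluish ones: the reddish majority wins
print(majority_colour([(200, 40, 30)] * 90 + [(120, 120, 255)] * 10))
```

Classifying by hue also makes the result less sensitive to the brightness shifts that auto-exposure introduces, since hue is largely independent of value.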

Is it possible to emulate a polarization filter during image processing, using C++ or OpenCV?

I've looked all over Stack and other sources, but I haven't seen any code that seems to successfully emulate what a polarization filter does, reducing glare. The application I want for this code won't allow for a physical filter, so I was wondering if anyone had tried this.
I'm using OpenCV image processing (mat) in C++ on an Android platform, and glare is interfering with the results I'm trying to get. Imagine a lost object you're trying to find based on a finite set of Red/Green/Blue values; if the object is smooth, glare would render bad results. And that's my current problem.
OK, no: there's no virtual polarization that can be accomplished with code alone. It is possible to find glare spots on shiny objects (via image colour saturation) and overwrite them with nearby glare-free pixels, but that's not the same thing as real polarization. That requires a physical polarizing filter (e.g. a wire-grid mesh) in front of the lens or sensor to block the stray light waves that create glare.
Tell you what. The person who invents the virtual polarization filter, using just code, will be an instant billionaire since every cell phone and digital camera company will want to license the patent.
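The "overwrite glare spots with nearby pixels" idea mentioned above can at least be sketched. The version below is a deliberately crude pass using only numpy: threshold blown-out pixels and fill each one with the median of its non-glare neighbours. The threshold and window size are assumptions; with OpenCV you would build the mask the same way and hand it to cv2.inpaint for a much better fill:

```python
import numpy as np

def deglare(gray, thresh=240):
    """Replace blown-out pixels with the median of a local window.
    Not polarization: just a crude 'fill glare from neighbours' pass."""
    out = gray.copy()
    h, w = gray.shape
    ys, xs = np.where(gray >= thresh)            # glare mask: saturated pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - 2), min(h, y + 3)    # 5x5 neighbourhood
        x0, x1 = max(0, x - 2), min(w, x + 3)
        win = gray[y0:y1, x0:x1]
        good = win[win < thresh]                 # only non-glare neighbours
        if good.size:
            out[y, x] = np.median(good)
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                                  # a single glare spot
print(deglare(img)[2, 2])                        # filled from its neighbours
```

This recovers usable colour values under a glare spot for the RGB-matching use case, but as the answer says, information the glare destroyed cannot be reconstructed, only papered over.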

Kurento - Blurness in the Remote stream stored images

What I did:
I am using Kurento Media Server to store video-stream frames on the server. I can store the frames by using the opencv-plugin sample.
I store video frames in the following two scenarios:
1) Taking images while the user holds their face in front of the camera. (Note: no movement)
Issue: none; I get good-quality images.
2) Taking images while the user walks around a room. (Note: the user is moving)
Issue: most of the images stored on the server are blurred while the user is moving (walking).
What I want:
i) Is this the default behavior of KMS (GStreamer)?
Note: I can see the local stream video clearly in the browser while moving, but the remote stream video gets blurred.
ii) Has anyone faced this issue before? If yes, how do I solve it?
iii) Do I need to change any GStreamer configuration?
iv) Can anyone give me a suggestion for overcoming this issue?
The problem you are having is that your camera's exposure time is long. It's like taking a picture of a moving car in low light.
When there is movement in the scene, capturing a single frame, especially with a long exposure time (due to low light conditions or low camera quality), ends up producing this kind of image.
In continuous video you don't notice the blurriness because there is a sequence of images and your brain fills in the gaps.
Edit
You can try to improve the quality you send to the server by changing constraints on the WebRtcEndpoint using the setMaxVideoSendBandwidth and setMaxVideoRecvBandwidth properties. As long as bandwidth is available, you'll get better quality.
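If you cannot shorten the camera's exposure, another workaround (my suggestion, not part of Kurento) is to score each stored frame for sharpness on the server and discard the blurred ones, e.g. via the variance of the Laplacian (in OpenCV: cv2.Laplacian(gray, cv2.CV_64F).var()). A numpy-only sketch of the same measure:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian: low values suggest motion blur."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64)).astype(np.float64)

# Crude blur: repeatedly average each pixel with its right-hand neighbour
blurred = sharp.copy()
for _ in range(5):
    blurred[:, :-1] = 0.5 * (blurred[:, :-1] + blurred[:, 1:])

print(sharpness(sharp) > sharpness(blurred))   # blurring lowers the score
```

Keeping only frames above an empirically chosen threshold would let you save sharp face shots while the user walks, at the cost of saving fewer frames.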

KLT Pyramid BoofCV

I am working on an Android application that will use the KLT tracking algorithm. I have downloaded the Android sample provided on BoofCV's website and looked through the code. However, I need tracking to run in the background on a separate thread, without the camera preview, while a user interface of some sort is shown in front.
Your help is highly appreciated.
You can make the camera preview as small as 2x2 pixels, effectively making it invisible while still receiving image frames in onPreviewFrame(). That's the way it's done in a BoofCV example application I found.