Which camera to connect with Raspberry Pi 2 - image-processing

Should I use a compatible USB webcam or the Pi Camera? Which of the two gives better control over capture settings such as ISO and exposure time? And which will be more energy efficient and easier to integrate with the Pi?

The Raspberry Pi camera connects directly to the GPU and is capable of 1080p30 video encoding and 5 MP stills of pretty decent quality. Because it's attached to the GPU, there is only a small impact on the CPU, leaving it available for other processing.
Webcams (unless they have built-in encoding, which is expensive) are unlikely to get the same performance, and they also use a LOT more CPU.
In short, the difference between the Pi Camera and a USB webcam is performance: the Pi Camera offers a higher frame rate with hardware H.264 video encoding.
With a USB webcam you get a lower frame rate and no GPU encoding, but that doesn't really matter if all you want to do is take photos.
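On the capture-settings side, the Pi Camera exposes ISO, shutter speed and white balance through the (legacy) picamera Python library. The following is a minimal sketch of locking ISO and exposure time for a still capture, assuming a Pi Camera module on the CSI port and the picamera package installed; the specific values are placeholders.

import time
from picamera import PiCamera  # legacy picamera library (Raspberry Pi OS Buster and earlier)

camera = PiCamera(resolution=(2592, 1944))   # 5 MP still resolution of the V1 module

camera.iso = 100                 # placeholder ISO value
time.sleep(2)                    # let automatic gain and white balance settle
camera.shutter_speed = 10000     # exposure time in microseconds (placeholder)
camera.exposure_mode = 'off'     # lock the current exposure gains
camera.awb_mode = 'off'          # optionally lock white balance as well
camera.awb_gains = (1.5, 1.5)    # placeholder red/blue gains

camera.capture('still.jpg')
camera.close()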

Related

Improve image quality from an RTSP stream with OpenCV

I have an RTSP stream from a pretty good camera (my mobile phone).
I am getting the stream using opencv:
cv2.VideoCapture(get_camera_stream_url(camera))
However, the image quality I get is way below that of my mobile phone's camera. I understand that the RTSP protocol may lower the resolution, but still, the image quality is not good enough for OCR.
However, although I have a VIDEO stream, the object I am recording is static. So all frames of the video should be more or less the same, except for noise or lighting issues.
I was wondering if it is possible to take a 10-second video with several frames and combine them into a SINGLE frame with better sharpness, reducing the noise.
Is it viable? How?
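Temporal averaging of frames of a static scene is one straightforward way to do this. Below is a minimal sketch using OpenCV and NumPy; the RTSP URL is a placeholder, and if the phone is not perfectly still you would want to align the frames first (for example with cv2.findTransformECC) before averaging.

import cv2
import numpy as np

RTSP_URL = 'rtsp://<phone-ip>:8554/stream'   # placeholder; use your own stream URL
cap = cv2.VideoCapture(RTSP_URL)

acc = None
count = 0
while count < 100:                           # roughly 10 s of frames; adjust to your frame rate
    ok, frame = cap.read()
    if not ok:
        break
    f = frame.astype(np.float32)             # accumulate in float to avoid clipping
    acc = f if acc is None else acc + f
    count += 1
cap.release()

if count:
    averaged = (acc / count).clip(0, 255).astype(np.uint8)
    cv2.imwrite('averaged.png', averaged)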

What sensors does ARCore use?

What sensors does ARCore use: single camera, dual-camera, IMU, etc. in a compatible phone?
Also, is ARCore dynamic enough to still work if a sensor is not available by switching to a less accurate version of itself?
Updated: May 10, 2022.
About ARCore and ARKit sensors
Google's ARCore, like Apple's ARKit, uses a similar set of sensors to track the real-world environment. ARCore can use a single RGB camera along with the IMU, which is a combination of an accelerometer, a magnetometer and a gyroscope. The phone runs world tracking at 60 fps, while the Inertial Measurement Unit operates at 1000 Hz. There is also one more sensor that can be used in ARCore – an iToF camera for scene reconstruction (Apple's name for it is LiDAR). ARCore 1.25 supports the Raw Depth API and the Full Depth API.
Read what Google says about the COM method, built on Camera + IMU:
Concurrent Odometry and Mapping – An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used for fixing a drift in the tracked motion.
Here's Google US15595617 Patent: System and method for concurrent odometry and mapping.
In 2014–2017 Google tended towards a MultiCam + DepthCam config (the Tango project);
in 2018–2020 Google tended towards a SingleCam + IMU config;
in 2021 Google returned to a MultiCam + DepthCam config.
We all know that the biggest problem for Android devices is calibration. iOS devices don't have this issue (because Apple controls its own hardware and software). Poor calibration leads to errors in 3D tracking, so all your virtual 3D objects may "float" in a poorly tracked scene. If you use a phone without an iToF sensor, there is no miraculous button against bad tracking (and you can't switch to a less accurate version of tracking). The only solution in such a situation is to re-track your scene from scratch. However, tracking quality is much higher when your device is equipped with a ToF camera.
Here are five main rules for good tracking results (if you have no ToF camera):
Track your scene not too fast, not too slow
Track appropriate surfaces and objects
Use a well-lit environment when tracking
Don't track reflective or refractive objects
Horizontal planes are more reliable than vertical ones
SingleCam config vs MultiCam config
One of the biggest problems of ARCore (and of ARKit, too) is energy impact. We understand that the higher the frame rate, the better the tracking results. But the energy impact at 30 fps is HIGH, and at 60 fps it's VERY HIGH. Such an energy impact will quickly drain your smartphone's battery (due to the enormous burden on the CPU/GPU). So just imagine using 2 cameras for ARCore: your phone must process 2 image sequences at 60 fps in parallel, process and store feature points and AR anchors, and at the same time render animated 3D graphics with hi-res textures at 60 fps. That's too much for your CPU/GPU. In such a case, the battery will be dead in 30 minutes and as hot as a boiler. Users don't like that, because it makes for a poor AR experience.

Why does the resolution of my Logitech C920 webcam with OpenCV go down a lot on Windows compared with macOS?

When I use my Logitech C920 on my Mac, I get high quality 1080p resolution. But when I plug it into my Windows laptop, the default is 640x480. I increased the resolution setting to 1080p, but it appears as though it's just upsampling from the default low resolution rather than actually capturing at high quality. How can I increase the quality?
I also tried the default Logitech software, and that does increase the quality, so it seems possible on my Windows machine; however, I need the high quality with OpenCV, not the default software. Thanks!
import cv2
import time

class MyCam:
    def __init__(self):
        self.vs = cv2.VideoCapture(1)                   # camera index 1 (the C920)
        time.sleep(1.5)                                 # give the camera time to initialise
        self.vs.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)     # property id 3 in the original
        self.vs.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)    # property id 4 in the original
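A commonly reported workaround on Windows (an assumption here, not something the asker has confirmed) is that the C920 only delivers 1080p over USB 2.0 when the MJPG codec is requested; the default backend otherwise negotiates uncompressed 640x480. A minimal sketch of requesting MJPG before setting the resolution:

import cv2
import time

vs = cv2.VideoCapture(1, cv2.CAP_DSHOW)                        # DirectShow backend on Windows
vs.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))   # ask for MJPEG before resizing
vs.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
vs.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
time.sleep(1.5)
ok, frame = vs.read()
print(ok, frame.shape if ok else None)                         # expect (1080, 1920, 3) if it worked
vs.release()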

Which is the best camera for image processing?

Now I have two options, a GoPro and an Arduino OV7670 camera module, but if a better camera is available for image processing, I have the budget to buy it (less than $100).
For real-time data processing, go with the Arduino OV7670, because with the GoPro you would need an HDMI-to-Arduino video input. The WiFi preview on the GoPro has a ~1-3 second lag and is very low resolution.

How to apply video processing on the GPU

I have an issue programming for the graphics card.
I wrote a shader for image processing (I use DirectX 11 in combination with SharpDX), but the transfer from the CPU to the GPU is really slow (sometimes about 0.5 seconds). I think there should be a faster way than using Texture2D.FromStream, because when playing a video, every frame of the video also has to be transferred to the GPU (or am I wrong at this point?).
How can I speed up this process?
