How to restore web-cam to default settings after ruining them with OpenCV settings?

I need video from my web-cam. On Anaconda with Python 3.6 and OpenCV 3 it worked fine. I then tried the same code in IDLE with Python 3.6 and OpenCV 4.1.0, and after that it did not work in Anaconda either: I had two black bars at the top and bottom, and I could only see the middle of the image. I tried to modify some OpenCV settings and it only got worse; now I barely see anything in the image, and only if I use strong light. The two bars did not disappear.
import cv2

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_SETTINGS, 0)  # opens the driver's settings dialog

while True:
    ret, frame = capture.read()
    cv2.imshow('video', frame)
    if cv2.waitKey(1) == 27:  # quit on Esc
        break

capture.release()
cv2.destroyAllWindows()
The line capture.set(cv2.CAP_PROP_SETTINGS, 0) opens a small settings dialog, but there are many other properties, like these:
CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file: 0 = start of the film, 1 = end of the film.
CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
CV_CAP_PROP_FPS Frame rate.
CV_CAP_PROP_FOURCC 4-character code of codec.
CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
CV_CAP_PROP_HUE Hue of the image (only for cameras).
CV_CAP_PROP_GAIN Gain of the image (only for cameras).
CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
CV_CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
CV_CAP_PROP_WHITE_BALANCE Currently unsupported
CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
I tried to install some camera drivers from ASUS, but couldn't find any for my model (FX504GE). Is there any combination of these settings, or something else, to restore my web-cam? I really need it right now...
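For what it's worth, the cv2 properties themselves can be snapshotted with capture.get() before experimenting and written back with capture.set() afterwards. A minimal sketch of that idea (it assumes the driver actually honors these properties; unsupported ones simply return 0):

import cv2

PROPS = [cv2.CAP_PROP_BRIGHTNESS, cv2.CAP_PROP_CONTRAST,
         cv2.CAP_PROP_SATURATION, cv2.CAP_PROP_HUE,
         cv2.CAP_PROP_GAIN, cv2.CAP_PROP_EXPOSURE]

capture = cv2.VideoCapture(0)
saved = {prop: capture.get(prop) for prop in PROPS}  # snapshot before experimenting

# ... experiment with capture.set(...) here ...

for prop, value in saved.items():  # write the old values back
    capture.set(prop, value)
capture.release()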

The simple way is to use v4l2-ctl to read all the parameters when you first launch the camera, and record their initial values. After you are done in OpenCV, use v4l2-ctl to set them back.
E.g. for the frame size:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
There are others, like auto zoom, auto exposure and a lot of other things; read them all and set them all.
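A minimal Python sketch of the same idea, assuming v4l2-ctl is installed and the camera is /dev/video0 (both are assumptions; adjust for your setup):

import re
import subprocess

DEVICE = '/dev/video0'  # assumption: pick your camera device

def snapshot_controls():
    # Parse `v4l2-ctl --list-ctrls` output into a {name: value} dict.
    out = subprocess.run(['v4l2-ctl', '-d', DEVICE, '--list-ctrls'],
                         capture_output=True, text=True, check=True).stdout
    controls = {}
    for line in out.splitlines():
        m = re.search(r'^\s*(\w+)\s.*\bvalue=(-?\d+)', line)
        if m:
            controls[m.group(1)] = int(m.group(2))
    return controls

def restore_controls(controls):
    # Write the saved values back with `v4l2-ctl --set-ctrl`.
    for name, value in controls.items():
        subprocess.run(['v4l2-ctl', '-d', DEVICE,
                        '--set-ctrl={}={}'.format(name, value)],
                       check=False)  # some controls are read-only; ignore failures

saved = snapshot_controls()   # before touching anything in OpenCV
# ... run your OpenCV code here ...
restore_controls(saved)       # put the camera back the way it was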

You can use guvcview to do this through a GUI. It has a "hardware defaults" button.
sudo apt-get install guvcview
See:
https://askubuntu.com/questions/205391/reset-webcam-settings-to-defaults

Related

Why is the first frame of my video duplicated at the end, and why is it duplicated at the end of my frame-extraction process using OpenCV?

I am extracting frames from a video using OpenCV. Once the process is finished and all frames are extracted, the code continues to extract the first frame of the video, seemingly infinitely.
This OpenCV code has worked for all of my videos so far except this one, which was shot with a different kind of camera, so I suspect something is different about the video file. Notably, when I play the video in QuickTime, the first frame of the video is shown at the end.
import os
import cv2

cap = cv2.VideoCapture('Our_Video.mp4')
i = 0
while cap.isOpened():
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # seek to frame i before each read
    IsNotEnd, frame = cap.read()
    if not IsNotEnd:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(os.path.join('increment_' + str(i) + '.png'), gray)
    i += 1
cap.release()
cv2.destroyAllWindows()
Clearly, the variable IsNotEnd is never set to False. How can I change that behaviour coming from cap.read()? It clearly seems related to the first frame being shown after the video ends.
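One workaround worth trying (a sketch, not a confirmed fix for this particular file): drop the per-iteration CAP_PROP_POS_FRAMES seek, which some backends handle badly near the end of a stream, and additionally bound the loop by the reported frame count:

import os
import cv2

cap = cv2.VideoCapture('Our_Video.mp4')
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # may be approximate for some containers

i = 0
while cap.isOpened() and i < total:
    ok, frame = cap.read()  # sequential read, no seeking
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(os.path.join('increment_' + str(i) + '.png'), gray)
    i += 1
cap.release()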

How to process a raw 10-bit video signal with OpenCV, to avoid a purple distorted image?

I have created a custom UVC camera which can stream a 10-bit raw RGB (according to the datasheet) sensor image. But I had to pack the 10-bit signal into 16-bit packets and write the descriptors as YUY2 media (UVC does not support raw formats). Now I have a video feed (I opened it with AMCap, VLC and a custom OpenCV app). The video is noisy and purple. I started to process the data with OpenCV and read a bunch of posts about the problem, but now I am a bit confused about how to solve it. I would love to learn more about image formats and processing, but right now I am a bit overwhelmed by the amount of information and need some guidance. Also, based on the sensor datasheet it is a BGGR Bayer grid, and the similar posts describe the problem as a greenish noisy picture, but I get purple pictures.
purple image from the camera
UPDATE:
I used the post mentioned above to get a proper 16-bit one-channel image (grayscale), but I am not able to demosaic the image properly.
import cv2
import numpy as np

# Open the camera (device 1) through the Media Foundation backend.
cap = cv2.VideoCapture(1, cv2.CAP_MSMF)

# Set width and height.
cols, rows = 400, 400
cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
cap.set(cv2.CAP_PROP_FPS, 30)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)
# Format of the Mat objects: set -1 to fetch the undecoded RAW video stream (as Mat 8UC1).
cap.set(cv2.CAP_PROP_FORMAT, -1)

while True:
    # Capture frame-by-frame; frame is a flat uint8 array of rows*cols*2 bytes ([1, 320000]).
    ret, frame = cap.read()
    if not ret:
        break
    frame = frame.reshape(rows, cols * 2)  # reshape to 400x800 (two bytes per pixel)
    frame = frame.astype(np.uint16)  # promote uint8 elements to uint16
    # Combine each byte pair into one 16-bit value (byte swap); the result is 400x400.
    frame = (frame[:, 0::2] << 8) + frame[:, 1::2]
    frame = frame.view(np.int16)

    # Apply some processing for display (this part is just "cosmetics"):
    frame_roi = frame[:, 10:-10]  # crop the left and right columns (not meant to be displayed)
    # frame_roi = cv2.medianBlur(frame_roi, 3)  # clean the dead pixels (just for better viewing)
    frame_roi = frame_roi << 6  # shift the 10-bit data up to the top of the 16-bit range
    # Convert to uint8 with normalization (just for viewing the image).
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
    bgr = cv2.cvtColor(normed, cv2.COLOR_BayerGR2BGR)  # demosaic
    cv2.imshow('normed', normed)  # show the normalized video frame
    cv2.imshow('bgr', bgr)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.imwrite('normed.png', normed)
cv2.imwrite('colored.png', bgr)
cap.release()
cv2.destroyAllWindows()
From this:
I got this:
SECOND UPDATE:
To get more relevant information about the image status, I took some pictures of a different target (another devboard with a camera module; both of them should be blue and the PCB should be orangeish), and I repeated this with the camera's test pattern. I took pictures after every step of the script:
frame.reshape(rows, cols*2) camera target
frame.reshape(rows, cols*2) test pattern
frame.astype(np.uint16) camera target
frame.astype(np.uint16) test pattern
frame.view(np.int16) camera target
frame.view(np.int16) test pattern
cv2.normalize camera target
cv2.normalize test pattern
cv2.COLOR_BayerGR2BGR camera target
cv2.COLOR_BayerGR2BGR test pattern
On the bottom and top of the camera-target pictures there is a pink wrap foil protecting the camera (it looks green in the picture). The vendor did not provide me with documentation for the sensor, so I do not know what the proper test pattern should look like, but I am sure that this one is not correct.
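One thing worth double-checking (an observation, not a verified fix): OpenCV names its Bayer constants after the second row, second and third columns of the pattern, not after the top-left corner, so a sensor documented as BGGR usually needs COLOR_BayerRG2BGR rather than COLOR_BayerGR2BGR. A quick way to settle it is to demosaic the same frame with all four codes and compare:

import cv2

# Load the normalized 8-bit single-channel frame saved by the script above.
normed = cv2.imread('normed.png', cv2.IMREAD_GRAYSCALE)

# Try all four Bayer layouts and keep the one whose colors look right.
for name in ('COLOR_BayerBG2BGR', 'COLOR_BayerGB2BGR',
             'COLOR_BayerRG2BGR', 'COLOR_BayerGR2BGR'):
    demosaiced = cv2.cvtColor(normed, getattr(cv2, name))
    cv2.imwrite('demosaic_' + name + '.png', demosaiced)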

imagemagick cropping image creates jagged edges or saw-tooth shapes

The above pic (it looks like a zoomed one) is from the first-level conversion from the 1.AI file to the 1_cropped.AI file, which is what I get after cropping. I don't resize during cropping; it gets resized automatically.
I am trying to crop an image. It seems that without +repage ImageMagick is unable to crop. The problem is that a simple crop creates the jagged lines you can see in the snapshot taken from a portion of the image.
How can I remove this? Somewhere in an SO post I found a recommendation to use "Gaussian blur", but I didn't find a proper command for it. Many thanks! I am doing just the crop and no resizing.
Original: Due to copyright I can't show the entire image, but below is one section:
I am looking into http://www.imagemagick.org/Usage/antialiasing/ now, but have been unable to smooth the 'staircase' or 'jaggies' so far.
UPDATE from the comments:
Yes, the input is AI and the output is almost every format: AI/SVG/PNG/GIF/JPEG/BMP. For smaller-resolution files such as PNG/GIF I don't get those jagged shapes. I tried turning on anti-aliasing, blurring and Gaussian blurring, but no luck. I think the repaging zooms the image, which I don't need; is it possible to set the canvas somehow so the original resolution is kept intact when converting from AI to AI? Yes, initially I convert AI to AI after cropping and then feed the converted AI for further processing. The stair-stepping appears from the first-level AI-to-AI conversion itself.
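For reference, the remedy usually suggested for stair-stepped edges when rasterizing vector input is supersampling: render the AI at a higher density, crop, then scale back down so the edges are averaged smooth. A hedged sketch wrapping the ImageMagick CLI (file names and crop geometry are placeholders):

import subprocess

subprocess.run([
    'convert',
    '-density', '288',           # 4x the default 72 dpi; must come before the input
    'input.ai',
    '-crop', '800x600+100+100',  # placeholder crop geometry
    '+repage',                   # drop the virtual-canvas offset left by -crop
    '-resize', '25%',            # scale back down: 4x supersampling
    'cropped.png',
], check=True)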

Crop Custom Shape in ImageJ or equivalent (TIFF video file)

I have a .tiff video file with growing fibers that look like the image below
Now imagine that this fiber constantly grows and shrinks in a straight line. I'd like to somehow crop out the region of the video that contains just the fiber, with, for example, a black background image.
When I play the video, I'd like to see just the growing-fiber region of the video, with a black background everywhere else.
Question: Is there a way to perform a "custom" crop of irregularly shaped objects in ImageJ?
If ImageJ can't do this sort of image processing, any other software options are welcome.
Thanks for any help
Yes, you can do this in ImageJ. If you can find a threshold method that captures your fiber, you can turn that into a selection (ROI), and then Clear Outside to turn everything else black:
Image > Adjust > Threshold and choose the threshold, or use one of the automatic methods. But don't apply the threshold!
Edit > Selection > Create Selection (turns the thresholded area into an ROI)
Edit > Clear Outside (makes the background black -- assuming you have set your background color to black)
If you want to make the window smaller, you can do Image > Crop with the selection active. This will crop the image to the rectangular bounding box of the ROI. But this size will vary according to the size of the fiber. So you might want to do this when the fiber is at its largest.
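For anyone scripting this instead of using the menus, here is a rough equivalent of the same threshold / clear-outside / crop pipeline in Python with OpenCV (a sketch; the file name is a placeholder and Otsu stands in for ImageJ's automatic threshold methods):

import cv2
import numpy as np

frame = cv2.imread('fiber_frame.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Threshold the fiber (Otsu picks the cutoff automatically).
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# "Clear Outside": keep the fiber pixels, black out everything else.
cleared = cv2.bitwise_and(frame, frame, mask=mask)

# Crop to the rectangular bounding box of the thresholded region.
ys, xs = np.nonzero(mask)
cropped = cleared[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
cv2.imwrite('fiber_cropped.png', cropped)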

How to show a stereo camera with the Oculus Rift?

I use OpenCV to show the left and right images from a stereo camera in a new window. Now I want to see the same thing on the Oculus Rift, but when I connect the Oculus, the image doesn't become the characteristic circular image suited to the lenses inside the Oculus...
Do I need to process the image myself? Isn't it automatic?
This is the code that shows the window:
cap.read(frame);   // cap = camera 1 & cap2 = camera 2
sz1 = frame.size();
// second camera
cap2.read(frame2);
sz2 = frame2.size();
cv::Mat bothFrames(sz2.height, sz2.width + sz1.width, CV_8UC3);
// Move right boundary to the left.
bothFrames.adjustROI(0, 0, 0, -sz1.width);
frame2.copyTo(bothFrames);
// Move the left boundary to the right, right boundary to the right.
bothFrames.adjustROI(0, 0, -sz2.width, sz1.width);
frame.copyTo(bothFrames);
// restore original ROI.
bothFrames.adjustROI(0, 0, sz2.width, 0);
cv::imencode(".jpg", bothFrames, buf, params);
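For comparison, the same side-by-side composition that the adjustROI calls build up is a short NumPy operation in Python (a sketch; the device indices are placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # camera 1
cap2 = cv2.VideoCapture(1)  # camera 2
ok1, frame = cap.read()
ok2, frame2 = cap2.read()
if ok1 and ok2:
    both = np.hstack((frame2, frame))  # frame2 on the left, frame on the right, as above
    ok, buf = cv2.imencode('.jpg', both)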
I have another problem. I'm trying to add the OVR library to my code, but I get an "ambiguous symbol" error for System, because some class inside the OVR library uses the same namespace... This error arises when I add:
#include "OVR.h"
using namespace OVR;
-.-"
The SDK is meant to perform lens distortion correction, chromatic aberration correction (different refractive indices for different colors of light cause color fringing in the image without correction), time warp, and possibly other corrections in the future. Unless you have a heavyweight graphics pipeline that you're hand-optimizing, it's best to use the SDK rendering option.
You can learn about the SDK and different kinds of correction here:
http://static.oculusvr.com/sdk-downloads/documents/Oculus_SDK_Overview.pdf
It also explains how the distortion corrections are applied. The SDK is open source so you could also just read the source for a more thorough understanding.
To fix your namespace issue, just don't switch to the OVR namespace! Every time you refer to something from the OVR namespace, prefix it with OVR:: (e.g. OVR::Math); this is, after all, the whole point of namespaces :p
