Why is the first frame of my video duplicated at the end of my frame extraction process using OpenCV?

I am extracting frames from a video using OpenCV. Once all frames have been extracted, the code keeps extracting the first frame of the video, seemingly forever.
This OpenCV code has worked for all of my videos so far except this one, which was shot with a different kind of camera, so I suspect something is different about the video file. Notably, when I play the video in QuickTime, the first frame of the video is shown again at the end.
import os
import cv2

cap = cv2.VideoCapture('Our_Video.mp4')
i = 0
while cap.isOpened():
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)
    IsNotEnd, frame = cap.read()
    if IsNotEnd == False:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(os.path.join('increment_' + str(i) + '.png'), gray)
    i += 1
cap.release()
cv2.destroyAllWindows()
Clearly, the variable IsNotEnd is never being set to False. How can I get cap.read() to return False at the end of the video? It seems related to the first frame being shown again after the video ends.
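One workaround sketch (not the asker's code): instead of relying on cap.read() returning False, bound the loop by the frame count the container reports and read sequentially rather than seeking with CAP_PROP_POS_FRAMES, so a seek past the last frame can never hand back the first frame again. This assumes CAP_PROP_FRAME_COUNT is reasonably accurate for the file, which is not guaranteed for every container.
import cv2

cap = cv2.VideoCapture('Our_Video.mp4')
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))  # may be an estimate for some containers

i = 0
while i < frame_count:
    ok, frame = cap.read()  # sequential read, no explicit seeking
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite('increment_' + str(i) + '.png', gray)
    i += 1

cap.release()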

Related

How to process a raw 10-bit video signal with OpenCV, to avoid a purple distorted image?

I have created a custom UVC camera which streams a 10-bit raw image from an RGB sensor (according to the datasheet). I had to pack the 10-bit signal into 16-bit packets and write the descriptors as YUY2 media (UVC does not support raw formats). I now have a video feed (I opened it with AMCap, VLC, and a custom OpenCV app), but the video is noisy and purple. I started to process the data with OpenCV and read a bunch of posts about the problem, but I am now a bit confused about how to solve it. I would love to learn more about image formats and processing, but I am a bit overwhelmed by the amount of information and need some guidance. Also, based on the sensor datasheet it is a BGGR Bayer grid, and similar posts describe the problem as a greenish noisy picture, whereas I get purple pictures.
purple image from the camera
UPDATE:
I used the post mentioned above to get a proper 16-bit single-channel (grayscale) image, but I am not able to demosaic the image properly.
import cv2
import numpy as np

# Open the camera (device 1) with the Media Foundation backend
cap = cv2.VideoCapture(1, cv2.CAP_MSMF)

# Set width and height
cols, rows = 400, 400
cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
cap.set(cv2.CAP_PROP_FPS, 30)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

# Fetch undecoded RAW video streams
cap.set(cv2.CAP_PROP_FORMAT, -1)  # Format of the Mat objects. Set value -1 to fetch undecoded RAW video streams (as Mat 8UC1)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()  # read into an np array of shape [1, rows*cols*2] (two bytes per pixel)
    # print(frame.shape)
    if not ret:
        break

    # Reassemble the raw bytes into 16-bit pixels.
    frame = frame.reshape(rows, cols*2)             # Reshape to 400x800 bytes
    frame = frame.astype(np.uint16)                 # Convert uint8 elements to uint16 elements
    frame = (frame[:, 0::2] << 8) + frame[:, 1::2]  # Combine byte pairs into 16-bit values (first byte is the high byte); the result is 400x400
    frame = frame.view(np.int16)

    # Apply some processing for display (this part is just "cosmetics"):
    frame_roi = frame[:, 10:-10]                    # Crop the left and right columns (not meant to be displayed)
    # frame_roi = cv2.medianBlur(frame_roi, 3)      # Clean the dead pixels (just for better viewing)
    frame_roi = frame_roi << 6                      # Shift the 10-bit data into the upper bits of the 16-bit range
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC3)  # Convert to uint8 with normalization (just for viewing)
    gray = cv2.cvtColor(normed, cv2.COLOR_BAYER_GR2BGR)

    cv2.imshow('normed', normed)  # Show the normalized video frame
    cv2.imshow('rgb', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.imwrite('normed.png', normed)
cv2.imwrite('colored.png', gray)
cap.release()
cv2.destroyAllWindows()
From this:
I got this:
SECOND UPDATE:
To get more relevant information about the state of the image, I took some pictures of a different target (another dev board with a camera module; both of them should be blue and the PCB should be orangeish), and I repeated this with the camera's test pattern. I took pictures after every step of the script:
frame.reshaped(row, cols*2) camera target
frame.reshaped(row, cols*2) test pattern
frame.astype(np.uint16) camera target
frame.astype(np.uint16) test pattern
frame.view(np.int16) camera target
frame.view(np.int16) test pattern
cv2.normalize camera target
cv2.normalize test pattern
cv2.COLOR_BAYER_GR2BGR camera target
cv2.COLOR_BAYER_GR2BGR test pattern
On the top and bottom of the camera-target pictures there is a pink wrap foil protecting the camera (it looks green in the picture). The vendor did not provide me with documentation for the sensor, so I do not know what the proper test pattern should look like, but I am sure this one is not correct.
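A sketch that might help narrow this down (not the asker's code): OpenCV's four COLOR_Bayer*2BGR conversion codes correspond to the four possible 2x2 offsets of the mosaic, and the naming convention is easy to mix up with the one a datasheet uses, so demosaicing the same 16-bit frame with every code and comparing the outputs is a quick sanity check. Here raw16 stands for the 400x400 uint16 frame produced by the byte-combining step in the script above, and the output file names are made up.
import cv2

def try_all_bayer_patterns(raw16):
    # Demosaic a single-channel 16-bit Bayer frame with every pattern offset.
    codes = {
        'BG': cv2.COLOR_BayerBG2BGR,
        'GB': cv2.COLOR_BayerGB2BGR,
        'RG': cv2.COLOR_BayerRG2BGR,
        'GR': cv2.COLOR_BayerGR2BGR,
    }
    results = {}
    for name, code in codes.items():
        bgr16 = cv2.cvtColor(raw16, code)  # demosaic while still 16-bit
        bgr8 = cv2.normalize(bgr16, None, 0, 255,
                             cv2.NORM_MINMAX, cv2.CV_8U)  # scale for display only
        results[name] = bgr8
        cv2.imwrite('bayer_' + name + '.png', bgr8)
    return results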

OpenCV BackgroundSubtractor Thinks A Busy Foreground Is The Background

I am using background subtraction to identify and control items on a conveyor belt.
The work is very similar to the typical tracking cars on a highway example.
I start out with an empty conveyor belt so the computer knows the background in the beginning.
But after the conveyor has been full of packages for a while the computer starts to think that the packages are the background. This requires me to restart the program from time to time.
Does anyone have a workaround for this?
Since the conveyor belt is black I am wondering if it maybe makes sense to toss in a few blank frames from time to time or if perhaps there is some other solution.
Much Thanks
# After experimentation, these values seem to work best.
backSub = cv2.createBackgroundSubtractorMOG2(history=5000, varThreshold=1000, detectShadows=False)

while True:
    return_value, frame = vid.read()
    if frame is None:
        print('Video has ended or failed, try a different video format!')
        break

    ## [Gaussian blur helps to remove noise. These settings seem to remove reflections from rollers.]
    blur = cv2.GaussianBlur(frame, (0, 0), 5, 0, 0)
    ## [End of: Gaussian blur helps to remove noise.]

    ## [Remove the background and produce a grayscale image]
    fgMask = backSub.apply(blur)
    ## [End of: Remove the background and produce a grayscale image]
You say you're giving the background subtractor some pure background initially.
When you're done with that and in the "running" phase, call apply() specifically with learningRate = 0. That ensures the model won't be updated.
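A minimal sketch of that suggestion, reusing the names from the question (the input file and warm-up length are made up): let the model learn only while the belt is known to be empty, then freeze it by passing learningRate=0 to apply().
import cv2

vid = cv2.VideoCapture('conveyor.mp4')  # hypothetical input video
backSub = cv2.createBackgroundSubtractorMOG2(history=5000, varThreshold=1000, detectShadows=False)

WARMUP_FRAMES = 500  # hypothetical length of the empty-belt learning phase
frame_idx = 0

while True:
    return_value, frame = vid.read()
    if frame is None:
        break
    blur = cv2.GaussianBlur(frame, (0, 0), 5, 0, 0)

    if frame_idx < WARMUP_FRAMES:
        # Empty conveyor: let the subtractor learn the background normally.
        fgMask = backSub.apply(blur)
    else:
        # Belt in use: learningRate=0 freezes the model, so packages
        # sitting on the belt are never absorbed into the background.
        fgMask = backSub.apply(blur, learningRate=0)
    frame_idx += 1

vid.release()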

How to restore web-cam to default settings after ruining them with OpenCV settings?

I need video from a web-cam. On Anaconda with Python 3.6 and OpenCV 3 it worked fine. I then tried the same code in IDLE with Python 3.6 and OpenCV 4.1.0, and it did not work the way it had in Anaconda: I had two black bars at the top and bottom, and I could only see the middle of the image. I tried to modify some OpenCV settings and it only got worse; now I can barely see anything in the image unless I use a strong light. The two bars have not disappeared.
import cv2

capture = cv2.VideoCapture(0)
capture.set(cv2.CAP_PROP_SETTINGS, 0)
while True:
    ret, frame = capture.read()
    cv2.imshow('video', frame)
    if cv2.waitKey(1) == 27:
        break
capture.release()
cv2.destroyAllWindows()
The line capture.set(cv2.CAP_PROP_SETTINGS, 0) opens a small settings dialog, but there are many other properties, like these:
CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file
CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
CV_CAP_PROP_FPS Frame rate.
CV_CAP_PROP_FOURCC 4-character code of codec.
CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
CV_CAP_PROP_HUE Hue of the image (only for cameras).
CV_CAP_PROP_GAIN Gain of the image (only for cameras).
CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
CV_CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
CV_CAP_PROP_WHITE_BALANCE Currently unsupported
CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
I tried to install some camera drivers from ASUS, but couldn't find any for my model (FX504GE). Is there any combination of these settings, or something else, that will restore my web-cam? I really need it.
The simple way is to use v4l2-ctl to read all the parameters when you launch the camera and record the initial values. After you are done in OpenCV, use v4l2-ctl to set them back.
For example, the frame size:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
There are others, like auto zoom, auto exposure and many more; read them all and set them all.
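A sketch of that record-then-restore idea driven from Python (assumptions: a Linux machine with v4l2-ctl installed, the webcam at /dev/video0, and control names and values taken from whatever --list-ctrls prints for your particular camera):
import subprocess

DEVICE = '/dev/video0'  # hypothetical device node

# Before experimenting: dump every control and its current value to a file.
with open('cam_defaults.txt', 'w') as f:
    subprocess.run(['v4l2-ctl', '-d', DEVICE, '--list-ctrls'], stdout=f, check=True)

# ... run the OpenCV code that changes settings ...

# Afterwards: set a control back to the value recorded in cam_defaults.txt,
# e.g. brightness (the control name and value depend on your camera).
subprocess.run(['v4l2-ctl', '-d', DEVICE, '--set-ctrl=brightness=128'], check=True)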
You can use guvcview to do this through a GUI. It has a "hardware defaults" button.
sudo apt-get install guvcview
See:
https://askubuntu.com/questions/205391/reset-webcam-settings-to-defaults

SceneKit background too bright when applying camera frame MTLTexture as a background

I'm trying to convert a camera frame to an MTLTexture and then use it as a SceneKit background. The texture is created successfully and looks as it should when I inspect it. However, when I set the following:
scene.background.contents = texture
It appears too bright / washed out. Any ideas how to fix this?
Update 1: Gist here
It turns out what was causing the washed out effect in the texture was the pixel format. It should be .bgra8Unorm_srgb (not .bgra8Unorm).

Using BlobTrackerAuto to track people in computer vision application

I am currently trying to develop a system that tracks people in a queue using EmguCV (an OpenCV wrapper). I started by running and understanding the VideoSurveilance example that's in the Emgu package I downloaded. Here is my code based on the example:
private static void processVideo(string fileName)
{
    Capture capture = new Capture(fileName);
    MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX,
        1.0, 1.0);
    BlobTrackerAuto<Bgr> tracker = new BlobTrackerAuto<Bgr>();

    // I'm using a class that I implemented for foreground segmentation
    MyForegroundExtractor fgExtractor = new MyForegroundExtractor();

    Image<Bgr, Byte> frame = capture.QueryFrame();
    fgExtractor.initialize(frame);

    while (frame != null)
    {
        Image<Gray, Byte> foreground = fgExtractor.getForegroundImg(frame);
        tracker.Process(frame, foreground);

        foreach (MCvBlob blob in tracker)
        {
            if (isPersonSize(blob))
            {
                frame.Draw((Rectangle)blob, new Bgr(0, 0, 255), 3);
                frame.Draw(blob.ID.ToString(), ref font,
                    Point.Round(blob.Center), new Bgr(255.0, 255.0, 255.0));
            }
        }

        CvInvoke.cvShowImage("window", frame);
        CvInvoke.cvWaitKey(1);
        frame = capture.QueryFrame();
    }
}
The above code is meant to process each frame of an AVI video and show the processed frame with red rectangles around each person in the scene. I didn't like the results I was getting with the IBGFGDetector<Bgr> class that is used in the VideoSurveilance example, so I am trying to use my own foreground detector, using Emgu's functions such as CvInvoke.cvRunningAvg(), CvInvoke.cvAbsDiff(), CvInvoke.cvThreshold() and cvErode()/cvDilate(). I have a few issues:
The video starts with a few people already in the scene. I am not getting the blobs corresponding to the people that are in the scene when the video starts.
Sometimes I "lose" a person for a few frames: I had the red rectangle drawn around a person for several seconds/frames and it disappears and after a while is drawn again with a different ID.
As you can see from the sample code, I check if the blob may be a person checking its height and width (isPersonSize() method), and draw the red rectangle only in the ones that pass in the test. How can I remove the ones that are not person sized?
I want to measure the time a person stays in the scene. What's the best way to know when a blob disappeared? Should I store the IDs of the blobs that I think correspond to people in an array and at each loop check if each one is still there using tracker.GetBlobByID()?
I think I am getting better results if I don't process every frame in the loop. I added a counter variable and an if-statement to process only every third frame:
if (i % 3 == 0)
    tracker.Process(frame, foreground);
I added the if-statement because the program execution was really slow, but once I did, I was able to track people that I could not track before.
To summarize, I would really appreciate it if someone more experienced with OpenCV/EmguCV could tell me whether BlobTrackerAuto is a good approach for tracking people, and could help me with the issues above. I get the feeling that I am not taking advantage of the tools EmguCV can provide.
