GStreamer Analog Video Capturing

I am using a Sensoray Model 1012 frame grabber.
My setup takes the screen output over HDMI and converts it to an NTSC CVBS signal, which I capture on a Toradex iMX8 CPU via the Model 1012.
My problem is that when something moves horizontally on the screen, it looks torn and jagged at the edges (comb-like artifacts). I am uploading an example video. The problem does not occur when I connect the analog video directly to my JVC display.
I am using GStreamer, and here is my pipeline:
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video6 ! 'video/x-raw, format=(string)UYVY, width=720, height=480, framerate=30/1' ! interlace field-pattern=2 ! videoconvert ! autovideosink

The problem is most likely related to the interlace element. With interlacing, half of the lines of your image come from a previous frame, so when something moves in the scene it looks weird, like ghosting. You should be able to solve it simply by removing interlace:
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video6 ! 'video/x-raw, format=(string)UYVY, width=720, height=480, framerate=30/1' ! videoconvert ! autovideosink
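If the capture itself delivers interlaced fields and the combing is still there after that, you could also try a deinterlace element instead (an untested sketch, same device and caps as above):
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video6 ! 'video/x-raw, format=(string)UYVY, width=720, height=480, framerate=30/1' ! videoconvert ! deinterlace ! videoconvert ! autovideosink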

Related

Is it possible to remove the camera frame but still continue detection in the command prompt console?

I am working with YOLOv4 for detection through an IP camera. I have a GUI for camera control, so I don't want the camera frame that shows the detected objects. However, I want the detected objects and their confidence percentages to be shown in the Command Prompt console. Is it possible to do that? If yes, please suggest a way. Thank you.
Add -dont_show at the end of the command.
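For example, with the AlexeyAB darknet build, something like this (the model files and stream URL are placeholders for your own) runs detection without a window while the detected objects and confidence percentages are still printed to the console:
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights rtsp://user:pass@camera-ip/stream -dont_show -ext_output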

VLC Scene Filter for Stream

Hi, I am receiving a stream in VLC from another PC. The stream works on Ubuntu 16.04, but I also want to use the scene filter, which saves frames to a folder. I am trying to use the scene filter in VLC but it is not working; any suggestions will be greatly appreciated. Many thanks in advance.
[Image: scene filter settings]

How to add a beat and bass effect to a video using an ffmpeg command?

I want a beat effect on a video and I am using an ffmpeg command for it. I used the command below, intending to loop between black-and-white and the original colors every 2 seconds, but it does not work; it only creates a black-and-white video: ffmpeg -i addition.mp4 -vf hue=s=0 output.mp4
So please suggest a solution.
I want to make a video like youtube.com/watch?v=7fG7TVKGcqI. Please suggest something.
Thanks in advance.
ffmpeg -i addition.mp4 -vf hue=s=0 output.mp4 will, as you said, just create a black-and-white video. -vf specifies the video filters, and hue=s=0 sets the saturation to 0.
As far as I know, this kind of effect is too advanced for a command-line application unless you already have a lot of knowledge about it. I'd recommend using a graphical video editor. I use Shotcut and I like it, but I'm not sure if you can do this in it.
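That said, the black-and-white/color toggling on its own can be roughly approximated on the command line, because the hue filter accepts per-frame expressions with the timestamp t. A sketch assuming a fixed 2-second on/off cycle (it will not follow the actual beat of the music):
ffmpeg -i addition.mp4 -vf "hue=s='if(lt(mod(t,4),2),0,1)'" output.mp4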

GStreamer iOS - possible streaming & glimagesink blocking issue when returning to corresponding UITabBarController tab

Running Xcode 12.0 beta 6 on a Mac with Catalina 10.15.6, connected to an iPhone 8+ on iOS 13.5.1.
GStreamer version: 1.16.1
I've created an app with RTSP video streaming as one of its features; the stream gets displayed by a GStreamer pipeline on a tab view in a dynamically created UI.
When switching to the streaming tab and starting the stream the first time, it starts, GStreamer and iOS are fine, and the stream is viewable.
After switching to another tab where the UI Views and window handles go out of focus, I throw away samples coming in from the decoder to an appsink element in the pipeline (described more below).
Switching back to the streaming tab, I resume the stream, and I can tell there is video data coming in (described below), but it is not being displayed.
There are 2 pipelines for the entire stream, separated by appsink and appsrc.
I can tell the video data is coming in after switching back to the tab because the appsink "new-sample" callback I registered is getting called.
Also, in the callback, pushing the samples with gst_app_src_push_sample(...) to the appsrc element returns without error.
Here's an example of what the pipelines look like:
rtspsrc name=rtspsrc01 location=rtsp://192.168.0.25:7768/stream latency=25 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videobox left=0 right=0 top=0 bottom=0 ! tee name=t2_01 ! queue ! videoscale ! glimagesink name=thumb_sink01 t2_01. ! appsink name=appsink01 sync=false
appsrc name=appsrc01 max-latency=10 ! videoscale ! glimagesink name=viewer_sink01 sync=false
The glimagesink element named "thumb_sink01" is a thumbnail view of the stream displayed on the tab and "appsink01" goes to the "new-sample" callback.
The "appsrc01" element on the second pipeline is receiving the sample from the gst_app_src_push_sample(...) call and goes to a larger UIImageView window on the same tab.
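Roughly, the "new-sample" callback bridging appsink01 and appsrc01 does something like this (a simplified sketch, not the exact code; it needs <gst/app/gstappsink.h> and <gst/app/gstappsrc.h>):
static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data) {
    // user_data carries appsrc01 from the second pipeline.
    GstAppSrc *appsrc = GST_APP_SRC(user_data);
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (sample == NULL)
        return GST_FLOW_ERROR;
    // (When the streaming tab is not visible, the sample is simply unreffed here instead.)
    GstFlowReturn ret = gst_app_src_push_sample(appsrc, sample);
    // push_sample does not take ownership, so drop our reference.
    gst_sample_unref(sample);
    return ret;
}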
I can see memory consumption growing after switching back to the streaming tab as well, so it appears one of the elements in the second pipeline is blocking for some reason. I've verified that both pipelines are in the GST_STATE_PLAYING state as well.
I've tried quite a few other things, such as verifying that the views GStreamer is rendering to are valid, and even this chunk of code when switching back to the streaming tab and resuming the stream:
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(self->video_sink), (guintptr) (id) self->ui_video_view);
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(self->thumb_sinks[0]), (guintptr) (id) self->ui_thumb_views[0]);
gst_video_overlay_prepare_window_handle(GST_VIDEO_OVERLAY(self->thumb_sinks[0]));
gst_video_overlay_expose(GST_VIDEO_OVERLAY(self->thumb_sinks[0]));
gst_video_overlay_prepare_window_handle(GST_VIDEO_OVERLAY(self->video_sink));
gst_video_overlay_expose(GST_VIDEO_OVERLAY(self->video_sink));
I've been assuming that the issue is in the glimagesink element, because pushing the sample returns, which indicates to me that appsrc accepted it, and there is no indication of running out of buffers or dropping samples. It also feels unlikely that videoscale is the culprit, but I've been wrong before.
Maybe there's something goofy going on with the glimagesink name=thumb_sink01 element? I haven't really looked at that yet.
Appreciate any feedback anyone has.
Doug
My last comment, about dropping glimagesink and writing directly to the window handle, looks like the way to go, and it seems to work better overall.
Taking the raw RGB decoded frame from the appsink sample's memory buffer received in the "new-sample" callback, creating a UIImage from it, and setting UIImageView.image to that UIImage works.
Some pseudo-code samples for the conversion, in case they are useful to others (although plenty of examples are available online):
// Create this once before passing any image frames.
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
data->img_ctx = CGBitmapContextCreate(img_buff, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// Use this for converting decoded RGB image frames to UIImages.
dispatch_async(dispatch_get_main_queue(), ^{
    CGImageRef imgRef = CGBitmapContextCreateImage(data->img_ctx);
    data->the_img = [[UIImage alloc] initWithCGImage: imgRef];
    CGImageRelease(imgRef);
    [data->ui_video_view setMyImage: (UIImage *)data->the_img];
    [data->ui_video_view setNeedsDisplay];
});

// Edit: I had problems with subclassing between Objective-C & Swift, so I had to create this setter function to work around it.
-(void) setMyImage: (UIImage *) img {
    super.image = img;
}
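And a rough sketch (again, not the exact code) of how the decoded RGBA bytes get from the appsink sample into img_buff before the dispatch_async above; the RGBA / width * 4 layout matches the CGBitmapContext setup:
// Inside the "new-sample" callback: copy the decoded frame into img_buff.
GstSample *sample = gst_app_sink_pull_sample(appsink);
if (sample != NULL) {
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        memcpy(img_buff, map.data, MIN(map.size, (gsize)(width * height * 4)));
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
}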

OpenCV: Making GPU pyrlk_optical_flow.cpp work on video input

It seems that the pyrlk_optical_flow.cpp sample code (opencv\samples\gpu) only works on two still images.
If so, do any of you know of examples of how to convert the code from still image input to streaming video or webcam input?
Any help is appreciated. Thank you.
