From a video file (.mp4) to live video (USB camera) - OpenCV

Recently I made a project with OpenCV that detects cars; it takes a video file as input and writes another video file as output.
I want, with the help of a Raspberry Pi or similar, to switch the input from a video file (.mp4) to a live webcam feed (USB camera or similar) and show the processed video on my screen in real time. Basically, I want to mount a Raspberry Pi (or similar) with a camera in my car and display the detections live on a screen.
How do I do that? Is there any documentation?
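In OpenCV the change is mostly a matter of where VideoCapture reads from and where the result goes. A minimal sketch follows, assuming your existing detection code can be wrapped in a function; detect_cars() here is a hypothetical stand-in for it, not a name from the original project:

import cv2

def detect_cars(frame):
    # Hypothetical stand-in for your existing detection code,
    # taking a frame and returning an annotated frame.
    return frame

# 0 selects the first USB camera instead of a file path.
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Display the result on screen instead of writing it with VideoWriter.
    cv2.imshow("detections", detect_cars(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

On a Raspberry Pi you will probably also want to request a lower capture resolution with cap.set(cv2.CAP_PROP_FRAME_WIDTH, ...) to keep the frame rate usable.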

Related

Darknet change video stream resolution

I am trying to use my Raspberry Pi camera module to stream video to Darknet tiny-yolo-v4 using
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights
The resolution is set to the camera's maximum, which is far too much for the Pi to handle. However, I am unable to find a way to change the resolution of the stream. I would be very grateful for any suggestions!
P.S.: Using a UDP/RTSP stream as input and setting the resolution there does not work either, for whatever reason ("Video-stream stopped" for RTSP, no image shown for UDP).
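The demo tool does not appear to expose a capture-resolution option, so one workaround (a sketch only; the 640x480 capture size and 416x416 network input are assumptions, and I have not profiled this on a Pi) is to do the capture yourself at a reduced resolution and run the same cfg/weights through OpenCV's dnn module instead of the demo binary:

import cv2

# Ask the camera driver for a lower capture resolution; whether the
# request is honored depends on the camera/driver.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# Load the same Darknet cfg/weights from the command above.
net = cv2.dnn.readNetFromDarknet("cfg/yolov4.cfg", "yolov4.weights")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The network input size should match the width/height in your cfg;
    # 416x416 is typical for the tiny models.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    # ... parse `outputs` into boxes and scores as usual ...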

How to merge stereo video?

I recently purchased a stereo USB camera to capture some footage through my AR headset. Now the camera records through 2 lenses and gives 2 outputs: a left and a right video feed. My goal is to combine the left and right video feed into a single mono output.
Here is a picture of the camera and the link to the exact model.
Here is a screenshot of the video output:
As you can see, I want to somehow interlace those 2 videos into 1 (I guess similar to an anaglyph) so it can be viewed on a 2D screen (without a VR headset). Does anyone know any software that can do this? Or has anyone written a custom script?
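For the anaglyph-style merge, a short OpenCV sketch is below. It assumes the camera shows up as a single device delivering the two views side by side in one frame; if the left and right feeds arrive as two separate devices, open two VideoCapture objects instead:

import cv2

# Assumption: one device, left and right views side by side per frame.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    left, right = frame[:, : w // 2], frame[:, w // 2 :]

    # Red/cyan anaglyph: red channel from the left eye, green and blue
    # from the right (OpenCV frames are BGR, so index 2 is red).
    anaglyph = right.copy()
    anaglyph[:, :, 2] = left[:, :, 2]

    cv2.imshow("anaglyph", anaglyph)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

Averaging the two halves instead, e.g. cv2.addWeighted(left, 0.5, right, 0.5, 0), would give a plain mono image rather than a 3D effect.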

DSLR Canon VideoCapture in OpenCV

I need to capture frames from a DSLR camera. I know that I can use
VideoCapture cap(0);
to capture from the default webcam. If I connect the camera with USB and run the code, it seems it cannot find the camera.
What should I do to capture from the DSLR?
In general, I have found getting OpenCV to work with anything besides a basic webcam almost impossible. In theory, I think it uses the UVC driver, but I have had almost no luck getting it to read anything else. One thing you can try is VLC: see if you can capture a video stream from your camera with it. If you can, you might get lucky and figure out which camera or video device the DSLR actually is.
If your DSLR has a development SDK, you may be able to capture frames through its interface and then hand them to OpenCV for processing, as sketched below. I do this in a project: I use a third-party SDK to find and control the camera, and then I move the video data into OpenCV (EmguCV) for processing.
Doug
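To illustrate the SDK route Doug describes: once the vendor SDK hands you a raw frame buffer, wrapping it for OpenCV is only a few lines. This is a sketch; sdk.grab() and the packed BGR layout are hypothetical stand-ins for whatever the vendor's API actually returns:

import cv2
import numpy as np

def frame_from_sdk_buffer(buf, width, height):
    # Wrap a raw, packed BGR byte buffer from a hypothetical camera SDK
    # into an array that OpenCV functions accept directly.
    return np.frombuffer(buf, dtype=np.uint8).reshape((height, width, 3))

# Hypothetical usage; `sdk.grab()` stands in for the vendor call that
# fetches one frame.
# frame = frame_from_sdk_buffer(sdk.grab(), 1920, 1080)
# edges = cv2.Canny(frame, 100, 200)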

How to change the Vuforia video source with a custom camera

I would like to replace the camera video source with a custom camera stream using Vuforia and Unity:
Take the video stream from the camera (Android camera or webcam)
Improve contrast, brightness, or other properties manually (for example through OpenCV) and add elements or another pattern that could be optimally recognized by Vuforia
Resend the modified video stream to Unity 3D and have it detected by Vuforia
Is it possible?
Is there another way?
As far as I know, this is not possible. Vuforia takes its input directly from the camera and processes it; the most you can do is alter some of the camera settings (if you want to explore that, read about the Vuforia advanced camera API), but that is not enough for your requirements.
Your only option, if you must process the input video, is to handle the detection and tracking yourself without Vuforia (for example, using OpenCV), which is obviously not so easy...
You can use any software for faking the camera, like http://perfectfakewebcam.com/.
Just prepare your video and feed it to the fake-webcam software, then change Vuforia's camera device in Unity to the fake webcam.
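If you want OpenCV in the loop rather than a prepared video file, the same fake-webcam idea can be scripted. Below is a sketch using the third-party pyvirtualcam package, which assumes a virtual-camera backend is installed (v4l2loopback on Linux, the OBS virtual camera on Windows/macOS); the contrast/brightness values are arbitrary examples:

import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)  # the real camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera frame")

h, w = frame.shape[:2]
with pyvirtualcam.Camera(width=w, height=h, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Example preprocessing: boost contrast and brightness before
        # Vuforia ever sees the frame.
        frame = cv2.convertScaleAbs(frame, alpha=1.3, beta=20)
        # pyvirtualcam expects RGB; OpenCV delivers BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()

Vuforia (or any other app) can then be pointed at the virtual camera device that pyvirtualcam creates.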

Send images to Kinect as input instead of camera feed

I want to send my own images to a Kinect SDK, either OpenNI or the Microsoft Kinect SDK, so that it tells me the position of a user's hand, head, and so on.
I don't want to use the Kinect's camera feed. The images come from a paper; I want to do some image processing on them, and I need to work on exactly those images, so I cannot use my own body as input to the Kinect camera.
I don't mind whether it is Microsoft's Kinect SDK or OpenNI; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's.
Is it possible? If yes, how can I do it?
I have the same question. I want a Kinect face-detection app to read images from the hard drive and return the Animation Units of the recognized face, so that I can train a classifier for facial emotion recognition using Animation Units as input features.
Thanks,
Daniel.
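One partial route worth noting, as a sketch with several assumptions: OpenNI can open a recorded .oni file as if it were a live device, so prerecorded RGB/depth streams can be replayed into code that expects a Kinect. You would still need to pack your still images into such a recording, and hand/head tracking itself lives in the NiTE middleware rather than OpenNI proper. Using the primesense Python bindings for OpenNI 2 (the file name is hypothetical):

from primesense import openni2

openni2.initialize()
# Open a recording instead of a physical Kinect.
dev = openni2.Device.open_file(b"session.oni")

depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
print(frame.width, frame.height)

depth_stream.stop()
openni2.unload()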
