I am trying to use my Raspberry Pi camera module to stream video to darknet tiny-yolo-v4 using
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights
The resolution is set to the maximum resolution of the camera, which is far more than the Pi can handle. However, I am unable to find a way to change the resolution of the stream. I would be very grateful for any suggestions!
P.S.: Using a UDP/RTSP stream as input and setting the resolution there does not work either, for whatever reason ("Video-stream stopped" for RTSP, no image shown for UDP).
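In case it helps, two separate knobs can be tried (assuming Raspberry Pi OS exposes the camera through V4L2 as /dev/video0, which is an assumption on my part): lower the default capture format at the driver level, and shrink the network input size in the cfg, since as far as I know darknet resizes every frame to the network's width/height anyway:

```shell
# Ask the V4L2 driver for a smaller default capture format
# (device path and pixel format are assumptions; list devices with
#  `v4l2-ctl --list-devices` first):
v4l2-ctl --device /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG

# The network input size in the cfg dominates inference cost and must be
# a multiple of 32; back up the cfg before editing it in place:
sed -i 's/^width=.*/width=416/;s/^height=.*/height=416/' cfg/yolov4.cfg
```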
I have a rtsp stream from a pretty good camera (my mobile phone).
I am getting the stream using opencv:
cv2.VideoCapture(get_camera_stream_url(camera))
However, the image quality I get is way below my mobile phone's camera. I understand that the RTSP protocol may lower the resolution, but even so, the image quality is not good enough for OCR.
However, although I have a VIDEO stream, the object I am recording is a static one. So it is expected that all frames of the video should be more or less the same, except for noise or lighting changes.
I was wondering if it is possible to take a 10-second video with several frames and combine them into a SINGLE frame with better sharpness, reducing the noise.
Is it viable? How?
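Yes, this is viable: for a static scene, averaging aligned frames suppresses zero-mean noise roughly in proportion to the square root of the frame count. A minimal sketch (the function names and the frame count are mine, and it assumes the camera and object stay perfectly still so no alignment step is needed):

```python
import numpy as np

def stack_frames(frames):
    """Average a list of same-size frames to suppress sensor noise.

    Assumes a static scene and a fixed camera, so the frames are
    already aligned pixel-for-pixel.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        acc += f
    avg = acc / len(frames)
    return np.clip(avg, 0, 255).astype(np.uint8)

def grab_frames(url, n=100):
    """Collect n frames from the RTSP stream (requires OpenCV)."""
    import cv2  # deferred so stack_frames() works without OpenCV installed
    cap = cv2.VideoCapture(url)
    frames = []
    while len(frames) < n:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```

If the camera can drift slightly between frames, an alignment step (e.g. ECC or feature-based registration) would be needed before averaging.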
I need to capture frames from a DSLR camera. I know that I can use
VideoCapture cap(0);
to capture from the default webcam. If I connect the camera via USB and run the code, it seems it can't find the camera.
What should I do to capture from the DSLR?
In general, I have found getting OpenCV to work with anything besides a basic webcam almost impossible. In theory, I think it uses the UVC driver, but I have had almost zero luck getting it to read anything else. One thing you can try is using VLC to see whether you can capture a video stream from your camera with it. If you can, you might get lucky and figure out which camera or video device the DSLR actually is.
If your DSLR has a development SDK, maybe you can capture frames through its interface and then use OpenCV for processing. I do this for a project: I have a 3rd-party SDK that I use to find and control the camera, and then I move the video data into OpenCV (EmguCV) for processing.
Doug
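The SDK-to-OpenCV handoff usually comes down to wrapping the SDK's raw frame buffer in a numpy array. A sketch, assuming (and this is only an assumption; check your SDK's docs for the real layout) that the SDK hands back tightly packed 8-bit, 3-channel, row-major bytes:

```python
import numpy as np

def sdk_frame_to_bgr(raw_bytes, width, height):
    """Wrap a raw frame buffer from a vendor SDK as an OpenCV-style image.

    Assumes tightly packed 8-bit, 3-channel, row-major pixels; the channel
    order (BGR vs RGB) depends on the SDK and may need a conversion.
    """
    img = np.frombuffer(raw_bytes, dtype=np.uint8)
    # copy() so the array is writable and outlives the SDK's buffer
    return img.reshape(height, width, 3).copy()
```

The resulting array can be passed straight to OpenCV functions, since OpenCV's Python bindings operate on numpy arrays.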
Recently I made a project with OpenCV that detects cars; it takes a video file as input and outputs another video file.
With the help of a Raspberry Pi or similar, I want to replace the input video (mp4) with a live webcam input (USB camera or similar) and output the processed video to my screen in real time; basically, mount a Raspberry Pi (or similar) with a camera on my car and show the detections live on a screen.
How can I do that? Is there any documentation?
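If the existing project already has a per-frame detection function, switching from a file to a live camera is mostly a change of capture source. A minimal sketch (the `run_live` name and the injectable `cap` parameter are mine; the latter just lets the loop be exercised without real hardware):

```python
def run_live(process_frame, cap=None, show=True):
    """Run an existing per-frame detector on live camera frames.

    process_frame: the same function already used on video-file frames.
    cap: anything with read()/release(); defaults to the first USB camera.
    """
    if cap is None or show:
        import cv2  # only needed for real capture and display
    if cap is None:
        cap = cv2.VideoCapture(0)  # USB camera index 0 on the Pi
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # camera unplugged or stream ended
                break
            out = process_frame(frame)
            if show:
                cv2.imshow("detections", out)
                if cv2.waitKey(1) & 0xFF == ord("q"):
                    break
    finally:
        cap.release()
        if show:
            cv2.destroyAllWindows()
```

On a Raspberry Pi, the heavy part is the detector itself; if it is too slow for full frame rate, processing every Nth frame or shrinking the input frames are the usual workarounds.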
I want to send my images to either OpenNI or the Microsoft Kinect SDK so it can tell me the position of a user's hand, head, etc.
I don't want to use the Kinect's camera feed. The images are from a paper that I want to do some image processing on, and I need to work on exactly the same images, so I cannot use my own body as input to the Kinect camera.
I don't mind whether it is Microsoft's Kinect SDK or OpenNI; it just needs to be able to take my RGB and depth images as input instead of the Kinect camera's.
Is it possible? If yes how can I do it?
I have the same question. I want a Kinect face-detection app to read images from the hard drive and return the Animation Units of the recognized face. I want to train a classifier for facial emotion recognition using the Animation Units as input features.
Thanks,
Daniel.
I am working on a project in which I have to detect the hand in a video. A Kinect is being used to capture the video. I have already tried skin segmentation in the HSV colour space. It works well when I get the video from my laptop's camera but does not work with the Kinect. I have also tried colour segmentation and thresholding, but they are not working well either. I am using OpenCV in C. I would be grateful for any suggestions or steps to detect the hand.
You can use OpenNI to obtain a skeleton of the person which robustly tracks the hand in your video. E.g. http://www.youtube.com/watch?v=pZfJn-h5h2k