How to save a v4l2 buffer image using OpenCV 4.2

I'm using a Raspberry Pi 3B+ with a board equivalent to the Auvidea B101, which is an HDMI-to-MIPI bridge. The board works correctly and I can save images using the code from my gist ( https://gist.github.com/danriches/80fadc21b5f1b0ca8ba725a8dd598710 ). However, when I try to use OpenCV 4.2.0 (compiled locally, and the samples all work!) I get a segfault, as detailed in the comments section of the gist. I can't for the life of me work out why, though. If someone has a B101 or equivalent that uses the TC358743 chipset and has it working with OpenCV, I'd greatly appreciate any advice or suggestions.
This all needs to work as fast as possible, so apps calling apps is no good; I need to process the buffer and write it to a GStreamer pipe eventually. In fact, if I could take the JPEG I get out of the original code at line 832 and pass it to an instance of gst-rtsp-server, I'd be a happy man.
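For reference, here is a minimal sketch of the wrap-and-save step I'm after (untested; buf_start, width and height are placeholders for the values my gist already has after VIDIOC_DQBUF, and I'm assuming the bridge delivers UYVY):

#include <opencv2/opencv.hpp>

// Wrap a mmap'ed V4L2 buffer in a cv::Mat (no copy) and save it as a JPEG.
void save_frame(void* buf_start, int width, int height)
{
    cv::Mat yuv(height, width, CV_8UC2, buf_start); // UYVY is 2 bytes per pixel
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_UYVY);
    cv::imwrite("frame.jpg", bgr);
}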
Any takers??
Dan

Related

Using OpenCV to process live video from Phantom 4

I would like to process frames live in OpenCV from the video feed of a DJI Phantom 4. I've been able to set up OpenCV for iOS in Xcode, but I need help finding a tutorial/instructions on how to send the frames from the DJI camera into OpenCV in the correct format on the fly. Any suggestions?
Thanks
Hello there Ilia Labkovsky,
I think I am in the same boat: I have a P3 and would like to process the images via OpenCV. I intend to use my laptop PC as an image processor, sending the images directly via TCP/IP and doing my own image processing off-board. I have yet to get this working, though, and I may run into some problems.
Is there a way I can privately message you on Stack Overflow?
Best of luck with the programming :)
There is a tutorial for Android among the DJI sample apps on how to parse and obtain the YUV frames. From there you can use OpenCV to process the frames: https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
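Once you have the raw YUV bytes on the OpenCV side, the conversion itself is short. A rough sketch (not taken from the DJI sample; data, width and height are placeholders, and I'm assuming the common Android NV21 layout):

#include <opencv2/opencv.hpp>

// Convert a raw NV21 frame buffer (Y plane followed by interleaved VU)
// into a BGR cv::Mat; the stacked plane height is height * 3 / 2.
cv::Mat nv21_to_bgr(unsigned char* data, int width, int height)
{
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data);
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_NV21);
    return bgr;
}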

Raspberry Pi camera and OpenCV: 10-bit?

The Raspberry Pi Camera v1 contains an OmniVision OV5647 sensor, which offers up to 10-bit raw RGB data. Using OpenCV's cvQueryFrame I get only 8-bit data. I am only interested in grayscale imagery - how do I get 10-bit data?
There may be simpler options available, but here are a couple of possible ideas. I have not coded or tested either, like I normally would - sorry.
Option 1.
Use "Video for Linux" (v4l2) and open the camera, do the ioctl()s and manage the buffers yourself - great link here.
Option 2.
Use popen() to start raspivid and tell it you want the raw option (--raw), then grab the raw data off the end of the JPEG, with information on Bayer decoding here. Other, somewhat simpler-to-follow information is available in section 5.11 here.
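For Option 1, the key part is asking the driver for a 10-bit Bayer format instead of an 8-bit one. A very rough, untested sketch (error handling and the mmap/streaming loop are omitted; V4L2_PIX_FMT_SBGGR10 is my assumption for what the OV5647 driver exposes, so confirm with VIDIOC_ENUM_FMT):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>

int main()
{
    int fd = open("/dev/video0", O_RDWR);

    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 2592;                        // OV5647 full resolution
    fmt.fmt.pix.height = 1944;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR10;  // 10-bit raw Bayer
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    // ...then VIDIOC_REQBUFS / mmap / VIDIOC_STREAMON as in the linked guide.
    close(fd);
    return 0;
}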
Assuming you want to capture RAW data from still images and not necessarily video, you have 2 options I know of:
Option 1: picamera
picamera is a Python library that will let you capture data to a stream. Be sure to read the docs as it's pretty tricky to work with.
Option 2: raspistill
You can also shell out to raspistill to capture your image file, and then process that however you want - if you want to process the raw data (captured with raspistill --raw), you can use picamraw on- or off-board the Pi (a rough sketch follows at the end of this answer).
Even though we're a heavily Python shop, my team went with option 2 (in combination with picamraw, which we released ourselves) because picamera was not stable enough.
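If you shell out from C++ rather than Python, the minimal version looks something like this (a sketch only; the raw Bayer block is appended after the JPEG data and still needs separate parsing, e.g. with picamraw):

#include <cstdlib>
#include <opencv2/opencv.hpp>

int main()
{
    // Capture a still with the raw Bayer dump appended to the JPEG.
    std::system("raspistill --raw -o capture.jpg");

    // OpenCV reads only the 8-bit JPEG portion; the trailing raw block
    // has to be extracted and de-Bayered separately.
    cv::Mat jpeg = cv::imread("capture.jpg");
    return jpeg.empty() ? 1 : 0;
}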

Teensy + IR camera + OpenCV

I have never asked this kind of question on Stack Overflow before, and I wonder if you guys could help me, because it is a "bit" vague.
I have to design a project that uses a Teensy (a simple ARM platform) to get data from an IR camera (FLIR, 80x60 resolution) over SPI, stream this data to a Linux/Windows machine (through USB serial), and do something simple with OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or an attempt at something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR sensor with a Teensy. It displays a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So the Teensy can get data over SPI.
Can the Teensy then send the data over USB? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
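As a starting point, a hypothetical Arduino/Teensy fragment like this would read one Lepton VoSPI packet over SPI and forward it over USB serial (164 bytes per packet for the 80x60 Lepton - a 4-byte header plus 160 payload bytes, per the FLIR datasheet; the chip-select pin is a placeholder):

#include <SPI.h>

const int CS_PIN = 10;   // placeholder chip-select pin
uint8_t packet[164];     // 4-byte header + 160-byte payload

void setup() {
  Serial.begin(115200);  // Teensy USB serial runs at full USB speed regardless
  pinMode(CS_PIN, OUTPUT);
  digitalWrite(CS_PIN, HIGH);
  SPI.begin();
}

void loop() {
  SPI.beginTransaction(SPISettings(20000000, MSBFIRST, SPI_MODE3));
  digitalWrite(CS_PIN, LOW);
  for (int i = 0; i < 164; i++) packet[i] = SPI.transfer(0x00);
  digitalWrite(CS_PIN, HIGH);
  SPI.endTransaction();

  // Packets whose ID nibble is 0x0F are discard packets with no line data.
  if ((packet[0] & 0x0F) != 0x0F) Serial.write(packet, 164);
}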
The FLIR Lepton can be interfaced directly with a Linux or Windows computer, so I don't really see the use of the Teensy.
I would recommend a Raspberry Pi to interface the FLIR Lepton and then do some image processing. It's well documented on the web.

OpenCV: black image captured from USB camera

I am trying to capture an image frame from a USB camera using OpenCV. However, I always get a black frame. I have read many posts about this issue and tried all the suggestions, but nothing worked for me.
I started using the code discussed here:
http://opencv-users.1802565.n2.nabble.com/Using-USB-Camera-td6786148.html
I have tried inserting calls to cvWaitKey(1000) after many 'critical' statements. As you can see, the waiting value is very high (1000 ms).
I have also tried to save the image frame and, equally, it is a black image.
I am using the following system:
OpenCV 2.2.0
Windows 7, 32 bits
Visual Studio 2010 (C++)
a USB board camera (whose manufacturer I do not know)
The USB camera works well with AMCAP.EXE 1.00.
Could it be because of the camera drivers being used by Windows? Could I change to other drivers that work better with OpenCV 2.2.0?
Thanks
OK. As I promised in response to your request in the comments - and sorry to keep you waiting, I've really been busy; I barely had time to post this answer. But here it is:
This is me reproducing the case where OpenCV captures a black image. The output window (which I had asked you about in the comments) shows that there is an error.
After investigating, I realised that it is due to the camera's available formats.
Of course, this is a lower-quality camera. If you have a better camera, like a Logitech one, you will see that many more formats are available.
There are lots of methods; you can try something like
cv::VideoCapture capture(0); // the capture object you already have
capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
capture.set(CV_CAP_PROP_FOURCC, CV_FOURCC('B', 'G', 'R', '3')); // different from mine, just an example
then the webcam will be able to capture. This webcam is a bit faulty, hence the captured image is not that pretty.
I hope this is your problem, but it may not be the case either. I like debugging problems, but I can't list all the possible causes, as I am really busy; you asked for an example, and this is one of them. Cheers. If you can tell me what your output window error says, I can probably help more.
EDIT(to answer more in your comments):
Ok, I want you to try a few things:
1) First, instead of using cvQueryFrame or similar capturing methods, try using that webcam to capture a video instead. Wait up to, say, 10 seconds to see if it's successful. The reason: some cameras (lower-quality ones) take quite a while to warm up, and the first few frames they capture may be empty (see the sketch after this list).
2) If step one doesn't work, try typing
cout << cv::getBuildInformation() << endl;
and paste the results for Media I/O and Video I/O; I want to see them. I would suspect your library dependencies too, but since you said it works with a Logitech camera, I doubt that's the case. Of course, there's always a chance the camera is not compatible with OpenCV. Does the camera have a brand, by the way?
3) Alternatively, just search for USB drivers online and install them. I had a friend who did this for a similar problem, but I'm not sure of the exact process.
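As a quick illustration of the warm-up idea from step 1 (OpenCV C++ API, matching the snippets above; the camera index and frame count are arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) return 1;

    cv::Mat frame;
    // Grab and discard the first few frames: cheap sensors often return
    // empty or black frames until the exposure settles.
    for (int i = 0; i < 10; ++i) capture >> frame;

    capture >> frame;
    if (!frame.empty()) cv::imwrite("snapshot.png", frame);
    return 0;
}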
The first thing I would suggest is to visit this link and check whether your camera is working or not:
http://www.youronlinemirror.com/
If yes, then go through the link below to get started; you'll also find good OpenCV C++ code there. The code you are using is C code from the old OpenCV 1 API; I would encourage you to go with C++ rather than the old version of OpenCV.
http://opencv-srf.blogspot.in/2011/09/capturing-images-videos.html
If you want an answer for your code as it stands, it's simple: as you say, it gives a black screen, which happened in my case too when I started out with OpenCV.
It isn't able to take the data from the device, so try this; it might work, as it did for me.
add
cvQueryFrame( capture );
before
IplImage* frame = cvQueryFrame( capture );
I went through the same problem as yours. Then I just changed the version from 3.1.0 to 2.4.13, and my webcam worked! No more black images. I guess version 3 is not compatible with VS2015. You have probably solved the problem long ago, but I'm posting this to let others know, in case they hit the same issue.
I also faced the same black-screen problem while running OpenCV-related programs. So instead of using a USB cam I used a mobile-phone camera, which worked perfectly fine. Use an app such as DroidCam (install DroidCam on the phone as well as on the laptop/PC) to connect your mobile camera to the laptop over Wi-Fi.

Streaming OpenCV video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if it is the right tool for the job. Any advice on using GStreamer, or any hyperlinks to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
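If your OpenCV build was compiled with GStreamer support, one low-friction option is to push frames into a pipeline through cv::VideoWriter. A sketch (the pipeline string is only an illustration - MJPEG over a TCP multipart stream - and the resolution and framerate are arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    // With a GStreamer pipeline string, the fourcc argument is passed as 0.
    cv::VideoWriter writer(
        "appsrc ! videoconvert ! jpegenc ! multipartmux "
        "! tcpserversink host=0.0.0.0 port=5000",
        cv::CAP_GSTREAMER, 0, 30.0, cv::Size(640, 480), true);
    if (!writer.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(640, 480));
        writer.write(frame);
    }
    return 0;
}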
I do not have any experience w/ streaming OpenCV output to a website. However I'm sure this is possible using gstreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using one of the gst_app_sink functions, you can get the buffer data from the sink.
From a quick lookup on the issue, you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
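For anyone reading this now: GST_BUFFER_DATA was the GStreamer 0.10 API; in GStreamer 1.x you map the buffer instead. A rough sketch of pulling a single frame from an appsink into a cv::Mat (the pipeline and dimensions are illustrative):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    GstElement* pipeline = gst_parse_launch(
        "videotestsrc ! videoconvert ! video/x-raw,format=BGR,width=640,height=480 "
        "! appsink name=sink", nullptr);
    GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Pull one sample and wrap its buffer in a cv::Mat (no copy).
    GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_READ);

    cv::Mat frame(480, 640, CV_8UC3, map.data);
    cv::imwrite("frame.png", frame);

    gst_buffer_unmap(buffer, &map);
    gst_sample_unref(sample);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}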
