Can OpenCV seamlessly interact with all cameras that comply with these standards?
No, it cannot. You need something called a GenTL Producer in order to interact with your camera. Normally your vendor's SDK ships with one; alternatively, you can use the producer from Baumer or from Stemmer Imaging.
Another option is to use Harvesters, an open-source project that aims to do exactly this, although it also needs a GenTL Producer to talk to the camera.
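For reference, here is a minimal sketch of how that combination can look in Python, assuming a GenTL Producer .cti file installed by your vendor (the path below is a placeholder) and a Harvesters 1.x-style API; method names have shifted slightly between Harvesters versions, so treat this as a starting point rather than a definitive recipe.

```python
# Minimal sketch: grab one frame through a GenTL Producer with Harvesters and
# hand it to OpenCV. The .cti path is a placeholder -- use the one your vendor
# (or Baumer/Stemmer) installs. Method names follow Harvesters 1.x and may
# differ slightly in other versions.
import cv2
from harvesters.core import Harvester

h = Harvester()
h.add_file('/opt/genicam_producers/producer.cti')  # hypothetical producer path
h.update()                                         # discover attached cameras

ia = h.create_image_acquirer(0)                    # first camera in the list
ia.start_acquisition()

with ia.fetch_buffer() as buffer:
    component = buffer.payload.components[0]
    # Reshape the flat pixel data into an image OpenCV can work with
    frame = component.data.reshape(component.height, component.width)
    cv2.imwrite('frame.png', frame)

ia.stop_acquisition()
ia.destroy()
h.reset()
```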
I am interested in the Visual Inertial SLAM algorithm implemented in the ARKit SDK for motion tracking, which performs visual SLAM and fuses it with inertial data. I understand the algorithm and how tracking is performed.
Since I want to use my own custom camera rather than an iPhone, I was wondering whether an equivalent open-source implementation of VI-SLAM (visual SLAM plus inertial data) for object tracking, with comparable performance, already exists. I am not looking for SDKs to use as APIs, but rather algorithm implementations that I can edit myself.
Apologies if this question belongs in another forum.
You can try the popular ARToolKit5. It is fast, intuitive, and cross-platform: you can run it on macOS, iOS, Linux, Android, or Windows. It was released as a completely open-source platform in 2015 under the LGPLv3 (or later). There is also ARToolKitX, the latest release of the project.
There are many open-source VI-SLAM implementations on GitHub. I recommend trying VINS-Mono (https://github.com/HKUST-Aerial-Robotics/VINS-Mono). You can use your own camera to collect images and IMU data, or you can use public datasets.
I'm looking to implement a face recognition feature and I see OpenCV is capable of it: https://github.com/Mjrovai/OpenCV-Face-Recognition
At the same time, I see many third-party face verification SDKs, such as
http://kairos.com, http://www.neurotechnology.com/face-verification.html, http://ever.ai, etc. In general practice, what is the difference between OpenCV and the third-party ones if you only need offline face recognition with no fancy add-ons, and which should be used?
The OpenCV example you linked uses an LBP-based method for face recognition that is outdated compared to the state of the art and, in my opinion, is unlikely to give you excellent results.
The SDKs you mention are paid products, and since they are not open source, I cannot know what technologies they use.
If you prefer to implement good face recognition yourself, use OpenCV only for the image-capture/video-stream part and something like TensorFlow, Keras, or PyTorch for the deep-learning part.
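As a rough illustration of that split, the sketch below uses OpenCV only for capture and face detection (a Haar cascade that ships with OpenCV) and leaves recognition to a placeholder embed() function standing in for whatever TensorFlow/Keras/PyTorch model you choose; the function name and its 128-dimensional output are assumptions, not a specific library API.

```python
# Sketch of the suggested split: OpenCV handles capture and face detection,
# a deep-learning model (placeholder here) handles the actual recognition.
import cv2
import numpy as np

# Haar cascade shipped with OpenCV -- fine for finding faces, not for recognizing them
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def embed(face_img):
    """Placeholder: replace with a real embedding network (FaceNet, ArcFace, ...)
    loaded in TensorFlow/Keras/PyTorch. Returns a feature vector."""
    return np.zeros(128)

cap = cv2.VideoCapture(0)          # default webcam
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = frame[y:y + h, x:x + w]
        vector = embed(face)       # compare this vector against enrolled faces
        print('face at', (x, y, w, h), 'embedding norm', np.linalg.norm(vector))
```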
I am a newbie with drones. I would like to develop a program that uses OpenCV to fly a drone indoors over a line.
I have looked at a lot of SDKs and languages, but almost all of them are GPS based. I saw there is an alternative called SLAM that detects the position using the sensors.
I have a line on the floor and a camera on my drone. I will be using a Parrot AR.Drone, but I would like the solution to work with any drone. I like Mission Planner, but I am not quite sure it is the best choice.
What would be the best SDK to manage the drone using relative locations or SLAM rather than GPS points?
Well, you have the Parrot API and a couple of wrappers in different languages: node-ar-drone for Node.js, PyArdrone for Python, and a wrapper written in C#, AR.Drone, which I have used. It has a good user interface in which you can see both cameras, record and replay videos, control the drone by clicking buttons, see the drone's metrics and configuration, and send commands through a queue. Because I like C# and those features already come in a ready-made UI, I prefer it. Most of the wrappers are much the same, since they all use the Parrot API underneath by sending UDP messages. I have not tried the others, and there are a lot of them, so I cannot say which one is best. For Mission Planner I could not find a good solution for indoors. So, for anyone who is lost and does not know where to start, as I was: pick the language you want and look for the corresponding wrapper. If you like C# as I do, AR.Drone is a good choice.
Also, if you want to do something with OpenCV, Copterface is a good example; you could implement something similar in any language that has OpenCV bindings.
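Since the question is about flying over a line on the floor, here is a hedged sketch (not Copterface itself) of what the OpenCV vision part could look like: threshold a frame, find the line's centroid in the lower part of the image, and compute a left/right offset that you would then translate into a steering command through whichever drone wrapper you pick. The thresholds and camera index are assumptions.

```python
# Hedged sketch of the vision half of indoor line following (not Copterface itself):
# find a dark line on a light floor and report how far off-center it is.
# Turning the offset into a roll/yaw command depends entirely on the drone wrapper you use.
import cv2

def line_offset(frame):
    """Return the horizontal offset of the line from image center, in pixels,
    or None if no line is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Assume a dark line on a lighter floor; adjust or invert the threshold if yours differs
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    # Only look at the bottom third of the image, just ahead of the drone
    roi = mask[int(mask.shape[0] * 2 / 3):, :]
    m = cv2.moments(roi)
    if m['m00'] == 0:
        return None
    cx = m['m10'] / m['m00']
    return cx - roi.shape[1] / 2.0   # negative = line is to the left

cap = cv2.VideoCapture(0)            # stand-in for the drone's video stream
ret, frame = cap.read()
cap.release()
if ret:
    print('steer correction (pixels):', line_offset(frame))
```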
I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: This is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist, I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away behind the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), insert the Vision Acquisition Express VI, and you can choose AVIs or even still images as a source.
GigESim is camera-emulation software that does something like this. Unfortunately it is proprietary and too expensive (> $500) for my own needs, but perhaps others will find the link useful.
Does anyone know of a viable open-source alternative?
There's an IP Camera emulator project that emulates an IP camera with Python. I haven't used it myself, so I don't know whether IMAQ can use it.
Let us know if it works for you.
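For a rough idea of what such an emulator involves, here is a minimal sketch that serves pre-captured JPEGs as an MJPEG (multipart/x-mixed-replace) stream, a format many IP-camera clients accept; the frames/ folder and port are placeholders, and whether IMAQ/IMAQdx will actually consume this particular stream is an untested assumption.

```python
# Rough sketch of an "IP camera" emulator: loop over saved JPEGs and serve them
# as an MJPEG stream over HTTP. Whether IMAQ/IMAQdx accepts this stream is untested.
import glob
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

FRAMES = sorted(glob.glob('frames/*.jpg'))   # hypothetical folder of saved frames

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=frame')
        self.end_headers()
        while True:
            for path in FRAMES:
                with open(path, 'rb') as f:
                    jpg = f.read()
                # Write each part's headers and payload by hand
                self.wfile.write(b'--frame\r\n')
                self.wfile.write(b'Content-Type: image/jpeg\r\n')
                self.wfile.write(b'Content-Length: %d\r\n\r\n' % len(jpg))
                self.wfile.write(jpg)
                self.wfile.write(b'\r\n')
                time.sleep(1 / 25)           # ~25 fps playback loop

HTTPServer(('0.0.0.0', 8080), MJPEGHandler).serve_forever()
```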
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB Webcams), there is no necessity that they have to be.
There are a few different pre-made options available on the web. I found using OBS (Open Broadcaster Software) Studio together with its VirtualCam plugin to be easy enough. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.
We're looking for a package to help identify and automatically rotate faxed TIFF images based on a watermark or logo.
We currently use libtiff for the rotation itself, but we don't know of any libraries or packages we can use for detecting the logo and determining how to rotate the images.
I have done some basic work with OpenCV, but I'm not sure it is the right tool for this job. I would prefer to use C/C++, but Java, Perl, or PHP would be acceptable too.
You are in the right place with OpenCV; it is an excellent library. For example, this guy used it for template matching, which is fairly similar to what you need to do. The link Roddy posted also looks similar to what you want to do.
I feel that OpenCV is the best library out there for this kind of development.
@Brian: OpenCV and the Intel IPP are closely linked and very similar (both are Intel libraries). As far as I know, if OpenCV finds the Intel IPP on your computer, it will automatically use it under the hood for improved speed.
The Intel Integrated Performance Primitives (IPP) library has a lot of very efficient algorithms that help with this kind of task. The library is callable from C/C++ and we have found it to be very fast. I should also note that it's not limited to just Intel hardware.
That's quite a complex and specialized algorithm that you need.
Have a look at http://en.wikipedia.org/wiki/Template_matching. There's also a demo program (but no source) at http://www.lps.usp.br/~hae/software/cirateg/index.html
Obviously these require you to know the logo you are looking for in advance...
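To make the template-matching suggestion concrete, here is a hedged OpenCV sketch in Python that scores the known logo against the scanned page at 0/90/180/270 degrees and picks the orientation with the best match; the file names are placeholders, and the actual TIFF rotation would still be done with libtiff as in the question.

```python
# Hedged sketch of the template-matching idea from the answers above:
# score the known logo against the scanned page at the four possible rotations
# and report which one matches best. File names are placeholders.
import cv2

page = cv2.imread('faxed_page.tif', cv2.IMREAD_GRAYSCALE)   # assumed input page
logo = cv2.imread('logo.png', cv2.IMREAD_GRAYSCALE)          # known watermark/logo

rotations = {
    0:   page,
    90:  cv2.rotate(page, cv2.ROTATE_90_COUNTERCLOCKWISE),
    180: cv2.rotate(page, cv2.ROTATE_180),
    270: cv2.rotate(page, cv2.ROTATE_90_CLOCKWISE),
}

# For each candidate orientation, take the best normalized correlation score
scores = {}
for angle, img in rotations.items():
    result = cv2.matchTemplate(img, logo, cv2.TM_CCOEFF_NORMED)
    scores[angle] = result.max()

best = max(scores, key=scores.get)
print('page appears to need a %d degree correction (score %.2f)' % (best, scores[best]))
# The rotation of the original TIFF can then be applied with libtiff, as in the question.
```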