I'm new to IP cameras and I know there are quite a lot of topics about this in the forum already, but I can't find a concrete answer for my needs.
I want to access an IP camera using OpenCV in Python from a Windows PC. As I don't have a camera yet, I need to buy one, and I can't figure out what requirements the camera needs to meet.
For example, there are quite cheap IP cameras (e.g. Xi****) which say they come with an Android or iOS app and are only accessible through it.
I thought you could access any IP cam via OpenCV, but now I'm not sure anymore... Can anyone give me an overview of what specs an IP cam needs in order to be accessed via OpenCV on Windows? I don't want to buy a camera and later realize that I can't access the video stream.
I'm really sorry if this has already been asked, but I can't find a satisfying answer to this question, and Google doesn't seem to be very helpful...
Thanks in advance.
Check for an IP cam that can transmit RTSP; OpenCV knows how to work with this type of stream.
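For example, a minimal Python sketch for opening an RTSP stream with OpenCV (the URL, credentials, and path here are placeholders; the exact RTSP path varies by camera vendor):

    import cv2

    # Hypothetical RTSP URL; user, password, port, and path depend on the camera.
    url = "rtsp://username:password@192.168.1.64:554/stream1"

    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError("Could not open RTSP stream; check the URL and network")

    while True:
        ok, frame = cap.read()
        if not ok:
            break  # stream dropped or ended
        cv2.imshow("ip-cam", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

If the camera speaks RTSP (most ONVIF-compliant cameras do), this is all OpenCV needs; cameras that only talk to a vendor app are the ones to avoid.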
As part of a student project, I am currently setting up device to device video streaming.
I am using two Raspberry Pi 3 with the camera modules and am adding face tracking via OpenCV (all in Python3).
I want to stream live Video captured by Raspberry Pi (X) to Raspberry Pi (Y) and vice versa. The Raspberries will not be in the same building/network.
What I don't want is anyone being able to stream the video on a different device.
As I am new to the whole streaming and security idea, I was wondering if there is some way of adding security to live streams by limiting which devices can access them.
Say, the video of Raspberry Pi (X) CAN ONLY be viewed by Raspberry Pi (Y).
Is this possible? If not, what's the next most secure option (limiting by IP, maybe)?
I am also not fixed to using Raspberries for this project, if there is a different solution I'd love to hear about it.
Thanks for any ideas.
You're not the first person to do something like this. A Raspberry Pi is an excellent choice for the project, and you should be able to find plenty of guides online for doing something like this.
You'll want to ensure you enable a strong username and password within whatever streaming software you use. If you want to protect the live stream with a username and password in motion (the software used in the guide linked below), you can enable this in its configuration file:
stream_auth_method 2
stream_authentication SOMEUSERNAME:SOMEPASSWORD
https://www.instructables.com/Raspberry-Pi-as-low-cost-HD-surveillance-camera/
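On the viewing side, Raspberry Pi (Y) could then open the protected stream with the credentials in the URL. A minimal sketch, assuming motion's default stream port 8081 and a placeholder hostname:

    import cv2

    # Placeholder address for Raspberry Pi (X); motion streams MJPEG on port 8081 by default.
    # Note: stream_auth_method 2 is digest auth; if your OpenCV backend only
    # negotiates basic auth in the URL, use stream_auth_method 1 instead.
    url = "http://SOMEUSERNAME:SOMEPASSWORD@pi-x.example.local:8081/"

    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("stream from Pi X", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

Keep in mind that a username/password limits who can view the stream, not which device. Since the Pis are on different networks, the usual way to pin the stream to one device is to tunnel it (e.g. over SSH or a VPN between the two Pis) and firewall the stream port so it is never exposed publicly.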
I am using DJI Matrice 100 for indoor usage. I intend to use dji_sdk ROS library (http://wiki.ros.org/dji_sdk) to control the drone. The idea is to do path tracking. However, I do not want to use the GPS at all. General question: How do I get this done?
Specific questions:
How can I get the system to start with the GPS turned off/unplugged? Someone here (Remove GPS on matrice 100) suggested that the latest update allows the system to take off if the magnetometer is present, but I do not find this to be the case.
I have a few options for controlling the drone by publishing on specific rostopics listed on the dji_sdk ROS page above. While I have successfully simulated this in the DJI simulation environment, doing this in reality requires the direction of motion in the ENU frame to be known, so it requires the GPS to be ever-present. I do not know if there is a way to provide these commands in the body frame of the drone.
Please feel free to ask for any details you need!
Edit: For the 2nd part of the question, I was hoping someone would refer to the dji_sdk ROS library (http://wiki.ros.org/dji_sdk) and tell me that I can use the rostopic /dji_sdk/flight_control_setpoint_generic to control the drone in its body frame (as somewhat alluded to at the end of the page). If this is indeed true and using the flag you can achieve this, does anyone know what the value should be for doing position control/velocity control in the body-fixed frame?
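I can't confirm the exact semantics for your firmware, but going by the DJI Onboard SDK control-mode bits (which the dji_sdk ROS wrapper mirrors), a hedged rospy sketch for body-frame velocity control over /dji_sdk/flight_control_setpoint_generic might look like this; the bit values below are an assumption to verify against your SDK headers before flying:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Joy

    # Control-mode bits as documented for the DJI Onboard SDK; verify these
    # against your dji_sdk version before flying.
    HORIZONTAL_VELOCITY = 0x40  # interpret x/y as velocities
    VERTICAL_VELOCITY   = 0x00  # interpret z as a vertical velocity
    YAW_RATE            = 0x08  # interpret the yaw value as a rate
    HORIZONTAL_BODY     = 0x02  # x/y expressed in the body frame, not ENU
    STABLE_ENABLE       = 0x01  # enable stabilization

    flag = (HORIZONTAL_VELOCITY | VERTICAL_VELOCITY | YAW_RATE
            | HORIZONTAL_BODY | STABLE_ENABLE)

    rospy.init_node("body_frame_velocity_demo")
    pub = rospy.Publisher("/dji_sdk/flight_control_setpoint_generic",
                          Joy, queue_size=1)

    rate = rospy.Rate(50)  # setpoints are typically streamed at ~50 Hz
    while not rospy.is_shutdown():
        msg = Joy()
        # axes = [x, y, z, yaw, flag]: here, 0.5 m/s forward in the body frame.
        msg.axes = [0.5, 0.0, 0.0, 0.0, float(flag)]
        pub.publish(msg)
        rate.sleep()

For position control you would swap in the horizontal-position bit (0x80 in the OSDK headers) instead of HORIZONTAL_VELOCITY, again assuming the flag layout matches your version.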
Full disclosure: I'm a pretty junior developer and new to asking questions. I also don't know that much about video streaming as a concept so if the answer is right in front of my face I probably just glazed right over it.
That being said, I am trying to do something that seems like it should be pretty simple but can't seem to figure it out. I'm trying to get an H.264 live video stream off of a Raspberry Pi and view it in my app. I've found a number of things about encoding videos but couldn't get anything to work.
Anything anyone has to offer would be a large help, even if it is just a direction to look in because I'm pulling my hair out trying to figure this one out.
You'll first need to install some platform on your Raspberry Pi that can serve data to a client. You can look into web server platforms like Apache. Once installed, you can verify this is working by hitting the IP address of the Raspberry Pi from any browser: e.g. 192.168.1.67:80
Then you need to make sure the video is available through the file system on your Raspberry Pi. Searching something like "Adding files to Apache" might help.
You can test that the file is available by hitting the IP address of your Raspberry Pi from any browser: e.g.
192.168.1.67:80/path/to/video.mp4
This means that the video file is available and can be downloaded, but it won't be streamed by default. From there you can look into a JavaScript framework that can help you with the streaming portion.
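If you just want to verify reachability before committing to Apache, Python's built-in http.server can serve the directory as a quick test (not a production setup):

    # Quick test server: run this on the Pi from the directory containing the
    # video file, then open http://<pi-address>:8000/video.mp4 in a browser.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()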
Apple's HLS protocol is a very popular choice for streaming video. You would need to first encode the video input coming from the camera, then pass it to your server, which does all the "behind the scenes" work and provides you with an *.m3u8 URL. I've implemented this pattern with Wowza Streaming Engine; you can use it or similar tools.
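As a rough illustration of the packaging step those tools perform (assuming ffmpeg is installed; the source URL and output path are placeholders):

    import subprocess

    # Repackage an incoming H.264 stream into HLS segments plus an .m3u8
    # playlist. A streaming server such as Wowza normally does this for you;
    # this is only a sketch of the "behind the scenes" work.
    subprocess.run([
        "ffmpeg",
        "-i", "rtsp://pi.example.local:8554/stream",  # placeholder source
        "-codec", "copy",        # already H.264, so no re-encode needed
        "-f", "hls",
        "-hls_time", "4",        # ~4-second segments
        "-hls_list_size", "5",   # rolling window of 5 segments
        "/var/www/stream/index.m3u8",
    ])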
On the flip side, if you're inclined towards a simpler and more straightforward solution, more like a CDN approach, then you may follow Bret's answer.
I have never ever asked this kind of question on StackOverflow before, and I wonder if you could help me guys because it is a "bit" vague.
I have to design a project that uses Teensy (simple ARM platform) for getting data from IR camera (Flir, resolution 80x60) over SPI, and streaming these data to Linux/Windows running machine (through USB-serial) and doing something simple with OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or an attempt at something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR module with a Teensy. It provides a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So the Teensy can get data through SPI.
Can the Teensy send data through USB then? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
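On the PC side, here is a minimal sketch of receiving raw 80x60 16-bit Lepton frames over USB-serial and displaying them with OpenCV; the sync marker, port name, and baud rate are hypothetical, so adapt them to whatever framing your Teensy code uses:

    import cv2
    import numpy as np
    import serial  # pyserial

    WIDTH, HEIGHT = 80, 60
    FRAME_BYTES = WIDTH * HEIGHT * 2  # Lepton pixels are 16-bit
    SYNC = b"\xAA\x55"                # hypothetical frame marker from the Teensy

    port = serial.Serial("COM3", 921600, timeout=1)  # placeholder port/baud

    def wait_for_sync(p):
        # Slide byte-by-byte so a misaligned stream still locks onto the marker.
        prev = b""
        while True:
            cur = p.read(1)
            if prev + cur == SYNC:
                return
            prev = cur

    while True:
        wait_for_sync(port)
        raw = port.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            continue  # timed out mid-frame; resync
        frame = np.frombuffer(raw, dtype="<u2").reshape(HEIGHT, WIDTH)
        # Stretch the thermal range to 8 bits and upscale for display.
        vis = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        vis = cv2.resize(vis, (WIDTH * 8, HEIGHT * 8), interpolation=cv2.INTER_NEAREST)
        cv2.imshow("lepton", vis)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break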
The FLIR Lepton can be directly interfaced with a Linux or Windows computer, so I don't really see the use of the Teensy.
I would recommend a Raspberry Pi to interface the FLIR Lepton and then do some image processing. It's well documented on the web.
I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: this is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally, I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist; I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away behind the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), then insert the Vision Acquisition Express VI, and you can choose AVIs or even pictures as a source.
GigESim is camera emulation software. Unfortunately it is proprietary and too expensive (>$500) for my own needs, but perhaps others will find the link useful.
Anyone know of a viable Open Source alternative?
There's an IP Camera emulator project that emulates an IP camera in Python. I haven't used it myself, so I don't know if it can be used with IMAQ.
Let us know if it's good for you.
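If that project doesn't pan out, an MJPEG-over-HTTP camera is simple enough to emulate yourself. Here's a hedged Python sketch that serves JPEGs from a folder as a multipart stream (whether IMAQ will accept it as a source is something you'd have to test):

    import glob
    import itertools
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BOUNDARY = b"frame"
    # Load every JPEG in ./frames once; they are replayed in a loop below.
    frames = [open(p, "rb").read() for p in sorted(glob.glob("frames/*.jpg"))]

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type",
                             "multipart/x-mixed-replace; boundary=frame")
            self.end_headers()
            # Replay the JPEGs forever at roughly 10 fps.
            for jpg in itertools.cycle(frames):
                self.wfile.write(b"--" + BOUNDARY + b"\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(jpg))
                self.wfile.write(jpg + b"\r\n")
                time.sleep(0.1)

    HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()

Point a browser (or cv2.VideoCapture) at http://localhost:8080/ to check the stream before trying it from IMAQ.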
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB webcams), they don't have to be.
There are a few different pre-made options available on the web. I found using OBS Studio (Open Broadcaster Software) and this VirtualCam plugin easy enough. Basically:
1. Download and install both.
2. Load your media sources in the sources list.
3. Enable the VirtualCam stream (Tools > VirtualCam) and press Start.
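Not LabVIEW, but here's a quick Python/OpenCV check that the virtual DirectShow device actually shows up (the device index is a guess; yours may differ):

    import cv2

    # CAP_DSHOW forces the DirectShow backend on Windows; index 0 is a guess.
    cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    ok, frame = cap.read()
    print("virtual cam delivering frames:", ok)
    cap.release()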