How do I integrate a webcam texture with OpenGL in Python? I couldn't find any examples on the internet.
Please answer.
As always in programming, I would start by splitting the problem into smaller problems:
1. Getting the webcam feed (if you do not have it already). For this I would start here: How do I access my webcam in Python? (basically, grab the feed using OpenCV or GStreamer).
2. Make sure the data is in a 1D array in RGB color format.
3. Once you have the video feed from the camera, it should be fairly straightforward to take the newest frame and upload it to an OpenGL texture using GL.glTexImage2D. (example)
This assumes you are already familiar with OpenGL; if not, try following some OpenGL tutorials until you feel comfortable enough for step 3.
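The three steps above can be sketched roughly as follows. This is a minimal, hedged sketch assuming opencv-python and PyOpenGL are installed and that an OpenGL context already exists (e.g. created with GLUT or GLFW); the helper at the top just does the BGR-to-flat-RGB conversion of step 2.

```python
import numpy as np

def bgr_to_rgb_flat(frame):
    """Convert an OpenCV BGR frame of shape (H, W, 3) into a flat,
    contiguous RGB byte array suitable for GL.glTexImage2D."""
    rgb = frame[:, :, ::-1]            # BGR -> RGB by reversing the channel axis
    return np.ascontiguousarray(rgb, dtype=np.uint8).ravel()

if __name__ == "__main__":
    # Hardware/GL-dependent part: needs a webcam and a current GL context.
    import cv2
    from OpenGL import GL

    cap = cv2.VideoCapture(0)          # step 1: webcam feed via OpenCV
    ok, frame = cap.read()
    if ok:
        h, w = frame.shape[:2]
        data = bgr_to_rgb_flat(frame)  # step 2: flat 1D RGB array
        tex = GL.glGenTextures(1)      # step 3: upload into an OpenGL texture
        GL.glBindTexture(GL.GL_TEXTURE_2D, tex)
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, w, h, 0,
                        GL.GL_RGB, GL.GL_UNSIGNED_BYTE, data)
    cap.release()
```

For a live feed you would call glTexImage2D (or glTexSubImage2D, which avoids reallocating the texture) with the newest frame every time you render.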
I'm using an RPi 4 with a Pi Camera and OpenCV to get the video stream from the camera, detect a face, and then track it using servo motors.
If I want to see the feed, I can use cv2.imshow("", frame) with the frame read from the stream.
I'm looking for a way to output the frames so the RPi can be used as a webcam — for example, using RTSP to make the RPi an IP camera and then using VLC to view the feed.
The problem is, I can't find a way to actually stream the frames from my code. I tried using ffmpeg, but the RTSP server part is missing; I need to start it somehow from my code, maybe with a package of some kind.
If anyone has a better suggestion for using the RPi as a webcam with my code, I would be happy to hear it.
Thanks
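One common approach (an assumption on my part — it requires an OpenCV build with GStreamer support, which the Raspberry Pi OS packages usually have) is to push your processed frames into a GStreamer pipeline through cv2.VideoWriter. Strictly speaking this sends RTP over UDP rather than full RTSP; a real RTSP server would need something like gst-rtsp-server wrapped around the same encoding chain. The host address and port below are placeholders.

```python
def build_gst_pipeline(host, port):
    """Build a GStreamer pipeline string: frames pushed by cv2.VideoWriter
    enter at appsrc, get H.264-encoded, and leave as RTP packets over UDP.
    Viewable in VLC/ffplay via an SDP file describing the stream."""
    return (
        "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=800 "
        "speed-preset=superfast ! rtph264pay ! "
        f"udpsink host={host} port={port}"
    )

if __name__ == "__main__":
    import cv2  # must be built with GStreamer support (CAP_GSTREAMER)
    cap = cv2.VideoCapture(0)
    fps, size = 25, (640, 480)
    out = cv2.VideoWriter(build_gst_pipeline("192.168.1.10", 5000),
                          cv2.CAP_GSTREAMER, 0, fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... your face detection / servo logic on `frame` here ...
        out.write(cv2.resize(frame, size))  # push each frame into the pipeline
```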
I have an object I'd like to track using OpenCV. In my detection algorithm I can create bounding boxes around the objects it sees, and can create a target object to track properly. My detection algorithm works well, but I want to pass this object to a tracking algorithm. I can't quite get this done without having to rewrite the detection and image display code. I'm working with an NVIDIA Jetson Nano board with an Intel RealSense camera, if that helps.
The OpenCV DNN module comes with Python samples of state-of-the-art trackers. I've heard good things about the "Siamese"-based ones. Have a look.
Also, the OpenCV contrib repo contains a whole module of various trackers. Give those a try first; they have a simple API.
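The hand-off from detector to tracker is usually just "init the tracker with the detector's box, then call update per frame". A sketch, assuming an OpenCV build with the contrib modules (the exact factory name varies by version — recent builds put these trackers under cv2.legacy); the IoU helper is a hypothetical addition, useful for deciding when the tracker has drifted and a fresh detection should re-initialize it:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes; a low value
    against the detector's latest box suggests the tracker has drifted."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

if __name__ == "__main__":
    import cv2  # requires opencv-contrib-python for the trackers
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    detection_box = (100, 100, 80, 80)        # (x, y, w, h) from your detector
    tracker = cv2.legacy.TrackerCSRT_create()  # cv2.TrackerCSRT_create on older builds
    tracker.init(frame, detection_box)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        if ok:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
```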
I'm a beginner in programming and OpenCV. I have to build my own library based on OpenCV and use it on the HoloLens.
I have very little time to work on it, so I have started from samples on many websites.
I can build the library and use it in Unity from the sample, but I cannot get it to show on the HoloLens.
Now I'm trying to show the webcam like this sample on the HoloLens: https://youtube.com/watch?v=vUviuj8KcQM&t=781s
It shows in Unity, but it doesn't show on the HoloLens — only the cube appears there.
I think I have to write a script to show this webcam texture from OpenCV on the HoloLens, but I have no idea how. It's very complicated for me.
I would like to ask how to show this webcam texture from OpenCV on the HoloLens.
I would also appreciate suggestions on where to start learning OpenCV, C++, C#, and Unity + HoloLens in a short time.
Sorry for my poor English and programming.
I'm a software engineering student in the last year of a 4-year bachelor's degree program, and I'm required to work on a graduation project of my own choice.
We are trying to find a way to notify the user of anything that gets in his/her way while walking. This will be implemented as an Android application, so we have access to the camera. We thought of image processing and computer vision, but neither I nor any of my group members have any image processing background. We searched a little and found out about OpenCV.
So my question is: do I need any special background to work with OpenCV? And is computer vision a good choice for the objective of my project? If not, what alternatives would you advise me to use?
I appreciate your help. Thanks in advance!
At first glance, I would use two standard cameras to compute a depth image via stereo vision (similar to the MS Kinect depth sensor);
from that, it would be easy to set a threshold at some distance.
Those algorithms are very CPU-hungry, so I do not think it will work well on Android (although I have zero experience there).
If you must use Android, I would look for a depth sensor (to avoid extracting depth data from two images).
For prototyping I would use MATLAB (or Octave), then switch to OpenCV (pointers, memory allocations, etc.).
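The "threshold at some distance" idea can be sketched in OpenCV directly. This is a hedged sketch, not a full stereo setup: the focal length, baseline, and image filenames are made-up example values, and the cameras would first need to be calibrated and the images rectified. The pure-NumPy helper converts disparity to metric depth (depth = focal × baseline / disparity) and flags anything closer than the chosen distance.

```python
import numpy as np

def obstacle_mask(disparity, focal_px, baseline_m, max_dist_m):
    """Convert a disparity map (in pixels) to metric depth via
    depth = focal_px * baseline_m / disparity, then flag every pixel
    closer than max_dist_m as a potential obstacle."""
    disp = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disp.shape, np.inf)
    valid = disp > 0                      # zero/negative disparity = no match
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth < max_dist_m

if __name__ == "__main__":
    import cv2
    # Hypothetical rectified stereo pair and camera parameters.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparity scaled by 16.
    disparity = stereo.compute(left, right).astype(np.float64) / 16.0
    mask = obstacle_mask(disparity, focal_px=700.0, baseline_m=0.06,
                         max_dist_m=1.5)
    print("obstacle pixels:", int(mask.sum()))
```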
I went through the Kinect SDK and Toolkit provided by Microsoft. I tested the face detection sample, and it worked successfully. But how do I recognize the faces? I know the basics of OpenCV (VS2010). Is there any Kinect library for face recognition? If not, what are the possible solutions? Are there any tutorials available for face recognition using the Kinect?
I've been working on this myself. At first I just used the Kinect as a webcam and passed the data into a recognizer modeled after this code (which uses Emgu CV to do PCA):
http://www.codeproject.com/Articles/239849/Multiple-face-detection-and-recognition-in-real-ti
While that worked OK, I thought I could do better since the Kinect has such awesome face tracking. I ended up using the Kinect to find the face boundaries, crop it, and pass it into that library for recognition. I've cleaned up the code and put it out on github, hopefully it'll help someone else:
https://github.com/mrosack/Sacknet.KinectFacialRecognition
I've found a project which could be a good source for you - http://code.google.com/p/i-recognize-you/ - but unfortunately (for you) its homepage is not in English. The most important parts:
- the project (with source code) is at http://code.google.com/p/i-recognize-you/downloads/list
- in the bibliography, the author mentioned this site - http://www.shervinemami.info/faceRecognition.html - which seems to be a good starting point for you.
There is no built-in functionality in the Kinect that provides face recognition. I'm not aware of any tutorials out there that do it, but I'm sure someone has tried. It is on my short list; hopefully time will allow soon.
I would try saving the face tracking information and doing a comparison with that for recognition. You would have a "setup" function that would ask the user to stare at the Kinect, and would save the points the face tracker returns to you. When you wish to recognize a face, the user would look at the screen and you would compare the face tracker points to a database of faces. This is roughly how the Xbox does it.
The big trick is confidence levels. Numbers will not come back exactly as they did previously, so you will need to include buffers of values for each feature; the code would then come back with "I'm 93% sure this is Bob".
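The point-comparison-with-buffers idea can be sketched without any Kinect-specific code. Everything here is a hypothetical illustration (the point sets, tolerance, and threshold are made-up values, and real face-tracker points would first need to be normalized for head position and scale): each stored point that the observed point lands within `tolerance` of counts as a hit, and the hit fraction is the "I'm 93% sure this is Bob" confidence.

```python
import numpy as np

def match_confidence(stored, observed, tolerance):
    """Compare a stored set of face-tracker points against newly
    observed ones. Each point pair within `tolerance` (same units as
    the points) counts as a hit; return the fraction of hits."""
    stored = np.asarray(stored, dtype=np.float64)
    observed = np.asarray(observed, dtype=np.float64)
    dists = np.linalg.norm(stored - observed, axis=1)  # per-feature distance
    return float((dists <= tolerance).mean())

def recognize(database, observed, tolerance=0.05, threshold=0.8):
    """Return (name, confidence) for the best match above `threshold`,
    or (None, best_confidence) if nobody matches well enough."""
    best_name, best_conf = None, 0.0
    for name, stored in database.items():
        conf = match_confidence(stored, observed, tolerance)
        if conf > best_conf:
            best_name, best_conf = name, conf
    return (best_name, best_conf) if best_conf >= threshold else (None, best_conf)
```

The tolerance plays the role of the "buffer of values for each feature": widening it makes recognition more forgiving but also more prone to confusing similar faces.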