eye tracking driven virtual computer mouse using OpenCV python lkdemo - opencv

I am a beginner in OpenCV programming. I'm trying to develop an eye-tracking-driven virtual computer mouse using the OpenCV Python version of lkdemo. I have the lkdemo code in Python and ran it with python pgmname.py. I then got the following result:
OpenCV Python version of lkdemo
Traceback (most recent call last):
File "test.py", line 64, in <module>
capture = cvCreateCameraCapture (device)
NameError: name 'cvCreateCameraCapture' is not defined.
Can anyone help to solve this?
Update:
Now the error is:
OpenCV Python version of lkdemo
Traceback (most recent call last):
File "test.py", line 8, in <module>
import cv
ImportError: No module named cv
Can anyone suggest a solution?

The API changed a while ago. Depending on your version, it should rather be something like:
import cv
capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
HTH.

What is your OpenCV version?
This example is for version 2.4.5:
import cv2

c = cv2.VideoCapture(0)
while True:
    _, f = c.read()
    cv2.imshow('e2', f)
    if cv2.waitKey(5) == 27:  # Esc key
        break
c.release()
cv2.destroyAllWindows()

Related

OpenCV can't grab-retrieve on Windows 11

I have been using the "grab and retrieve" flow with OpenCV's VideoCapture on Linux for a long time. After migrating the code to Windows 11, with the same USB webcams, retrieve no longer works:
import sys
import cv2

camera_number = 2

print(f"video output encoding backends available to OpenCV: "
      f"{[cv2.videoio_registry.getBackendName(backend) for backend in cv2.videoio_registry.getWriterBackends()]}")
print(f"camera video acquisition backends available to OpenCV: "
      f"{[cv2.videoio_registry.getBackendName(backend) for backend in cv2.videoio_registry.getStreamBackends()]}")

video_stream = cv2.VideoCapture(camera_number, cv2.CAP_DSHOW)
if video_stream.isOpened():
    print(f"successfully opened camera number {camera_number}")
else:
    print(f"\nfailed to open camera number {camera_number}")
    sys.exit(1)

print(f"OpenCV is using the following backend library for camera video acquisition: {video_stream.getBackendName()}")

success, image = video_stream.read()
if success:
    print('read image succeeded')
else:
    print('read image failed')

while video_stream.isOpened():
    grabbed = video_stream.grab()
    if grabbed:
        print('image grab succeeded')
    else:
        print('image grab failed')
    success, image = video_stream.retrieve()
    if not success:
        raise ValueError('image retrieve failed')
This code succeeds up until the retrieve().
Here's the full output:
video output encoding backends available to OpenCV: ['FFMPEG', 'GSTREAMER', 'INTEL_MFX', 'MSMF', 'CV_IMAGES', 'CV_MJPEG']
camera video acquisition backends available to OpenCV: ['FFMPEG', 'GSTREAMER', 'INTEL_MFX', 'MSMF', 'CV_IMAGES', 'CV_MJPEG']
successfully opened camera number 2
OpenCV is using the following backend library for camera video acquisition: DSHOW
read image succeeded
image grab succeeded
Traceback (most recent call last):
line 49, in <module>
raise ValueError(f'image retrieve failed')
ValueError: image retrieve failed
Process finished with exit code 1
As seen above, this is using DSHOW. Notably, none of the backends other than DSHOW managed to open the webcams, although the OpenCV API lists them as supported.
Enabling the env variable OPENCV_VIDEOIO_DEBUG=1 does not reveal any warnings or errors.
The problem affects USB cameras but not the laptop's built-in camera: with camera number 0, the built-in camera, the same code works seamlessly and loops on grab and retrieve, but it fails with either of two Logitech webcams (Logitech Brio and Logitech C390e) on this Windows 11 laptop.
Version info
opencv-python: 4.5.5.62
opencv-contrib-python: 4.5.5.62
winrt: 1.0.21033.1
python: 3.9.10
How would you approach this?
OK, it seems that (much as on Linux) each webcam exposes more than one camera number through the OS, and by choosing one rather than the other camera number (give or take a Windows Update bringing in a Logitech software update), it works.
Although both camera numbers manage to open the camera, only one of them enables the full flow. Enumerating the properties of each camera number through code is rough, but trial and error over just the two camera numbers got it working.

can't show an image using PIL on Google Colab

I am trying to use PIL to show an image. I know that I can use other modules to do that. I am working on Google Colab, but I can't figure out why PIL is not showing the output image.
%matplotlib inline
import numpy as np
from PIL import Image
im = Image.open('/content/drive/My Drive/images-process.jpeg')
print(im.width, im.height, im.mode, im.format, type(im))
im.show()
output: 739 415 RGB JPEG <class 'PIL.JpegImagePlugin.JpegImageFile'>
Instead of
im.show()
Try just
im
Colab should try to display it on its own. See example notebook
Use
display(im)
instead of im.show() or a bare im.
A bare im only renders when it is the last expression in a cell; after multiple lines or inside a loop it won't work, but display(im) will.
If the image still does not render after Image.open(), try converting it with im.convert() to the mode the image is in, then call display(im).
It will work.
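Putting that advice together, here is a minimal Colab-style sketch. The Drive path from the question is replaced by an in-memory test image so the snippet is self-contained:

```python
from PIL import Image
from IPython.display import display

# Stand-in for Image.open('/content/drive/My Drive/images-process.jpeg')
im = Image.new('RGB', (64, 64), color='red')

display(im)  # renders inline in Colab, where im.show() does not
```

im.show() tries to open an external viewer, which a hosted notebook has no access to; display() hands the image to the notebook's rich-output machinery instead.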

error in Keras: Invalid argument 'metrics' passed to K.function

I am working on some machine learning problems and want to try the powerful package Keras (using the Theano backend) in Python. While running an MLP demo for digit recognition here, I get the following error message:
Traceback (most recent call last):
File "mlp.py", line 52, in <module>
metrics=['accuracy'])
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 564, in compile
updates=updates, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 459, in function
raise ValueError(msg)
ValueError: Invalid argument 'metrics' passed to K.function
I don't know why it gives this error message; can anyone help me fix the bug? Thank you in advance.
This error means that you are running Keras version 0 (e.g. 0.3.2) but running code that was written for Keras version 1. You can upgrade to Keras 1, or remove metrics=['accuracy'] from the function call to model.compile().
Which version of Keras are you running?
I updated (e.g., "pip install --upgrade keras"), and that keyword is now accepted.
Take care, however, because several other functions have changed as well. For example, if you would like to access layer input and output after training, the model method functions have changed.
see http://keras.io/layers/about-keras-layers/
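To answer the "which version" question programmatically before deciding whether to upgrade or drop metrics=['accuracy'], one generic option (a standard-library sketch for modern Python, not a Keras API) is:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("keras"))  # e.g. a 0.x version would explain the error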

Scikit-learn RandomForest trained on 64-bit Python won't open on 32-bit Python

I train a RandomForestRegressor model on 64bit python.
I pickle the object.
When trying to unpickle the object on 32bit python I get the following error:
ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'
I really have no idea how to fix this, so any help would be hugely appreciated.
Edit: more detail
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\python27\lib\pickle.py", line 1378, in load
return Unpickler(file).load()
File "c:\python27\lib\pickle.py", line 858, in load
dispatch[key](self)
File "c:\python27\lib\pickle.py", line 1133, in load_reduce
value = func(*args)
File "_tree.pyx", line 1282, in sklearn.tree._tree.Tree.__cinit__ (sklearn\tree\_tree.c:10389)
This occurs because the random forest code uses different types for indices on 32-bit and 64-bit machines. This can, unfortunately, only be fixed by overhauling the random forests code. Since several scikit-learn devs are working on that anyway, I put it on the todo list.
For now, the training and testing machines need to have the same pointer size.
For ease, use the 64-bit version of Python to deserialize your model. I faced the same issue recently, and it was resolved after taking that step. So try running it on a 64-bit version; I hope this helps.
I fixed this problem by training the model on the same machine. I was training the model in a Jupyter Notebook (Windows PC) and trying to load it onto a Raspberry Pi when I got the error. After training the model on the Raspberry Pi itself, the problem was fixed.
I had the same problem when I trained the model with a 32-bit Python 3.7.0 installed on my system. It was solved after installing the 64-bit Python 3.8.10 and training the model again.
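Since the accepted answer says the training and loading machines need the same pointer size, a quick way to check which one a given interpreter uses is:

```python
import struct

# 'P' is the struct format code for a C pointer; its size distinguishes
# 32-bit (4 bytes) from 64-bit (8 bytes) Python builds.
bits = struct.calcsize('P') * 8
print(f'This Python build is {bits}-bit')
```

Run this on both machines; if the numbers differ, the pickle will not load.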

Using ImageMagick/ZBar to read QR codes

I've got scanned image files that I perform some preprocessing on and get them looking something like this:
My phone's ZBar app can read this QR code fine, but zbarimg seems to be unable to figure it out. I've tried all sorts of things in ImageMagick to make it smoother (-smooth, -morphology) but even with slightly better-looking results, zbarimg still comes up blank.
Why would my phone's ZBar be so much better than my computer's (zbar-0.10)? Is there anything I can do to get zbarimg to read this successfully?
You can try morphological closing.
Python code:
# -*- coding: utf-8 -*-
import qrtools
import cv2
import numpy as np

imgPath = "Fdnm1.png"
img = cv2.imread(imgPath, 0)  # read as grayscale
kernel = np.ones((5, 5), np.uint8)
processed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('test.png', processed)

d = qrtools.QR(filename='test.png')
d.decode()
print(d.data)
Result:
1MB24
