I am trying to learn OpenCV on embedded Linux. Currently I am working on an NXP i.MX8M.
I use VS Code with remote SSH. The first goal is to display an image and destroy the window when the key "q" is pressed, as usual.
import numpy as np
import cv2
img = cv2.imread('L5.jpeg')
cv2.imshow('Window name', img)
key = cv2.waitKey(0)
print(key)
if key == ord('q'):
    print("pressed q")
    cv2.destroyAllWindows()
The imshow window appears on the monitor connected to the i.MX8M, as desired.
BUT cv2.waitKey(0) simply doesn't respond. The window stays there, and the return value of cv2.waitKey(0) is always -1, no matter which key I press. I can only stop the program with Ctrl+Z.
I have learned that if I run the code on a local system, it works only when the GUI window is in focus. But over remote SSH, how can I give the HighGUI window focus?
I have searched Google many times but still cannot find a solution.
Can anyone be so kind and help me?
I suspect I may have missed some SSH configuration?
Thousand thanks!
regards
Cliff
I have some .nc data that I analyze with xarray. I wanted to overlay a shapefile on the plot and came across hvplot, which seems to have the interactive component I have been needing. However, when I tried to plot, it did not seem to work. Then I found the following thread: Why doesn't holoviews show histogram in Spyder? but could not get Maxime's answer to work, and Sander's only partly works; however, I have trouble connecting to the localhost consistently. Does anyone have a solution?
import xarray as xr
import panel as pn
import hvplot.xarray  # registers the .hvplot accessor on xarray objects

ds_path = 'Z:/MODIS-LAADS-DAAC/'
ds_file = 'anom_2016_v1.nc'
ds_anom = xr.open_dataset(ds_path + ds_file)
x = ds_anom.Rrs_667.isel(time=0)
# create holoviews plot
hv_map = x.hvplot()
# display graph in browser
# a bokeh server is automatically started
bokeh_server = pn.Row(hv_map).show(port=12345)
# stop the bokeh server (when needed)
bokeh_server.stop()
However, 95% of the time I get ERR_CONNECTION_REFUSED, but occasionally it works and looks like this (I don't know why!): sample ocean data plot
IN SUMMARY:
What I want to do:
https://tutorial.xarray.dev/scipy-tutorial/04_plotting_and_visualization.html#interactive-bokeh-plots-using-hvplot
https://hvplot.holoviz.org/user_guide/Geographic_Data.html#declaring-an-output-projection
What I have tried:
https://discourse.holoviz.org/t/whats-the-most-efficient-way-to-overlay-a-shapefile-onto-a-hvplot-xarray-plot/397, Why doesn't holoviews show histogram in Spyder?
What I get: ERR_CONNECTION_REFUSED, but occasionally (I don't know why) sample ocean data plot
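For what it's worth, one workaround that sidesteps the server entirely (an untested sketch, reusing the paths from above): hv.save writes the plot to a self-contained HTML file, so no bokeh server, and hence no ERR_CONNECTION_REFUSED, is involved.

```python
import xarray as xr
import holoviews as hv
import hvplot.xarray  # registers the .hvplot accessor on xarray objects

ds_anom = xr.open_dataset('Z:/MODIS-LAADS-DAAC/anom_2016_v1.nc')
x = ds_anom.Rrs_667.isel(time=0)

# Render to a standalone HTML file and open it in the browser by hand.
hv.save(x.hvplot(), 'anom_map.html')
```

The resulting file loses the live-server interactivity of pn.Row(...).show(), but pan/zoom and hover still work in the browser.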
I am working on some video processing tasks and have been using opencv-python 4.2.0 as my go-to library. At first there was a problem with displaying video frames using the imshow function - I would only see a small black window, but I thought there was something wrong with my logic. I tried reproducing the problem in its simplest form - loading and displaying a static image:
import cv2
frame = cv2.imread("path/to/some/image.png")
print(frame.shape)
cv2.imshow('test', frame)
The output:
>>> (600, 600, 3)
I have not had similar problems in this development environment before. I am developing under WSL (Ubuntu 16.04) and use Xming to display the program's window under Win10.
The image in the window is only updated when waitKey() is executed, so you have to use it:
import cv2
frame = cv2.imread("path/to/some/image.png")
print(frame.shape)
cv2.imshow('test', frame)
cv2.waitKey(1)
At least it resolves this problem on Linux Mint 19.3 (based on Ubuntu 18.04).
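A fuller pattern (a sketch, assuming a local display and a valid image path) keeps calling waitKey in a loop, so the window stays responsive and can be closed cleanly when 'q' is pressed:

```python
import cv2

frame = cv2.imread("path/to/some/image.png")
cv2.imshow('test', frame)

# waitKey() both pumps the GUI event loop (so the image actually gets
# drawn) and reports key presses; loop until 'q' is pressed.
while True:
    if cv2.waitKey(50) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
```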
I am trying to use OpenCV to capture video from my webcam. Each time I run the program, Python quits unexpectedly, and the console says [Finished in 0.3s with exit code -6].
Operating System - Catalina 10.15.3
Using sublime text 3
Python 3.8
import cv2
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
The problem is that Sublime's build systems can't handle GUI window creation like cv2.imshow(). I believe it has to do with how the build system is executed using the subprocess module. You'd run into the same problem if you were trying to display an image using Pillow or matplotlib, for example. Please note that you can do image processing just fine within Sublime, as long as you don't try to display the results.
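To illustrate that last point, here is a stdlib-only sketch (independent of Sublime itself) of one observable difference in such a child process: a script launched through subprocess with its output captured runs without a terminal attached, which is the kind of environment where interactive GUI loops tend to misbehave.

```python
import subprocess
import sys

# Run a child the way a build system does: with stdout captured by a pipe
# instead of attached to an interactive terminal.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # "False" - the child sees no terminal
```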
The easiest way around it is to just keep a Terminal window open and manually run your scripts from the command line after saving.
It would probably help to add some error handling to your code. For instance, after
cap = cv2.VideoCapture(0)
Try to put something like this:
if not cap.isOpened():
    print("Error: could not open camera")
    exit(1)
Then, after
ret, frame = cap.read()
try something like this:

if not ret:
    print("Error: frame not captured")
    break
At least this should give you some hints as to where the problem is.
Good luck
Andreas
I installed ROS melodic version in Ubuntu 18.04.
I'm running a rosbag in the background to mock the cameras' message rostopics.
I set the camera names in rosparams and iterated through it to capture each camera topics.
I'm using message_filters' ApproximateTimeSynchronizer to get time-synchronized data, as mentioned in the official documentation:
http://wiki.ros.org/message_filters
But most of the time the callback function passed to ApproximateTimeSynchronizer is not called, or is called with a delay. The code snippet I'm using is given below:
What am I doing wrong here?
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

def camera_callback(*args):
    pass  # Other logic comes here
rospy.init_node('my_listener', anonymous=True)
camera_object_data = []
for camera_name in rospy.get_param('/my/cameras'):
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/camera_info'.format(camera_name), CameraInfo))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/image_color_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/image_depth_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/points'.format(camera_name), PointCloud2))
topic_list = [filter_obj for filter_obj in camera_object_data]
ts = message_filters.ApproximateTimeSynchronizer(topic_list, 10, 1, allow_headerless=True)
ts.registerCallback(camera_callback)
rospy.spin()
Looking at your code, it seems correct. There may, however, be a problem with bad timestamps, and therefore with this synchronizer as well; see http://wiki.ros.org/message_filters/ApproximateTime for the algorithm's assumptions.
My recommendation is to write a corresponding node that publishes empty versions of these four msgs, all with the same timestamp. If it's still not working in this perfect scenario, there is an issue with the code above. If it works just fine, then you need to pay attention to the headers.
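To make the timestamp point concrete, here is a toy, stdlib-only illustration (not the actual ROS algorithm, which is cleverer about picking candidate sets): a group of messages, one per topic, can only be synchronized if all their stamps lie within the slop (1 second in the question's code) of each other.

```python
def within_slop(stamps, slop):
    """Toy check: can one message per topic, with these stamps, be grouped?"""
    return max(stamps) - min(stamps) <= slop

# Stamps only tens of milliseconds apart: the synchronizer can match them.
print(within_slop([100.00, 100.02, 100.05, 100.01], slop=1.0))  # True
# One topic's clock is 2.5 s off: no match, so the callback never fires.
print(within_slop([100.00, 102.50, 100.10, 100.20], slop=1.0))  # False
```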
Given that you have it as a bag file, you can step through the msgs on the command line and observe the timestamps as well. (Can also step within python).
$ rosbag play --pause recorded1.bag # step through msgs by pressing 's'
On time-noisy msgs with small payloads, I've just written a node to listen to all these msgs, and republish them all with the latest time found on any of them (for sync'd logging to csv). Not optimal, but it should reveal where the issue lies.
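The "publish empty versions of the four msgs, all at the same time" test node could look roughly like this (an untested sketch; topic names taken from the question, with a hypothetical camera name):

```python
import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

rospy.init_node('sync_test_publisher')
cam = 'camera1'  # hypothetical camera name
pubs = [
    rospy.Publisher('/{}/hd/camera_info'.format(cam), CameraInfo, queue_size=1),
    rospy.Publisher('/{}/hd/image_color_rect'.format(cam), Image, queue_size=1),
    rospy.Publisher('/{}/qhd/image_depth_rect'.format(cam), Image, queue_size=1),
    rospy.Publisher('/{}/qhd/points'.format(cam), PointCloud2, queue_size=1),
]
msgs = [CameraInfo(), Image(), Image(), PointCloud2()]
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    stamp = rospy.Time.now()  # identical stamp on all four messages
    for pub, msg in zip(pubs, msgs):
        msg.header.stamp = stamp
        pub.publish(msg)
    rate.sleep()
```

If the callback fires reliably against this node but not against the bag, the bag's timestamps are the place to look.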
I have set up Google Assistant with Raspberry Pi. I'd like to define a custom action, but it's not working. The Google Assistant recognizes the sentence, but does nothing. Here's a log. How do I fix it?
I've edited action.py to add my code:
import logging
import subprocess

class SwitchControl(object):
    """Control an RC socket."""

    COMMAND_ON = 'sudo /home/pi/rcswitch-pi/send 00111 3 1'
    COMMAND_OFF = 'sudo /home/pi/rcswitch-pi/send 00111 3 0'

    def __init__(self, say, toggle):
        self.say = say
        self.toggle = toggle

    def run(self, voice_command):
        try:
            if self.toggle == 'ON':
                self.say(_('Turning switch on.'))
                for i in range(10):
                    subprocess.call(SwitchControl.COMMAND_ON, shell=True)
            elif self.toggle == 'OFF':
                self.say(_('Turning switch off.'))
                for i in range(10):
                    subprocess.call(SwitchControl.COMMAND_OFF, shell=True)
        except (ValueError, subprocess.CalledProcessError):
            logging.exception("Error using codesend to toggle rc-socket.")
            self.say("Sorry, I didn't identify that command")
# =========================================
# Makers! Add your own voice commands here.
# =========================================
actor.add_keyword(_('pi power off'), PowerCommand(say, 'shutdown'))
actor.add_keyword(_('pi reboot'), PowerCommand(say, 'reboot'))
actor.add_keyword(_('switch on'), SwitchControl(say, 'ON'))
actor.add_keyword(_('switch off'), SwitchControl(say, 'OFF'))
return actor
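One detail worth noting in the snippet above: subprocess.call() only returns the command's exit code and never raises CalledProcessError, so that except clause can't catch a failing send command; subprocess.check_call() does raise. A small stdlib demonstration, using the shell builtin false in place of the real send command:

```python
import subprocess

# call() reports failure only through its return code; no exception is raised.
rc = subprocess.call('false', shell=True)
print(rc)  # non-zero exit code, but no exception

# check_call() raises CalledProcessError on a non-zero exit code.
try:
    subprocess.check_call('false', shell=True)
    raised = False
except subprocess.CalledProcessError:
    raised = True
print(raised)  # True
```

So if the goal is for the except clause to actually fire when codesend fails, check_call (or checking call's return code by hand) would be the way to go.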
OK, finally I managed to make it work :)
First thing you need to know: to make a local action, you need to use Cloud Speech.
Then I was stuck, because in my terminal when I started the Google Assistant I wasn't seeing:
[2017-07-26 09:25:20,672] INFO:main:ready...
Press the button on GPIO 23 then speak, or press Ctrl+C to quit...
I was only seeing START RECORDING.
So I grabbed an image of Pixel Raspbian for MagPi, and it was working with that distro. Then I put my old SD card with my Raspbian back in to retest, and tada, it was working!!!