Can I blink the onboard LED as a "status" indicator while waiting in umqtt's wait_msg() on a Raspberry Pi Pico W running MicroPython? - mqtt

I'm trying to keep the LED blinking while waiting in the "wait_msg()" function. Can I achieve this with uasyncio, _thread, or a modification of the main loop in the module? And how?
Many thanks.

Try polling with the non-blocking check_msg() instead of the blocking wait_msg(), toggling the LED on each pass:
while True:
    client.check_msg()
    led.toggle()
    time.sleep(0.1)
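If you want the uasyncio route the question mentions, here is a minimal sketch (assuming client is an already-connected umqtt.simple.MQTTClient with its callback and subscription set up; the intervals are arbitrary) that runs the blink and the polling as two cooperative tasks:
import uasyncio as asyncio
from machine import Pin

led = Pin("LED", Pin.OUT)  # onboard LED on the Pico W

async def blink():
    while True:
        led.toggle()
        await asyncio.sleep_ms(100)

async def poll(client):
    while True:
        client.check_msg()  # non-blocking; dispatches the callback if a message is pending
        await asyncio.sleep_ms(50)

async def main(client):
    asyncio.create_task(blink())
    await poll(client)

# asyncio.run(main(client))
The _thread route would also work (wait_msg() blocking in one thread, the blink loop in the other), but two cooperative tasks are usually easier to manage on MicroPython.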

Related

cv2.waitKey(0) not responding in VS Code SSH remote

I am trying to learn OpenCV on embedded Linux. Currently I am working on an NXP i.MX8M.
I use VS Code over remote SSH. The first goal is to display an image and destroy the window when the "q" key is pressed, as usual:
import numpy as np
import cv2

img = cv2.imread('L5.jpeg')
cv2.imshow('Window name', img)
key = cv2.waitKey(0)
print(key)
if key == ord('q'):
    print("pressed q")
    cv2.destroyAllWindows()
The imshow GUI window appears on the monitor connected to the i.MX8M, as intended.
But waitKey(0) simply doesn't respond. The window is there, yet the return value of cv2.waitKey(0) is always -1, no matter which key I press. I can only stop the program with Ctrl+Z.
I have learned that if I run the code on a local system, it works only when the GUI window is in focus. But over a remote SSH session, how can I put the HighGUI window in focus?
I have searched Google many times but still cannot find a solution. Can anyone help me? I suspect I may have missed some SSH configuration.
Many thanks!
Regards,
Cliff

Nao robot IMU data rates

I'm trying to stream data from the Nao's inertial unit in its trunk. However, the update rate is quite slow, about 1 Hz. Is there any way to improve it? For reference, I issued the following command using qicli to measure the rate:
qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"
In this example I retrieve the tilt angle of the trunk around the Y-axis (pitch).
To execute this command, I established an SSH connection to the Nao and timed it using the Linux time command. I also tried to force a faster read rate by issuing the above command in a loop with 5 milliseconds of sleep between iterations:
for i in {1..100}; do qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"; sleep 0.005; done
But even in this case, the data was read at a rate of about 1 Hz.
I tried it on Nao versions 5 and 6, connecting both over WiFi and link-local over an Ethernet cable.
This data is available every 10 ms, but each qicli call takes a long time to initialize its connection.
Try using the Python API instead: create a proxy once, then call getData in a loop; refer to the API documentation here.
As a side note, the best way to record or monitor data efficiently is to process it directly on the NAO. Connect using SSH, upload your program, and run it, or use Choregraphe to create and run it directly on the robot.
# edit: adding a simple script to be run directly on the NAO (untested)
import time
import naoqi

mem = naoqi.ALProxy("ALMemory", "localhost", 9559)
while True:
    val = mem.getData("Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value")
    print(val)
    time.sleep(0.01)  # the sensor value updates every 10 ms
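To verify the rate actually achieved, one way (a quick sketch reusing the mem proxy from the script above) is to time a batch of reads:
import time

N = 500
start = time.time()
for _ in range(N):
    mem.getData("Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value")
elapsed = time.time() - start
print("%.1f reads per second" % (N / elapsed))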

Issue with TracedPath in manim

I'm trying to use TracedPath to make a curve in manim (CE v0.10), which I then want to manipulate (move around, rotate, etc.). The problem is that the tracing continues once I start moving the curve, and I don't want it to. Does anyone know how to turn the tracing off? Any help would be greatly appreciated (this is the last issue I need to solve to finish my video). Here is some sample code:
class TracedPathProblem(Scene):
    def construct(self):
        dot = Dot(color=RED)
        trace = TracedPath(dot.get_center, stroke_color=RED)
        self.add(dot, trace)
        self.play(dot.animate.shift(RIGHT), run_time=2)
        path = trace.copy()
        self.play(path.animate.shift(2*UP + RIGHT))  # do not want tracing here
        self.wait()
Issue solved. Benjamin Hackl gave me a solution: path = trace.copy().clear_updaters()
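Applied to the sample above, that one change is all that is needed: clear_updaters() removes the updater that TracedPath attaches, so the copy stops following the dot:
path = trace.copy().clear_updaters()  # the copy no longer re-traces
self.play(path.animate.shift(2*UP + RIGHT))  # now moves without extending the path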

PEPPER (SoftBank Robotics): ALSpeechRecognition engine issue - how to restart it when it doesn't work?

During my tests on Pepper, I ran into difficulties sustaining a continuous collaborative dialogue.
In particular, after about 10 minutes, the ALSpeechRecognition engine seems to stop working.
In other words, Pepper's dialog panel remains empty and/or the robot does not understand my words, even though the same structure worked a few minutes earlier.
I tried to stop and restart it (i.e., the engine) from an SSH terminal using:
qicli call ALSpeechRecognition.pause 1
qicli call ALSpeechRecognition.pause 0
This should restart the engine according to the guidelines shown here, but it does not work.
Thank you so much guys.
Sincerely,
Giovanni
According to the tutorial, the speech recognition engine is started and stopped by subscribing to and unsubscribing from it.
The recommended way is to unsubscribe and then subscribe back. For me it also worked to change the speech recognition language and then change it back to the one you had previously.
Luis is right. To do so, just create a function like the one below and call it whenever the ActiveListening event comes back false from the ALSpeechRecognition module. Note: use the ALMemory module to get data from ALSpeechRecognition.
from naoqi import ALProxy

asr_service = ALProxy("ALSpeechRecognition", ip, port)
memory = ALProxy("ALMemory", ip, port)

def reset():
    asr_service.unsubscribe("ASR_Engine")
    asr_service.subscribe("ASR_Engine")

ALS = memory.getData("ALSpeechRecognition/ActiveListening")
if not ALS:
    reset()
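The language-toggle workaround mentioned above could look like this (a sketch; "French" stands in for any other installed language):
current = asr_service.getLanguage()
asr_service.setLanguage("French")  # switch to any other installed language
asr_service.setLanguage(current)   # and back to the original one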

Google Assistant SDK on Pi does nothing on custom action

I have set up Google Assistant on a Raspberry Pi. I'd like to define a custom action, but it's not working: the Assistant recognizes the sentence but does nothing. Here's a log. How do I fix it?
I've edited action.py to add my code:
class SwitchControl(object):
    """Control an RC socket."""

    COMMAND_ON = 'sudo /home/pi/rcswitch-pi/send 00111 3 1'
    COMMAND_OFF = 'sudo /home/pi/rcswitch-pi/send 00111 3 0'

    def __init__(self, say, toggle):
        self.say = say
        self.toggle = toggle

    def run(self, voice_command):
        try:
            if self.toggle == 'ON':
                self.say(_('Turning switch on.'))
                for i in range(10):
                    subprocess.call(SwitchControl.COMMAND_ON, shell=True)
            elif self.toggle == 'OFF':
                self.say(_('Turning switch off.'))
                for i in range(10):
                    subprocess.call(SwitchControl.COMMAND_OFF, shell=True)
        except (ValueError, subprocess.CalledProcessError):
            logging.exception("Error using codesend to toggle rc-socket.")
            self.say("Sorry, I didn't identify that command")

# =========================================
# Makers! Add your own voice commands here.
# =========================================
actor.add_keyword(_('pi power off'), PowerCommand(say, 'shutdown'))
actor.add_keyword(_('pi reboot'), PowerCommand(say, 'reboot'))
actor.add_keyword(_('switch on'), SwitchControl(say, 'ON'))
actor.add_keyword(_('switch off'), SwitchControl(say, 'OFF'))
return actor
OK, I finally managed to make it work :)
The first thing you need to know: to make local actions you need to use Cloud Speech.
Then I was stuck because, when I started the Google Assistant, my terminal wasn't showing:
[2017-07-26 09:25:20,672] INFO:main:ready...
Press the button on GPIO 23 then speak, or press Ctrl+C to quit...
I was only seeing START RECORDING.
So I grabbed the PIXEL Raspbian image from The MagPi, and it worked with that distribution. Then I put my old SD card with my Raspbian back in to retest, and tada, it was working!
