I'm trying to stream data from the Nao's inertial unit in its trunk. However, the update rate is quite slow, around 1 Hz. Is there any way to improve it? For reference, I issued the following command using qicli to measure the rates:
qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"
In this example I retrieve the tilt angle of the trunk around the Y-axis (pitch).
To execute this command, I established an SSH connection to the Nao and timed it using the Linux time command. I also tried to force a faster read rate by issuing the above command in a loop with 5 milliseconds of sleep between iterations:
for i in {1..100}; do qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"; sleep 0.005; done
But even in this case the data was read at a rate of about 1 Hz.
I tried it on Nao versions 5 and 6, and I connected both over WiFi and link-locally using an Ethernet cable.
This data is available every 10ms, but a qicli call takes a long time to init the connection.
Try using the API in Python: create a proxy, then call getData in a loop (refer to the API documentation).
As a side note, the best way to record or monitor data efficiently is to process it directly on the NAO. Connect using SSH, upload your program and run it, or use Choregraphe to create and run it directly on the robot.
# edit: adding simple script to be run directly on NAO (untested)
import time
import naoqi

# Connect to ALMemory on the local NAOqi instance.
mem = naoqi.ALProxy("ALMemory", "localhost", 9559)

while True:
    # The inertial values are refreshed every 10 ms, so poll at 100 Hz.
    val = mem.getData("Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value")
    print(val)
    time.sleep(0.01)
We are setting up a federated scenario with Server and Client on different physical machines.
On the server, we have used the Docker container to kickstart:
The above has been borrowed from the Kubernetes tutorial. We believe this creates a 'local executor' [Ref 1], which helps create a gRPC server [Ref 2].
Ref 1:
Ref 2:
Next, on client 1, we are calling tff.framework.RemoteExecutor, which connects to the gRPC server.
Our understanding based on the above is that the Remote Executor runs on the client and connects to the gRPC server.
Assuming the above is correct, how can we send a tff.tf_computation from the server to the client and print the output on the client side, to ensure the whole setup works well?
Your understanding is definitely correct.
If you construct an ExecutorFactory directly, as seems to be the case in the code above, passing it to tff.framework.set_default_context will install your remote stack as the default mechanism for executing computations in the TFF runtime. You should additionally be able to pass the appropriate channels to tff.backends.native.set_remote_execution_context to handle the remote executor construction and context installation if desired, but the way you are doing it certainly works, and allows for greater customization.
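For reference, a minimal sketch of that second option, assuming the tff.backends.native API mentioned above and a placeholder worker address:
import grpc
import tensorflow_federated as tff

# Placeholder address of the remote worker exposing the gRPC executor service.
channels = [grpc.insecure_channel('10.0.0.2:8000')]

# Builds the remote executor stack over these channels and installs it as the
# default context, so subsequent computations run against the remote worker.
tff.backends.native.set_remote_execution_context(channels)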
Once you have set this up, running an example end-to-end should be fairly simple. We will set up a computation which takes a set of federated integers, prints on the clients, and sums the integers up. Let:
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tf.int32)
def print_and_return(x):
  # We must use tf.print here, as this logic will be
  # serialized and run on the clients as TensorFlow.
  tf.print('hello world')
  return x

@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_and_sum(federated_arg):
  same_ints = tff.federated_map(print_and_return, federated_arg)
  return tff.federated_sum(same_ints)
Suppose we have N clients; we simply instantiate the set of federated integers, and invoke our computation.
federated_ints = [1] * N
total = print_and_sum(federated_ints)
assert total == N
This should cause the tf.prints defined above to run on the remote machine; as long as tf.print is directed to an output stream which you can monitor, you should be able to see it.
PS: you may note that the federated sum above is unnecessary; it certainly is. The same effect can be had by simply mapping the identity function with the serialized print.
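That is, a computation along these lines (reusing print_and_return from above) would already trigger the client-side prints:
@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def just_print(federated_arg):
  return tff.federated_map(print_and_return, federated_arg)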
I installed ROS melodic version in Ubuntu 18.04.
I'm running a rosbag in the background to mock camera messages on ROS topics.
I set the camera names in rosparams and iterate through them to capture each camera's topics.
I'm using message_filters' ApproximateTimeSynchronizer to get time-synchronized data, as described in the official documentation:
http://wiki.ros.org/message_filters
But most of the time the callback registered with ApproximateTimeSynchronizer is either not called or is called with a delay. The code snippet I'm using is given below:
What am I doing wrong here?
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

def camera_callback(*args):
    pass  # Other logic comes here

rospy.init_node('my_listener', anonymous=True)
camera_object_data = []
for camera_name in rospy.get_param('/my/cameras'):
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/camera_info'.format(camera_name), CameraInfo))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/image_color_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/image_depth_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/points'.format(camera_name), PointCloud2))

topic_list = [filter_obj for filter_obj in camera_object_data]
ts = message_filters.ApproximateTimeSynchronizer(
    topic_list, 10, 1, allow_headerless=True)
ts.registerCallback(camera_callback)
rospy.spin()
Looking at your code, it seems correct. There may, however, be a problem with bad timestamps, and therefore with this synchronizer as well; see http://wiki.ros.org/message_filters/ApproximateTime for the algorithm's assumptions.
My recommendation is to write a corresponding node that publishes empty versions of these four msgs, all at the same time (a sketch is given below). If it's still not working in this perfect scenario, there is an issue with the code above. If it is working just fine, then you need to pay attention to the headers.
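Something along these lines could serve as the test publisher (an untested sketch; the topic names are placeholders to adapt to yours):
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

rospy.init_node('sync_test_publisher')
pubs = [rospy.Publisher('/cam/hd/camera_info', CameraInfo, queue_size=1),
        rospy.Publisher('/cam/hd/image_color_rect', Image, queue_size=1),
        rospy.Publisher('/cam/qhd/image_depth_rect', Image, queue_size=1),
        rospy.Publisher('/cam/qhd/points', PointCloud2, queue_size=1)]
msgs = [CameraInfo(), Image(), Image(), PointCloud2()]

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    stamp = rospy.Time.now()
    for pub, msg in zip(pubs, msgs):
        msg.header.stamp = stamp  # identical stamps, so the sync should always fire
        pub.publish(msg)
    rate.sleep()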
Given that you have it as a bag file, you can step through the msgs on the command line and observe the timestamps as well. (Can also step within python).
$ rosbag play --pause recorded1.bag # step through msgs by pressing 's'
On time-noisy msgs with small payloads, I've previously just written a node that listens to all these msgs and republishes them all with the latest time found on any of them (for synced logging to CSV). Not optimal, but it should reveal where the issue lies; a rough sketch of such a relay follows.
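An untested sketch of that restamping relay (again with placeholder topic names):
import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

topics = [('/cam/hd/camera_info', CameraInfo),
          ('/cam/hd/image_color_rect', Image),
          ('/cam/qhd/image_depth_rect', Image),
          ('/cam/qhd/points', PointCloud2)]

rospy.init_node('restamp_relay')
latest = {}
pubs = {name: rospy.Publisher(name + '_restamped', cls, queue_size=1)
        for name, cls in topics}

def make_callback(name):
    def callback(msg):
        latest[name] = msg
    return callback

for name, cls in topics:
    rospy.Subscriber(name, cls, make_callback(name))

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    if len(latest) == len(topics):
        # Restamp every cached message with the newest stamp among them.
        stamp = max(m.header.stamp for m in latest.values())
        for name, msg in latest.items():
            msg.header.stamp = stamp
            pubs[name].publish(msg)
    rate.sleep()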
I'm fairly new to OpenTSDB, but I managed to set it up inside a Docker container and connect Grafana to it.
Now I'm looking for a way to keep track of its health. In particular, I would like to plot some of the metrics that come from the internal stats (e.g. tsd.rpc.received).
When I try to use them as a regular metric in the Graph panel of OpenTSDB I get a "java.lang.RuntimeException: Unexpected exception".
I know I could connect the HTTP API (/api/stats) to another tool and then send the metrics to CloudWatch or a similar app, but I was hoping for something that didn't involve adding more pieces to the solution.
In the documentation I found: "The Telnet style API also supports the "stats" command for fetching over CLI. These can easily be published right back into OpenTSDB at any interval you like."
Is this the recommended way to keep track of those internal metrics? Read from the stats api and then feed them back to OpenTSDB?
After looking at different alternatives, I found that the best way to get the internal stats is either to use the tcollector utility or to inject the output of the stats command back into OpenTSDB and visualize the data with Grafana.
Since in my particular case I don't want to install another component like tcollector, I'll feed the stats back to OpenTSDB as metrics on every node.
This is a small script I wrote to feed the stats back via the Telnet API:
#!/bin/bash
# Periodically read OpenTSDB's internal stats over the Telnet API and feed
# each line back into the local TSD (port 4242) as a regular "put" data point.
while true; do
    sleep 5
    STATSINPUT=$(echo "stats" | nc 0 4242 -w1)
    while IFS= read -r line
    do
        echo "Feed: $line"
        INPUT="put $line"
        echo "$INPUT" | nc 0 4242 -w0
    done < <(printf '%s\n' "$STATSINPUT")
done
I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), the instrument does apply it. Then I send the command to perform a measurement, but I'm not able to read that data, and I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I can no longer read the *IDN? output either.
The source meter is connected to the PC via a National Instruments GPIB-USB-HS adaptor.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
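For anyone scripting the same thing outside LabVIEW, here is a rough PyVISA sketch of this pattern; the GPIB address is a placeholder, and the TSP-style source commands are assumptions based on the 2600-series command set:
import pyvisa

rm = pyvisa.ResourceManager()
smu = rm.open_resource('GPIB0::26::INSTR')        # placeholder GPIB address

smu.write('smua.source.levelv = 1.0')             # set the source voltage
smu.write('smua.source.output = smua.OUTPUT_ON')  # enable the output

# print() pushes the measurement into the output buffer, which query() reads back.
reading = smu.query('print(smua.measure.i())')
print(reading)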
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query: you write the command *IDN? and then read the result. Some commands do not have any response to read. Here's a quick test to see if you should be reading from the instrument. The following code is in Python; I made up the GPIB command to set voltage.
sm = SourceMonitor()
# Prints out IDN
sm.query('*IDN?')
# Prints out current voltage (change this to your actual command)
sm.query('SOUR:VOLT?')
# Set a new voltage
sm.write('SOUR:VOLT 1V')
# Read the new voltage
sm.query('SOUR:VOLT?')
Note that question-marked GPIB commands and query() are used when you expect to get a response from the instrument; the instrument won't give a response to a plain write command. A query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to perform the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments have the following common commands:
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
Add a question mark ? to the end of the GPIB command used to set the voltage
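Continuing the pseudo-interface from the answer above, those verification queries might look like:
sm.write('SOUR:VOLT 1V')      # made-up set command, as above
print(sm.query('*OPC?'))      # returns '1' once the operation has completed
print(sm.query('SYST:ERR?'))  # returns the oldest entry from the error queue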
Say I want to use FTP in Python using the ftplib. I begin with this:
from ftplib import FTP
ftp = FTP('10.10.10.151')
If the FTP server is not online, however, it will hang right there indefinitely. The only thing that can kick it out is a keyboard interrupt as far as I know. I've tried this:
ftp.connect('10.10.10.151', 21, 5)
With the five being a five-second timeout. But the problem here is that I don't know of any way to use that line without first assigning ftp something, and if the server is offline, the "ftp =" line will hang. So what use is ftp.connect()'s timeout parameter?
Does anybody know a workaround or anything? Is there a way to time out the "ftp = FTP(xxx)" command that I haven't found? Thanks.
I'm using Python 2.7 on Linux Mint.
Your call to connect() is redundant, since the FTP() documentation states:
When host is given, the method call connect(host) is made.
Also, since Python 2.6, FTP() does have a timeout parameter:
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if it is not specified, the global default timeout setting will be used).
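So you can pass the timeout straight to the constructor and catch the failure; a minimal sketch using the host from the question:
import socket
from ftplib import FTP

try:
    # Giving host and timeout here makes FTP() call connect() itself,
    # with the 5-second timeout applied to the connection attempt.
    ftp = FTP('10.10.10.151', timeout=5)
except socket.error as e:
    print('Could not connect: %s' % e)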