(pydrake) How do I trace an input port back to its connected output port? - drake

I am implementing a small monitoring system to compute the error between (actual, desired) for a controller (and also to record them for quick and simple analysis). In some of my code, I've added my Systems to my DiagramBuilder and connected everything using Connect().
I have a controller system that takes the desired values as input and produces the actual values as output.
Rather than trying to remember which output port each input port was connected to, I'd rather just trace it back.
How do I do that?

From a quick perusal, there are both DiagramBuilder.connection_map() and Diagram.connection_map().
The following code seems to work as of v1.11.0:
def trace_to_output(diagram_or_builder, input_port):
    system = input_port.get_system()
    input_locator = (system, input_port.get_index())
    connection_map = diagram_or_builder.connection_map()
    output_system, output_index = connection_map[input_locator]
    output_port = output_system.get_output_port(output_index)
    return output_port
EDIT: I had dict_inverse() in there incorrectly. Fixed.
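A hypothetical usage sketch (the Gain and Adder systems and the port indices below are illustrative assumptions, not part of the original question):
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import Adder, Gain

builder = DiagramBuilder()
source = builder.AddSystem(Gain(k=2.0, size=1))
sink = builder.AddSystem(Adder(2, 1))  # 2 inputs, each of size 1
builder.Connect(source.get_output_port(0), sink.get_input_port(0))

# Trace the sink's first input back to the output port feeding it.
traced = trace_to_output(builder, sink.get_input_port(0))
print(traced.get_name())  # should name the Gain system's output port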

Related

In PyTorch, how do I convert cuda()-related code into a CPU version?

I have some existing PyTorch code that uses cuda() as below, where net is a MainModel.KitModel object:
net = torch.load(model_path)
net.cuda()
and
im = cv2.imread(image_path)
im = Variable(torch.from_numpy(im).unsqueeze(0).float().cuda())
I want to test the code on a machine without any GPU, so I want to convert the CUDA code into a CPU version. I looked at some relevant posts regarding the CPU/GPU switch in PyTorch, but they relate to the usage of device and thus don't apply to my case.
As pointed out by kHarshit in his comment, you can simply replace the .cuda() call with .cpu():
net.cpu()
# ...
im = torch.from_numpy(im).unsqueeze(0).float().cpu()
However, this requires changing the code in multiple places every time you want to move from GPU to CPU and vice versa.
To alleviate this difficulty, PyTorch has a more "general" method, .to().
You may have a device variable defining where you want PyTorch to run; this device can also be the CPU (!).
For instance:
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
Once you have determined, in one place in your code, where you want/can run, simply use .to() to send your model/variables there:
net.to(device)
# ...
im = torch.from_numpy(im).unsqueeze(0).float().to(device)
BTW,
You can use .to() to control the data type (.float()) as well:
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)
PS,
Note that the Variable API has been deprecated and is no longer required.
Also, if the checkpoint was saved on a GPU machine, you can load it directly onto the CPU with map_location:
net = torch.load(model_path, map_location=torch.device('cpu'))
PyTorch docs: https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-on-cpu-load-on-gpu
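Putting the pieces together, a minimal end-to-end sketch (model_path, image_path, and the overall flow are carried over from the question; the inference call at the end is an assumption):
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = torch.load(model_path, map_location=device)  # loads safely on CPU-only machines
net.to(device)
net.eval()

im = cv2.imread(image_path)
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)

with torch.no_grad():
    out = net(im)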

Rospy message_filter ApproximateTimeSynchronizer issue

I installed ROS Melodic on Ubuntu 18.04.
I'm running a rosbag in the background to mock the camera message rostopics.
I set the camera names in rosparams and iterated through them to capture each camera's topics.
I'm using message_filters ApproximateTimeSynchronizer to get time-synchronized data, as described in the official documentation,
http://wiki.ros.org/message_filters
But most of the time the callback function registered with ApproximateTimeSynchronizer is not being called, or is delayed. The code snippet I'm using is given below:
What am I doing wrong here?
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

def camera_callback(*args):
    pass  # Other logic comes here

rospy.init_node('my_listener', anonymous=True)
camera_object_data = []
for camera_name in rospy.get_param('/my/cameras'):
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/camera_info'.format(camera_name), CameraInfo))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/image_color_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/image_depth_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/points'.format(camera_name), PointCloud2))

topic_list = [filter_obj for filter_obj in camera_object_data]
ts = message_filters.ApproximateTimeSynchronizer(topic_list, 10, 1, allow_headerless=True)
ts.registerCallback(camera_callback)
rospy.spin()
Looking at your code, it seems correct. There may, however, be trouble with bad timestamps, and hence with this synchronizer as well; see http://wiki.ros.org/message_filters/ApproximateTime for the algorithm's assumptions.
My recommendation is to write a corresponding node that publishes empty versions of these four msgs all at the same time. If it's still not working in this perfect scenario, there is an issue with the code above. If it is working just fine, then you need to pay attention to the headers.
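A minimal sketch of such a test publisher (the camera name cam1, the 10 Hz rate, and the queue sizes are assumptions; adjust the topics to match your rosparam list):
import rospy
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

rospy.init_node('sync_test_publisher')
pubs = [
    rospy.Publisher('/cam1/hd/camera_info', CameraInfo, queue_size=1),
    rospy.Publisher('/cam1/hd/image_color_rect', Image, queue_size=1),
    rospy.Publisher('/cam1/qhd/image_depth_rect', Image, queue_size=1),
    rospy.Publisher('/cam1/qhd/points', PointCloud2, queue_size=1),
]
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    stamp = rospy.Time.now()
    msgs = [CameraInfo(), Image(), Image(), PointCloud2()]
    for pub, msg in zip(pubs, msgs):
        msg.header.stamp = stamp  # identical stamps: the "perfect" case
        pub.publish(msg)
    rate.sleep()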
Given that you have it as a bag file, you can step through the msgs on the command line and observe the timestamps as well. (Can also step within python).
$ rosbag play --pause recorded1.bag # step through msgs by pressing 's'
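To step through the bag from Python instead and compare stamps, something like this should work (the bag name and topics are assumptions):
import rosbag

bag = rosbag.Bag('recorded1.bag')
for topic, msg, t in bag.read_messages(topics=['/cam1/hd/camera_info',
                                               '/cam1/hd/image_color_rect']):
    # Compare the bag receive time with the message's own header stamp
    print(topic, t.to_sec(), msg.header.stamp.to_sec())
bag.close()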
On time-noisy msgs with small payloads, I've just written a node to listen to all these msgs, and republish them all with the latest time found on any of them (for sync'd logging to csv). Not optimal, but it should reveal where the issue lies.

LabVIEW and Keithley 2635A - Unable to read data

I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), the instrument does so. I then send the command to perform a measurement, but I'm not able to read that data; I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I can not read the *IDN? output either anymore.
The source meter is connected to the PC via a National Instruments GPIB-USB-HS adapter.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query. That is, you will write the command *IDN? and read the result. Some commands do not have any response to read. Here's a quick test to see if you should be reading from the instrument. The following code is in Python; I made up the GPIB command to set voltage.
sm = SourceMonitor()
# Prints out IDN
sm.query('*IDN?')
# Prints out current voltage (change this to your actual command)
sm.query('SOUR:VOLT?')
# Set a new voltage
sm.write('SOUR:VOLT 1V')
# Read the new voltage
sm.query('SOUR:VOLT?')
Note that question-marked GPIB commands and query are used when you expect to get a response from the instrument. The instrument won't give a response for a write command. A query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to do the write and read separately.
If you need verification that the machine received your instruction and acted on it, most instruments have the following common commands:
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
Add a question mark ? to the end of the GPIB command used to set the voltage
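For reference, here is a pyvisa-based sketch of the same write/read split, combined with the print() trick from the accepted answer above (the GPIB address is an assumption):
import pyvisa

rm = pyvisa.ResourceManager()
sm = rm.open_resource('GPIB0::26::INSTR')   # hypothetical GPIB address

print(sm.query('*IDN?'))                    # query = write + read in one call

# TSP instruments like the 2635A only return data that is print()ed:
sm.write('print(smua.measure.v())')         # write the measurement command...
print(sm.read())                            # ...then read the printed result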

trying to read from stdout with NONBLOCKING set using winARM newlib lpc

I want read() on stdout to be non-blocking. I was using the newlib-lpc library in WinARM to do this, but even though it is set as non-blocking, the code stops at read() every time and waits for a character to be received.
Here is my read line:
read(fileno(stdout), &inchar, 1);
Here is the code to set stdout to non-blocking using newlib-lpc
blocking=BLOCKING_IO_NO;
ioctl( fileno(stdout), BLOCKING_SETUP, &blocking);
When that didn't work, I decided to forget using the library and do it myself with this line
fcntl(fileno(stdout), F_SETFL,fcntl(fileno(stdout), F_GETFL) |O_NONBLOCK);
But that didn't work either.
Could anyone give me some advice?
Thanks!
edit:
Someone asked if I really meant stdout. Yes, I am resurrecting some old code, and the stdout has been set as the socket for UART communication between my board and the computer.
sp.baud = 115200uL;
sp.length = UART_WORD_LEN_8;
sp.parity = UART_PARITY_NONE;
sp.stop = UART_STOP_BITS_1;
ioctl( fileno(stdout), UART_SETUP, &sp);
irq.FIQ = 0;
irq.pri = (INT_PRIORITY)10;
ioctl( fileno(stdout), INTERRUPT_SETUP, &irq);
edit2: They used stdout this way so that printf works. When it tries to print to stdout, it gets redirected to the UART serial stream.
edit3: The error I get when calling
ioctl( fileno(stdout), BLOCKING_SETUP, &blocking);
is errno 88, "Function not implemented".

Conditional OCR rotation on the image or Page in KOFAX

We have two sources of input to create a batch: the first is folder import and the second is email import.
I need to add a condition where, if the source of the image is email import, it should not rotate the image, and likewise, if the source is folder import, it should rotate the image.
I have added a script for this in KTM.
It shows the proper message with the source of the image, but it does not stop the rotation of the image.
Check the script below for reference.
Public Function setRotationRule(ByVal pXDoc As CASCADELib.CscXDocument) As String
   Dim i As Integer
   Dim FullPath As String
   Dim PathArry() As String
   Dim xfolder As CscXFolder
   Set xfolder = pXDoc.ParentFolder
   While Not xfolder.IsRootFolder
      Set xfolder = xfolder.ParentFolder
   Wend
   'Added for KTM script testing
   FullPath = "F:\Emailmport\dilipnikam#gmail.com_09-01-2014_10-02-37\dfdsg.pdf"
   If xfolder.XValues.ItemExists("AC_FIELD_OriginalFileName") Then
      FullPath = xfolder.XValues.ItemByName("AC_FIELD_OriginalFileName").Value
   End If
   PathArry() = Split(FullPath, "\")
   MsgBox(PathArry(1))
   If Not PathArry(1) = "EmailImport" Then
      For i = 0 To pXDoc.CDoc.Pages.Count - 1
         pXDoc.CDoc.Pages(i).Rotation = Csc_RT_NoRotation
      Next i
   End If
End Function
The KTM Scripting Help has a misleading topic named "Dynamically Suppress Orientation Detection for Full Page OCR" where it shows setting Csc_RT_NoRotation from the Document_AfterClassifyXDoc event.
The reason I think this is misleading is because rotation may already have occurred before that event and thus setting the property has no effect. This can happen if layout classification has run, or if OCR has run (which can be triggered by content classification, or if any project-level locators need OCR). The sample in that topic does suggest that it is only for use when classifiers are not used, but it could be explained better.
The code you've shown would be best called from the event Document_BeforeProcessXDoc. This will run before the entire classify phase (including project-level locators), ensuring that rotation could not have already occurred.
Of course, also make sure this isn't because of a typo or anything else preventing the code from actually executing, as mentioned in the comments.
