I'm currently trying to make the Connector device work with ROS. My first step was to take one of the robots from the "connector.wbt" world, place it in a new world, and switch the robot's controller argument to "ros", as one can see below.
When I run the simulation, all ROS services are advertised as expected. In the next step I enable the Connector using its corresponding //presence_sensor/enable service, where an integer (the presence check time step) needs to be specified, as shown below:
Now the topic is enabled; however, no messages are published, while I'm expecting messages of type "webots_ros/Int8Stamped", which should give me zeros in the absence of a suitable connector.
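In code, what I am doing roughly corresponds to the following minimal sketch (not my exact setup; the robot and device names and the "/presence" topic name are placeholders, the real names come from rosservice list / rostopic list):

#!/usr/bin/env python
# Minimal sketch: enable the Connector's presence sensor via the webots_ros
# "set_int" service and subscribe to its output. ROBOT/DEVICE and the
# "/presence" topic name are assumptions; check the names actually advertised
# in your run with `rosservice list` and `rostopic list`.
import rospy
from webots_ros.srv import set_int
from webots_ros.msg import Int8Stamped

ROBOT = 'my_robot'    # the robot's unique name as advertised by webots_ros
DEVICE = 'connector'  # the Connector device name

def presence_callback(msg):
    # 0 is expected while no suitable connector is present
    rospy.loginfo('presence = %d', msg.data)

if __name__ == '__main__':
    rospy.init_node('connector_presence_listener')
    enable_srv = '/%s/%s/presence_sensor/enable' % (ROBOT, DEVICE)
    rospy.wait_for_service(enable_srv)
    enable = rospy.ServiceProxy(enable_srv, set_int)
    enable(32)  # presence check time step in milliseconds
    rospy.Subscriber('/%s/%s/presence' % (ROBOT, DEVICE), Int8Stamped, presence_callback)
    rospy.spin()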
Does anyone have an idea whether the problem lies on my side? Does the Connector ROS interface work properly for you if you recreate the above example?
You are completely right, there was indeed a problem with the ros-connector interface.
This has been fixed just now here:
https://github.com/omichel/webots/pull/672
This fix will be included in the nightly build of today (R2019b revision 1, available from tomorrow morning) that you can find here:
https://github.com/omichel/webots/releases
Let us know if there are still problems.
I am trying to implement logging of connected users in my VerneMQ client using Erlang. From the documentation, I found that this could be bad due to the scalability of the client and the assumption that there might be a lot of clients connecting and disconnecting. This is not my case: I will just have a bunch of clients but a lot of messages. Anyway, to my question: is it possible to change the log file when using error_logger? Or should I use a different module for logging? The log file can be in any location if it has to, but I need it separated from VerneMQ's console.log. A follow-up question would be: can I somehow get a rolling (floating) window on the logs? I don't need to keep logs from the previous year and I don't want to clean them up manually every day or week.
Thanks for any responses
From OTP 21 on, you should use logger instead of error_logger, although the error_logger API is kept for compatibility (it just uses logger under the hood).
With logger, which you can set up via the system configuration, you can use file handlers such as logger_std_h (check the example configurations in the documentation).
In logger_std_h you can set file rotation.
Is there any recommended way to inspect/plot the numeric values that are being sent through the ports between Drake systems in real time (something similar to rqt_plot in ROS)? Apart from the SignalLogger, or writing and wiring custom individual plotting systems, is there any method to access the port values internally?
There's nothing as nice as rqt_plot as far as I know.
If you are able to alter your Diagram before calling DiagramBuilder::Build, you could add an LcmScopeSystem onto any vector-valued output port and then the port's contents will be transmitted on an LCM channel. You can add multiple scopes, but you currently have to add them one by one, ahead of time.
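As a rough illustration, a pydrake sketch of adding a scope might look like the following (assuming your Drake build exposes LcmScopeSystem in pydrake.systems.lcm; the ConstantVectorSource here merely stands in for whatever system owns the port you want to watch):

import numpy as np
from pydrake.lcm import DrakeLcm
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.lcm import LcmScopeSystem
from pydrake.systems.primitives import ConstantVectorSource

builder = DiagramBuilder()
# Stand-in for the system whose vector-valued output port you want to plot.
source = builder.AddSystem(ConstantVectorSource(np.array([1.0, 2.0])))
lcm = DrakeLcm()
# Publish the port's value on the "SCOPE_EXAMPLE" LCM channel at 100 Hz.
LcmScopeSystem.AddToBuilder(builder, lcm, source.get_output_port(0), "SCOPE_EXAMPLE", 0.01)
diagram = builder.Build()
Simulator(diagram).AdvanceTo(1.0)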
Once the data is onto an LCM channel, then you could use the provided drake-lcm-spy program which has the ability to show (very rudimentary) live plots:
cd drake
bazel build //lcmtypes:drake-lcm-spy
bazel-bin/lcmtypes/drake-lcm-spy &
Also tangentially related would be https://github.com/RobotLocomotion/drake/issues/5857, though that is not on any near-term roadmap.
As the title says, what's the correct way to configure an IoT Edge module to report data to Remote Monitoring?
I have a custom module running on an IoT Edge device that is working correctly (I can check that it is working properly by looking at the module's docker logs), but it's not transmitting anything to the Remote Monitoring dashboard. The device is listed among the available devices in Azure Remote Monitoring, but it shows as offline. I suppose this is because the MessageSchema and MessageTemplate are not configured. I can't find any specific documentation about this topic; can anyone point me in the right direction?
Are you asking about the original V1 version of the remote monitoring solution, or the newer V2 version? If it's the original version, you would need to, at least once, send a DeviceInfo structure (https://learn.microsoft.com/en-us/azure/iot-suite/iot-suite-v1-remote-monitoring-device-info#device-metadata) to the IoTHub associated with the solution. I haven't tried it yet, but it should work for the edge device (I don't think it would have an issue with the module concept). If it's the V2 version, I would need to investigate further.
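As an illustration only, a rough sketch of sending such a DeviceInfo message upstream from a custom module with the Python azure-iot-device SDK could look like the snippet below. The field names are taken from the linked V1 documentation and should be verified there, the device id is a placeholder, and the "output1" output is an assumption that must be routed to $upstream in your deployment manifest (your actual module may of course be written in another language):

import json
from azure.iot.device import IoTHubModuleClient, Message

# DeviceInfo payload as described in the V1 remote monitoring docs;
# double-check the exact fields against the documentation linked above.
device_info = {
    "ObjectType": "DeviceInfo",
    "Version": "1.0",
    "IsSimulatedDevice": False,
    "DeviceProperties": {
        "DeviceID": "my-edge-device",  # must match the IoT Hub device id
        "HubEnabledState": True
    },
    "Commands": []
}

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()
# "output1" must be routed to $upstream in the edge deployment manifest.
client.send_message_to_output(Message(json.dumps(device_info)), "output1")
client.disconnect()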
We are having some challenges around using Azure Portal to manage IoT edge devices in development. I am posting in case someone can confirm these are known issues, or supply possible workarounds.
The first inconsistency is that when we have no clients connected, the portal shows 1 in the connected client count field, even though at the same time it warns that the device is disconnected from the hub.
The second (and more annoying) inconsistency is that the modules that are running don’t match the modules that are showing up on the hub. When we run the docker ps command we get what we think is the correct situation, while at the same time, the hub shows that we have modules that are “running” that aren’t there at all, and one that is “pending deployment” that we can see is running.
We've also seen that the EdgeAgent container is pumping out a bunch of warnings saying: [WRN] - Building state for computing patch failed with error 'Could not find type in JObject. type Newtonsoft.Json.JsonSerializationException.' We wonder if this error has anything to do with it; maybe it's trying to send status updates back to the hub but falling over somewhere along the line?
We would be grateful for any comments or updates on portal status. I have some images supporting my message, but can't upload them because I am a Stack Overflow newbie.
Thanks for any guidance as to whether these are known issues with the portal at this stage, or if there is something we are doing incorrectly.
Dave
I'm using the following code in an application that automatically detects, when it is run, whether it is running as a service or as a desktop application and behaves appropriately for each situation.
JclAppInst.JclAppInstances('<application descriptive label>').CheckSingleInstance;
The code is embedded in an initialization block at the bottom of a unit that contains code responsible for acknowledging the service status and displaying key desktop information, so I know this unit is included in both modes of operation. The CheckSingleInstance call works perfectly in desktop mode, making sure that only one instance is run, but it doesn't seem to be able to detect whether the application is currently running as a service.
Unfortunately I can't work out why JclAppInstances would be affected by the difference. Both instances run from the same folder but operate as different users (i.e. the service user differs from the desktop user), but my understanding is that different users should still work.
Does anyone know whether it is possible to do this with JclAppInstances and, if so, what my problem is?
It seems quite clear that the JclAppInstances class doesn't support the required functionality; instead of a solution based on this Jedi component, refer to the following Stack Overflow answer:
one instance of app per Computer, how?