What is the standard workflow for using Drake with a real robot?

I would like to use Drake to control a real robot (Franka Panda or KUKA iiwa). For simulation, the standard workflow would be to build a diagram first and then use the Simulator to simulate it. However, for use on a real robot, I haven't found an example of how to run the built diagram. A close example would be ManipulationStationHardwareInterface, but I haven't found code showing how to actually use this interface. Do I need to manually generate the context and run the diagram in a while loop? Or should I still use the Simulator and its AdvanceTo method? If I use the Simulator, is there additional setup needed for synchronization? Also, how do I control the loop frequency of the running diagram?
Thank you in advance!

The ManipulationStationHardwareInterface is indeed a reasonable example to look at. We have an updated version of it coming to Drake soon, too.
The basic steps are:
Run the robot's driver as a different process. Some of the drivers we use are public and are hosted here: https://manipulation.csail.mit.edu/station.html .
Send/receive messages from your main controller thread to/from your driver. The LcmPublisherSystem and LcmSubscriberSystem are right in Drake; the ROS/ROS 2 equivalents are here: https://github.com/RobotLocomotion/drake-ros . (The ManipulationStationHardwareInterface plus the manipulation_station_simulation file in Drake show an example of splitting a Diagram for control + simulation into two processes using this message passing; see the sketch after these steps.)
Timing: in the simplest case (typically ok for hardware) all of the executables are running with simulator.set_target_realtime_rate(1.0) and then simulator.AdvanceTo(big number). In other words, they all synchronize their clocks to the cpu clock, and don't try to synchronize explicitly to a clock passed via message passing. That sort of synchronization is possible, too, but I recommend the simpler version first. In this simple version, each system (e.g. your controller) is set to update with some periodic update -- say 100Hz -- and will run its computation using the most recently received subscribed messages and publish at that rate.
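To make that concrete, here is a rough C++ sketch of that pattern for an iiwa, assuming the iiwa LCM driver from the link above is running in another process. The channel names, message types, publish period, and controller wiring are illustrative choices, not something Drake prescribes:

#include <limits>
#include "drake/lcmt_iiwa_command.hpp"
#include "drake/lcmt_iiwa_status.hpp"
#include "drake/systems/analysis/simulator.h"
#include "drake/systems/framework/diagram_builder.h"
#include "drake/systems/lcm/lcm_interface_system.h"
#include "drake/systems/lcm/lcm_publisher_system.h"
#include "drake/systems/lcm/lcm_subscriber_system.h"

int main() {
  using namespace drake::systems;
  DiagramBuilder<double> builder;
  // The LcmInterfaceSystem pumps LCM traffic while the Simulator runs.
  auto* lcm = builder.AddSystem<lcm::LcmInterfaceSystem>();
  // Robot status in from the driver; commands out to the driver.
  auto* status_sub = builder.AddSystem(
      lcm::LcmSubscriberSystem::Make<drake::lcmt_iiwa_status>("IIWA_STATUS", lcm));
  auto* command_pub = builder.AddSystem(
      lcm::LcmPublisherSystem::Make<drake::lcmt_iiwa_command>(
          "IIWA_COMMAND", lcm, 0.005 /* publish period: 200 Hz */));
  // ... add your controller system here and wire it between status_sub
  //     and command_pub ...
  auto diagram = builder.Build();

  Simulator<double> simulator(*diagram);
  simulator.set_target_realtime_rate(1.0);  // pace the diagram against the wall clock
  simulator.AdvanceTo(1e9);                 // effectively "run forever"
  return 0;
}

Note there is no explicit while loop or sleep on your side: the periodic publish event (200 Hz in this sketch) is what sets the loop frequency, and the realtime rate of 1.0 keeps it synchronized to the CPU clock, as described above.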

Related

How to run a robot's controller in multiple processes or threads in Webots?

I want to have a controller that somehow runs 3 processes to run the robot's code.
I am trying to simulate a humanoid soccer robot in Webots. To run our robot's code, we run 3 processes: one for the servomotors' power management, another for image processing and communications, and the last one for motion control.
Now I want a controller that lets me simulate this setup, or at least something similar. Does anyone have any idea how I can do this?
Good news: the Webots API is thread safe :-)
Generally speaking, I would not recommend using multiple threads, because programming with threads is a big source of issues. So, if you have any possibility of merging your threads into a single-threaded application, that's the way to go!
If you would like to go in this direction anyway, the best solution is certainly to create a single controller running your 3 threads and to synchronize them with the main thread (thread 0).
The tricky part is dealing correctly with time management and the simulation steps. A solution could be to set the Robot.synchronization field to FALSE and to use the main thread to call the wb_robot_step(duration) function every duration milliseconds of real time.
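As a rough C++ illustration of that single-controller layout (the worker bodies are placeholders, and the real-time pacing with synchronization set to FALSE described above is not shown; the main thread keeps ownership of the stepping):

#include <atomic>
#include <thread>
#include <webots/Robot.hpp>

int main() {
  webots::Robot robot;
  const int timeStep = static_cast<int>(robot.getBasicTimeStep());
  std::atomic<bool> running{true};

  // Worker threads stand in for image processing / communications and
  // motion planning; they only compute and leave results for the main loop.
  std::thread vision([&] { while (running) { /* process the latest camera frame */ } });
  std::thread comms([&] { while (running) { /* network I/O */ } });

  // The main thread (thread 0) owns the simulation stepping.
  while (robot.step(timeStep) != -1) {
    // Read sensors and write actuator commands using the workers' results.
  }

  running = false;
  vision.join();
  comms.join();
  return 0;
}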

Lua code runs properly on my advanced computer but doesn't run on the monitor

I run a successful Minecraft Tekkit modded server with ComputerCraft on it.
I'm fairly new to Lua and only know the basics. I'm trying to make a menu with pages to display the banned-items list and the rules list. I've made a program with arrows that's optimized for advanced computers and monitors.
The code runs properly on my advanced computer but doesn't run on the monitor, and when it does show, clicking the arrows doesn't work either.
I just started using this site, so I'm not sure what else to include; if you need any info, please ask for it. :)
The code: http://pastebin.com/gVtPeBCE
By the way, I already tried using Mon.write and Mon = peripheral.wrap("top").
For those who don't have tekkit here is a computercraft emulator: https://goo.gl/J0dPq0
I'm sorry to say I haven't read through all of your code, but judging by your description, it's likely one of three issues (not counting incorrect syntax as a possibility).
Note: Your question asks exclusively about the program's ability to run on a monitor, while the emulator you link to only provides the desktop ComputerCraft computers.
Peripheral
Although you already stated:
By the way I already tried using Mon.write and Mon = peripheral.wrap("top")
I would like to clarify that you can, as a way to simplify the code transition, set the term variable to the monitor peripheral's function table. For example: term = peripheral.wrap(string_side).
Note: When you use this method, you shouldn't execute the program with the command:
> monitor side program.
You should instead run it as a normal program with no special treatment, i.e.:
> program
Incorrect Mouse Event Detection
Simply put, when using a monitor, you're not supposed to pull for a mouse_click event. You have to pull for a monitor_touch event instead.
while true do
  -- monitors fire "monitor_touch", not "mouse_click"
  local event, side, x, y = os.pullEvent()
  if event == "monitor_touch" then
    print("Monitor '" .. side .. "' has been pressed at " .. x .. ", " .. y .. "!")
  end
end
Monitor Size
This simply means that the program you're trying to execute takes up too much space and is therefore unusable when displayed on a monitor of that size.
Suggestion: Either update your code for the monitor size or build the monitor to fit the program.
Please remember that these ideas might not answer your question, as the code you provided is fairly large and I haven't been able to find the time to experiment with it. These are only general suggestions.
If I had to guess, it's because term is short for "terminal" and automatically works with computers, so if you set term to be the monitor at the top of the file, it should work correctly.
term = peripheral.wrap("SIDE OF MONITOR")
Put that at the top of your code and it should work. That's what I think it is after taking a look at your code (also, it's not that long of a code sample...).

How would someone create a preemptive scheduler for the Lua VM?

I've been looking at Lua and lvm.c. I'd very much like to implement an interface that lets me control the VM interpreter state.
Cooperative multitasking from within Lua would not work for me (user-contributed code).
The debug hook gets me only about 50% of the way there (instruction execution limits), but it raises an exception which just crashes the running Lua code, and I need to be able to tweak it even further.
I want to create a system where tens of thousands of Lua user scripts are running. Individual threads would not work, and execution limits would cause headaches for beginning developers. I'm going to control execution speeds too, but ultimately
while true do
end
will execute forever, and I really don't mind that it does.
Any ideas, help or other implementations that I could look at?
EDIT: This is not about sandboxing; pretend I'm an expert in that field for this conversation.
EDIT: I do not want to use a coroutine-based controller written in Lua and run inside the VM.
EDIT: I want to run one thread and manage a large number of user-contributed Lua scripts; an external process-level control mechanism would not scale at all.
You can search for Lua Sandbox implementations; for example, this wiki page and SO question provide some pointers. Note that most of the effort in sandboxing is focused on not allowing you to execute bad code, but not necessarily on preventing infinite loops. For better control you may need to combine Lua sandboxing with something like LXC or cpulimit. (not relevant based on the comments)
If you are looking for something Lua-based, lightweight, but not necessarily 100% foolproof, then you can try running your client code in a separate coroutine and setting a debug hook on that coroutine that will be triggered every N-th line. In that hook you can check whether the process you are running has exceeded its quota. You also need to take care of newly started coroutines, as those need to have their own hooks set (you either need to disable coroutine.create/wrap or replace them with something that sets the debug hook you need).
The code in this case may look like:
local coro = coroutine.create(client_func)
debug.sethook(coro, debug_hook, "l", 1000) -- fire on every line, and also every 1000 VM instructions
It's not foolproof, because it may block on some IO operation and the debug hook will not help there.
[Edit based on updated question and comments]
Between "no lua code coroutine based controller" and "no external process control mechanism" I don't think you are left with much choice. It may be that your only option is to run one VM per user script and somehow give ticks to those VMs (there was a recent question on SO on this, but I can't find it). Before going this route, I would still try to do this with coroutines (which should scale to tens of thousands easily; Tir claims supporting 1M active users with coroutine-based architecture).
The mechanism would roughly look like this: you install the debug hook as I showed above, and from that hook you yield back to your controller, which then decides what other coroutine (user script) to resume. I have this very mechanism working in the Lua debugger I've been developing (although it only does it for one client script). This doesn't protect you from IO calls that can block, and for that you may still need a watchdog at the VM level to see if a script has been blocked for longer than needed.
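If you drive this from the host application instead of from Lua, the same idea looks roughly like the sketch below against the Lua C API. It assumes Lua 5.3, where a count hook is allowed to yield, and the 10,000-instruction slice and round-robin loop are arbitrary choices of mine:

// Preempt each user script every N VM instructions by yielding from a count
// hook, then resume scripts round-robin from the host.
#include <lua.hpp>
#include <vector>

static void preempt_hook(lua_State* L, lua_Debug*) {
  lua_yield(L, 0);  // hand control back to the scheduler below
}

int main() {
  lua_State* L = luaL_newstate();
  luaL_openlibs(L);

  // One coroutine (lua_State) per user script; lua_newthread also anchors
  // each thread on L's stack so it is not collected.
  std::vector<lua_State*> scripts;
  const char* user_code = "while true do end";  // never finishes on its own
  for (int i = 0; i < 3; ++i) {
    lua_State* co = lua_newthread(L);
    luaL_loadstring(co, user_code);
    lua_sethook(co, preempt_hook, LUA_MASKCOUNT, 10000);  // slice = 10k instructions
    scripts.push_back(co);
  }

  // Round-robin scheduler: each resume runs one slice, then the hook yields.
  for (int slice = 0; slice < 100; ++slice) {
    for (lua_State* co : scripts) {
      int status = lua_resume(co, L, 0);
      if (status != LUA_YIELD && status != LUA_OK) {
        // The script raised an error; report it and drop the script.
      }
    }
  }
  lua_close(L);
  return 0;
}

Blocking calls into C (IO and the like) still escape this, which is the same caveat as above.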
If you need to serialize and deserialize running code fragments that preserve upvalues and such, then Pluto is probably your only option.
Look at implementing lua_lock and lua_unlock.
http://www.lua.org/source/5.1/llimits.h.html#lua_lock
Take a look at lulu. It is a Lua VM written in Lua, for Lua 5.1.
For newer versions you need to do some work, but then you really can make a scheduler.
Take a look at this,
https://github.com/amilamad/preemptive-task-scheduler-for-lua
I maintain this project. It's a non-blocking preemptive scheduler for running Lua code, suitable for long-running game scripts.

DirectX 11: simultaneous use of multiple adaptors

We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scene graph, so we have to use several graphics cards. We're currently running on DX9, so we are looking to move to DX11 in the hope that this will make things easier.
Initial investigations seem to suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about utilising several cards simultaneously from a single process with DX11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes across this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. Maybe you want a different thread for interacting with each of your devices. This thread may have a specific processor affinity if that helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create your depth-stencil view for each output while you're at it.
The process of creating a swap chain will require your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain. You can just create the windows as hosts for your swap chain and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output, change the render target of the device to the render target that you created for that output using OMSetRenderTargets. Again, you can be running each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
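For reference, the enumeration and per-adapter device creation described above boils down to roughly this sketch (error handling omitted; link against d3d11.lib and dxgi.lib):

#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

struct PerAdapter {
  ComPtr<IDXGIAdapter1> adapter;
  ComPtr<ID3D11Device> device;
  ComPtr<ID3D11DeviceContext> context;
  std::vector<ComPtr<IDXGIOutput>> outputs;  // one swap chain / render target per output later
};

std::vector<PerAdapter> CreateDevicesPerAdapter() {
  ComPtr<IDXGIFactory1> factory;
  CreateDXGIFactory1(IID_PPV_ARGS(&factory));

  std::vector<PerAdapter> result;
  ComPtr<IDXGIAdapter1> adapter;
  for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
    PerAdapter pa;
    pa.adapter = adapter;
    // When an explicit adapter is passed, the driver type must be UNKNOWN.
    D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &pa.device, nullptr, &pa.context);
    ComPtr<IDXGIOutput> output;
    for (UINT j = 0; adapter->EnumOutputs(j, &output) != DXGI_ERROR_NOT_FOUND; ++j) {
      pa.outputs.push_back(output);
    }
    result.push_back(std::move(pa));
  }
  return result;
}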
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you don't need to upgrade DirectX.
See this article.
Enumerate the available devices with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. Should work fine.

How to implement a code coverage tool using Win32 Debugging API

I am trying to understand how to implement a Code Coverage tool using the Win32 Debugging API.
My thinking has been to use the Win32 Debugging API to launch a process in debug mode and track which CPU instructions have been executed. After having tracked all CPU instructions, I would then use the map file to map them to the source code lines that were executed.
As far as I understand, there would be two ways of knowing which CPU instructions have been executed:
One would be to launch the process in debug mode, set all threads in single-step mode, and let the debugging app note every instruction that has been executed.
The other would be a more intelligent approach where you would know a lot more about x86 instructions and basically replace the next branch instruction with a breakpoint, then keep track of the instructions executed between the two breakpoints.
Update - new suggested approaches inspired by Michael's response:
Start with the map file and insert breakpoints at the beginning of each line, and let the debug framework be notified every time a breakpoint is hit.
Start with the map file and use binary instrumentation to insert a "hook" that gets called at the entry of each source line, avoiding the callback through the debugger framework.
Use VM technology, such as VMware, to find out which instructions in a particular process were executed. I don't fully understand this approach...
Could someone validate one of the approaches above or maybe suggest an alternative? Please note that the use case is line-by-line code coverage, not performance profiling, so we need to know whether each individual source line is visited.
My primary goal (although no particular plan is in place...) would be to create a simple code coverage tool, mainly for Delphi.
Thanks!
One approach is hooking all API calls and function calls and comparing them with a table made from the source; that way you discover what is covered.
There are many APIs for hooking; one is Trappola API hooking.
This could work - each single step event will create an exception and you could record the hit IP address in your map of executed code lines.
Unfortunately, I imagine this would be glacially slow. It'd be incredibly inefficient, as each single line of code results in thousands of times more work: an exception is generated, trapped, a message is sent to your debugger, and then a round trip back after you record the hit. It might be better to set breakpoints instead for each covered line and clear them after they are hit. That'd be faster, but most likely still very slow.
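For reference, the breakpoint variant boils down to a debug-event loop roughly like the sketch below. The map-file parsing, the thread-context rewind after a hit, and per-module address fix-ups are left out, and breakpoints (patched address -> original byte) and hits are names I've made up here:

#include <windows.h>
#include <map>
#include <set>

void RunCoverage(const wchar_t* exe, std::map<DWORD_PTR, BYTE>& breakpoints,
                 std::set<DWORD_PTR>& hits) {
  STARTUPINFOW si = {sizeof(si)};
  PROCESS_INFORMATION pi = {};
  CreateProcessW(exe, nullptr, nullptr, nullptr, FALSE,
                 DEBUG_ONLY_THIS_PROCESS, nullptr, nullptr, &si, &pi);

  DEBUG_EVENT ev = {};
  bool running = true;
  while (running && WaitForDebugEvent(&ev, INFINITE)) {
    DWORD continue_status = DBG_CONTINUE;
    switch (ev.dwDebugEventCode) {
      case CREATE_PROCESS_DEBUG_EVENT:
        // Patch INT3 (0xCC) at every line address from the map file,
        // saving each original byte so it can be restored on a hit.
        for (auto& bp : breakpoints) {
          BYTE original;
          ReadProcessMemory(pi.hProcess, (LPCVOID)bp.first, &original, 1, nullptr);
          bp.second = original;
          BYTE int3 = 0xCC;
          WriteProcessMemory(pi.hProcess, (LPVOID)bp.first, &int3, 1, nullptr);
          FlushInstructionCache(pi.hProcess, (LPCVOID)bp.first, 1);
        }
        break;
      case EXCEPTION_DEBUG_EVENT:
        if (ev.u.Exception.ExceptionRecord.ExceptionCode == EXCEPTION_BREAKPOINT) {
          DWORD_PTR addr =
              (DWORD_PTR)ev.u.Exception.ExceptionRecord.ExceptionAddress;
          auto it = breakpoints.find(addr);
          if (it != breakpoints.end()) {
            hits.insert(addr);  // this source line has now been covered
            // Restore the original byte; the instruction pointer must also be
            // rewound via the thread context so the instruction re-executes.
            WriteProcessMemory(pi.hProcess, (LPVOID)addr, &it->second, 1, nullptr);
            FlushInstructionCache(pi.hProcess, (LPCVOID)addr, 1);
          }
        } else {
          continue_status = DBG_EXCEPTION_NOT_HANDLED;
        }
        break;
      case EXIT_PROCESS_DEBUG_EVENT:
        running = false;
        break;
    }
    ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, continue_status);
  }
  CloseHandle(pi.hThread);
  CloseHandle(pi.hProcess);
}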
The core problem is you're trying to use the debugger as a code coverage tool which it is not intended for. A quick search shows several code coverage tools for Delphi on the Internet.
I would suggest that, instead of hooking each line of code, you hook each block of code. It will be faster, and you can derive the line counts from the block counts as well.
