Turning some of the states in a Drake system into inputs - drake

Consider an existing Drake System (for example MultibodyPlant). Is there a way to wrap that System inside a Diagram in such a way as to convert some of the states of the internal System to be inputs instead, i.e. set directly from input ports of the outer Diagram?
The motivation is essentially a change in modeling decisions. For example, a quadrotor is sometimes modeled with its angular rates and collective thrust as inputs, instead of the collective thrust and body moments (or, similarly, individual rotor commands).
In a more complex system, perhaps I might assume I have instantaneous control over certain velocities (e.g. internal, fully-actuated joints with fast dynamics), but still want to model the entire multibody system's dynamics, with the current choice of velocity reflected in the Coriolis terms, etc.
What I'm getting at is actually very similar to the modeling choice for the elevator in the Flat-Plate Glider Model - but I'd like to avoid manually implementing a LeafSystem because my system has nontrivial multibody dynamics.
My sense is that this may not be possible since I don't know of any way for a Diagram to interfere with the internal dynamics of a System, so "deleting" a state and promoting it to an input seems impossible. But I thought there might be some clever method to do this.
Thanks in advance!

I agree with your analysis. The simplest answer is "no" -- systems that declare state cannot be post-hoc exploded to have that state come from an input port. But for the specific examples that you mention, there are a few possibilities / related ideas.
The first is the notion of a prescribed motion constraint -- e.g. that one can set the positions/velocities of joints in a MultibodyPlant directly. We don't implement that yet, unfortunately, but it is a reasonable request that we've discussed occasionally (here is one example: https://github.com/RobotLocomotion/drake/issues/14694).
As you say, you could implement a PD controller just outside of the system to achieve the desired effect. The only real difference between this and the way we would implement the prescribed motion constraint internally is that, internally, we could choose the gains well and inform the solver about that constraint directly.
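To make the PD workaround concrete, here is a minimal pydrake sketch that wires a stiff PidController between the plant's state output and its actuation input, so that a desired [q; v] on the exported port approximately "prescribes" the motion. The gains, the time step, and the elided model-loading step are all assumptions, and it presumes a fully actuated plant with num_positions == num_velocities:

```python
import numpy as np
from pydrake.multibody.plant import MultibodyPlant
from pydrake.systems.controllers import PidController
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant = builder.AddSystem(MultibodyPlant(time_step=1e-3))
# ... load your model here (e.g. with Parser), then:
plant.Finalize()

n = plant.num_actuated_dofs()
# Stiff gains approximate a prescribed-motion constraint; values are guesses.
pid = builder.AddSystem(PidController(kp=1e4 * np.ones(n),
                                      ki=np.zeros(n),
                                      kd=2e2 * np.ones(n)))
builder.Connect(plant.get_state_output_port(),
                pid.get_input_port_estimated_state())
builder.Connect(pid.get_output_port_control(),
                plant.get_actuation_input_port())
# The desired [q_d; v_d] enters from outside the diagram, playing the role
# of the "prescribed" state.
builder.ExportInput(pid.get_input_port_desired_state(), "prescribed_state")
diagram = builder.Build()
```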
Another possibility is to implement a force element that works "inside" the plant and accepts an input port. This would allow you to apply forces to implement your modeling ideas even in ways that are not possible through the actuation_input_port (e.g. not achievable by a declared actuator).
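Short of writing a custom force element, one existing hook worth knowing about is the plant's applied spatial force input port, which accepts a list of ExternallyAppliedSpatialForce values. A minimal sketch, assuming a finalized `plant` and its `plant_context` from earlier setup (the body name "base_link" and the force values are placeholders):

```python
import numpy as np
from pydrake.multibody.math import SpatialForce
from pydrake.multibody.plant import ExternallyAppliedSpatialForce

# Build one externally applied spatial force on an (assumed) body.
force = ExternallyAppliedSpatialForce()
force.body_index = plant.GetBodyByName("base_link").index()
force.p_BoBq_B = np.zeros(3)                 # application point, body frame
force.F_Bq_W = SpatialForce(tau=np.zeros(3),  # no torque
                            f=np.array([0., 0., 10.]))  # 10 N up, world frame

# Fix it on the port (or connect a LeafSystem that outputs such a list).
plant.get_applied_spatial_force_input_port().FixValue(plant_context, [force])
```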
The glider example you link is a good one. In that case, I was very worried about having a model that avoided even declaring the velocity of the elevator as state (since our verification methods scale in complexity with the dimension of the state space). For now, that still requires writing a bespoke LeafSystem implementation.

Related

Simple Simulation of Time Dependent Dynamics

So, I want to simulate the dynamics of a system that varies over time.
Essentially:
xdot = Q/C_a - x/(R_a C_a)
where Q is an impulse train with period T. I have estimated the solution by hand with a few different techniques, but I was curious whether there is a simple way in Drake to account for this time dependence.
Time dependence is supported in basically every workflow -- even in the SymbolicVectorSystem that I know you have been using, you can define a variable for time and your dynamics method can depend on it.
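For concreteness, here is a small sketch of a time-dependent SymbolicVectorSystem. The forcing term sin(t)/C_a is just a smooth stand-in for Q (a true impulse train is handled differently, as discussed next), and the parameter values are made up:

```python
from pydrake.symbolic import Variable, sin
from pydrake.systems.primitives import SymbolicVectorSystem

t = Variable("t")
x = Variable("x")
Ca, Ra = 1.0, 2.0  # placeholder parameter values

# xdot = Q(t)/Ca - x/(Ra*Ca), with a smooth stand-in for Q(t).
system = SymbolicVectorSystem(time=t,
                              state=[x],
                              dynamics=[sin(t) / Ca - x / (Ra * Ca)],
                              output=[x])
```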
An impulse train (in time), though, is a particular dependence on time that we need to think through. These are often used as models of sampling/reconstruction (particularly useful in the frequency domain), but they are not very common in continuous-time simulation, I think? I'm not sure you actually want to have that on your input? If you do, then Drake's event systems are probably up to the task, but first I want to check if it's really your intended workflow.
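If the impulse train really is what you want, one way to express it with Drake's event system is a LeafSystem whose continuous dynamics carry the decay term, plus a periodic update that applies each impulse as an instantaneous jump of Q/C_a in the state. A sketch under those assumptions (parameter values are made up):

```python
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import LeafSystem

class ImpulseDrivenFirstOrder(LeafSystem):
    """xdot = -x/(Ra*Ca) between impulses; at t = k*T, x jumps by Q/Ca."""

    def __init__(self, Q=1.0, Ca=1.0, Ra=2.0, T=1.0):
        super().__init__()
        self.Q, self.Ca, self.Ra = Q, Ca, Ra
        self.DeclareContinuousState(1)
        # Each period, model the impulse as an instantaneous state jump.
        self.DeclarePeriodicUnrestrictedUpdateEvent(T, 0.0, self._ApplyImpulse)

    def DoCalcTimeDerivatives(self, context, derivatives):
        x = context.get_continuous_state_vector().GetAtIndex(0)
        derivatives.get_mutable_vector().SetAtIndex(
            0, -x / (self.Ra * self.Ca))

    def _ApplyImpulse(self, context, state):
        xc = state.get_mutable_continuous_state().get_mutable_vector()
        xc.SetAtIndex(0, xc.GetAtIndex(0) + self.Q / self.Ca)

simulator = Simulator(ImpulseDrivenFirstOrder())
simulator.AdvanceTo(5.0)
```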

How to track Fast Moving Objects?

I'm trying to create an application that can track rapidly moving objects in a video/camera feed, but I have not found any CV/DL solution that is good enough. Can you recommend a computer vision solution for tracking fast-moving objects on a regular laptop with a webcam? A demo app would be ideal.
For example, see this video, where the tracking is done in hardware (I'm looking for a software solution): https://www.youtube.com/watch?v=qn5YQVvW-hQ
Target tracking is a very difficult problem. In target tracking you will face two main issues: the motion uncertainty problem and the origin uncertainty problem. The first refers to how you model object motion so you can predict its future state; the second refers to data association (which measurement corresponds to which track -- the literature is filled with principled ways to approach this).
Before you can come up with a solution, you will have to answer some questions about the tracking problem you want to solve. For example: which values do you want to track (this defines your state vector)? How are those values related to one another? Are you performing single-object or multiple-object tracking? Do the objects move with roughly constant acceleration or velocity? Do they make turns? Can they be occluded? And so on.
The Kalman filter is a good solution for predicting the next state of your system (once you have identified your process model). A deep-learning alternative is the so-called Deep Kalman Filter, which is essentially used to do the same thing. If your process or measurement models are not linear, you will have to linearize them before predicting the next state; solutions that handle nonlinear process or measurement models include the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF).
Now, regarding fast-moving objects: one idea is to use a larger process covariance matrix, since fast objects can travel much farther between frames, so the search region for the correct association has to be larger. Additionally, you can run multiple motion models in case a single model cannot capture the motion.
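As a concrete illustration of the predict/update cycle and the "larger covariance" idea, here is a minimal constant-velocity Kalman filter in NumPy; the frame rate and all noise values are assumptions to tune for your camera:

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter; state = [x, y, vx, vy]."""

    def __init__(self, dt=1 / 30, process_var=5.0, meas_var=2.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)  # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # measure position only
        # Fast objects => inflate process noise so the search region grows.
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0  # large initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]  # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```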
For occlusions, I will leave you this Stack Overflow thread, where I have given an answer covering occlusion handling in tracking in more detail; I have added some references there for you to read. You will have to provide more details in your question if you would like more specific suggestions (for example, you should define "fast moving" with respect to the camera frame rate).
I personally do not think there is a silver-bullet solution for the tracking problem; I prefer to tailor a solution to the problem I am trying to solve.
The tracking problem is complicated. It is also more in the realm of control systems than computer vision. It would also be helpful to know more about your situation, as the performance of the chosen method depends largely on your problem constraints. Are you interested in real-time tracking? Are you trying to reconstruct an existing trajectory? Are there multiple targets, or just one? Are the physical properties of the targets (i.e. velocity, direction, acceleration) constant?
One of the most basic tracking methods is a Linear Dynamic System (LDS) description -- concretely, a discrete implementation, since we're working with discrete frames of information. This method is purely physics-based, and its prediction is very sensitive to noise. Depending on your application, the error rate could be acceptable… or not.
A more robust solution is the Kalman filter, and it is pretty much the go-to answer when tracking is needed. It makes predictions based on all the measurements obtained so far during the model's lifetime. It mainly assumes constant-rate motion models (constant velocity or acceleration), although it can be extended to handle non-constant models. If you are working with targets that won't exhibit drastic changes in their velocity, this is what you (probably) should implement.
I'm sorry I can't provide you with more, but the topic is pretty extensive and, admittedly, the details are beyond my area of expertise. Hopefully, this info should give you a little bit of context for finding a solution.
The problem of tracking fast-moving objects (FMOs) is a known research topic in computer vision. FMOs are defined as objects which move over a distance larger than their size within one video frame. The proposed solutions use classical image processing and energy minimization to recover their trajectories and sharp appearance.
If you need a demo app, I would suggest this GitHub repository: https://github.com/rozumden/fmo-cpp-demo. The demo is written in OpenCV/C++ and runs in real time. The authors also provide a mobile-app version, which is still in testing. Using this demo app you can track fast-moving objects in real time without even providing an object model; if you also provide the object size in real-world units, the app can estimate object speed.
A more sophisticated algorithm is open-sourced here: https://github.com/rozumden/deblatting_python, written in Python with PyTorch for speed. The repository contains a solution to the deblatting (simultaneous deblurring and matting) problem -- exactly what happens when a fast-moving object appears in front of a camera.

Dynamically changing the neural network structure

I am trying to implement a NEAT-like algorithm, which involves dynamically changing the neural network structure, such as adding or deleting nodes and connections. I've been using TensorFlow for my previous work in supervised learning, but once a network is defined in TensorFlow, it cannot be changed. Is there any other framework available that provides this functionality?
Thanks.
Unless it's a framework designed specifically for NEAT, no, not really. The nature of symbolic execution necessarily means that there's a "create the network" step followed by a "run/train the network" step. Depending on how frequently you change the network topology, though, TensorFlow could definitely still be viable: it would mean, every so often, saving all the parameters and building a new model -- but this might not be terrible, depending on your setup.
If you don't like that, you can hack something together more manually using masking: have some neurons or connections "masked" out. You would do this by keeping a 0-1 valued mask for your parameters that you pre-multiply into the parameters before applying them. Keep the "allowed" connections sparse, but densely connect everything else together as much as possible. It will give you some slowdown, since there are additional computations, but a tf.cond call might save you most of that time by executing only conditionally. This can't give you totally free topology evolution, but it can be very flexible.
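A rough sketch of the masking idea in modern TensorFlow; the layer sizes and the single dense layer are assumptions (extend per layer):

```python
import tensorflow as tf

n_in, n_out = 8, 4  # placeholder layer sizes
w = tf.Variable(tf.random.normal([n_in, n_out]))             # trainable weights
mask = tf.Variable(tf.ones([n_in, n_out]), trainable=False)  # 0/1 topology mask

def layer(x):
    # Masked-out connections contribute nothing; gradients through them
    # are also zeroed by the multiplication.
    return tf.nn.relu(x @ (w * mask))

# "Deleting" a connection is just zeroing its mask entry; no rebuild needed.
mask[2, 1].assign(0.0)
```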

Image analysis technique to determine approximate change in view over a short period of time?

I am working on an open source package for robot owners. I want to do a decent job of detecting when the robot is having movement problems. One of the problems the robot commonly has is that the back wheel gets "tucked underneath" in a bad way and makes it turn very slowly when on carpet. I believe that with a combination of accelerometer value inspection and (I hope) a relatively simple yet robust vision analysis technique, I will be able to tell when the robot is having this specific problem.
What I need is to be able to analyze two images, separated by about 1/2 second in time, and get a numerical value that tells roughly how close they are -- but in a way that has some intelligence about the objects in the scene instead of just a simple color/hue/etc. analysis. I've heard of an algorithm called optical flow that is used in object and scene tracking, but I'm hoping I don't need something that heavyweight.
Is there a machine vision algorithm/function that can analyze two JPEGs and tell whether they belong to the same scene and viewpoint, yet can also deliver a numerical, monotonically increasing value that tells me roughly how different they are? If I could get that numerical value and compare it to the milliseconds elapsed, while examining the current accelerometer activity, I believe I can detect when the robot is having the "slow turn of death" problem.
If so, please tell me the basic technique involved, and if you know of a machine vision library that implements it, which one it is.
but in a way that has some intelligence about the objects in the scene instead of just a simple color/hue/etc. analysis
What you are suggesting is a complex problem in itself, so forget about "lightweight" solutions. You are probably going to need something like optical flow.
Other options I would recommend looking into are:
Vanishing-point detection, and how the vanishing points vary from image to image. This fits your problem domain quite well (see Wikipedia).
Disparity maps: related to optical flow, and used for stereo vision, but I think you can use them for the kind of application you are looking for. Take a look at this.
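For a concrete starting point, here is a hedged OpenCV sketch that reduces dense (Farneback) optical flow between two frames to a single monotone "how much did the view change" score; the parameters are stock values to tune on the robot:

```python
import cv2
import numpy as np

def view_change_score(frame_a, frame_b):
    """Mean optical-flow magnitude between two frames ~0.5 s apart."""
    g1 = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    return float(np.mean(mag))          # larger => more apparent motion
```

Comparing this score against elapsed time and the commanded turn rate should reveal when the robot is turning far more slowly than expected.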

Are there any good non-predictive path following algorithms?

All the path-following steering algorithms (e.g. for robots steering to follow colored terrain) that I can find are predictive, so they rely on the robot being able to sense some distance beyond its body.
I need path following behavior on a robot with a light sensor on its underside. It can only see terrain it is directly over and so can't make any predictions; are there any standard examples of good techniques to use for this?
I think the technique you are looking for will most likely depend on the environment you will be operating in, as well as the resources your robot has access to. I have used NXT robots in the past, so you might find this video interesting (the video is not mine).
Assuming that you will be working on a flat, non-glossy surface, you can let your robot wander around until it finds a predefined colour. The robot can then kick in a "path following" mechanism and keep tracking the line. If it no longer senses the line, it can try turning right and/or left (since the line might no longer be under the robot because it has reached a bend).
In this case, though, the robot will need to know in advance the colour of the line it needs to follow. A rough sketch of this loop follows below.
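Here is that wander-then-track idea as a tiny state machine; read_color(), drive_forward(), and probe_turn() are hypothetical stand-ins for your robot platform's actual sensor and motor calls:

```python
LINE_COLOR = "black"  # must be known in advance, as noted above

state = "WANDER"
while True:
    on_line = (read_color() == LINE_COLOR)  # hypothetical light-sensor read
    if state == "WANDER":
        drive_forward()          # roam until the line is found
        if on_line:
            state = "TRACK"
    else:  # TRACK
        if on_line:
            drive_forward()      # keep following while over the line
        elif probe_turn("right") or probe_turn("left"):
            pass                 # re-acquired the line at a bend
        else:
            state = "WANDER"     # line truly lost; resume wandering
```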
The reason the path-finding algorithms you are seeing are predictive is that the robot needs to be able to interpret what it is "seeing" in context.
For instance, consider a coloured path in the form of a straight line. Even in this simple example, how is the robot to know:
Whether there is a coloured square in front of it, and hence whether it should advance
Which direction it is even travelling in.
These two questions are the fundamental goals the algorithm you are looking for would answer (and things would get more complex as you add more difficult terrain and paths).
The first can only be answered with suitable forward-looking ability (hence a predictive algorithm), and the latter can only be answered with some memory of the previous state.
Based solely on the details you provided in your question, you wouldn't be able to implement an appropriate solution. That said, I would imagine that your sensor input and on-board memory are in fact suitable for a predictive solution; you may just need to investigate further what your hardware's capabilities allow for.
