How to warm start SNOPT in Drake

Hi, I was wondering how to warm-start the SNOPT solver in pydrake.
The SNOPT documentation says to use Start = 2. However, I'm not sure how to feed that in properly, or how to pass information from the previous solve into SNOPT.

We haven't supported all of SNOPT's warm-start features in Drake yet. In Drake, you can give it an initial guess:
result = Solve(prog, initial_guess)
There are other APIs for setting the initial guess; see the section "Using an initial guess" in our tutorial. You can use the previous solution as the initial guess for the current solve.
We don't yet support warm-starting with dual variables or basis vectors.
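For example, a minimal sketch of that pattern (the toy program is illustrative, and assumes a recent pydrake where these classes live in pydrake.solvers):

import numpy as np
from pydrake.solvers import MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddCost((x[0] - 1) ** 2 + (x[1] - 2) ** 2)
prog.AddConstraint(x[0] + x[1] == 1)

# Cold start: solve from a zero initial guess.
result = Solve(prog, np.zeros(2))

# "Warm start" the next solve by feeding the previous solution
# back in as the initial guess for the decision variables.
result = Solve(prog, result.GetSolution(x))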

Related

Lidars in Drake

I want to simulate lidars. I saw that a DepthSensor class was mentioned in the documentation, but I have not found its actual implementation. For now, I am planning on using the RgbdSensor class and using only the rows I need from the depth point cloud I receive to simulate my lidars.
Just to get your input on that, maybe I missed something, but is there a specific class for lidars, and how would you go about adding lidars to a simulation?
Thanks in advance,
Arnaud
You've discovered an anachronism in the code. There had previously been a lidar-like sensor (called DepthSensor). The extant documentation refers to that class. The class's removal should've been accompanied by a cleanup of the documentation.
The approach you are taking is the expected approach given Drake's current state.
There has always been an intention to re-introduce a lidar-like sensor in Drake's current architecture. It simply hasn't been a high priority.
I'd recommend you proceed with what you're currently doing (lidar from depth images) but, at the same time, post an issue requesting a lidar-like query with a specific focus on the minimum lidar properties that you require. A discussion regarding how that would differ from what you can actually get from the depth images would better inform us of your unique needs and how to prioritize it. (You can also indicate more advanced features that you need less but would be good to have, of course.)
As for the question: how would you go about adding lidars?
That's problematic. Ideally, what you would need is ray-casting ability. The intent is for QueryObject to support such a query, but it hasn't happened yet. (It's certainly the underlying technology we'd have used to implement a LidarSensor.) In the absence of that kind of functionality, you'd essentially have to do it yourself in the most horrible, tedious way imaginable. I'd go so far as to suggest that it's not feasible with the current API.
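In the meantime, here's a minimal sketch of the depth-image workaround in plain numpy (the function name is hypothetical, and fx/cx are assumed pinhole intrinsics of the RgbdSensor's depth camera):

import numpy as np

def depth_row_to_scan(depth_image, row, fx, cx):
    # Treat one row of a z-depth image (in meters) as a planar lidar scan.
    # Assumes the chosen row passes through (or near) the principal point,
    # so only the horizontal bearing matters.
    depths = depth_image[row, :]
    cols = np.arange(depth_image.shape[1])
    angles = np.arctan2(cols - cx, fx)  # bearing of each pixel column
    ranges = depths / np.cos(angles)    # convert z-depth to radial range
    return angles, ranges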

Segfault when simulating control loop with second order system

Working from the tutorials, I wanted to try PID-controlling a second-order linear system. I'm running into segfaults when simulating the closed loop. I've put my code over on Gist. It is mostly identical to the example in the dynamical systems tutorial.
Here's what works:
- Simulating a diagram containing only the second-order system
- Dropping in PendulumPlant for the second-order system and using the controller
- Every step up to simulator.AdvanceTo; that's where the segfault occurs
I'm sure I am missing something obvious here. Does anyone with more experience see what's wrong?
Thanks for reporting this. I didn't see anything wrong on quick inspection. I ran your code (on both Linux and Mac) and was able to reproduce it. You should absolutely never see a silent segfault, so this is a real issue.
I've escalated it here: https://github.com/RobotLocomotion/drake/issues/12497
FTR - I've also opened a PR to improve the PidController documentation. https://github.com/robotlocomotion/drake/pull/12496
I'm investigating this now. I've successfully reproduced the bug locally using the provided Python, and I've also reproduced it directly in C++. [Reproduced in now defunct branch]
I'll update when I have something concrete.
Update 1: You've got an algebraic loop in these two systems (one that does not exist for the PendulumPlant, as its derivatives and output are expressed in terms of its state and not its inputs). In this case, both systems' outputs depend directly on their inputs, so: kablooie! The bug, in this case, is that this isn't communicated to you right up front.
Presumably, you'd also like to know what the right version of this program is that doesn't have an algebraic loop. Stay tuned.
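In the meantime, here's a hedged sketch of one way to break such a loop (an illustration of the idea, not the thread's original code; the gains and setpoint are made up): give the plant a state-only output, i.e. D = 0, so the controller's input no longer depends directly on its own output.

import numpy as np
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import LinearSystem, ConstantVectorSource
from pydrake.systems.controllers import PidController
from pydrake.systems.analysis import Simulator

# Double integrator x_ddot = u, with output y = [x, x_dot].
# D = 0 means no direct feedthrough, so closing the PID loop
# does not create an algebraic loop.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
C = np.eye(2)
D = np.zeros((2, 1))

builder = DiagramBuilder()
plant = builder.AddSystem(LinearSystem(A, B, C, D))
pid = builder.AddSystem(PidController(np.array([10.]),   # kp
                                      np.array([1.]),    # ki
                                      np.array([2.])))   # kd
setpoint = builder.AddSystem(ConstantVectorSource(np.array([1., 0.])))

builder.Connect(plant.get_output_port(0), pid.get_input_port_estimated_state())
builder.Connect(setpoint.get_output_port(0), pid.get_input_port_desired_state())
builder.Connect(pid.get_output_port_control(), plant.get_input_port(0))

simulator = Simulator(builder.Build())
simulator.AdvanceTo(5.0)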
Final update
A patch has gone through to correct the underlying bug. Depending on how you're accessing Drake, it's immediately available in master, or you can wait for the next binary release (whichever you prefer).
Thanks for bringing this issue to our attention.

Getting ElliFit ellipse fitting algorithm to work

I have tried to implement the ellipse fitting algorithm described in the following paper: "ElliFit: An unconstrained, non-iterative, least squares based geometric ellipse fitting method", by Prasad, Leung, and Quek. A free version can be downloaded from http://azadproject.ir/wp-content/uploads/2014/07/2013-ElliFit-A-non-constrainednon-iterative-least-squares-based-geometric-Ellipse-Fitting-method.pdf
The authors did not provide any publicly available implementation.
I have implemented the algorithm in Mathematica, and I believe I have implemented it correctly, yet it fails to find the correct fit parameters. A PDF of the experiment can be downloaded here: http://zvrba.net/downloads/ElliFit-fail-example.pdf
Has somebody else tried to implement this particular algorithm and, if so, what is the key to getting it working? Is there a "bug" in the paper? Can somebody take another look at my implementation and see whether there's a bug there?
I know it's been almost a year since this question, but it seems that the authors have now provided public source code for ElliFit, both a MATLAB version and an OpenCV version.
Both are available on the author's homepage. In case the homepage goes offline for some reason, both source codes are shared on Google and are available here (MATLAB) and here (OpenCV).
At the time of writing, I have not personally tested their code, but am planning to use them for a project. I will post any updates here in the next few days.
EDIT:
I got around to testing the code sooner than I expected. I gave the OpenCV code a try, and it works pretty well, as demonstrated by the image below (ignore the "almost-closed" ellipses; that's an artifact caused by something else in my code).
As you can see, it works well most of the time. There are some failure cases too (the small ellipse on the spray bottle next to the cup).
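For anyone who just needs a working ellipse fit rather than ElliFit specifically, OpenCV also ships a built-in least-squares fit. Note this is the classic algebraic fit, not ElliFit's geometric formulation; the demo data below is made up.

import cv2
import numpy as np

# Noisy samples from a known ellipse, for demonstration only.
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([120 + 80 * np.cos(t), 90 + 40 * np.sin(t)])
pts += np.random.normal(scale=1.0, size=pts.shape)

# fitEllipse takes an Nx2 float32 point array and returns
# ((center_x, center_y), (major_axis, minor_axis), angle_in_degrees).
(cx, cy), (major, minor), angle = cv2.fitEllipse(pts.astype(np.float32))
print(cx, cy, major, minor, angle)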

Bundle adjustment functions

If I have a known camera pose (rotation + position) and intrinsics (distortion coefficients and camera matrix), and two cameras pointing at the same scene from slightly different angles:
Is there a way to use bundle adjustment to refine the camera pose? Preferably with some already existing API or function that doesn't require too much mathematical knowledge to use.
You should use PBA (Multicore Bundle Adjustment) by Changchang Wu. It is a really nice library, written in C++. Furthermore, it supports multi-core computation and even GPU computation, with a speedup of about 20x.
It is clearly structured and easy to use.
So, instead of using SBA from Lourakis or SSBA from Christopher Zach, you should use PBA.
You may want to check out SSBA at http://www.inf.ethz.ch/personal/chzach/opensource.html but it will still require some mathematical insight to be able to use it properly.
You could try the implementation right inside OpenCV; it's in the contrib module. But I couldn't get it to work properly yet. :/
There's an article about it.
Try the Ceres solver. An example implementation is available here. Again, you will need an understanding of the mathematical principles of bundle adjustment. But that is unavoidable.
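If you want a feel for what these libraries do under the hood, here's a hedged, minimal two-view sketch using scipy.optimize.least_squares. It assumes known, fixed 3D points and no distortion, and refines only one camera's pose; a real bundle adjuster jointly refines the points and all camera parameters.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pose, K, pts3d):
    # pose = [rotation vector (3), translation (3)], mapping world -> camera.
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    p_cam = pts3d @ R.T + pose[3:]
    p_img = p_cam @ K.T
    return p_img[:, :2] / p_img[:, 2:3]  # pinhole projection

def residuals(pose, K, pts3d, observed):
    # Reprojection error of every observed 2D point, flattened for the solver.
    return (project(pose, K, pts3d) - observed).ravel()

# pose0: initial guess from your known extrinsics; K: 3x3 camera matrix;
# pts3d: Nx3 triangulated points; observed: Nx2 measured pixel coordinates.
# refined = least_squares(residuals, pose0, args=(K, pts3d, observed))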

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether realtime audio input matches one of 144 given (and comfortably distinct) phoneme-pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair ('laa', 'duu', 'bee', etc.) in response to a visual stimulus.
I have done a lot of research into this; it looks like my best bet may be to use one of the iOS Sphinx wrappers ("iPhone App: Add voice recognition?" is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far greater and more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution that runs pretty much in real time, so even as the student is singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then glows to show whether they are singing the correct pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work already.
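For a feel of the template-matching approach (as opposed to full speech recognition), here's a hedged Python sketch with librosa; it illustrates the MFCC + dynamic-time-warping idea, not Matchbox's actual code, and all parameters are guesses:

import librosa

def mfcc(path, sr=16000, n_mfcc=13):
    # MFCCs compactly describe the spectral envelope, which is largely
    # what distinguishes one phoneme from another.
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def dtw_cost(a, b):
    # Dynamic time warping aligns two feature sequences of different
    # lengths; the accumulated cost, length-normalized, is the match score.
    D, _ = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return D[-1, -1] / (a.shape[1] + b.shape[1])

def classify(query_path, templates):
    # templates: {phoneme-pair label: MFCC matrix from a training clip}
    q = mfcc(query_path)
    return min(templates, key=lambda label: dtw_cost(q, templates[label]))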