Lidars in Drake

I want to simulate lidars. I saw that a class DepthSensor was mentioned in the documentation, but I have not found its actual implementation. For now, I am planning to use the RgbdSensor class and keep only the slice of the depth point cloud at the height I need to simulate my lidars.
Just to get your input on that, maybe I missed something, but is there a specific class for lidars, and how would you go about adding lidars to a simulation?
Thanks in advance,
Arnaud

You've discovered an anachronism in the code. There had previously been a lidar-like sensor (called DepthSensor). The extant documentation refers to that class; its removal should have been accompanied by a cleanup of the documentation.
The approach you are taking is the expected approach given Drake's current state.
There has always been an intention to re-introduce a lidar-like sensor in Drake's current architecture. It simply hasn't been a high priority.
I'd recommend you proceed with what you're currently doing (lidar from depth images) but, at the same time, post an issue requesting a lidar-like query, with specific focus on the minimum lidar properties that you require. A discussion regarding how those would differ from what you can actually get from the depth images would better inform us of your unique needs and how to prioritize the work. (You can also indicate more advanced features that you need less urgently but would be good to have, of course.)
As for the question: how would you go about adding lidars?
That's problematic. Ideally, what you would need is ray-casting ability. The intent is for QueryObject to support such a query, but it hasn't happened yet. (It's certainly the underlying technology we'd have used to implement a LidarSensor.) In the absence of that kind of functionality, you'd essentially have to do it yourself in the most horrible, tedious way imaginable. I'd go so far as to suggest that it's not feasible with the current API.
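For what it's worth, the depth-image route boils down to slicing one row out of each depth image and converting it to planar ranges. Below is a minimal sketch of that conversion, assuming a pinhole camera model whose depth values are z-depth along the optical axis; the helper name and signature are mine, not a Drake API.

```python
import numpy as np

def depth_row_to_scan(depth_image, fx, cx, row=None):
    """Convert one row of a depth image into planar lidar-style
    (angle, range) pairs, assuming a pinhole camera whose depth
    values are measured along the optical (z) axis and whose scan
    row sits at the vertical principal point."""
    h, w = depth_image.shape
    row = h // 2 if row is None else row
    z = depth_image[row, :]            # z-depth per pixel
    u = np.arange(w)
    angles = np.arctan2(u - cx, fx)    # horizontal bearing of each pixel
    ranges = z / np.cos(angles)        # convert z-depth to ray length
    return angles, ranges
```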

Anti windup on PidController

I'm using PidController in a Drake project and I need an anti-windup mechanism to limit the growth of the integral term. I was examining the PidController documentation,
https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1controllers_1_1_pid_controller.html
but I cannot find functions or parameters related to this limit.
Is that correct, or did I miss something in the documentation?
What's the smartest way to implement it?
Thanks
I agree; the current PidController implementation does not provide those features. It would not be too hard to implement this (by modifying the current PidController interface in C++, or providing a similar implementation in pydrake). And we would welcome a contribution like this if you wanted to make it.
For bonus points, the class should really declare witness events when the "clamp" turns on/off, so that error-controlled integrators get to know about this discontinuity.
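To make the clamping idea concrete, here is a minimal discrete-time sketch of a PID step whose integral state is clamped, a common anti-windup scheme. It is plain Python with names of my own choosing, not the PidController API; a proper Drake version would modify the C++ class or live in a pydrake LeafSystem as suggested above.

```python
class AntiWindupPid:
    """PID step with the integral state clamped to +/- integral_limit
    so it cannot wind up while the actuator is saturated."""

    def __init__(self, kp, ki, kd, integral_limit, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.limit = integral_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate, then clamp, the integral state.
        self.integral += error * self.dt
        self.integral = max(-self.limit, min(self.limit, self.integral))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The witness events mentioned above would correspond to detecting the instants at which the integral state reaches or leaves the clamp bounds.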

F# Priority Queue

I have tried to use this snippet of code
https://code.msdn.microsoft.com/windowsdesktop/Net-Implementation-of-a-d3ac7b9d
to implement a heap-based version of Prim's algorithm for the Minimum Spanning Tree (MST) problem in an undirected connected graph.
After a few iterations, I find that the heap/priority queue is no longer well maintained;
that is, the head of the PriorityQueue doesn't have the lowest Key in the heap:
PQ 0 [-7230, 309]
...
PQ 146 [-7277, 308]
Has anyone used this code and experienced similar problems?
I can post a link on GitHub if anyone wants to look at it.
My needs are for a heap data structure which supports deletion of an element in the middle. It looks like FSharpx.Collections doesn't have such a data structure.
Does anybody know a good implementation available somewhere?
Thanks
Recently, I ported a MaxHeap from PLINQ to F# here and made it a MinHeap. It is array-based and performs much better than any "pure functional" alternative.
However, after a lot of benchmarking, I found that SortedDeque, based on just a simple sorted circular buffer, performs significantly better in most use cases, even when I need to add or delete in the middle.
My answer is inspired by V.B.'s, but here it is in full.
I used a library other than FSharpx.Collections; it is called
Spreads
(read that page for details and instructions).
SortedDeque is the data structure I need for this problem.
I used the same code, just swapping the Microsoft blog page code for the library's functions, and got the correct result, so it is indeed a bug in the Microsoft blog page code.
P.S. The Spreads library was designed for formatting financial data for quantitative analysis, and I'm happy I found it! It looks like the library is rather recent, which is why it's not at the top of Google's results or referenced in any other SO question (or if it is, I didn't see it).
FSharpx.Collections is of no use for this problem, as you can see from this discussion: heap issue in FSharpx.Collections
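As an aside on the "deletion in the middle" requirement: a plain array-backed binary heap can support it via lazy deletion (tombstones), which is often enough for Prim's algorithm. A minimal sketch, in Python rather than F# and with names of my own choosing:

```python
import heapq

class LazyDeleteHeap:
    """Min-heap with O(log n) push/pop where 'middle' deletions are
    recorded in a tombstone set and skipped when they surface."""

    def __init__(self):
        self._heap = []
        self._dead = set()

    def push(self, key, item):
        heapq.heappush(self._heap, (key, item))

    def delete(self, key, item):
        # Mark the entry dead; it is physically removed only when popped.
        self._dead.add((key, item))

    def pop_min(self):
        while self._heap:
            entry = heapq.heappop(self._heap)
            if entry in self._dead:
                self._dead.discard(entry)
            else:
                return entry
        raise IndexError("pop from empty heap")
```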

A/B testing (show new feature only to 50% of users)

I'm creating a new feature for my iOS app. After I publish the app, I want to show the new feature to only 50% of the users, so I can test which version produces more orders. I have no idea how to do it without using a third party like Optimizely.
Also, is it possible to do this using Google Tag Manager (GTM)?
Can someone please help me figure this out?
Thank you very much for your time. :)
It's hard to do this on your own, though of course not impossible: the Optimizelys of the world are just programs. You'll need to solve these problems:
Targeting: some algorithm that assigns each user session to either control or (one of) the treatment(s). This has to be random, of course, or you may as well stop there.
Routing: sending each session to its targeted experience.
Logging: you'll need to intelligently log events from sessions as they traverse their targeted experience. These may be many, so be careful not to add latency to your app path. Your statistical analysis will be based on these.
Experience stability: how do you ensure (if you do) that a returning user sees the same experience they've already seen? A deterministic hash of the user ID addresses both this and the targeting step; see the sketch below.
Note as well that Optimizely will only help you if all your changes are on the device and not on the server. If you need to instrument server changes as well, you'll have to look into SiteSpect or Variant.
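A hedged sketch of the targeting/stability piece, assuming you have some stable user identifier (the function and its names are mine; shown in Python for brevity, and the same logic ports directly to Swift):

```python
import hashlib

def assign_bucket(user_id, experiment, treatment_percent=50):
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing (experiment, user_id) spreads users effectively at random,
    yet a returning user always lands in the same bucket, which gives
    you experience stability for free.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the hash onto 0..99 and compare with the rollout percentage.
    return "treatment" if int(digest, 16) % 100 < treatment_percent else "control"
```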
I finally figured out how to do the A/B testing with Google Tag Manager (GTM).
In GTM you can create a variable called 'Google Analytics Content Experiment'. With this variable you can select what percentage of users will see each variation (your experiments). You can create up to 10 variations for a single experiment.
GTM is so cool and powerful. It contains so many features that can save a lot of time, and I totally recommend it to anyone who is going to do A/B testing.

Mahout recommendations with metadata related to preference

I was planning to write a recommender which treats preferences differently depending on contextual information (the time the preference was made, the device used to make it, ...).
In the Mahout in Action book and in the code examples shipped with Mahout, I can't seem to find anything related. In some examples there's metadata (a.k.a. content) used to express user or item similarity, but that's not what I'm looking for.
I wonder if anyone has already made an attempt to do something similar with Mahout?
Edit:
A practical example could be that the current session is on a mobile device, and this should cause a push up (rating*1.1) for all preferences tracked on mobile devices and a drop (rating*0.9) for preferences tracked otherwise.
...
Another example could be that some ratings are collected implicitly and others explicitly. How would I keep track of this fact without coding it directly into the tracked value, and how would I use that information when calculating the scores?
I would say one approach is to use the Rescorer class to do just that, but my guess is that this is what you are referring to when you say that's not what you are looking for.
Another approach would be to pre-process the entire data you have to adjust the preferences according to your needs, before using Mahout to generate recommendations.
If you provide some more detail on how you expect to use your data to modify preferences, people here will be able to help even further.
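To illustrate the pre-processing route from the second suggestion: the context can be applied as a weight on the raw preference values before they ever reach Mahout. A minimal sketch, in Python for brevity; the tuple layout and the boost factors are assumptions for illustration, not a Mahout format.

```python
MOBILE_BOOST, OTHER_DAMP = 1.1, 0.9

def weight_by_context(preferences):
    """preferences: iterable of (user_id, item_id, rating, device)."""
    for user_id, item_id, rating, device in preferences:
        factor = MOBILE_BOOST if device == "mobile" else OTHER_DAMP
        yield user_id, item_id, rating * factor
```

The weighted rows could then be written out as the plain "userID,itemID,value" CSV that Mahout's FileDataModel reads.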

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether real-time audio input matches one of 144 given (and comfortably distinct) phoneme pairs.
Preferably the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this; it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far larger, more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
P.S. I need a solution which runs pretty much in real time, so that even as they are singing the note, the display first blinks to show that it picked up the phoneme pair that was sung, and then glows to show whether they are singing the correct pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Your training database should also be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that, too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and send the recorded waveform over a data channel to a server for recognition, much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.
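On the "minimum technology" theme: small fixed-vocabulary matchers are often built by comparing per-frame acoustic features of the input against recorded templates with dynamic time warping (DTW), which may well be what a library like the one above does internally. A hedged sketch of the comparison step only; feature extraction is left out, and a real system would feed in MFCC frames:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of
    per-frame feature vectors (e.g. MFCCs)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # match both frames
    return cost[n, m]

def classify(features, templates):
    """Return the label of the nearest template, e.g. 'laa' or 'duu'.
    templates: dict mapping label -> template feature sequence."""
    return min(templates, key=lambda label: dtw_distance(features, templates[label]))
```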
