Working from the tutorials, I wanted to try PID-controlling a second-order linear system, but I'm running into segfaults when simulating the closed loop. I've put my code in a Gist; it is mostly identical to the example in the dynamical systems tutorial.
Here's what works:
Simulating a diagram containing only the second-order system
Dropping PendulumPlant in for the second-order system and using the controller
Every step up to simulator.AdvanceTo, which is where the segfault occurs
I'm sure I am missing something obvious here. Does anyone with more experience see what's wrong?
Thanks for reporting this. I didn't see anything on a quick inspection. I ran your code (on both Linux and Mac) and was able to reproduce it. You should absolutely never see a silent segfault, so this is a real issue.
I've escalated it here: https://github.com/RobotLocomotion/drake/issues/12497
FTR - I've also opened a PR to improve the PidController documentation. https://github.com/robotlocomotion/drake/pull/12496
I'm investigating this now. I've reproduced the bug locally using the provided Python, and also directly in C++. [Reproduced in a now-defunct branch]
I'll update when I have something concrete.
Update 1: You've got an algebraic loop in these two systems (one that does not exist for the PendulumPlant, because its derivatives and output are expressed in terms of its state and not its inputs). In this case, both systems' outputs depend directly on their inputs, so: kablooie! The bug here is that this isn't communicated to you right up front; figuring out why is what I'm working on.
Presumably, you'd also like to know what the right version of this program is that doesn't have an algebraic loop. Stay tuned.
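In the meantime, here's a minimal sketch of the shape of a loop that avoids the problem (this is not your Gist; the matrices and gains below are made up for illustration, and I'm using the pydrake names as I recall them). The idea, per the update above, is to give the plant an output that depends only on its state, i.e. C selects the full state and D = 0, so there is no direct feedthrough and the PidController's own feedthrough can't close an algebraic loop:

import numpy as np
from pydrake.systems.analysis import Simulator
from pydrake.systems.controllers import PidController
from pydrake.systems.framework import DiagramBuilder
from pydrake.systems.primitives import ConstantVectorSource, LinearSystem

# Second-order plant: xdot = A x + B u, y = C x + D u, with D = 0 (no feedthrough).
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)              # report the full state [q, qdot] for the PID to consume
D = np.zeros((2, 1))       # output does not depend directly on the input

builder = DiagramBuilder()
plant = builder.AddSystem(LinearSystem(A, B, C, D))
pid = builder.AddSystem(PidController(kp=[10.0], ki=[1.0], kd=[1.0]))
desired = builder.AddSystem(ConstantVectorSource([1.0, 0.0]))  # desired [q, qdot]

builder.Connect(plant.get_output_port(0), pid.get_input_port_estimated_state())
builder.Connect(desired.get_output_port(0), pid.get_input_port_desired_state())
builder.Connect(pid.get_output_port_control(), plant.get_input_port(0))
diagram = builder.Build()

simulator = Simulator(diagram)
simulator.AdvanceTo(10.0)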
Final update
A patch has gone through to correct the underlying bug. Depending on how you're accessing Drake, it's available immediately in master, or you can wait for the next binary release, whichever suits you.
Thanks for bringing this issue to our attention.
Related
I have been doing some tests with the path loss formula and it gives me some pretty good results so far. However, I looked at the original code and saw that the formula used is
distance = Math.pow(10.0, ((-adjustedRssi+txPower)/10*0.35))
where adjustedRssi is RSSI - adjustment. This was giving me very small values for distance so I thought that I must have modified it at some point by accident. After doing the maths and playing around a bit I found that using txPower-adjustment instead of txPower-adjustedRSSI gives me correct distances.
I figured that the error must have been my fault, but looking back at an original copy of the library I see that the formula was actually this way all along.
Is this a mistake or am I missing something obvious? Using the formula as it is right now gives me wrong results, while modifying it the way I did gives the right ones.
Also, why is the formula only used if ratio < 1? Shouldn't it work in either case?
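For reference, here's the model I believe that line is meant to implement, spelled out in Python (the function and variable names below are mine, not the library's). It's the standard log-distance path-loss formula; writing the constant as "/10*0.35" corresponds to a path-loss exponent of n = 1/0.35 ≈ 2.86:

# Log-distance path-loss model: distance = 10 ** ((txPower - rssi) / (10 * n))
def estimated_distance(tx_power, rssi, n=1 / 0.35):
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))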
Yes, you are absolutely right! Reviewing this now, I can see that this was a simple coding error I made when I originally wrote this. I paused work on the path loss formula because I was getting poor results, probably because of this error.
Since this is a development branch of an open source library hosted on Github, it is probably most appropriate to discuss this in that forum. Please feel free to comment directly on the pull request thread here: https://github.com/AltBeacon/android-beacon-library/pull/251. As the lead developer on that project, I would also welcome a pull request with the changes you are making.
TraceGL is a pretty neat project that lets JS programmers trace code paths in JavaScript. It looks something like this: [screenshot]
I'd like to build something similar for Objective-C. I know the runtime makes it rather easy to trace method calls, but how would I trace control flow? For example, in the screenshot above, code paths that were not executed are made obvious with a red highlight. What would be the best way to achieve something similar in an Objective-C/Xcode workflow?
The best I've come up with so far is to write a preprocessor that injects code into temporary source files before sending them to the compiler. Anyone have a better idea?
I guess the visualizer for issues found by Xcode's static analyzer comes pretty close to this, although it will only give you the call path for a particular issue, such as a memory leak.
Try "Product > Analyze" in Xcode, select any of the issues found on any given project and click on the blue arrow in the code editor to see for yourself.
Not exactly an answer for Objective-C and Xcode, but:
For C++ code there is an industrial-quality code coverage tool, BullseyeCoverage:
Function coverage gives you a quick overview and condition/decision coverage gives you high precision
Works with everything you can write in C++ and C, including system-level and kernel mode
If you want to invent/write this kind of tool yourself, I'd recommend taking a look at (evaluating) some existing tools that solve the same task, so that you don't miss any key functionality.
There are basically two categories of such tools:
those working at the binary level, instrumenting byte code, library entry points, etc.
those working at the source level, instrumenting the source code before it goes to the compiler
The purpose of the instrumentation is to insert calls into the code to a profiling runtime that collects runtime statistics for further processing.
Basic calls
timestamp, thread id, source code address, entering
timestamp, thread id, source code address, leaving
The source code address depends on the granularity you are interested in. It can be a function name, or it can be a source file and line number.
The collected performance data can be quite huge, so it is usually summed up and whole call stacks are not captured. That is usually a sufficient level of detail for detecting performance bottlenecks.
Another drawback is that capturing detailed performance data, especially at code points with many hits, will slow the application down significantly.
If you want the complete history, then capture the full trace, including timestamps and thread IDs, and you will be able to recreate the call stacks later, knowing that each enter has a corresponding leave.
To guarantee this pairing, the instrumentation must insert exception-handling calls to make sure that the exit point is logged even if the function throws an exception (what an "exception" is and how to try-finally it depends on the language and the OS platform).
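Here's a toy sketch of those enter/leave records, in Python only because it keeps the example short (a real tool for this question would inject equivalent calls into the C/C++/Objective-C source before compilation):

import functools
import threading
import time

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        tid = threading.get_ident()
        print(f"{time.monotonic():.6f} {tid} {fn.__qualname__} enter")
        try:
            return fn(*args, **kwargs)
        finally:
            # try/finally plays the role of the exception handling mentioned
            # above: the "leave" record is written even if fn raises, so every
            # enter has a matching leave and call stacks can be rebuilt later.
            print(f"{time.monotonic():.6f} {tid} {fn.__qualname__} leave")
    return wrapper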
To pick up all the necessary tricks and tips, evaluate some existing tools and take a look at their instrumentation style.
BTW: in general this is quite a lot of work to do and to get right. I'd personally think twice (or more) about what the outcome will be and what the costs are.
As a want-to-play-with topic I fully recommend it. I created such a tool for troubleshooting Java MIDP applications, working at the C++ source level and the Java binary level, and it was helpful at the time we needed it.
I got an EXC_BAD_ACCESS in my iOS program, and I suspect that the cause is in one of my anonymous blocks, but there are quite a few of those, and I need to narrow down the candidate list a bit.
The stack trace shows the current frame as __lldb_unnamed_function4866$$ProjectName. There are no line numbers or source file names that I can see. No local variables visible either. The debugger shows machine code instructions. This was running on a background event queue, so there is none of my code anywhere else on the stack.
How do I go about finding out what function this is?
I came across a similar situation, and while I can't help (yet) with your problem, I think I know a man who can.
Check out http://www.realmacsoftware.com/blog/block-debugging for an exposition of how to find out a lot more about the evil block in question.
It doesn't help me much, because I'm working from a crash log, but if you're still interested, this is going to give you just about as much as you can get about the unnamed block.
Warning, the above link exposes you to a lot of arcane knowledge, and may make you feel a little inadequate :)
[Edited to add]
Not good enough yet?
After searching through disassembly and doing some manual symbolication, I came to the conclusion that the ___lldb_unnamed_function is a red herring.
I followed How to manually symbolicate a crash log with atos, and it pointed the finger at a completely different function, which came from a third-party library and was a very good candidate for the crash reason (killed by an angry watchdog with badf00d).
In the course of this enquiry I also came across Hopper, a great disassembler; I used the demo version to confirm what the suspicious code was doing, so I'm giving them a namecheck.
Try setting an exception breakpoint by clicking the plus symbol in the breakpoint navigator (Cmd+6).
For an overview of debugging best practices, I found it useful to watch the Stanford lecture on iTunes U.
I want to port some working OpenCV code to an embedded platform. This used to be very difficult, but TI has now come up with nice embedded platforms which are comparatively hassle-free, or so they say.
I want to know the following things:
Given that:
The OpenCV code is already running smoothly on a PC (obviously).
I need to determine these things before purchasing the device.
I can't put the code here on Stack Overflow. :P
The device will be chosen from the Texas Instruments C6000 family.
Questions:
How can I make sure that the porting can be done?
What steps should be taken to make sure that, after porting, the code will (at least) run?
How can I determine whether the code might require changes to make it run smoothly?
Point 3 above is optional.
I need info that will at least give me a starting point in this regard.
What I thought I should do:
List the built-in functions I use.
Then find available online benchmarks for those functions on the particular device, like those shown towards the end of this doc.
...
I need to know how to proceed further.
However, the C6-Integra™ DSP+ARM processor seems the best fit.
The best you can do is to try a device simulator (if it is available), but what you'll see there is far from perfect.
Actually, nothing can tell you how fast and how well the app will run on the embedded device before you run your specific app on that specific device.
So:
Step 1: Buy it.
Step 2: Try it.
Things to consider:
Embedded CPU architecture: does your app need a big cache? How big is the embedded cache?
Algorithm: do you use a lot of floating-point operations? How good is the device at floating-point ops?
Memory transfers: do you move data around a lot? The data bus on a PC is waaay faster than on an embedded device.
Hardware support: do you use a lot of double-precision calculations? They are emulated on ARMs, and they are going to kill your app (from milliseconds on a PC it can go to seconds on an ARM).
Acceleration: do your functions use SSE? (Many OpenCV funcs are SSE'd, even if you don't know it.) Do you have the NEON counterpart? (OpenCV does not have much support for that.) The difference can be orders of magnitude from x86 SSE to embedded without NEON (see the sketch at the end of this answer).
and many, many others.
So, again: no one can tell you how it will work. Only the combination of the specific app and the real device tells the truth.
Even a run on a similar device is not conclusive. The app can run smoothly on a given processor, and on another one with a similar clock frequency or listed memory it can slow down far too much.
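One cheap desktop experiment that hints at how much the SSE point above matters for your particular code: toggle OpenCV's optimized code paths and re-time your pipeline. The sketch below uses the Python bindings and a placeholder operation; substitute whatever your app actually does:

import cv2
import numpy as np

img = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)  # synthetic frame

for optimized in (True, False):
    cv2.setUseOptimized(optimized)  # enable/disable the SIMD-optimized code paths
    t0 = cv2.getTickCount()
    for _ in range(100):
        cv2.medianBlur(img, 5)
    dt = (cv2.getTickCount() - t0) / cv2.getTickFrequency()
    print(f"useOptimized={optimized}: {dt / 100 * 1000:.2f} ms per call")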
This is an interesting question, but "run" is a very generic word in this context, so I feel the need to break it down into two other questions:
Will it compile in an embedded device?
Will it run as fast/smooth as in a PC?
I've used OpenCV on a lot of different devices, including ARM, SH4, and MIPS, and I found that sometimes the manufacturer of the device itself provides a compiled version of OpenCV (to my surprise), which is great. That's something you can look into; maybe the manufacturer of your device provides OpenCV binaries.
There's no way to know for sure how smooth your OpenCV application will be on the target device unless you are able to find some benchmark of OpenCV running on it. PCs have far better processing power than embedded devices, so you can expect less performance from the target device.
There are third-party applications, like opencv-performance, that you can use to test/benchmark the environment once you get your hands on it. And if performance is such a big deal in this project, you might also be interested in this nice article, which explains some timing tests done on a couple of OpenCV features, comparing implementations using the C and C++ interfaces of OpenCV.
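Until you have the board, a crude timing loop that you can run unchanged on the PC and later on the device at least gives you a baseline to compare against (the functions and frame size below are placeholders; time the operations from your own pipeline instead):

import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # synthetic BGR frame

t0 = cv2.getTickCount()
for _ in range(200):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
elapsed = (cv2.getTickCount() - t0) / cv2.getTickFrequency()
print(f"cvtColor + Canny: {elapsed / 200 * 1000:.2f} ms per frame")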
Does no one ever need a histogram in Delphi?
Google gave me a bunch of half-baked code snippets, which means that each time you need one you have to reinvent yet another ad hoc wheel.
Torry mostly pointed me to some very expensive closed-source math/statistics or financial packages that have histograms as a by-product. But they are very expensive, and since you have no source code, each time you install an update to the IDE/RTL/VCL you're probably screwed until the vendor releases (soon? ever?) updated packages. That's assuming the vendor still exists.
S.O. told me nothing, nil.
As for what I found...
Mitov.com provides some histograms in PlotLab, which is said to be free for non-commercial use. Alas, it is again closed-source, and if the histogram (quite a fancy one, let's admit) is the only thing I need from it, why pay the full price?
One more example: http://DSpatial.sf.net
Just a few years ago I used it in Delphi 5, but even then I felt the author was losing interest in the project. I made a few enhancements, fixed some bugs, he merged them, and that was all. The component was not very useful and lacked features, yet it was better than nothing. Now the project seems to be completely dead. Good old days, etc. But I do not want them back :-)
And Stack Overflow seemingly carries not a single question about it. But maybe no one bothered to create a topic after a search found nothing? I mean, Delphi was created for database access, histograms are one of the basic ways to visualize data, and no one ever crosses the two? Something with nice styling, with a rich mouse tooltip like the HTML/CSS/JS one on http://www.moskva.fm/stations/FM_95.2?
Or is this too domain-specific for a good abstraction to ever be possible?
TChart is a control that ships with most versions of Delphi. TChart can be used to make histograms (bar charts) in style. The following links give you some ideas about how to use it: http://www.digitalcoding.com/tutorials/delphi/Simple-steps-to-create-Delphi-chart.html and http://delphi.about.com/od/adptips2006/qt/chart_selectbar.htm .
If you need something with code, google the pages at delphiforfun.org/programs/oscilloscope.htm . These are not controls. The oscilloscope article has a histogram with source. Some of the other projects at the site have other histogram graphs with source: not elegant, but useful and free. Use them as a template to make your own control.
The link at http://delphiforfun.org/programs/Math_Topics/probability_distributions.htm shows how to make your own statistics displays with "histograms." This example makes use of TChart.
Here is some more stuff to try that I found looking through my resource file:
http://wiki.lazarus.freepascal.org/TAChart, http://members.home.nl/mvanwesten/en_lazarus.html , http://www.martinole.org/TAChart.html ...some of these are GPU components that supposedly work with some versions of Delphi. Perhaps this is your lucky day as there is some source code. The first and third listed probably will work reasonably for histograms. You may have to write your own statistics algorithms.
Found this thread while doing some searching. The ImageEn component suite has a THistogramBox component. It's NOT the prettiest thing in the world, but it's the only one I've found so far.
http://www.imageen.com
I came across a histogram example in a gdiplus package available for download from code central. I don't know if it will do what you need but when I saw it I remembered your SO question.
HTH.
If you were using FireMonkey, you could just create a series of TRectangles. They can be made unclickable by turning HitTest off. Or is that too easy and straightforward?