Is there a verbose mode for trajectory optimization? - drake

When calling functions like DirectCollocation, is there a way to see some progress along the way (a verbose mode)? I am not sure how helpful it would be for catching formulation errors, but just wondering :)

There are two ways to monitor the progress:
You could add a visualization callback function with prog.AddVisualizationCallback. If this callback function visualizes the trajectory, then you can monitor the visualization at every iteration of the optimizer.
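A minimal sketch of that approach (assuming an existing MathematicalProgram named prog; the callback below just prints the decision-variable values, but a real one could reconstruct and draw the trajectory):

// Called by the solver with the current decision-variable values
// at each iteration.
prog.AddVisualizationCallback(
    [](const Eigen::Ref<const Eigen::VectorXd>& x) {
      std::cout << "current decision variables: " << x.transpose() << "\n";
    },
    prog.decision_variables());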
If you use the SNOPT solver, you can ask it to print statistics for each iteration. The pseudo-code looks like this:
std::string print_file_name = "foo.txt";
// Ask SNOPT to write its per-iteration statistics to the given file.
prog.SetSolverOption(SnoptSolver::id(), "Print file", print_file_name);
SnoptSolver solver;
const auto result = solver.Solve(prog, initial_guess);
Then Snopt will print its statistics in each iteration to foo.txt.

Related

Finding factors of number and factoring a polynomial (Lua)

I'm attempting to write a program in Lua (I have some experience with it) to find the factors of a number and possibly factor an input polynomial. I'm not sure if everyone has learned factoring the same way, but I learned it using a "multiply to" and "add to" ("x-box") method. It'd be interesting to actually draw out the method in Lua (see the picture attached) and display the answer. If not to draw, then I'd just use the print command.
I would like the program to have two parameters: one would be the number whose prime factors are to be determined, and the other would be the polynomial input (the a, b and c values of ax^2+bx+c) to be factored. Then I may also attempt perfect squares and difference of squares.
I'd like some guidance in this and I'm in no way expecting a full working program. Thanks in advance.
You can write a function that loops over every candidate factor, like this:
function factor(val)
  val = math.floor(val)
  local found = {}
  for m = 1, val do
    if val % m == 0 then            -- m divides val exactly
      local i = val / m             -- the matching cofactor
      print(m .. "*" .. i .. "=" .. val)
      table.insert(found, m .. "*" .. i)
    end
  end
  return found
end
It will return all possible factor pairs; the downside is that each pair eventually comes back in reverse order, but that's not a problem (you could stop the loop at math.sqrt(val) if you want each pair only once).
Usage example: factor(6)
returns: {"1*6", "2*3", "3*2", "6*1"}.
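For the polynomial half of the question, here is a rough sketch of the "multiply to"/"add to" method for ax^2+bx+c: find two integers p and q with p*q = a*c and p+q = b, then split the middle term. The function name and printout are just illustrative:

function factorQuadratic(a, b, c)
  local target = a * c                 -- "multiply to" a*c, "add to" b
  if target == 0 then return 0, b end  -- degenerate case
  for p = -math.abs(target), math.abs(target) do
    if p ~= 0 and target % p == 0 then
      local q = target / p             -- the matching cofactor
      if p + q == b then
        print("split " .. b .. "x into " .. p .. "x + " .. q .. "x")
        return p, q
      end
    end
  end
  return nil                           -- no integer factorization exists
end

For example, factorQuadratic(1, 5, 6) prints "split 5x into 2x + 3x", leading to x^2+5x+6 = (x+2)(x+3).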

FSK demodulation with GNU Radio

I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device encodes GPS data fed into it as audio in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in frequency domain so that mark and space are at +-500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (middle between 2200 and 1200 Hz) ends up at 0Hz, and then filtered by a low pass (gain = 1.0, Stopband starts at 1 kHz, Passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth that's still present in the signal is much lower than the sample rate, so you might also just downsample without losses (set decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing just about any "somewhat round" pulse shape should work.
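Here is a rough Python sketch of that chain (written against the GNU Radio 3.7 API; the wav file name, sample rate, baud rate, filter widths, and clock-recovery gains are all assumptions to tune, and the Clock Recovery MM block stands in for the polyphase version):

from gnuradio import gr, blocks, analog, digital, filter
from gnuradio.filter import firdes
import math

samp_rate = 44100                  # assumed wav sample rate
baud = 1200                        # assumed symbol rate
center = (2200.0 + 1200.0) / 2.0   # midpoint of mark/space, shifted to 0 Hz

tb = gr.top_block()
src = blocks.wavfile_source("capture.wav", False)   # hypothetical file name
# shift the FSK pair to +-500 Hz and low-pass as described above
xlate = filter.freq_xlating_fir_filter_fcf(
    1, firdes.low_pass(1.0, samp_rate, 600, 400), center, samp_rate)
demod = analog.quadrature_demod_cf(samp_rate / (2 * math.pi * 500))
clock = digital.clock_recovery_mm_ff(samp_rate / float(baud),
                                     0.25 * 0.175 ** 2, 0.5, 0.175, 0.005)
slicer = digital.binary_slicer_fb()                 # the "sign decision"
sink = blocks.file_sink(gr.sizeof_char, "bits.bin")
tb.connect(src, xlate, demod, clock, slicer, sink)
tb.run()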
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:

Matrix Concatenation using Actionscript Matrix3D

I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem can be summarized as arising from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because the semantics of prepending one matrix to another is not, and is never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized in this way. Because I have no control over the calling sequence that the framework will issue, e.g., I must live with an append operation, I can only try to fix things on the side where I prepare the matrix which is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it would meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations or other maneuvers which would let me modify a viewProjection matrix that was designed to be prepended, so that it will turn out right when it is, instead, appended to the Object's World Space coordinates?
I am providing an answer more out of desperation than sure understanding, and still hope I will receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without being able to understand how to enter text involving superscripts, I am not sure if I can reduce my approach to a helpful mathematical formulation, so I will invent a syntax using functional notation. The equivalency noted by Dunn and Parberry would be something like:
transpose (AB) = transpose (B) x transpose (A)
That comes close to solving my problem, which problem, to restate, is really just a problem arising out of the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties whose implementation involves prepending rather than appending.
BA = transpose (transpose (A) x transpose (B))
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.
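As a concrete check of that identity with flash.geom.Matrix3D (a sketch only; A and B stand in for the object transform and worldToClip, and I am relying on the documented semantics that append(lhs) computes lhs * this while prepend(rhs) computes this * rhs):

import flash.geom.Matrix3D;

// Desired combination, the tutorial's way: A * B via prepend.
var desired:Matrix3D = A.clone();
desired.prepend(B);

// The same result reached through transposes plus an append:
var At:Matrix3D = A.clone();
At.transpose();
var Bt:Matrix3D = B.clone();
Bt.transpose();

var viaAppend:Matrix3D = At.clone();
viaAppend.append(Bt);     // transpose(B) * transpose(A) == transpose(A * B)
viaAppend.transpose();    // transposing back recovers A * B

// viaAppend.rawData should now match desired.rawData (within float error).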

How to Recognize a Pattern in a Stream of Numbers

I have a stream of numbers (integers for the sake of discussion) being sampled off an analog input (an A/D converter attached to a potentiometer). I am curious how I would recognize a pattern in the numbers in real time.
That is to say, if someone quickly twiddles the pot all the way up and back down, how do I recognize that, versus if they turn it only halfway? Or what if they turn it up and down three times in a row? How can I convert these actions into distinct "events"? This seems especially tricky to me since the time window over which each of these events occurs will be modestly variable.
I can think of a few quick, hacky ways to do this, but nothing that I am confident in. I am also curious how one would expand this out to multiple different inputs (i.e. input off a spectrograph). Does that change things dramatically? I am not even sure what topic area I should be googling.
If you know what you are looking for, correlate the input signal against a replica of what you expect. Basically, implement a matched filter. If you want to see when the input stream is -127, -63, 0, 63, 127, implement a direct-form FIR filter with these values as the coefficients and look for a maximum in the output. The maximum output of a filter with those coefficients occurs when the data in the filter is -127, -63, 0, 63, 127.
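As a small sketch of that idea in Python/NumPy (the stream here is random stand-in data with one copy of the event buried in it):

import numpy as np

# Matched filter: use the time-reversed template as FIR coefficients, so
# that convolution computes the correlation with the template itself.
template = np.array([-127, -63, 0, 63, 127], dtype=float)
coeffs = template[::-1]

samples = np.random.randint(-128, 128, 1000).astype(float)  # stand-in stream
samples[500:505] = template        # bury one occurrence of the event

output = np.convolve(samples, coeffs, mode="valid")
peak = int(np.argmax(output))      # output peaks where the event sits
print("strongest match starts near sample", peak)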
Google "Matched Filter Detection" or or "detection theory" maybe even "Feature detection"
If you don't know exactly what you are looking for, or what you are looking for is variable, it gets more complicated. You would then try to implement a filter whose output would give you information about what is going on. The example that I gave above would show the output spike up when that input sequence occurred. If you then saw that spike occurring with regular frequency, you would guess that the input event was occurring with regular frequency.
If you made your filter 0, 63, 127, 63, 0, which correlates with turning the knob all the way up and then back down again, and on the output you saw the aforementioned spike occur, but with a lower maximum amplitude and spread over a wider time, that might tell you that the knob was turned all the way up and then back down, but either slower or faster than the speed for which the filter is designed to give a maximum response.
To combat this you might implement 3 of these filters in parallel, one designed for a slow knob turn, one for a medium-speed turn, and one for a fast turn. Looking at the 3 outputs then gives you 3 different correlations, which together tell you more about what is occurring; a sketch of that filter-bank idea follows.
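Continuing the NumPy sketch above, a tiny filter bank might look like this (the stretch factors are arbitrary assumptions for slow/medium/fast turns):

import numpy as np

base = np.array([0, 63, 127, 63, 0], dtype=float)   # up-and-back-down shape
speeds = {"fast": 1, "medium": 2, "slow": 4}        # assumed stretch factors

def stretched(template, factor):
    # Linearly interpolate the template onto a grid `factor` times as dense,
    # i.e. the same knob motion performed more slowly.
    n = (len(template) - 1) * factor + 1
    return np.interp(np.linspace(0, len(template) - 1, n),
                     np.arange(len(template), dtype=float), template)

def bank_response(samples):
    # Run all three matched filters; the largest peak hints at the turn speed.
    return {name: np.convolve(samples, stretched(base, f)[::-1],
                              mode="valid").max()
            for name, f in speeds.items()}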
Did you consider taking the running difference of the signal (its differentiation)?

What are the advantages of the "apply" functions? When are they better to use than "for" loops, and when are they not? [duplicate]

Possible Duplicate:
Is R's apply family more than syntactic sugar
Just what the title says. Stupid question, perhaps, but my understanding has been that when using an "apply" function, the iteration is performed in compiled code rather than in the R parser. This would seem to imply that lapply, for instance, is only faster than a "for" loop if there are a great many iterations and each operation is relatively simple. For instance, if a single call to a function wrapped up in lapply takes 10 seconds, and there are only, say, 12 iterations of it, I would imagine that there's virtually no difference at all between using "for" and "lapply".
Now that I think of it, if the function inside the "lapply" has to be parsed anyway, why should there be ANY performance benefit from using "lapply" instead of "for" unless you're doing something that there are compiled functions for (like summing or multiplying, etc)?
Thanks in advance!
Josh
There are several reasons why one might prefer an apply family function over a for loop, or vice-versa.
Firstly, for(), apply(), and sapply() will generally be about as quick as one another if used correctly. lapply() does more of its operating in compiled code within the R internals than the others, so it can be faster than those functions. The speed advantage appears greatest when the act of "looping" over the data is a significant part of the compute time; in many general day-to-day uses you are unlikely to gain much from the inherently quicker lapply(). In the end, all of these will be calling R functions, which need to be interpreted and then run.
for() loops can often be easier to implement, especially if you come from a programming background where loops are prevalent. Working in a loop may be more natural than forcing the iterative computation into one of the apply family of functions. However, to use for() loops properly, you need to do some extra work to set up storage and manage plugging the output of the loop back together again. The apply functions do this for you automagically. E.g.:
IN <- runif(10)
OUT <- logical(length = length(IN))  # allocate storage up front
for(i in seq_along(IN)) {
    OUT[i] <- IN[i] > 0.5
}
That is a silly example, as > is a vectorised operator, but I wanted something to make the point that you have to manage the output yourself. The main thing is that with for() loops you should always allocate sufficient storage to hold the outputs before you start the loop. If you don't know how much storage you will need, allocate a reasonable chunk, then check inside the loop whether you have exhausted it and bolt on another big chunk as needed.
The main reason, in my mind, for using one of the apply family of functions is for more elegant, readable code. Rather than managing the output storage and setting up the loop (as shown above) we can let R handle that and succinctly ask R to run a function on subsets of our data. Speed usually does not enter into the decision, for me at least. I use the function that suits the situation best and will result in simple, easy to understand code, because I'm far more likely to waste more time than I save by always choosing the fastest function if I can't remember what the code is doing a day or a week or more later!
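For instance, the loop above collapses to a single expression, with R managing the storage:

OUT <- sapply(IN, function(x) x > 0.5)
## (and since `>` is vectorised, OUT <- IN > 0.5 would do here anyway)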
The apply family lend themselves to scalar or vector operations. A for() loop will often lend itself to doing multiple iterated operations using the same index i. For example, I have written code that uses for() loops to do k-fold or bootstrap cross-validation on objects. I probably would never entertain doing that with one of the apply family as each CV iteration needs multiple operations, access to lots of objects in the current frame, and fills in several output objects that hold the output of the iterations.
As to the last point, about why lapply() can possibly be faster than for() or apply(), you need to realise that the "loop" can be performed in interpreted R code or in compiled code. Yes, both will still be calling R functions that need to be interpreted, but if you are doing the looping and calling directly from compiled C code (as lapply() does) then that is where the performance gain can come from over apply(), say, which boils down to a for() loop in actual R code. See the source for apply() to see that it is a wrapper around a for() loop, and then look at the code for lapply(), which is:
> lapply
function (X, FUN, ...)
{
    FUN <- match.fun(FUN)
    if (!is.vector(X) || is.object(X))
        X <- as.list(X)
    .Internal(lapply(X, FUN))
}
<environment: namespace:base>
and you should see why there can be a difference in speed between lapply() and for() and the other apply family functions. .Internal() is one of R's ways of calling compiled C code used by R itself. Apart from a little input manipulation and a sanity check on FUN, the entire computation is done in C, calling the R function FUN. Compare that with the source for apply().
From Burns' R Inferno (pdf), p25:
Use an explicit for loop when each iteration is a non-trivial task. But a simple loop can be more clearly and compactly expressed using an apply function. There is at least one exception to this rule ... if the result will be a list and some of the components can be NULL, then a for loop is trouble (big trouble) and lapply gives the expected answer.
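A quick sketch of that exception (f is a made-up function that returns NULL for its last input):

f <- function(i) if (i < 3) i else NULL

out_for <- vector("list", 3)
for (i in 1:3) out_for[[i]] <- f(i)
length(out_for)     # 2 -- assigning NULL to out_for[[3]] deleted that element

out_lapply <- lapply(1:3, f)
length(out_lapply)  # 3 -- the NULL component is kept, as expected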
