How can I add negative delay to a Simulink function? - signal-processing

Assume that you have a Simulink signal signal1 (step / signal builder / ...). Is there a block that can build a signal signal2 that is the original signal signal1 shifted forwards in time, i.e. signal2(t) = signal1(t+T)? A delay block only produces a signal that is shifted backwards in time.
I know that you can use parameters in, for instance, a step function and set them from a script. I just wondered whether a 'negative delay' block exists.
Shifting a signal forwards in time is physically impossible for causal systems, but in some applications (e.g. offline simulation) it is meaningful.

The solution is to define a signal0 that corresponds to signal1 shifted backwards in time by some T0 >= T. Then both signal1 and signal2 can be recovered by applying suitable transport delays to signal0: a delay of T0 gives signal1, and a delay of T0 - T gives signal2 = signal1(t+T).
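Outside of Simulink, the bookkeeping can be sanity-checked numerically. Here is a small C++ sketch of the same idea (step positions and delays are in samples and chosen arbitrarily):

    #include <cstdio>
    #include <vector>

    // Delay a sampled signal by d samples (d >= 0), padding with its first value.
    std::vector<double> delay(const std::vector<double>& s, int d) {
        std::vector<double> out(s.size(), s.front());
        for (std::size_t i = static_cast<std::size_t>(d); i < s.size(); ++i)
            out[i] = s[i - d];
        return out;
    }

    int main() {
        // signal0: a step at sample 2, i.e. signal1 already shifted backwards by T0 = 4.
        std::vector<double> signal0 = {0, 0, 1, 1, 1, 1, 1, 1, 1, 1};
        const int T0 = 4, T = 2;                // T0 >= T, both in samples
        auto signal1 = delay(signal0, T0);      // original signal: step at sample 6
        auto signal2 = delay(signal0, T0 - T);  // signal1 advanced by T: step at sample 4
        for (std::size_t i = 0; i < signal0.size(); ++i)
            std::printf("%zu: s1=%g s2=%g\n", i, signal1[i], signal2[i]);
        return 0;
    }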

Related

Updating the drake system states from robot hardware pose during initialization

I have been trying to set up a custom manipulation station with Kuka IIWA hardware in drake. I got the hardware interface working. When running a joint teleoperation code (adapted from drake/examples/manipulation_station/joint_teleop.py), the robot jerks violently (all joints try to move to the 0 position) at first and then continues to operate normally. On digging deeper, I found that this is caused by the FirstOrderLowPassFilter system. While advancing the simulation a tiny bit (simulator.AdvanceTo(1e-6)) to evaluate the LCM messages, so that the initial GUI slider values, the filter_initial_output_value, and the plant joint positions match the hardware, the FirstOrderLowPassFilter outputs a momentary value of 0. This sets the IIWA_COMMAND position to zero for an instant and causes a jerk.
How can I avoid this behavior?
As a workaround, I am subscribing separately to the raw LCM message from the hardware before initializing the drake systems, and setting the filter_initial_output_value before advancing the simulation. Is this the recommended way?
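Roughly, the workaround looks like this (a minimal C++ sketch; the IIWA_STATUS channel name and the helper are illustrative, and error handling is omitted):

    #include <Eigen/Dense>
    #include "drake/lcm/drake_lcm.h"
    #include "drake/lcm/lcm_messages.h"
    #include "drake/lcmt_iiwa_status.hpp"

    // Block until one status message arrives from the hardware, then return
    // the measured joint positions so they can seed filter_initial_output_value.
    Eigen::VectorXd WaitForInitialPose(drake::lcm::DrakeLcm* lcm) {
      drake::lcm::Subscriber<drake::lcmt_iiwa_status> sub(lcm, "IIWA_STATUS");
      while (sub.count() == 0) {
        lcm->HandleSubscriptions(10 /* timeout_millis */);
      }
      const drake::lcmt_iiwa_status& status = sub.message();
      return Eigen::Map<const Eigen::VectorXd>(
          status.joint_position_measured.data(), status.num_joints);
    }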
I think what you're doing (manually reading the LCM message) is fine.
As an alternative, note how DiscreteDerivative offers the suppress_initial_transient = true option. Perhaps we could add a similar option (via an unrestricted update event) to FirstOrderLowPassFilter so that the initial output value is sampled from the input at t == 0. But the event sequencing at startup may still be difficult: we would essentially need to initialize the systems in their dataflow order, refreshing output ports as events fire, which is not natively supported.
As another alternative, perhaps we could configure the IIWA_COMMAND publisher not to publish at t == 0, publishing only for t >= 0.005.
FirstOrderLowPassFilter has a method to set the initial output value: https://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_first_order_low_pass_filter.html#aaef7539cfbf1acfa0cf487c371bc5360
It is used in the example that you copied from:
https://github.com/RobotLocomotion/drake/blob/master/examples/manipulation_station/joint_teleop.py#L146
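In C++, using that method looks roughly like this (a sketch; the diagram/filter/context plumbing and q0 are placeholders for your own setup):

    #include <Eigen/Dense>
    #include "drake/systems/framework/diagram.h"
    #include "drake/systems/primitives/first_order_low_pass_filter.h"

    // Seed the filter so that its output at t == 0 already equals the measured
    // hardware pose q0 instead of the default of zero.
    void SeedFilter(const drake::systems::Diagram<double>& diagram,
                    const drake::systems::FirstOrderLowPassFilter<double>& filter,
                    drake::systems::Context<double>* root_context,
                    const Eigen::VectorXd& q0) {
      drake::systems::Context<double>& filter_context =
          diagram.GetMutableSubsystemContext(filter, root_context);
      filter.set_initial_output_value(&filter_context, q0);
    }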

What happens in the GPU between the call to gl.drawArrays() and the call to gl.readPixels()?

Changing the Title in the hopes of being more accurate.
We have some code that runs several programs in succession by calling drawArrays(). The output textures from each stage are fed into the next, and so on.
After the final call to draw, a call to readPixels() is made.
This call takes an enormous amount of time (for an output of < 1000 floats). I have measured a readPixels of that size in isolation, and it takes 1 or 2 ms; in our case, however, we see a delay of about 1500 ms.
So we conjectured that the actual computation must not start until we call readPixels(). To test this theory and to force the computation, we placed a call to gl.flush() after each draw call. This made no difference.
So we replaced it with a call to gl.finish(). Again, no difference. We finally replaced it with a call to getError(). Still no difference.
Can we conclude that the GPU actually does not draw anything unless the framebuffer is read from? Can we force it to do so?

Controlling the phase of signal in pure data

I need to figure out a way of changing the phase of a signal. The objective is to generate two signals, one with its phase shifted, and observe the patterns when they are combined.
Below is the program I'm using so far:
As in the above setting, I need to use the same signal to generate a phase-shifted signal, then combine the two signals and observe the patterns.
Can someone help me out on this?
Thanks.
Using the right inlet of the [osc~] object is a valid way to set the phase of an oscillator, but it isn't the only way, or even the most flexible one: the right inlet only accepts a float, at the control level.
A more comprehensive manipulation of phase can be done at the signal level using the [phasor~], [cos~], [wrap~], and [+~] objects. Essentially, you are performing the same function as [osc~] with a table-lookup technique built from [phasor~] and [cos~]. You could also read another table with [tabread4~] instead of [cos~].
This technique keeps your oscillators in sync. You can manipulate the phase of your oscillators with other oscillators, with table lookups, and of course still with floats (so long as the phase value is between 0 and 1, hence the [wrap~] object).
[image: phase modulation at the signal level]
Afterwards, like the other examples here, you can add the signals together and write them to corresponding tables or output the signal chain or both.
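If it helps, the same signal chain can be written out numerically. Here is a hedged C++ sketch of it (440 Hz and 44.1 kHz are arbitrary choices, and the cosine call stands in for the [cos~] table):

    #include <cmath>
    #include <cstdio>

    const double kTwoPi = 6.283185307179586;

    // One output sample of the [phasor~] -> [+~ offset] -> [wrap~] -> [cos~] chain.
    // 'ramp' is the shared phasor output in [0,1); 'offset' is the phase in cycles.
    double osc_sample(double ramp, double offset) {
        double phase = ramp + offset;
        phase -= std::floor(phase);      // [wrap~]: wrap the phase back into [0,1)
        return std::cos(kTwoPi * phase); // [cos~]: cosine "table" lookup
    }

    int main() {
        const double freq = 440.0, sr = 44100.0;
        double ramp = 0.0;               // [phasor~] state, shared by both voices
        for (int n = 0; n < 8; ++n) {
            // Both voices read the same ramp, so they stay locked in sync;
            // only the offset differs (0.25 cycle = 90 degrees).
            std::printf("%+.4f  %+.4f\n",
                        osc_sample(ramp, 0.0), osc_sample(ramp, 0.25));
            ramp += freq / sr;           // advance the ramp once per sample
            ramp -= std::floor(ramp);
        }
        return 0;
    }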
Here's how you might do the same for a custom table lookup. Of course, you'd replace sometable with your custom table name and num-samp-in-some-table with the number of samples in your table.
[image: signal-level phase modulation with custom tables]
Hope it helps!
To change the phase of an oscillator, use its right-hand inlet.
Quoting Johannes Kreidler's Programming Electronic Music in Pd:
3.1.2.1.3 Phase
In Pd, you can also set the membrane position at which a sound wave should begin (or to which it should jump). This is called the phase of the wave. You can set the phase in Pd via the right inlet of the "osc~" object, with numbers between 0 and 1:
A wave's entire period is encompassed by the range from 0 to 1. However, phase is often spoken of in terms of degrees, where the entire period is 360 degrees. One speaks, for example, of a "90 degree phase shift"; in Pd, the input for that phase would be 0.25.
So for instance, if you want to observe how two signals can become mute due to destructive interference, you can try something like this:
Note that I connected a bang to adjust the phases of both signals simultaneously. This is important: while you can reset the phase of one oscillator to any value between 0.0 and 1.0 at any moment, the other oscillator won't be reset, so the result would be fairly random (you never know what phase value the other signal will be at!). Resetting both does the trick.
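To check the cancellation numerically, here is a small C++ sketch (the 0.5-cycle offset, i.e. 180 degrees, is what matters; the frequency is arbitrary):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double kTwoPi = 6.283185307179586;
        const double freq = 440.0, sr = 44100.0;
        double ramp = 0.0;
        for (int n = 0; n < 5; ++n) {
            double a = std::cos(kTwoPi * ramp);          // oscillator at phase 0.0
            double b = std::cos(kTwoPi * (ramp + 0.5));  // same oscillator at phase 0.5
            std::printf("a=%+.4f  b=%+.4f  a+b=%+.4f\n", a, b, a + b);  // a+b is ~0
            ramp += freq / sr;
        }
        return 0;
    }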

Interrupt during network I/O == crash?

It seems that when an I/O pin interrupt occurs while network I/O is being performed, the system resets -- even if the interrupt function only declares a local variable and assigns it (essentially a do-nothing routine). So I'm fairly certain it isn't a matter of spending too much time in the interrupt function. (My actual working interrupt functions are pretty spartan: strictly increment and assign, without even any conditional logic.)
Is this a known constraint? My workaround is to disconnect the interrupt while using the network, but of course this introduces potential for data loss.
-- Rising-edge callback: record the trigger time, then re-arm for the falling edge.
function fnCbUp(level)
  lastTrig = rtctime.get()
  gpio.trig(pin, "down", fnCbDown)
end
-- Falling-edge callback: bump a counter held in RTC memory, then re-arm.
function fnCbDown(level)
  local spin = rtcmem.read32(20)
  spin = spin + 1
  rtcmem.write32(20, spin)
  lastTrig = rtctime.get()
  gpio.trig(pin, "up", fnCbUp)
end
gpio.trig(pin, "down", fnCbDown)
gpio.mode(pin, gpio.INT, gpio.FLOAT)
branch: master
build built on: 2016-03-15 10:39
powered by Lua 5.1.4 on SDK 1.4.0
modules: adc,bit,file,gpio,i2c,net,node,pwm,rtcfifo,rtcmem,rtctime,sntp,tmr,uart,wifi
Not sure if this should be an answer or a comment; it may be a bit long for a comment, though.
So, the question is "Is this a known constraint?", and the short but unsatisfactory answer is "no". I can't leave it at that...
Is the code excerpt enough for you to conclude the reset must occur due to something within those few lines? I doubt it.
What you seem to be doing is a simple "global" increment on each GPIO 'down', with some debounce logic. However, I don't see any debouncing; what am I missing? You store the time in the global lastTrig, but you never do anything with it. For the debouncing alone you won't need rtctime, IMO, but I doubt it has anything to do with the problem.
I have a gist of a tmr.delay-based debounce as well as one with tmr.now that is more like a throttle. You could use the first like so:
GPIO14 = 5                        -- NodeMCU pin index 5 maps to GPIO14
spin = 0                          -- pulse counter
function down()
  spin = spin + 1
  tmr.delay(50)                   -- short blind window for switch debounce (argument is in microseconds)
  gpio.trig(GPIO14, "up", up)     -- re-arm to trigger on the rising edge
end
function up()
  tmr.delay(50)
  gpio.trig(GPIO14, "down", down) -- re-arm to trigger on the falling edge
end
gpio.mode(GPIO14, gpio.INT)       -- gpio.FLOAT by default
gpio.trig(GPIO14, "down", down)
I also suggest running this against the dev branch, since you said it may be related to network I/O during interrupts.
I have nearly the same problem. Running ESP8266WebServer and using a GPIO14 interrupt, with too-fast impulses as input, the system stops recording the interrupts.
Please see here for more details: http://www.esp8266.com/viewtopic.php?f=28&t=9702
I'm using Arduino IDE 1.6.9, but the problem seems to be the same.
I used an ESP8266-07 as a generator & counter (without a webserver) to generate the pulses, wired to my ESP8266 water system. The generator works very well at well over 240 pulses/sec, generating and counting on the same ESP.
But the ESP water system stops recording interrupts at more than 50 impulses per second:
/*************************************************/
/* ISR Water pulse counter */
/*************************************************/
/**
* Invoked by interrupt14 once per rotation of the hall-effect sensor. Interrupt
* handlers should be kept as small as possible so they return quickly.
*/
// Note: G_pulseCount should be declared 'volatile' at file scope so the main
// loop re-reads it after the ISR changes it.
void ICACHE_RAM_ATTR pulseCounter()
{
    cli();                  // mask interrupts while touching the shared counter
    G_pulseCount++;         // increment the pulse counter
    Serial.println ( "!" ); // debug only: serial I/O inside an ISR is risky
    sei();
}
The serial output is here only to show what is happening.
It shows the correct pulse count until the webserver interacts with the network; from then on, it seems, the interrupt is blocked (no more serial output).
When I stress the system by refreshing the website several times in a short period, the interrupt counting resumes briefly, but then stops again shortly after.
The problem lies somewhere in the interaction between interrupt handling and the web services.
I hope this helps to track the issue down. I'm interested in any solutions. Who can help?
Thanks from Mickbaer
Berlin, Germany
Email: michael.lorenz#web.de

How to find out the value of 1 iteration in microblaze

I am trying to stretch the computation time of a function to 1 second without using the sleep function, on a Xilinx MicroBlaze running the Xilkernel.
Hence, may I know how many iterations of a simple for loop I would need to reach a computation time of 1 second?
You can't do this reliably and accurately. If you want a bodge like this, you'll have to calibrate it yourself for your particular system: MicroBlaze is so configurable that there isn't one right answer. The bodgy way is:
Set up a GPIO peripheral, set one of the pins to '1', run a loop of 1000 iterations (making sure the compiler doesn't optimise it away!), then set the pin back to '0'. Hang a scope off that pin (you're doing embedded work, you do have a scope, right?) and see how long the loop takes; a sketch follows below.
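A hedged sketch of that bodge using the Xilinx XGpio driver (the XPAR_* macros come from your design's generated xparameters.h and will differ per system):

    #include "xgpio.h"
    #include "xparameters.h"

    // Pulse a GPIO pin around a 1000-iteration loop so its width can be
    // measured on a scope.
    void time_loop_on_scope(void)
    {
        XGpio gpio;
        volatile int i;                          /* volatile keeps the loop alive */
        XGpio_Initialize(&gpio, XPAR_GPIO_0_DEVICE_ID);
        XGpio_SetDataDirection(&gpio, 1, 0x0);   /* channel 1: all pins as outputs */
        XGpio_DiscreteWrite(&gpio, 1, 0x1);      /* pin high: start of the pulse */
        for (i = 0; i < 1000; ++i) {
            /* empty body; the volatile counter prevents optimisation */
        }
        XGpio_DiscreteWrite(&gpio, 1, 0x0);      /* pin low: end of the pulse */
    }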
But the right way to do it is to use a hardware timer peripheral. Even at a very simple level, you could clear the timer at the start of the function, then poll it until it reaches whatever count corresponds to 1 second. This will still have some imperfections, but given that you haven't specified how close to 1 second you need to be, it is probably adequate. A sketch of that approach follows below.
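A hedged sketch with the Xilinx XTmrCtr driver (again, the XPAR_* macros are design-specific assumptions):

    #include "xtmrctr.h"
    #include "xparameters.h"

    // Busy-wait for roughly one second using counter 0 of an axi_timer core,
    // which counts up from zero by default.
    void wait_one_second(void)
    {
        static XTmrCtr timer;
        XTmrCtr_Initialize(&timer, XPAR_TMRCTR_0_DEVICE_ID);
        XTmrCtr_Reset(&timer, 0);    /* clear counter 0 back to zero */
        XTmrCtr_Start(&timer, 0);    /* start counting up */
        /* Poll until the count reaches one second's worth of clock ticks. */
        while (XTmrCtr_GetValue(&timer, 0) < XPAR_TMRCTR_0_CLOCK_FREQ_HZ) {
            /* spin */
        }
        XTmrCtr_Stop(&timer, 0);
    }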
