Script timing for getting data from API - Lua

I am trying to write a macro that combines /use on a trinket then calls the WoW API and outputs some data.
/use 14
/run local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d);
This works fine apart from what seems to be a timing problem between using the trinket and getting the updated dodge chance from the API. When I click once, the trinket activates but the printed dodge chance does not include the trinket buff. If I immediately click again, it shows the correct value, including the buff.
Is there some timing issue, such as the first /use command not firing until the macro ends? How do I ensure the GetDodgeChance() call includes the trinket buff?

Your game client needs to send the trinket use to the server and get a confirmation back that you received the buff that changes your stats.
Execution of the code can, however, be delayed with C_Timer.After.
Your example would become:
/use 14
/run C_Timer.After( 0.5, function() local d=string.format("%.2f",GetDodgeChance()); print("Dodge Trinket used, dodge at:",d) end)
with half a second of delay, which should be long enough for GetDodgeChance() to return the correct (= anticipated) value while still appearing "almost instant" to human perception.
You can of course tweak that value and try shorter delays - e.g. 0.1 seconds should also be quite long enough for the value to be correct. But since this involves a round trip to the server, a very short delay might work fine 99% of the time and then fail on a random minor lag spike - not even something you notice in gameplay - where the info about the buff and the changed dodge chance reaches your client a few milliseconds too late.
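If you would rather not guess at a delay at all, you can react to the server's confirmation instead. A minimal sketch of that idea, assuming the next aura change on the player is the trinket buff (this is too long for a macro's 255-character limit, so it belongs in an addon, with the macro calling the helper right after /use 14):
local f = CreateFrame("Frame")
f:SetScript("OnEvent", function(self, event, unit)
  if unit == "player" then
    self:UnregisterEvent("UNIT_AURA") -- one-shot: stop listening again
    print("Dodge Trinket used, dodge at:", string.format("%.2f", GetDodgeChance()))
  end
end)
function DodgeAfterTrinket() -- hypothetical helper name
  f:RegisterEvent("UNIT_AURA") -- arm the listener; fires when the buff arrives
end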

Related

What happens in the GPU between the call to gl.drawArrays() and gl.readPixels()?

We have some code which runs several programs in succession by calling drawArrays(). The output textures from each stage are fed into the next, and so on.
After the final call to draw, a call to readPixels() is made.
This call takes an enormous amount of time (for an output of < 1000 floats). I have measured a readPixels of that size in isolation which takes 1 or 2 ms. However in our case we see a delay of about 1500ms.
So we conjectured that the actual computation must not have started until we called readPixels(). To test this theory and force the computation, we placed a call to gl.flush() after each gl.draw*() call. This made no difference.
So we replaced that with a call to gl.finish(). Again no difference. We finally replaced it with a call to getError(). Still no difference.
Can we conclude that the GPU does not actually draw anything unless the framebuffer is read from? Can we force it to do so?
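For reference, one way to separate the cost of the draw calls from the cost of the readback is to time around an explicit sync point. A minimal sketch, assuming gl is the WebGLRenderingContext, the final framebuffer is float-renderable, and width, height and vertexCount are your own values; note that in WebGL, gl.finish() is not guaranteed to block the way a desktop glFinish does, which may be why inserting it made no difference:
const t0 = performance.now();
gl.drawArrays(gl.TRIANGLES, 0, vertexCount); // final stage
gl.finish(); // request completion (may behave like a flush in some browsers)
const t1 = performance.now();
const pixels = new Float32Array(4 * width * height);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels); // hard sync point
const t2 = performance.now();
console.log("draw+finish: " + (t1 - t0) + " ms, readPixels: " + (t2 - t1) + " ms");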

Track Events with Prometheus Counters

Using Prometheus for per-second things works really well, and I've had great success with rate and irate. I'm just at a loss how to graph something that happens very rarely and is a big deal.
So I have a counter called job_failed that I increment. Whenever that happens it shows up in my instant vector. If I graph it directly, it only ever goes up and I see a bump in the graph, but this isn't a clear enough indication that a job has failed. I'd like it to appear as a spike in a graph that is otherwise at zero.
If I do rate(job_failed[15s]) I get my spike - but it's a per-second rate, so its value is about 0.1 although the change I want to see is 1.
I tried increase(job_failed[1m]), but that also doesn't add up correctly, occasionally leaving me with values like 2.18.
Is there a way to only see a single spike? This seems like a rather trivial thing but I can't figure it out.
Prometheus is suited more to high-volume than low-volume events, since at low volumes artifacts show up from how it keeps things accurate on average.
So for example, rate(job_failed[15s]) with an increase of 1 over the 15 seconds is 1/15 ≈ 0.067/s. Rounding could make that show as 0.1.
https://www.youtube.com/watch?v=67Ulrq6DxwA goes into more detail as to how this all works.
The short version is what you're doing now is the way to do it.
For a similar requirement, I used the delta function with a threshold configured as needed.
https://prometheus.io/docs/querying/functions/#delta
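A sketch of that idea (the window and threshold are up to you; note the docs describe delta() for gauges, so for a counter like job_failed the documented counterpart is increase()):
# non-zero exactly when the counter moved within the last minute
increase(job_failed[1m]) > 0
# or round away the extrapolation artifacts described above
ceil(increase(job_failed[1m]))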

Interrupt during network I/O == crash?

It seems that when an I/O pin interrupt occurs while network I/O is being performed, the system resets -- even if the interrupt function only declares a local variable and assigns it (essentially a do-nothing routine). So I'm fairly certain it isn't down to spending too much time in the interrupt function. (My actual working interrupt functions are pretty spartan: strictly increment and assign, not even any conditional logic.)
Is this a known constraint? My workaround is to disconnect the interrupt while using the network, but of course this introduces potential for data loss.
function fnCbUp(level)
  lastTrig = rtctime.get()         -- record the trigger time
  gpio.trig(pin, "down", fnCbDown) -- re-arm for the falling edge
end
function fnCbDown(level)
  local spin = rtcmem.read32(20)   -- pulse count persisted in RTC memory
  spin = spin + 1
  rtcmem.write32(20, spin)
  lastTrig = rtctime.get()
  gpio.trig(pin, "up", fnCbUp)     -- re-arm for the rising edge
end
gpio.trig(pin, "down", fnCbDown)
gpio.mode(pin, gpio.INT, gpio.FLOAT)
branch: master
build built on: 2016-03-15 10:39
powered by Lua 5.1.4 on SDK 1.4.0
modules: adc,bit,file,gpio,i2c,net,node,pwm,rtcfifo,rtcmem,rtctime,sntp,tmr,uart,wifi
Not sure if this should be an answer or a comment. May be a bit long for a comment though.
So, the question is "Is this a known constraint?" and the short but unsatisfactory answer is "no". Can't leave it like that...
Is the code excerpt enough for you to conclude the reset must occur due to something within those few lines? I doubt it.
What you seem to be doing is a simple "global" increment on each GPIO 'down', with some debounce logic. However, I don't see any debounce - what am I missing? You get the time into the global lastTrig but you don't do anything with it. Just for debouncing you won't need rtctime, IMO, but I doubt it's got anything to do with the problem.
I have a gist of a tmr.delay-based debounce as well as one with tmr.now that is more like a throttle. You could use the first like so:
GPIO14 = 5                        -- IO index 5 maps to GPIO14
spin = 0                          -- global pulse counter
function down()
  spin = spin + 1
  tmr.delay(50)                   -- busy-wait (µs) for switch debounce
  gpio.trig(GPIO14, "up", up)     -- now trigger on the rising edge
end
function up()
  tmr.delay(50)
  gpio.trig(GPIO14, "down", down) -- back to triggering on the falling edge
end
gpio.mode(GPIO14, gpio.INT)       -- gpio.FLOAT by default
gpio.trig(GPIO14, "down", down)
I also suggest running this against the dev branch, because you said it may be related to network I/O during interrupts.
I have nearly the same problem.
Running ESP8266WebServer and using a GPIO14 interrupt, with input pulses that come too fast, the system stops recording the interrupts.
Please see here for more details:
http://www.esp8266.com/viewtopic.php?f=28&t=9702
I'm using the Arduino IDE 1.6.9, but the problem seems to be the same.
I used an ESP8266-07 as generator & counter (without webserver) to generate the pulses, wired to my ESP8266-Watersystem. The generator works very well at well over 240 pulses/sec, generating and counting on the same ESP.
But the ESP-Watersystem stops recording interrupts at more than 50 pulses per second:
/*************************************************/
/* ISR Water pulse counter */
/*************************************************/
/**
* Invoked by interrupt14 once per rotation of the hall-effect sensor. Interrupt
* handlers should be kept as small as possible so they return quickly.
*/
void ICACHE_RAM_ATTR pulseCounter()
{
// Increment the pulse Counter
cli();
G_pulseCount++;
Serial.println ( "!" );
sei();
}
The serial output is here only to show what's happening.
It shows the correctly counted pulses until the webserver interacts with the network. Then, it seems, the interrupt is blocked (no serial output from that point on).
By stressing the system - refreshing the website several times in a short period - the interrupt counting starts again for a short time, but soon stops once more.
The problem lies somewhere in the interaction between interrupt handling and the web services.
I hope this helps to track down the issue.
I'm interested in any solutions.
Who can help?
Thanks from Mickbaer
Berlin Germany
Email: michael.lorenz#web.de
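For what it's worth, a common mitigation on the ESP8266 for this class of problem - sketched here under the assumption of an ESP8266WebServer instance named server, not something confirmed by the poster - is to keep the ISR to a bare volatile increment and move all Serial work into loop():
volatile unsigned long G_pulseCount = 0;
void ICACHE_RAM_ATTR pulseCounter()
{
  G_pulseCount++; // nothing but the increment inside the ISR
}
void loop()
{
  noInterrupts(); // take an atomic snapshot of the count
  unsigned long count = G_pulseCount;
  interrupts();
  static unsigned long lastReport = 0;
  if (millis() - lastReport >= 1000) { // print once per second, outside the ISR
    Serial.println(count);
    lastReport = millis();
  }
  server.handleClient(); // assumed ESP8266WebServer instance
}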

How to find out the value of 1 iteration in MicroBlaze

I am trying to find a way to increase the computation time of a function to 1 second without using the sleep function, on a Xilinx MicroBlaze running the Xilkernel.
Hence, may I know how many iterations of a simple for loop I need to run to bring the computation time to 1 second?
You can't do this reliably and accurately. If you want to do a bodge like this, you'll have to calibrate it yourself for your particular system; MicroBlaze is so configurable that there isn't one right answer. The bodgy way is:
Set up a GPIO peripheral, set one of the pins to '1', run a loop of 1000 iterations (make sure the compiler doesn't optimise it away!), then set the pin back to '0'. Hang a scope off that pin (you're doing work on embedded systems, you do have a scope, right?) and see how long it takes to run the loop.
But the right way to do it is to use a hardware timer peripheral. Even at a very simple level, you could clear the timer at the start of the function, then poll it at the end until it reaches whatever value corresponds to 1 second. This will still have some imperfections, but given that you haven't specified how close to 1 sec you need to be, it is probably adequate.
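A minimal sketch of that polling approach, using the Xilinx AXI Timer driver (xtmrctr); the device-ID and clock-frequency macros come from your generated xparameters.h, so the exact names below are assumptions:
#include "xtmrctr.h"
#include "xparameters.h"
#define TIMER_ID      XPAR_TMRCTR_0_DEVICE_ID     /* assumed: first AXI timer */
#define TICKS_PER_SEC XPAR_TMRCTR_0_CLOCK_FREQ_HZ /* timer ticks in one second */
static XTmrCtr Timer;
void busy_wait_one_second(void)
{
    XTmrCtr_Initialize(&Timer, TIMER_ID);
    XTmrCtr_Reset(&Timer, 0);   /* clear counter 0 */
    XTmrCtr_Start(&Timer, 0);   /* count up from zero */
    while (XTmrCtr_GetValue(&Timer, 0) < TICKS_PER_SEC)
        ;                       /* poll until one second's worth of ticks */
    XTmrCtr_Stop(&Timer, 0);
}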

How to show the number of times functions are called in Instruments Time Profiler

I've tried every possible field but cannot find the number of times functions are called.
Besides, I don't get Self and # Self. What do these two numbers mean?
There are several other ways to accomplish this. One is obviously to create a static hit counter and an NSLog statement that increments and prints it. That's intrusive, though, and I found a way to do this with lldb.
Set a breakpoint
Execute the program until you hit the breakpoint the first time and note the breakpoint number on the right hand side of the line you hit (e.g. "Thread 1: breakpoint 7.1", note the 7.1)
Context click on the breakpoint and choose "Edit Breakpoint"
Leave condition blank and choose "Add Action"
Choose "Debugger Command"
In the command box, enter "breakpoint list 7.1" (using the breakpoint number for your breakpoint from step 2). I believe you can use "info break" if you are using gdb.
Check Options "Automatically Continue after evaluating"
Continue
Now, instead of stopping, lldb will emit info about the breakpoint, including the number of times it has been passed.
As for the discussion between Glenn and Mike on the previous answer, I'll describe a performance problem where function execution count was useful: I had a particular action in my app where performance degraded considerably with each execution of the action. The Instruments time profiler showed that each time the action was executed, a particular function was taking twice as long as the time before until quickly the app would hang if the action was performed repeatedly. With the count, I was able to determine that with each execution, the function was called twice as many times as it was during the previous execution. It was then pretty easy to look for the reason, which turned out to be that someone was re-registering for a notification in NotificationCenter on each event execution. This had the effect of doubling the number of response handler calls on each execution and thus doubling the "cost" of the function each time. Knowing that it was doubling because it was called twice as many times and not because the performance was just getting worse caused me to look at the calling sequence rather than for reasons the function itself could be degrading over time.
While it's interesting, knowing the number of times functions are called doesn't have anything to do with how much time is spent in them - which is what Time Profiler is all about. In fact, since it does sampling, it cannot answer how many times.
It seems you cannot use Time Profiler for counting function calls. This question seems to address potential methods for counting.
With respect to Self and # Self:
Self is "The number of times the symbol calls itself." according to the Apple Docs on the Time Profiler.
From the way the numbers look though, it seems self is the summed duration of samples that had this symbol at the bottom of its stack trace. That would make:
# self: the number of samples where this symbol was at the bottom of the stack trace
% self: the percentage of self samples relative to the total samples of the currently displayed call tree (e.g. # self / total samples).
So this wouldn't tell you how many times a method was called. But it would give you an idea how much time is spent in a method or lower in the call tree.
NOTE: I too am unsure about the various 'self' meanings though. Would love to see someone answer this authoritatively. Arrived here searching for that...
If your objective is to find out what you need to fix to make the program as fast as possible, the number of calls and self time may be interesting but are irrelevant.
Look at my answer to this question, in particular points 6 and 8.
EDIT: To clarify the point further, suppose we look at the timeline of execution of a program. Some of that time (in this case about 50%) is spent in an activity that can be removed if you know what it is, such as needless buried I/O, excessive calls to new, runaway notifications, or "insignificant" data validation. If a random-time sample is taken, it has a 50% chance of occurring in that activity, and an examination of the call stack and/or program variables shows that it is doing something that can be removed. Then, if 10 such samples are taken, the activity will be seen on roughly 5 of them, regardless of whether it occurs in a few large chunks of time or many small ones. The activity may be a few lines of code in a function doing something unnecessary, or it may be something much more generalized. Regardless, you recognize it, fix it, and get roughly a factor of 2 speedup. Call counts and self time contribute nothing to this process.
