FreeRTOS POSIX GCC Simulator vTaskDelay doesn't delay properly

I am playing with the FreeRTOS POSIX GCC simulator: I create a simple task that delays 1 second and prints, but it just doesn't give the right results.
Creating a task like this should show the text being printed every 1 second, but it seems to be more like 8-9 seconds between prints.
What could be the issue?
void prvTask1( void *pvParameters )
{
    for ( ;; )
    {
        printf( "Task 1 ...%d\n", xTaskGetTickCount() );
        vTaskDelay( 1000 / portTICK_RATE_MS );
    }
}
Config:
#define configTICK_RATE_HZ ( ( portTickType ) 1000 )
#define portTICK_RATE_MS ( ( portTickType ) 1000 / configTICK_RATE_HZ )
I've tested with values:
#define configTICK_RATE_HZ ( ( portTickType ) 250)
#define portTICK_RATE_MS ( ( portTickType ) 1000 / configTICK_RATE_HZ )
With those values it looks like roughly 1 sec per printf. Oddly, raising configTICK_RATE_HZ from ~500 to 1000 gives worse results for the 1 sec delay (it becomes much more than 1 sec).

The FreeRTOS demo FreeRTOSConfig.h says:
#define configTICK_RATE_HZ ( 1000 )
// In this non-real time simulated environment
// the tick frequency has to be at least a multiple
// of the Win32 tick frequency, and therefore very slow.
Maybe you should try to build and run the original FreeRTOS example.
I did try to run the FreeRTOS 8.2.1 example and vTaskDelay works just fine there.
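For what it's worth, here is a minimal sketch of the same task written against the newer API, assuming a FreeRTOS version that provides pdMS_TO_TICKS() (v8.x and later). vTaskDelayUntil() anchors each period to the previous wake time, so the task itself cannot add drift on top of whatever the simulated tick does:
/* A minimal sketch, assuming pdMS_TO_TICKS() is available (FreeRTOS 8.x+). */
void prvTask1( void *pvParameters )
{
    TickType_t xLastWakeTime = xTaskGetTickCount();

    for ( ;; )
    {
        printf( "Task 1 ...%u\n", ( unsigned ) xTaskGetTickCount() );

        /* The delay is measured from the previous wake time, not from
           "now", so the nominal period stays at 1000 ms. */
        vTaskDelayUntil( &xLastWakeTime, pdMS_TO_TICKS( 1000 ) );
    }
}
This does not change how accurately the POSIX port can honour a tick, but it rules out the delay arithmetic itself as the source of the error.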

Dataflow - Approx Unique on unbounded source

I'm getting unexpected results streaming in the cloud.
My pipeline looks like:
SlidingWindow(60min).every(1min)
.triggering(Repeatedly.forever(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(30)))
)
)
.withAllowedLateness(15sec)
.accumulatingFiredPanes()
.apply("Get UniqueCounts", ApproximateUnique.perKey(.05))
.apply("Window hack filter", ParDo(
if(window.maxTimestamp.isBeforeNow())
c.output(element)
)
)
.toJSON()
.toPubSub()
If that filter isn't there, I get 60 windows per output, apparently because the Pub/Sub sink isn't window-aware.
So in the examples below, if each time period is a minute, I'd expect to see the unique count grow until 60 minutes when the sliding window closes.
Using DirectRunner, I get expected results:
t1: 5
t2: 10
t3: 15
...
tx: growing unique count
In dataflow, I get weird results:
t1: 5
t2: 10
t3: 0
t4: 0
t5: 2
t6: 0
...
tx: wrong unique count
However, if my unbounded source has older data, I'll get normal-looking results until it catches up, at which point I'll get the wrong results.
I was thinking it had to do with my window filter, but removing that didn't change the results.
If I do a Distinct() then Count().perKey(), it works, but that slows my pipeline considerably.
What am I overlooking?
[Update from the comments]
ApproximateUnique inadvertently resets its accumulated value when the result is extracted. This is incorrect when the value is read more than once, as with windows firing multiple times. Fix (will be in version 2.4): https://github.com/apache/beam/pull/4688
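Until that fix is released, a minimal Java sketch of the Distinct-then-Count workaround mentioned above; "windowed" and its element types are placeholders for the keyed, windowed PCollection in the pipeline:
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Distinct;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

// PCollection<KV<String, String>> windowed = ...; // the keyed, windowed input
// Exact per-key distinct count instead of ApproximateUnique.perKey().
// Distinct must retain every element, which is why it is slower.
PCollection<KV<String, Long>> uniqueCounts =
    windowed
        .apply("DedupPerKey", Distinct.<KV<String, String>>create())
        .apply("CountUniques", Count.<String, String>perKey());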

MQL4: can't work out how to get decimal value of 1/6

I can't work out why 1/6 keeps returning 0, or how to resolve it.
Print(1/6);
Print(DoubleToString((1/6),8));
Prints 0.00000000
You need at least one double operand in the expression. Try: Print( 1 / 6.0 );
Why:
The expression above consists of a pair of integer constants (which happen to be 1 and 6 in the posted case), and integer division truncates toward zero: 1 / 6 is 0 in integer arithmetic.
The MQL4 language is a compiled language. After #define macro expansion, #include processing, and similar pre-compilation steps have taken place, the compiler "reads" your code in the compilation phase and lexically analyses what is intended to be done there.
Having seen the expression 1 / 6 in the code, the compiler knows there is nothing that could change this part of the code at runtime, inside the run-time ecosystem, so it reduces this piece of static-value code at compile time.
The compiler therefore assembles the executable file (an .EX4 file; the .mq4 is the source) with a straight 0 put in that place: having derived the constant from the compile-time known pair of integer literals 1 and 6, it will not waste a single nanosecond at runtime to (re-)evaluate the compile-time known value of 0.
How to resolve it:
double enumerator = 1,
       divisor    = 6;

Print( enumerator / divisor );
Print( DoubleToString( enumerator / divisor,
                       8
                       )
       );
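Equivalently, promoting either operand to a floating-point type turns the integer division into a floating-point one. A few quick variants, all ordinary MQL4:
Print( 1 / 6.0 );                       // double literal divisor
Print( 1.0 / 6 );                       // double literal dividend
Print( ( double ) 1 / 6 );              // explicit cast of one operand
Print( DoubleToString( 1 / 6.0, 8 ) );  // prints 0.16666667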

NodeMCU gpio triggering incorrectly

I'm attempting to read IR information from a NodeMCU running Lua 5.1.4 from a master build as of 8/19/2017.
I might be misunderstanding how GPIO works and I'm having a hard time finding examples that relate to what I'm doing.
pin = 4
pulse_prev_time = 0
irCallback = nil

function trgPulse(level, now)
  gpio.trig(pin, level == gpio.HIGH and "down" or "up", trgPulse)
  duration = now - pulse_prev_time
  print(level, duration)
  pulse_prev_time = now
end

function init(callback)
  irCallback = callback
  gpio.mode(pin, gpio.INT)
  gpio.trig(pin, 'down', trgPulse)
end

-- example
print("Monitoring IR")
init(function (code)
  print("omg i got something", code)
end)
I'm triggering the initial interrupt on low, and then alternating from low to high in trgPulse. In doing so I'd expect the levels to alternate from 1 to 0 in a perfect pattern. But the output shows otherwise:
1 519855430
1 1197
0 609
0 4192
0 2994
1 589
1 2994
1 1198
1 3593
0 4201
1 23357
0 608
0 5390
1 1188
1 4191
1 1198
0 3601
0 3594
1 25147
0 608
1 4781
0 2405
1 3584
0 4799
0 1798
1 1188
1 2994
So I'm clearly doing something wrong or fundamentally don't understand how GPIO works. If this is expected, why are the interrupts being called multiple times if the low/high levels didn't change? And if this does seem wrong, any ideas how to fix it?
I'm clearly doing something wrong or fundamentally don't understand how GPIO works
I suspect it's a bit of both; the latter may be the cause of the former.
My explanation may not be 100% correct from a mechanical/electronic perspective (not my world), but it should be enough as far as writing software for GPIO goes. Switch contacts tend to bounce between 0 and 1 until they eventually settle on one. A good article to read up on this is https://www.allaboutcircuits.com/technical-articles/switch-bounce-how-to-deal-with-it/. The effect can be addressed in hardware and/or software.
Doing it in software usually involves introducing some form of delay that skips the bouncing signals, as you're only interested in the "settled" state. I documented the NodeMCU Lua function I use for that at https://gist.github.com/marcelstoer/59563e791effa4acb65f:
-- inspired by https://github.com/hackhitchin/esp8266-co-uk/blob/master/tutorials/introduction-to-gpio-api.md
-- and http://www.esp8266.com/viewtopic.php?f=24&t=4833&start=5#p29127
local pin = 4 --> GPIO2

function debounce (func)
  local last = 0
  local delay = 50000 -- 50ms * 1000 as tmr.now() has μs resolution

  return function (...)
    local now = tmr.now()
    local delta = now - last
    -- correct a negative delta when tmr.now() rolls over,
    -- see https://github.com/hackhitchin/esp8266-co-uk/issues/2
    if delta < 0 then delta = delta + 2147483647 end
    if delta < delay then return end
    last = now
    return func(...)
  end
end

function onChange ()
  print('The pin value has changed to '..gpio.read(pin))
end

gpio.mode(pin, gpio.INT, gpio.PULLUP) -- see https://github.com/hackhitchin/esp8266-co-uk/pull/1
gpio.trig(pin, 'both', debounce(onChange))
Note: delay is an empirical value specific to the sensor/switch!
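Applied to the code from the question, a minimal sketch might look as follows. It assumes the debounce() function from above is in scope; also note that the 50000 μs delay chosen for a mechanical switch is far longer than typical IR mark/space durations, so it would need tuning way down for an IR receiver:
pin = 4
pulse_prev_time = 0

local debouncedPulse -- forward declaration so trgPulse can re-arm with it

local function trgPulse(level, now)
  -- re-arm for the opposite edge, reusing the same debounced closure so
  -- all edges share one debounce timer
  gpio.trig(pin, level == gpio.HIGH and "down" or "up", debouncedPulse)
  print(level, now - pulse_prev_time)
  pulse_prev_time = now
end

debouncedPulse = debounce(trgPulse)

gpio.mode(pin, gpio.INT)
gpio.trig(pin, 'down', debouncedPulse)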

How to design an Average Directional Movement Index Expert Advisor in MQL4/5?

I have a trading strategy based on ADX; in its simplest form, I enter when ADX is above 30 on both the 30-minute and the hourly chart.
I need to create an EA in MQL5 just to give me a sound alert when ADX has hit level 30 on both the 30-minute and hourly timeframes.
I would really appreciate it if someone could help me with that.
So,
let's move on:
//+------------------------------------------------------------------+
//| Expert tick function                                             |
//+------------------------------------------------------------------+
void OnTick()
{
   if (  iADX( _Symbol, PERIOD_H1,  anAvgPERIOD, PRICE_HIGH, MODE_MAIN, 0 ) > 30.
      && iADX( _Symbol, PERIOD_M30, anAvgPERIOD, PRICE_HIGH, MODE_MAIN, 0 ) > 30.
      )
   {
      PlaySound( "aFileWithDesiredSOUND.wav" );
   }
}
One ought not be surprised that this does not work in the MT4 Strategy Tester, for obvious reasons: the tester does not reproduce sound alerts.
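Also note that the call above uses the MQL4 signature of iADX(). In MQL5, iADX() returns an indicator handle created once, and values are read with CopyBuffer(). A minimal sketch, where the averaging period of 14 is an assumption:
int adxH1, adxM30; // indicator handles, created once in OnInit()

int OnInit()
{
   adxH1  = iADX( _Symbol, PERIOD_H1,  14 ); // 14 = assumed ADX period
   adxM30 = iADX( _Symbol, PERIOD_M30, 14 );
   if ( adxH1 == INVALID_HANDLE || adxM30 == INVALID_HANDLE )
      return( INIT_FAILED );
   return( INIT_SUCCEEDED );
}

void OnTick()
{
   double h1[1], m30[1];
   // buffer 0 of iADX is the main ADX line; index 0 is the current bar
   if (  CopyBuffer( adxH1,  0, 0, 1, h1  ) == 1
      && CopyBuffer( adxM30, 0, 0, 1, m30 ) == 1
      && h1[0] > 30. && m30[0] > 30.
      )
      PlaySound( "aFileWithDesiredSOUND.wav" );
}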

Running Pktgen with Lua scripts

I am currently working with pktgen in order to achieve high-rate packet generation for a project. I am using version 2.9.5 of pktgen, along with DPDK version 2.1.0. It works perfectly fine (I am able to generate and send packets at a 10 Gb/s rate) until I try to use Lua scripts for formatting packets.
As a matter of fact, the example scripts that ship with the sources do not always seem to have the expected behaviour. The simplest scripts work perfectly fine (HelloWorld.lua for example; I am also able to set packet size, rate, or burst with script commands):
package.path = package.path ..";?.lua;test/?.lua;app/?.lua;"

pktgen.set("all", "count", 0);
pktgen.set("all", "rate", 50);
pktgen.set("all", "size", 256);
pktgen.set("all", "burst", 128);
pktgen.start(0);
( the code above does work )
However, I encountered issues when I tried to define a packet format directly with a table, as in this example:
-- Lua uses '--' as comment to end of line; read the
-- manual for more comment options.
local seq_table = { -- entries can be in any order
  ["eth_dst_addr"] = "0011:4455:6677",
  ["eth_src_addr"] = "0011:1234:5678",
  ["ip_dst_addr"]  = "10.12.0.1",
  ["ip_src_addr"]  = "10.12.0.1/16", -- the 16 is the size of the mask value
  ["sport"]        = 9,              -- Standard port numbers
  ["dport"]        = 10,             -- Standard port numbers
  ["ethType"]      = "ipv4",         -- ipv4|ipv6|vlan
  ["ipProto"]      = "udp",          -- udp|tcp|icmp
  ["vlanid"]       = 1,              -- 1 - 4095
  ["pktSize"]      = 128             -- 64 - 1518
};

-- seqTable( seq#, portlist, table );
pktgen.seqTable(0, "all", seq_table );
pktgen.set("all", "seqCnt", 1);
When I try this script, pktgen will usually give me an
Unknown Ethertype 0x0000
error as soon as I run start 0 on the command line (0 is the port I use to transmit packets).
The same problem occurs when I try the main.lua script.
Long story short, I have trouble understanding how the pktgen.seqTable function works and why it does not behave properly in my case. I did not find any really helpful documentation on this subject.
The command I use to launch scripts is:
sudo -E $PKTGEN_CMD/pktgen -c 0x7 -n 4 -- -m "1.0,2.1" -f test/set_seq.lua
(test/set_seq.lua being an example script).
I got the same script running with a small modification to enable VLAN in packet generation. Please refer to the changes below:
local seq_table = { -- entries can be in any order
  ["eth_dst_addr"] = "0011:4455:6677",
  ["eth_src_addr"] = "0011:1234:5678",
  ["ip_dst_addr"]  = "10.12.0.1",
  ["ip_src_addr"]  = "10.12.0.1/16", -- the 16 is the size of the mask value
  ["sport"]        = 9,              -- Standard port numbers
  ["dport"]        = 10,             -- Standard port numbers
  ["ethType"]      = "ipv4",         -- ipv4|ipv6|vlan
  ["ipProto"]      = "udp",          -- udp|tcp|icmp
  ["vlanid"]       = 1100,           -- 1 - 4095
  ["pktSize"]      = 128             -- 64 - 1518
};

-- seqTable( seq#, portlist, table );
pktgen.seqTable(0, "all", seq_table );
pktgen.vlan("all", "enable");
pktgen.set("all", "seqCnt", 1);
Executing the modified script no longer produces the Unknown Ethertype 0x0000 error when I run start 0 on the command line (0 is the port I use to transmit packets).
Note: the pktgen documentation (page 75) mentions a fix for the Range sequence and VLAN ID problem in release 3.0.01 ("Fixed the Range sequence and VLAN problem"); hence I recommend using the latest pktgen (I have tested this on pktgen 21.11.0).
