FreeRTOS is slow to wake after long processor sleep

(Cross-posted on Electrical Engineering Stack Exchange)
I'm using FreeRTOS in an application which requires the processor to sleep in low power mode for a long time (as long as 12 hours), then wake up in response to an interrupt request. I'm having a problem with the amount of time taken for FreeRTOS to wake up.
Before going to sleep, I disable the scheduler through a call to vTaskSuspendAll().
On waking, I calculate the amount of time that the processor has been asleep, and update FreeRTOS via a call to vTaskJumpTime(). I then make a call to xTaskResumeAll() to restart the scheduler.
The issue I have is that xTaskResumeAll() makes a call to xTaskIncrementTick() once for each tick that has been missed while the processor was asleep (recorded through vTaskJumpTime()). It takes about 12 seconds to wake after an hour asleep (3,600,000 calls to xTaskIncrementTick()).
Much as I'm tempted to modify the FreeRTOS xTaskIncrementTick() function so that it can jump a number of ticks in one call, experience says that I would be smart to look for a standard way first.
Has anyone found another way to implement this behavior which doesn't result in the long wake-up delay?
This is implemented on a Microchip/Atmel SAM4L (Cortex-M4). I use the AST subsystem to record the time that the processor is asleep. Sleep is entered through the BPM subsystem's bpm_sleep() in RETENTION mode.

FreeRTOS's built-in low power support uses vTaskStepTick() to jump the tick count forward in one go. It can do that because it will never calculate a wake time (the time at which it leaves low power mode) past the time at which it knows a task must unblock because a timeout expired. If you remain in sleep mode past the time a task should have unblocked because its timeout expired, you have an error anyway. The best place to get FreeRTOS support is its dedicated support forum.
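For reference, this is roughly how that built-in support is wired up when configUSE_TICKLESS_IDLE is set to 2 in FreeRTOSConfig.h: the kernel calls your portSUPPRESS_TICKS_AND_SLEEP() implementation with the number of ticks until the next task must unblock, and vTaskStepTick() jumps the count in one go. A minimal sketch, in which ast_sleep_and_measure_ms() is a hypothetical stand-in for the SAM4L AST/BPM code; a real implementation must also disable interrupts and check eTaskConfirmSleepModeStatus() before committing to sleep:

    /* In FreeRTOSConfig.h: */
    #define configUSE_TICKLESS_IDLE 2
    #define portSUPPRESS_TICKS_AND_SLEEP( xIdleTicks ) vApplicationSleep( xIdleTicks )

    /* In the application: */
    #include "FreeRTOS.h"
    #include "task.h"

    extern uint32_t ast_sleep_and_measure_ms( uint32_t max_ms ); /* hypothetical platform helper */

    void vApplicationSleep( TickType_t xExpectedIdleTicks )
    {
        /* Sleep no longer than the kernel says is safe, i.e. never past
           the time at which a task must unblock. */
        uint32_t slept_ms = ast_sleep_and_measure_ms( xExpectedIdleTicks * portTICK_PERIOD_MS );

        /* Jump the tick count forward in one call instead of replaying
           every missed tick through xTaskIncrementTick(). */
        vTaskStepTick( pdMS_TO_TICKS( slept_ms ) );
    }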

Related

BGProcessingTaskRequest runs only for 295 seconds before the expiration handler is called on iOS 14.8

We need to process a lot of data in the background on iOS and came across BGProcessingTaskRequest, which is intended for tasks that may take minutes. However, in practice, the task is always killed after exactly 295 seconds on my iOS 14.8 device.
Is this the most background processing time we can expect on iOS, or are there any other ways to increase the background execution time? If not, is it possible to chain the requests by scheduling the task again in its own handler?
As you know, they only promise that it “can run for minutes”. One would have hoped that setting the task’s requiresExternalPower might increase the allotted time, as the battery concerns are eliminated, but in my tests the allotted time is still limited to roughly 5 minutes (tested in iOS 15).
As you suggest, you can schedule another processing request when time expires. In my experiments, though, this subsequent request took even longer to start and expired even more quickly.
In short, you can chain requests as you suggest, but you are at the mercy of the OS.

What is the best method of synchronizing audio across iOS devices with WiFi?

Basically, for my team's app, we need to be able to synchronize music across multiple iOS devices. The first way we did this was by having the music on all the devices already and just sending a play command to all the devices. Some would get it later than others, so that method did not work. There was an idea mentioned to calculate the latency between all the devices and send the commands at the appropriate times based on the latency.
The second way proposed would be to stream the music. If we were to implement streaming, how should we go about doing it? Should Audio Units be used, OpenAL, etc.? Also, if streaming were being done, how would we go about making sure that each device's stream was in sync?
Basically, the audio has to be in sync so that the person hearing it cannot differentiate between the devices. A few milliseconds off should not be a problem (unless the listener has super-human hearing).
You'd be amazed at how good the human ear is at spotting audio anomalies...
Sync the time of day
Effectively you're trying to meet a real-time requirement with a whole load of very variable things in the way (WiFi, etc.). I strongly suspect the only way you're going to get close to doing this is to issue a 'play' instruction that includes a particular time to start playing. Of course, that relies on all the clocks being accurately set.
NTP
I don't know how iPhones get their time of day. If they use (or could use) NTP then you'll be getting close. NTP is designed to convey accurate time of day information over a network despite variable network delays. I've had a quick look and it seems that most NTP clients for iOS are the simple ones, not the full NTP that measures and tunes out network delays, clock drifts, etc.
GPS
Alternatively GPS is also a very good source of time information. Again I don't know if iPhones can or do use GPS for setting their clock but if it could be done then that would likely be pretty good. On Solaris (and I think Linux too) the 1 pulse per second that most GPS chips generate from the GPS signal can be used to condition the internal OS clock, making it very accurate indeed (sub microsecond accuracy).
I fear that iPhones don't do either of these things natively; both involve using a fair bit of electricity, so I wouldn't be surprised if they did something else less sophisticated.
Cell Time Service
Some Cell networks provide a time service too, but I don't think it's designed for accurate time setting. Also it tends not to be available everywhere. You often find it at major airports so that recent arrivals get their phones set to something close to local time.
Play at time X
So if one of those could be used to ensure that all the iPhones are set to exactly the same time of day, then all you have to do is write your software to start playing at a specific time. That will probably involve polling the clock in a very tight loop waiting for it to tick over; most OSes don't provide a means of sleeping until a specific time. They do at least allow sleeping for a period of time, which can be used to sleep until close to the appointed time, and you'd then poll the clock until the right time is reached.
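For illustration, a rough C++ sketch of that coarse-sleep-then-poll idea, assuming the devices' wall clocks have already been synchronised; the 10 ms guard band is an arbitrary illustrative value:

    #include <chrono>
    #include <thread>

    // Coarse-sleep until near the agreed start time, then poll the clock.
    void wait_until_start(std::chrono::system_clock::time_point start_time)
    {
        using namespace std::chrono;

        // Sleep for most of the wait, leaving a guard band for scheduler slack.
        auto remaining = start_time - system_clock::now() - milliseconds(10);
        if (remaining > system_clock::duration::zero())
            std::this_thread::sleep_for(remaining);

        // Tight poll for the final stretch.
        while (system_clock::now() < start_time)
            ; // spin until the appointed time
    }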
Delay Measurement and Standard Deviation
Your first method is doomed, I think. You might be able to measure average delays and so forth, but that doesn't mean that every message has exactly the same latency. The standard deviation of the latency will tell you what you can expect to achieve, and I don't think that's going to be particularly small. If so, the message has to include a timestamp.
NTP can work because it's only interested in the average delay measured over a period of time (hours sometimes), whereas you're interested in instantaneous delay.
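For reference, the core of an NTP exchange is a four-timestamp calculation; a hedged C++ sketch of it (the struct and names are illustrative, not any particular library's API):

    #include <chrono>

    using Seconds = std::chrono::duration<double>;

    struct NtpExchange {
        Seconds t0;  // client transmit time (client clock)
        Seconds t1;  // server receive time  (server clock)
        Seconds t2;  // server transmit time (server clock)
        Seconds t3;  // client receive time  (client clock)
    };

    // Estimated offset of the client clock relative to the server,
    // assuming the network delay is symmetric in both directions.
    Seconds clock_offset(const NtpExchange& e)
    {
        return ((e.t1 - e.t0) + (e.t2 - e.t3)) / 2;
    }

    // Round-trip network delay, excluding the server's processing time.
    Seconds round_trip_delay(const NtpExchange& e)
    {
        return (e.t3 - e.t0) - (e.t2 - e.t1);
    }

A full NTP client repeats this over many exchanges and filters the results, which is exactly the averaging described above.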
Streaming with RTP
Your second method may work if you can time sync the devices as discussed above. The RTP protocol was designed for use in these circumstances; it doesn't help with achieving sync, but it does help a lot with the streaming. It tells you whereabouts in the stream any one piece of received data fits, allowing you to play it at the right time.
Clock Drift
Another problem to deal with is how long you're playing for. If it's a long time, then you may discover that the 44kHz (or whatever) audio clock rate on each device isn't quite the same. So, whilst you might find a way of starting playback all at the same time, the separate devices will then start diverging ever so slightly. For example, a sample clock that is off by only 50 parts per million drifts by about 180 ms per hour. Over a long period of time they may be noticeably out.
Bluetooth
It might be possible to do something with Bluetooth. It has many weird and wonderful profiles, and it might be that one of them would serve to send an accurate 'start now' message.
Audio Trigger
You might also use sound as a means of conveying a start signal. One device can play a particular sound whilst your software in the others is listening with the mic. When a particular feature is detected in the sound, that's the time for everyone to start playing. Sort of a computerised "1, 2, a 1 2 3 4".
Camera Flash
Should be easy to spot in software...
I think your first way would work if you expand it a little bit. Assuming all the clocks on the devices are in sync, you could include a timestamp in your play command. Each device would then calculate the time between the timestamp and when it received the command, and start playback offset by that time difference.
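A sketch of that offset calculation in C++, assuming the clocks are already synchronised; the sample-skipping approach and the sample_rate parameter are illustrative:

    #include <chrono>
    #include <cstdint>

    // How far into the track playback should begin, given the timestamp
    // the sender embedded in the play command.
    std::int64_t start_sample(std::chrono::system_clock::time_point sent_at,
                              double sample_rate /* e.g. 44100.0 */)
    {
        using namespace std::chrono;
        double late_by = duration<double>(system_clock::now() - sent_at).count();
        return static_cast<std::int64_t>(late_by * sample_rate); // samples to skip
    }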

Interval of a timer - from integer to float/double - Delphi

Is there a way to work around the limits of the TTimer's interval so it can be more precise? For example, instead of only integers like 1000ms, could I use 1000.5ms? And if not, which component can I use instead that will give me a more precise interval?
You are trying to keep track of time to a reasonable degree of accuracy. However, the standard system timer cannot be used for that purpose. All that the system timer guarantees is that it will fire no sooner than the interval which you specify. And you can get the message late if you are tardy in pumping your message queue. Quite simply, the system timer is not designed to be used as a stopwatch and to attempt to do so is inappropriate.
Instead you need to use the high resolution performance counter which you can get hold of by calling QueryPerformanceCounter.
If you are using Delphi 2010 or later then you can use Diagnostics.TStopwatch which provides a very convenient wrapper to the high performance timer.
You can still use a system timer to give your app a regular tick or pulse, but make sure that you keep track of time with the high resolution timer.
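To illustrate the idea in C++ terms (TStopwatch and QueryPerformanceCounter play the role that std::chrono::steady_clock plays here): let the coarse timer drive the updates, but always derive elapsed time from the high-resolution clock rather than by counting timer events.

    #include <chrono>

    // Record the start once...
    const auto started = std::chrono::steady_clock::now();

    // ...then, inside each (possibly late) timer tick, compute elapsed time
    // from the clock instead of accumulating nominal 1000ms intervals.
    double elapsed_ms_now()
    {
        using namespace std::chrono;
        return duration<double, std::milli>(steady_clock::now() - started).count();
    }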
Having said all of that, I'm not sure that you will ever be able to achieve what you are hoping to do. If you want to keep reasonably accurate time then what I say above is true. Trying to maintain lock-step synchronisation with code running on another machine somewhere remote over the net sounds pretty much intractable to me.
1) The TTimer class is not accurate enough for your task, period! (Then again, neither would the web-site timer be.)
2) If you increase the timer resolution using the TimeBeginPeriod() API call, this will get you closer, but still nowhere near close enough
3) If you adjust the TTimer interval on each tick, based on a constant start time (synchronised with the PC clock), the timer events will average out to the desired number of milliseconds as measured by the PC clock
4) I don't know if the TTimer class handles 3) correctly, but I have a TTimer equivalent that does
5) To account for PC clock drift you will need to synchronise the PC clock periodically with an NTP server
6) I have a system that keeps the PC clock on a good machine to within +/- 5 milliseconds of a reference time permanently (I adjust every minute), and a timer with a resolution of +/- 2 milliseconds as long as the machine is not overloaded (Windows is not a real-time OS)
7) It took me a long time to get to this point - is this what you really need, or are you asking the wrong question?

Sleep performance

I am developing a program in C++ and I have to implement a cron-like job. It should be executed every hour, and also every 24 hours, for different reasons. My first idea was to make an independent pthread and sleep it for 1h each time. Is this correct? I mean, is it really efficient to have a thread asleep more than awake? What are the inconveniences of having a sleeping thread?
I would tend to prefer to have such a task run via cron/the scheduler, since it is to run at pre-determined intervals, as opposed to in response to some environmental event. So the program should just 'do' whatever it needs to do, and then be executed by the operating system as needed. This also makes it easy to change the frequency of execution - just change the scheduling, rather than needing to rebuild the app or expose extra configurability.
That said, if you really, really wanted to do it that way, you probably would not sleep for the whole hour; you would sleep in multiples of some smaller time frame (perhaps five minutes, or whatever seems appropriate) and keep a variable holding the 'last run' time so you know when to run again.
Sleep() calls typically won't be exceptionally accurate as far as the time the thread ends up sleeping; it depends on what other threads have tasks waiting, etc.
There is no performance impact from having a thread sleep for a long time, aside from the fact that it will occupy some memory without doing anything useful. Maybe if you did this with thousands of threads there would be some slowdown in the OS's management of threads, but it doesn't look like you are planning to do that.
A practical disadvantage of having a thread sleep for long is, that you can't do anything with it. If you, for example, want to tell the thread that it should stop because the application wants to shut down, the thread could only get this message after the sleep. So your application either needs a very long time to shut down, or the thread will need to be stopped forcefully.
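One common way to address that shutdown problem, sketched in C++ on the assumption that a condition variable is acceptable in place of a raw pthread sleep: the thread sleeps in an interruptible wait, so a shutdown request wakes it immediately instead of after up to an hour.

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    void do_hourly_work();  // hypothetical job function

    std::mutex m;
    std::condition_variable cv;
    bool stop_requested = false;

    void cron_thread()
    {
        std::unique_lock<std::mutex> lock(m);
        while (!stop_requested)
        {
            // Wake when an hour has passed OR as soon as shutdown is requested.
            if (cv.wait_for(lock, std::chrono::hours(1),
                            [] { return stop_requested; }))
                break;          // woken early by shutdown
            lock.unlock();
            do_hourly_work();   // don't hold the lock while working
            lock.lock();
        }
    }

    void request_shutdown()
    {
        { std::lock_guard<std::mutex> g(m); stop_requested = true; }
        cv.notify_all();
    }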
My first idea was to make an independent pthread and sleep it for 1h each time.
I see no problem.
Is this correct? I mean, is it really efficient to have a thread asleep more than awake?
As long as a thread sleeps and doesn't wake up, the OS won't even bother with its existence.
Otherwise, though: if the thread sleeps for most of its life, why have a dedicated thread at all? Why can't another thread (e.g. the main thread) check the time and start a thread to do the cron job?
What are the inconveniences of having a sleeping thread?
None, except that a sleeping thread cannot easily be unblocked. That is a concern if you need a proper shutdown of the application, and it is why it is a better idea to have another (busy) thread check the time and start the cron job when needed.
When designing your solution keep these scenarios in mind:
At 07:03 the system time is reset to 06:59. What happens one minute later?
At 07:03 the system time is moved forward to 07:59. What happens one minute later?
At 07:59 the system time is moved forward to 08:01. Is the 08:00-job ever executed?
The answers to those questions will tell you a lot about how you should implement your solution.
The performance of the solution should not be an issue. A single sleeping thread will use minimal resources.
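One way to make those scenarios harmless, sketched in C++: schedule against a monotonic clock, which never jumps when the system time is reset, and consult the wall clock only when the job itself needs calendar time (do_hourly_work() is a hypothetical job function).

    #include <chrono>
    #include <thread>

    void do_hourly_work();  // hypothetical job function

    void hourly_loop()
    {
        using clock = std::chrono::steady_clock;  // monotonic: immune to clock resets
        auto next_run = clock::now() + std::chrono::hours(1);
        for (;;)
        {
            std::this_thread::sleep_until(next_run);
            next_run += std::chrono::hours(1);
            do_hourly_work();
        }
    }

If a job must fire at a wall-clock time such as 08:00, you instead have to re-read the wall clock each iteration and decide explicitly what each of the three scenarios above should mean.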

Safest way to idle delphi application to wait for timer?

I am writing a Delphi application that will run on my PC 24/7 in the background. It will check whether it has to do some actions, wait 30 minutes, check again, and so on.
How can I make sure the application will not overload the CPU or memory as a result of running all the time?
Create a timer to run every 30 minutes, and call your checks/actions from there. Then your application can just sit idle when there is nothing to do.
Alternatively you could create a Scheduled Task that just runs periodically to do this.
The answers about timers are good solutions, and I add this:
Make sure that the timer event, or the procedure it calls, checks for busy; i.e. if you wake up, make sure that the last batch is done before starting a new batch. This is easy to miss when things are flowing well, and then you have a situation where things are backed up and the whole system logjams at 8 in the morning because something bad happened at midnight and there are now 16 calls stacked up (or threads, processes, etc.).
So write your timer event like this:
procedure TMyForm.OnMyTimer(Sender: TObject);
begin
  // Disable the timer so the event cannot re-enter while the batch runs.
  MyTimer.Enabled := False;
  try
    DoSomethingForALongTime; // Usually runs in 45 seconds, but sometimes takes 45 minutes!
  finally
    MyTimer.Enabled := True; // and (SomeAbortRequest = False) and (SomeHorribleErrorCount = 0);
  end;
end;
The answers about timers are pretty much exactly what you're looking for. As for your question about not overloading the CPU or memory, take a look at your program in the Task Manager. When it's not doing anything, it should sit at a "steady state" of memory, not allocating any more, and using 1% or less of CPU time.
It's safe to let most programs idle for long periods. The VCL knows how to handle the idling without hogging CPU resources for you, and you just need to use a timer to make sure it wakes up and activates its event at the right time.
Most programming languages have a "sleep" function that you can call to make the program stop doing stuff.
You are in control of the memory usage; you can deallocate your memory before going to sleep and reallocate it when coming awake, but...
It might be better to just set up some recurring job. I don't know how to do this, but I suspect there's a way with Windows Scripting Host to just launch your application on whatever schedule you want.
If you want to enforce an absolute state of inactivity, I guess you could use the "sleep" function. Though I'm not sure how it would behave on a reboot. (I guess Windows would report the application as being unresponsive.)
If the application has no main form and just sits in the tray (or is totally invisible), it won't do much. The main message loop will handle all the messages it receives from the OS, but it shouldn't receive many, and the few it does receive it should process (shutdown messages, system parameter change notifications, etc.).
So, I think you could just set up a timer and forget about writing code to force your program to stay idle.
If you really want to keep the process's activity to a minimum, you could set the thread priority when you enter/leave the timer's event: set the priority to "normal" when you enter the event, and set it to "low" when you leave it.
You didn't say, but if your application uses more than one thread, this could add to the amount of CPU time the OS spends on your process (read up on time slices and thread switches, for example).
The OS may also swap out some of your memory pages, so using less memory and/or reducing memory accesses during the "wait time" helps too.
So, if you use only one thread and have no additional message loops either, just calling Sleep() could be a good way, as that signals the OS that you don't need a time slice at all for a long while to come.
Avoid YieldThread()/SwitchToThread() and doing your own time-keeping (using Now(), for example), as that would mean lots of thread switching takes place, to do... nothing!
Another method could be to use WaitForMultipleObjects with a large timeout, so your application can still respond to messages.
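A sketch of that approach in C++; note that it is really the message-aware variant, MsgWaitForMultipleObjects, that lets you keep responding to messages while waiting (hStopEvent and DoPeriodicWork() are hypothetical):

    #include <windows.h>

    void DoPeriodicWork();  // hypothetical

    // Wait on a stop event with a long timeout, waking for window messages
    // so the application stays responsive.
    void WaitLoop(HANDLE hStopEvent)
    {
        for (;;)
        {
            DWORD r = MsgWaitForMultipleObjects(1, &hStopEvent, FALSE,
                                                30 * 60 * 1000,  // 30 minutes
                                                QS_ALLINPUT);
            if (r == WAIT_OBJECT_0)
                break;                           // stop event signalled
            if (r == WAIT_OBJECT_0 + 1)          // a message arrived: pump it
            {
                MSG msg;
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
            }
            else if (r == WAIT_TIMEOUT)
                DoPeriodicWork();
        }
    }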
I would suggest creating a service application (with its startup type set to Automatic) and using CreateTimerQueueTimer as your timer.
Memory leaks can be mitigated by reserving your memory requirements up front and pooling classes.
