pthread: locking mutex with timeout

I'm trying to implement the following logic (a kind of pseudocode) using pthreads:
pthread_mutex_t mutex;

threadA()
{
    lock(mutex);
    // do work
    timed_lock(mutex, current_abs_time + 1 minute);
}

threadB()
{
    // do work taking more than 1 minute
    unlock(mutex);
}
I expect threadA to do its work and then wait until threadB signals, but no longer than 1 minute. I have done similar things many times in Win32 but am stuck with pthreads: the timed_lock part returns immediately (not after 1 minute) with the code ETIMEDOUT.
Is there a simple way to implement the logic above?
Even the following code returns ETIMEDOUT immediately:
pthread_mutex_t m;
// Thread A
pthread_mutex_init(&m, 0);
pthread_mutex_lock(&m);
// Thread B
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
struct timespec time = {now.tv_sec + 5, now.tv_nsec};
pthread_mutex_timedlock(&m, &time); // returns ETIMEDOUT immediately
Does anyone know why? I have also tried the gettimeofday function.
Thanks

I ended up implementing my logic with condition variables, following the usual rules (a wrapping mutex, a bool flag, etc.).
Thank you all for the comments.
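For reference, here's a minimal sketch of that condition-variable pattern (the names and the 1-minute deadline are mine, for illustration). It also sidesteps a deeper problem with the pseudocode above: a normal pthread mutex must be unlocked by the thread that locked it, so having threadB unlock threadA's mutex is not valid.

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool done = false;

// threadB calls this when its work is finished
void signal_done(void)
{
    pthread_mutex_lock(&mtx);
    done = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mtx);
}

// threadA calls this; returns 0 if signalled, ETIMEDOUT after 1 minute
int wait_done(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline); // condition waits use CLOCK_REALTIME by default
    deadline.tv_sec += 60;

    pthread_mutex_lock(&mtx);
    int rc = 0;
    while (!done && rc == 0)                  // loop guards against spurious wakeups
        rc = pthread_cond_timedwait(&cond, &mtx, &deadline);
    pthread_mutex_unlock(&mtx);
    return rc;
}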

For the second piece of code: AFAIK pthread_mutex_timedlock only works with CLOCK_REALTIME.
CLOCK_REALTIME counts seconds since 1970-01-01 (the Unix epoch), whereas CLOCK_MONOTONIC typically counts from boot.
Under these premises, the timeout you set lands a few seconds into 1970 and is therefore in the past.
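Under that assumption, the snippet should behave as intended once the deadline is built from CLOCK_REALTIME. Here is a sketch of the corrected fragment (a single thread relocking its own mutex, as in the original test; on Linux/glibc the default mutex type simply blocks here, which is enough to demonstrate the timeout):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&m);                  // stand-in for Thread A holding the lock

    struct timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime); // the clock timedlock actually measures against
    abstime.tv_sec += 5;                     // absolute deadline: now + 5 seconds

    int rc = pthread_mutex_timedlock(&m, &abstime);
    printf("timedlock returned %d after ~5 s (ETIMEDOUT expected)\n", rc);
    return 0;
}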

Try something like this:
class CmyClass
{
    boost::mutex mtxEventWait;
    boost::condition cndSignalEvent;
    bool WaitForEvent(long milliseconds);
};

bool CmyClass::WaitForEvent(long milliseconds)
{
    boost::mutex::scoped_lock mtxWaitLock(mtxEventWait);
    boost::posix_time::time_duration wait_duration = boost::posix_time::milliseconds(milliseconds);
    boost::system_time const timeout = boost::get_system_time() + wait_duration;
    return cndSignalEvent.timed_wait(mtxWaitLock, timeout); // wait for the event; returns false on timeout
}

// So in order to wait, call the WaitForEvent method:
WaitForEvent(1000); // it will time out after 1 second

// And this is how the event is signaled:
cndSignalEvent.notify_one();


What does DispatchWallTime do on iOS?

I thought the difference between DispatchTime and DispatchWallTime had to do with whether the app was suspended or the device screen was locked or something: DispatchTime should pause, whereas DispatchWallTime should keep going because clocks in the real world keep going.
So I wrote a little test app:
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        return true
    }

    func applicationDidEnterBackground(_ application: UIApplication) {
        print("backgrounding the app, starting timers for 60 seconds", Date())
        DispatchQueue.main.asyncAfter(deadline: .now() + 60) {
            print("deadline 60 seconds ended", Date())
        }
        DispatchQueue.main.asyncAfter(wallDeadline: .now() + 60) {
            print("wallDeadline 60 seconds ended", Date())
        }
    }

    func applicationWillEnterForeground(_ application: UIApplication) {
        print("app coming to front", Date())
    }
}
I ran the app on my device. I backgrounded the app, waited for a while, then brought the app to the foreground. Sometimes "waited for a while" included switching off the screen. I got results like this:
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:41:18 +0000
app coming to front 2018-08-15 17:41:58 +0000
wallDeadline 60 seconds ended 2018-08-15 17:42:24 +0000
deadline 60 seconds ended 2018-08-15 17:42:24 +0000
backgrounding the app, starting timers for 60 seconds 2018-08-15 17:42:49 +0000
app coming to front 2018-08-15 17:43:21 +0000
wallDeadline 60 seconds ended 2018-08-15 17:43:55 +0000
deadline 60 seconds ended 2018-08-15 17:43:55 +0000
The delay before the deadline timer fires is not as long as I expected: it's 6 seconds over the 60 second deadline, even though I "slept" the app for considerably longer than that. But even more surprising, both timers fire at the same instant.
So what does wallDeadline do on iOS that's different from what deadline does?
There's nothing wrong with The Dreams Wind's answer, but I wanted to understand these APIs more precisely. Here's my analysis.
DispatchTime
Here's the comment above DispatchTime.init:
/// Creates a `DispatchTime` relative to the system clock that
/// ticks since boot.
///
/// - Parameters:
/// - uptimeNanoseconds: The number of nanoseconds since boot, excluding
/// time the system spent asleep
/// - Returns: A new `DispatchTime`
/// - Discussion: This clock is the same as the value returned by
/// `mach_absolute_time` when converted into nanoseconds.
/// On some platforms, the nanosecond value is rounded up to a
/// multiple of the Mach timebase, using the conversion factors
/// returned by `mach_timebase_info()`. The nanosecond equivalent
/// of the rounded result can be obtained by reading the
/// `uptimeNanoseconds` property.
/// Note that `DispatchTime(uptimeNanoseconds: 0)` is
/// equivalent to `DispatchTime.now()`, that is, its value
/// represents the number of nanoseconds since boot (excluding
/// system sleep time), not zero nanoseconds since boot.
So DispatchTime is based on mach_absolute_time. But what is mach_absolute_time? It's defined in mach_absolute_time.s. There is a separate definition for each CPU type, but the key is that it uses rdtsc on x86-like CPUs and reads the CNTPCT_EL0 register on ARMs. In both cases, it's getting a value that increases monotonically, and only increases while the processor is not in a sufficiently deep sleep state.
Note that the CPU is not necessarily sleeping deeply enough even if the device appears to be asleep.
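If you want to sample this clock yourself, the conversion the doc comment alludes to looks roughly like this (a sketch using the public Mach APIs; error handling omitted):

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);               // tick-to-nanosecond conversion factors

    uint64_t ticks = mach_absolute_time(); // monotonic; stops in deep sleep
    uint64_t nanos = ticks * tb.numer / tb.denom;

    printf("uptime excluding deep sleep: %llu ns\n", (unsigned long long)nanos);
    return 0;
}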
DispatchWallTime
There's no similarly helpful comment in the DispatchWallTime definition, but we can look at the definition of its now method:
public static func now() -> DispatchWallTime {
    return DispatchWallTime(rawValue: CDispatch.dispatch_walltime(nil, 0))
}
and then we can consult the definition of dispatch_walltime:
dispatch_time_t
dispatch_walltime(const struct timespec *inval, int64_t delta)
{
    int64_t nsec;
    if (inval) {
        nsec = (int64_t)_dispatch_timespec_to_nano(*inval);
    } else {
        nsec = (int64_t)_dispatch_get_nanoseconds();
    }
    nsec += delta;
    if (nsec <= 1) {
        // -1 is special == DISPATCH_TIME_FOREVER == forever
        return delta >= 0 ? DISPATCH_TIME_FOREVER : (dispatch_time_t)-2ll;
    }
    return (dispatch_time_t)-nsec;
}
When inval is NULL, it calls _dispatch_get_nanoseconds. (Note the negation in the last line: GCD marks wall-clock times by storing them as negative values, which is how the two kinds of dispatch_time_t are told apart.) So let's check out _dispatch_get_nanoseconds:
static inline uint64_t
_dispatch_get_nanoseconds(void)
{
    dispatch_static_assert(sizeof(NSEC_PER_SEC) == 8);
    dispatch_static_assert(sizeof(USEC_PER_SEC) == 8);
#if TARGET_OS_MAC
    return clock_gettime_nsec_np(CLOCK_REALTIME);
#elif HAVE_DECL_CLOCK_REALTIME
    struct timespec ts;
    dispatch_assume_zero(clock_gettime(CLOCK_REALTIME, &ts));
    return _dispatch_timespec_to_nano(ts);
#elif defined(_WIN32)
    static const uint64_t kNTToUNIXBiasAdjustment = 11644473600 * NSEC_PER_SEC;
    // FILETIME is 100-nanosecond intervals since January 1, 1601 (UTC).
    FILETIME ft;
    ULARGE_INTEGER li;
    GetSystemTimePreciseAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    return li.QuadPart * 100ull - kNTToUNIXBiasAdjustment;
#else
    struct timeval tv;
    dispatch_assert_zero(gettimeofday(&tv, NULL));
    return _dispatch_timeval_to_nano(tv);
#endif
}
It consults the POSIX CLOCK_REALTIME clock. So it is based on the common idea of time and will change if you change your device's time in Settings (or System Preferences on a Mac).
The Mysterious Six Seconds
You said your timer fired
6 seconds over the 60 second deadline
so let's see where that came from.
Both asyncAfter(deadline:execute:) and asyncAfter(wallDeadline:execute:) call the same C API, dispatch_after. The kind of deadline (or “clock”) is encoded into a dispatch_time_t along with the time value. The dispatch_after function calls the internal GCD function _dispatch_after, which I quote in part here:
static inline void
_dispatch_after(dispatch_time_t when, dispatch_queue_t dq,
        void *ctxt, void *handler, bool block)
{
    dispatch_timer_source_refs_t dt;
    dispatch_source_t ds;
    uint64_t leeway, delta;
    // snip
    delta = _dispatch_timeout(when);
    if (delta == 0) {
        if (block) {
            return dispatch_async(dq, handler);
        }
        return dispatch_async_f(dq, ctxt, handler);
    }
    leeway = delta / 10; // <rdar://problem/13447496>

    if (leeway < NSEC_PER_MSEC) leeway = NSEC_PER_MSEC;
    if (leeway > 60 * NSEC_PER_SEC) leeway = 60 * NSEC_PER_SEC;
    // snip
    dispatch_clock_t clock;
    uint64_t target;
    _dispatch_time_to_clock_and_value(when, &clock, &target);
    if (clock != DISPATCH_CLOCK_WALL) {
        leeway = _dispatch_time_nano2mach(leeway);
    }
    dt->du_timer_flags |= _dispatch_timer_flags_from_clock(clock);
    dt->dt_timer.target = target;
    dt->dt_timer.interval = UINT64_MAX;
    dt->dt_timer.deadline = target + leeway;
    dispatch_activate(ds);
}
The _dispatch_timeout function can be found in time.c. Suffice to say it returns the number of nanoseconds between the current time and the time passed to it. It determines the “current time” based on the clock of the time passed to it.
So _dispatch_after gets the number of nanoseconds to wait before executing your block. Then it computes leeway as one tenth of that duration. When it sets the timer's deadline, it adds leeway to the deadline you passed in.
In your case, delta is about 60 seconds (= 60 * 10^9 nanoseconds), so leeway is about 6 seconds. Hence your block is executed about 66 seconds after you call asyncAfter.
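To make the arithmetic concrete, here is that leeway computation restated as a standalone snippet (my paraphrase of the quoted GCD logic, not the actual source):

#include <stdio.h>

int main(void)
{
    const unsigned long long NSEC_PER_MSEC = 1000000ull;
    const unsigned long long NSEC_PER_SEC = 1000000000ull;

    unsigned long long delta = 60 * NSEC_PER_SEC;               // requested: 60 s from now
    unsigned long long leeway = delta / 10;                     // 10% leeway = 6 s
    if (leeway < NSEC_PER_MSEC) leeway = NSEC_PER_MSEC;         // clamp: at least 1 ms
    if (leeway > 60 * NSEC_PER_SEC) leeway = 60 * NSEC_PER_SEC; // clamp: at most 60 s

    printf("timer may fire anywhere in [%llu, %llu] ns\n", delta, delta + leeway);
    return 0;
}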
This question has been here for quite a while without any answers, so I'd like to give it a try and point out a subtle difference I noticed in practice.
DispatchTime should pause, whereas DispatchWallTime should keep going
because clocks in the real world keep going
You are correct here; at least, they are supposed to act this way. However, it turns out to be really tricky to check that DispatchTime works as expected. When an iOS app is running under an Xcode session, it has unlimited background time and doesn't get suspended. I also couldn't reproduce suspension by running the application without Xcode connected, so it remains an open question under which conditions DispatchTime is actually paused. The main thing to note, however, is that DispatchTime doesn't depend on the system clock.
DispatchWallTime works pretty much the same (it isn't suspended either), except that it depends on the system clock. To see the difference, try a somewhat longer timer, say 5 minutes. Then go to the system settings and set the time 1 hour forward. If you now open the application, you will notice that the wall-time timer has expired immediately, whereas the DispatchTime timer keeps waiting out its interval.

ESP8266 Soft WDT Error

How can I avoid the Soft WDT reset error in this loop?
The error consistently occurs when the printed count reaches about 3190.
unsigned long TimeFrame = 10000;

void setup() {
    Serial.begin(9600);
}

void loop() {
    unsigned long StartTime = millis();
    while (millis() - StartTime <= TimeFrame) {
        Serial.println(millis() - StartTime);
    }
}
I could count four times to 2500 instead, but would that be the correct approach to this error?
Thanks for the explanation. I added a delay(10) to the code and it works.
void setup() {
    Serial.begin(9600);
}

void loop() {
    unsigned long StartTime = millis();
    while (millis() - StartTime <= TimeFrame) {
        Serial.println(millis() - StartTime);
        delay(10);
    }
}
WDT is the "watchdog timer". Watchdog timers are used to get control back when something goes wrong in a system - say, an infinite loop or some other unexpected condition. When the underlying system gets control back it resets these timers so that they start counting up from zero again. When they hit their maximum value they trigger a hardware reset on the chip.
Your code measures the duration of the ESP8266 software watchdog timer - in this case 3.19 seconds.
The ESP8266 has both hardware and software watchdog timers. loop() isn't intended to run indefinitely - it's intended to do a small amount of work and then return. When it returns, the ESP8266 SDK gets to reset the watchdog timers.
Both the delay() and yield() functions give the SDK a chance to say "things are okay" and reset the timers. If you need to have long running code in loop() you should call one of those occasionally to give the rest of the system a chance to run.
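For example, if you genuinely need a long stretch of work inside loop(), you can break it into small chunks and yield between them (a sketch; process_chunk is a placeholder for your real work):

// placeholder for one small unit of real work
void process_chunk(int i) { /* ... */ }

void setup() {
    Serial.begin(9600);
}

void loop() {
    for (int i = 0; i < 100000; i++) {
        process_chunk(i);
        if (i % 100 == 0) {
            yield(); // feeds the soft WDT and lets the WiFi stack run
        }
    }
}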
Keeping loop() brief isn't just about the watchdog timer, either. It also gives the network stack a chance to run and do processing that it needs to do.
You should always design your program so that loop() does a small batch of repetitive processing and then returns. It should never contain an infinite loop or long loop of code.
For instance, suppose you need to do something every 20 seconds. This is the wrong way to do it:
void loop() {
    unsigned long start = millis();
    while (millis() - start < 20*1000) ;
    do_something();
}
That breaks the way the software is designed to work - with loop() executing briefly. It doesn't allow any other software to run while it's waiting. The watchdog timer will fire and reset your CPU.
This is better:
void loop() {
    delay(20*1000);
    do_something();
}
because delay() lets the underlying system get control back, reset the watchdog timer and also do network-related processing.
In my opinion, this is best:
static unsigned long start_time;

void setup() {
    start_time = millis();
}

void loop() {
    if (millis() - start_time > 20*1000) {
        do_something();
        start_time = millis();
    }
}
because it does the least possible work inside loop(), only when it's time to do the work.

RxJava Observable to smooth out bursts of events

I'm writing a streaming Twitter client that simply throws the stream up onto a tv. I'm observing the stream with RxJava.
When the stream comes in a burst, I want to buffer it and slow it down so that each tweet is displayed for at least 6 seconds. Then during the quiet times, any buffer that's been built up will gradually empty itself out by pulling the head of the queue, one tweet every 6 seconds. If a new tweet comes in and faces an empty queue (but >6s after the last was displayed), I want it to be displayed immediately.
I imagine the stream looking like that described here:
Raw: --oooo--------------ooooo-----oo----------------ooo|
Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o|
And I understand that the question posed there has a solution. But I just can't wrap my head around its answer. Here is my solution:
myObservable
    .concatMap(new Func1<Long, Observable<Long>>() {
        @Override
        public Observable<Long> call(Long l) {
            // emit the item, then hold the stream closed for 6 seconds
            return Observable.concat(
                    Observable.just(l),
                    Observable.<Long>empty().delay(6, TimeUnit.SECONDS)
            );
        }
    })
    .subscribe(...);
So, my question is: Is this too naïve of an approach? Where is the buffering/backpressure happening? Is there a better solution?
Looks like you want to delay a message if it came too soon relative to the previous message. You have to track the last target emission time and schedule a new emission after it:
public class SpanOutV2 {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(0, 5, 13)
                .concatMapEager(v -> Observable.just(v).delay(v, TimeUnit.SECONDS));

        long minSpan = 6;
        TimeUnit unit = TimeUnit.SECONDS;
        Scheduler scheduler = Schedulers.computation();
        long minSpanMillis = TimeUnit.MILLISECONDS.convert(minSpan, unit);

        Observable.defer(() -> {
            AtomicLong lastEmission = new AtomicLong();
            return source
                    .concatMapEager(v -> {
                        long now = scheduler.now();
                        long emission = lastEmission.get();
                        // too soon: push this emission out to lastEmission + minSpan
                        if (emission + minSpanMillis > now) {
                            lastEmission.set(emission + minSpanMillis);
                            return Observable.just(v).delay(emission + minSpanMillis - now, TimeUnit.MILLISECONDS);
                        }
                        lastEmission.set(now);
                        return Observable.just(v);
                    });
        })
        .timeInterval()
        .toBlocking()
        .subscribe(System.out::println);
    }
}
Here, the source is delayed by the number of seconds relative to the start of the problem: 0 should arrive immediately, 5 should arrive at T = 6 seconds, and 13 should arrive at T = 13 seconds. concatMapEager makes sure the order and timing are kept. Since only standard operators are in use, backpressure and unsubscription compose naturally.

ActionScript setTimeout not working

setTimeout always jumps right into the function instead of waiting 2000 milliseconds:
stop();
var hey:Number = 2000;

function exercise(frame) {
    trace("sup");
    this.gotoAndStop(frame);
}

setTimeout(exercise(2), hey); // calls exercise immediately and passes its result to setTimeout
trace(hey);
The setTimeout function (i.e. not a Timer) takes the following parameters:
public function setTimeout(closure:Function, delay:Number, ... arguments):uint
Try:
// Save the return value so you can clear it later; otherwise your app may leak memory
var myTimeOut:uint = setTimeout(exercise, hey, 2);
trace(hey);

// the timeout closure object will not be garbage collected if you do not clear it
clearTimeout(myTimeOut);
If you are doing this over and over, consider creating a Timer with its repeat count set to 1; then you can just reset and restart the timer when needed.

How to make a timer keep running even if the game enters the background in a cocos2d-x (C++) iOS game

I am using a timer in my cocos2d-x (C++) iOS game, built with cocos2d-x version 2.2.
My timer function is set up as follows.
In my init:
this->schedule(schedule_selector(HelloWorld::UpdateTimer), 1);
I have defined the function as follows.
void HelloWorld::UpdateTimer(float dt)
{
    if (seconds <= 0)
    {
        CCLOG("clock stopped");
        CCString *str = CCString::createWithFormat("%d", seconds);
        timer->setString(str->getCString());
        this->unschedule(schedule_selector(HelloWorld::UpdateTimer));
    }
    else
    {
        CCString *str = CCString::createWithFormat("%d", seconds);
        timer->setString(str->getCString());
        seconds--;
    }
}
Everything is working fine, but I need this timer to keep running even if the game enters the background state. I have tried commenting out the body of didEnterBackground in the AppDelegate, but that was not successful. Any help will be appreciated.
Thanks
If the app goes into the background then, apart from some special background threads, no other thread gets executed.
The best way for you would be to save the Unix timestamp in a variable during didEnterBackground, and when the app resumes, fetch the current Unix timestamp and compute the delta to get the total time passed, then update your timer accordingly.
In my AppDelegate.cpp I wrote the following code in the applicationDidEnterBackground function. Whenever the app goes into the background I take the local time in seconds and store it under a CCUserDefault key. When the app comes back to the foreground I take the local system time again and subtract the stored value from it. Following is my code:
void AppDelegate::applicationDidEnterBackground()
{
    time_t rawtime;
    struct tm *timeinfo;
    time(&rawtime);
    timeinfo = localtime(&rawtime);
    CCLog("year------->%04d", timeinfo->tm_year + 1900);
    CCLog("month------->%02d", timeinfo->tm_mon + 1);
    CCLog("day------->%02d", timeinfo->tm_mday);
    CCLog("hour------->%02d", timeinfo->tm_hour);
    CCLog("minutes------->%02d", timeinfo->tm_min);
    CCLog("seconds------->%02d", timeinfo->tm_sec);
    // note: hours must be converted with *3600, not *60
    int time_in_seconds = (timeinfo->tm_hour * 3600) + (timeinfo->tm_min * 60) + timeinfo->tm_sec;
    CCLOG("time in seconds is %d", time_in_seconds);
    CCUserDefault *def = CCUserDefault::sharedUserDefault();
    def->setIntegerForKey("time_from_background", time_in_seconds);
    CCDirector::sharedDirector()->stopAnimation();
    // if you use SimpleAudioEngine, it must be paused
    // SimpleAudioEngine::sharedEngine()->pauseBackgroundMusic();
}
void AppDelegate::applicationWillEnterForeground()
{
    CCUserDefault *def = CCUserDefault::sharedUserDefault();
    int time1 = def->getIntegerForKey("time_from_background");
    time_t rawtime;
    struct tm *timeinfo;
    time(&rawtime);
    timeinfo = localtime(&rawtime);
    CCLog("year------->%04d", timeinfo->tm_year + 1900);
    CCLog("month------->%02d", timeinfo->tm_mon + 1);
    CCLog("day------->%02d", timeinfo->tm_mday);
    CCLog("hour------->%02d", timeinfo->tm_hour);
    CCLog("minutes------->%02d", timeinfo->tm_min);
    CCLog("seconds------->%02d", timeinfo->tm_sec);
    // same fix as above: hours * 3600
    int time_in_seconds = (timeinfo->tm_hour * 3600) + (timeinfo->tm_min * 60) + timeinfo->tm_sec;
    int resume_seconds = time_in_seconds - time1;
    CCLOG("app after seconds == %d", resume_seconds);
    CCDirector::sharedDirector()->startAnimation();
    // if you use SimpleAudioEngine, it must be resumed here
    // SimpleAudioEngine::sharedEngine()->resumeBackgroundMusic();
}
From this you can see and calculate the time the app remained in the background.
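One caveat: arithmetic on tm_hour/tm_min/tm_sec wraps around at midnight, so the delta goes wrong if the app stays in the background across a day boundary. Since only the difference matters, a simpler variant is to store the raw Unix timestamp from time() directly. Here is a sketch of the same two AppDelegate methods, keeping the same key name for illustration:

void AppDelegate::applicationDidEnterBackground()
{
    // seconds since the Unix epoch; no day-boundary problems
    CCUserDefault::sharedUserDefault()->setIntegerForKey("time_from_background", (int)time(NULL));
    CCDirector::sharedDirector()->stopAnimation();
}

void AppDelegate::applicationWillEnterForeground()
{
    int then = CCUserDefault::sharedUserDefault()->getIntegerForKey("time_from_background");
    int resume_seconds = (int)time(NULL) - then;
    CCLOG("app was in background for %d seconds", resume_seconds);
    // advance the countdown by resume_seconds here
    CCDirector::sharedDirector()->startAnimation();
}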
