Strictly scheduled loop timing in Swift - iOS

What is the best way to schedule a repeated task with very strict timing (accurate and reliable enough for musical sequencing)? From the Apple docs, it is clear that NSTimer is not reliable in this sense (i.e., "A timer is not a real-time mechanism"). An approach that I borrowed from AudioKit's AKPlaygroundLoop seems consistent within about 4ms (if not quite accurate), and might be feasible:
class JHLoop: NSObject {
    var trigger: Int {
        return Int(60 * duration) // 60 fps * t in seconds
    }
    var counter: Int = 0
    var duration: Double = 1.0 // in seconds, but actual loop is ~1.017s
    var displayLink: CADisplayLink?
    weak var delegate: JHLoopDelegate?

    init(dur: Double) {
        duration = dur
    }

    func stopLoop() {
        displayLink?.invalidate()
    }

    func startLoop() {
        counter = 0
        displayLink = CADisplayLink(target: self, selector: "update")
        displayLink?.frameInterval = 1
        displayLink?.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSRunLoopCommonModes)
    }

    func update() {
        if counter < trigger {
            counter++
        } else {
            counter = 0
            // execute loop here
            NSLog("loop executed")
            delegate!.loopBody()
        }
    }
}

protocol JHLoopDelegate: class {
    func loopBody()
}
↑ Replaced code with the actual class I will try to use for the time being.
For reference, I am hoping to make a polyrhythmic drum sequencer, so consistency is most important. I will also need to be able to smoothly modify the loop, and ideally the looping period, in real time.
Is there a better way to do this?

You can try to use the mach_wait_until() API. It's pretty good for high-precision timing. I changed the Apple example from here a little, and it works fine in my command-line tool project. In the snippet below I renamed the main() method from my project to startLoop(). Also, you can see this.
Hope it helps.
#include <mach/mach_time.h>

static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MILLISEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MILLISEC;

static mach_timebase_info_data_t timebase_info;

static uint64_t nanos_to_abs(uint64_t nanos) {
    return nanos * timebase_info.denom / timebase_info.numer;
}

uint64_t waitSomeTime(int64_t eachMillisec);

void startLoop(void) {
    while (true) {
        int64_t nanosec = waitSomeTime(1000); // each second
        NSLog(@"%lld", nanosec);
        update(); // call needed update here
    }
}

uint64_t waitSomeTime(int64_t eachMillisec) {
    uint64_t start;
    uint64_t end;
    uint64_t elapsed;
    uint64_t elapsedNano;

    if (timebase_info.denom == 0) {
        (void) mach_timebase_info(&timebase_info);
    }

    // Start the clock.
    start = mach_absolute_time();
    mach_wait_until(start + nanos_to_abs(eachMillisec * NANOS_PER_MILLISEC));

    // Stop the clock.
    end = mach_absolute_time();

    // Calculate the duration.
    elapsed = end - start;
    elapsedNano = elapsed * timebase_info.numer / timebase_info.denom;
    return elapsedNano;
}
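If you want to stay in Swift, a rough equivalent of the same pattern might look like the sketch below. This is my own illustration rather than part of the original answer: run it on a background thread, and the body closure stands in for your update() call.

import Darwin

// A minimal sketch of the mach_wait_until() pattern in Swift.
// Names (MachLoop, run, body) are illustrative.
final class MachLoop {
    private var timebase = mach_timebase_info_data_t()

    init() {
        mach_timebase_info(&timebase)
    }

    // Convert nanoseconds to mach "host ticks".
    private func hostTicks(fromNanos nanos: UInt64) -> UInt64 {
        return nanos * UInt64(timebase.denom) / UInt64(timebase.numer)
    }

    // Calls `body` every `intervalMs` milliseconds until it returns false.
    // Blocks the calling thread, so dispatch this to a background thread.
    func run(intervalMs: UInt64, body: () -> Bool) {
        var deadline = mach_absolute_time()
        let interval = hostTicks(fromNanos: intervalMs * 1_000_000)
        while true {
            deadline += interval // schedule from the previous deadline to avoid drift
            mach_wait_until(deadline)
            if !body() { break }
        }
    }
}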


Why is the performance gap between GCD, ObjC and Swift so large

Objective-C: Simulator, iPhone SE, iOS 13;
(60-80 seconds)
NSTimeInterval t1 = NSDate.date.timeIntervalSince1970;
NSInteger count = 100000;
for (NSInteger i = 0; i < count; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSLog(@"op begin: %ld", i);
        self.idx = i;
        NSLog(@"op end: %ld", i);
        if (i == count - 1) {
            NSLog(@"Elapsed: %f", NSDate.date.timeIntervalSince1970 - t1);
        }
    });
}
Swift: Simulator, iPhone SE, iOS 13;
(10-14 seconds)
let t1 = Date().timeIntervalSince1970
let count = 100000
for i in 0..<count {
    DispatchQueue.global().async {
        print("subOp begin i: \(i), thread: \(Thread.current)")
        self.idx = i
        print("subOp end i: \(i), thread: \(Thread.current)")
        if i == count - 1 {
            print("Elapsed: \(Date().timeIntervalSince1970 - t1)")
        }
    }
}
My previous work was all written in Objective-C; I've recently started learning Swift. I was surprised by the wide performance gap for such a common use of GCD.
tl;dr
You most likely are doing a debug build. In Swift, debug builds do all sorts of safety checks. If you do a release build, those safety checks are turned off, and the performance is largely indistinguishable from the Objective-C rendition.
A few observations:
Neither of these is thread-safe regarding its interaction with idx: access to this property is not being synchronized. If you're going to update a property from multiple threads, you have to synchronize your access (with locks, a serial queue, a reader-writer concurrent queue, etc.; see the sketch after this list).
In your Swift code, are you doing a release build? A debug build performs safety checks that the Objective-C rendition doesn't. Do a "release" build for both to compare apples to apples.
Your Objective-C rendition is using DISPATCH_QUEUE_PRIORITY_HIGH, but your Swift iteration is using a QoS of .default. I’d suggest using a QoS of .userInteractive or .userInitiated to be comparable.
Global queues are concurrent queues. So, checking i == count - 1 is not sufficient to know whether all of the current tasks are done. Generally we’d use dispatch groups or concurrentPerform/dispatch_apply to know when they’re done.
FWIW, dispatching 100,000 tasks to a global queue is not advised, because you’re going to quickly exhaust the worker threads. Don’t do this sort of thread explosion. You can have unexpected lockups in your app if you do that. In Swift we’d use concurrentPerform. In Objective-C we’d use dispatch_apply.
You’re doing NSLog in Objective-C and print in Swift. Those are not the same thing. I’d suggest doing NSLog in both if you want to compare performance.
When performance testing, I might suggest using unit tests’ measure routine, which repeats it a number of times.
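For the first observation, one minimal way to synchronize that property is with a serial queue. This is just a sketch; the SynchronizedIndex name and queue label are illustrative, not from the question.

import Foundation

// A minimal sketch of guarding a property with a serial queue.
final class SynchronizedIndex {
    private var _idx = 0
    private let queue = DispatchQueue(label: "com.example.idx") // serial queue guarding _idx

    var idx: Int {
        get { queue.sync { _idx } }
        set { queue.sync { _idx = newValue } }
    }
}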
Anyway, when correcting for all of this, the time for the Swift code was largely indistinguishable from the Objective-C performance.
Here are the routines I used. In Swift:
class SwiftExperiment {
    // This is not advisable, because it suffers from thread explosion which will exhaust
    // the very limited number of worker threads.
    func experiment1(completion: @escaping (TimeInterval) -> Void) {
        let t1 = Date()
        let count = 100_000
        let group = DispatchGroup()

        for i in 0..<count {
            DispatchQueue.global(qos: .userInteractive).async(group: group) {
                NSLog("op end: %ld", i)
            }
        }

        group.notify(queue: .main) {
            let elapsed = Date().timeIntervalSince(t1)
            completion(elapsed)
        }
    }

    // This is safer (though it's a poor use of `concurrentPerform` as there's not enough
    // work being done on each thread).
    func experiment2(completion: @escaping (TimeInterval) -> Void) {
        let t1 = Date()
        let count = 100_000

        DispatchQueue.global(qos: .userInteractive).async {
            DispatchQueue.concurrentPerform(iterations: count) { i in
                NSLog("op end: %ld", i)
            }
            let elapsed = Date().timeIntervalSince(t1)
            completion(elapsed)
        }
    }
}
And the equivalent Objective-C routines:
// This is not advisable, because it suffers from thread explosion which will exhaust
// the very limited number of worker threads.
- (void)experiment1:(void (^ _Nonnull)(NSTimeInterval))block {
    NSDate *t1 = [NSDate date];
    NSInteger count = 100000;
    dispatch_group_t group = dispatch_group_create();

    for (NSInteger i = 0; i < count; i++) {
        dispatch_group_async(group, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
            NSLog(@"op end: %ld", i);
        });
    }

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
        NSLog(@"Elapsed: %f", elapsed);
        block(elapsed);
    });
}

// This is safer (though it's a poor use of `dispatch_apply` as there's not enough
// work being done on each thread).
- (void)experiment2:(void (^ _Nonnull)(NSTimeInterval))block {
    NSDate *t1 = [NSDate date];
    NSInteger count = 100000;

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
        dispatch_apply(count, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^(size_t index) {
            NSLog(@"op end: %ld", index);
        });
        NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
        NSLog(@"Elapsed: %f", elapsed);
        block(elapsed);
    });
}
And my unit tests:
class MyApp4Tests: XCTestCase {
    func testSwiftExperiment1() throws {
        let experiment = SwiftExperiment()
        measure {
            let e = expectation(description: "experiment1")
            experiment.experiment1 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testSwiftExperiment2() throws {
        let experiment = SwiftExperiment()
        measure {
            let e = expectation(description: "experiment2")
            experiment.experiment2 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testObjcExperiment1() throws {
        let experiment = ObjectiveCExperiment()
        measure {
            let e = expectation(description: "experiment1")
            experiment.experiment1 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testObjcExperiment2() throws {
        let experiment = ObjectiveCExperiment()
        measure {
            let e = expectation(description: "experiment2")
            experiment.experiment2 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }
}
That resulted in Swift timings that were essentially indistinguishable from the Objective-C ones.

Determining the throughput speed of the Cordova bridge. Any insights?

I'm trying to get an idea about the throughput speed of Cordova's WebView-Native bridge. For this I will send a Uint8Array of a determined size. However, I would have expected the throughput speed of this bridge to be higher.
Testing on an iPhone 4s with Cordova 5.4.1, the throughput speed is ~0.8 MB/s. This feels low to me (based on some experience with Cordova apps, but this is of course subjective).
Question: is this realistic?
I think that without my code this question doesn't show enough info, so I've attached it:
My JavaScript code:
var success = function(endTime) {
    // endTime is unix epoch time (ms)
    var time_df = endTime - beginTime
    results.push(time_df)
    numTests += 1;
    sendMessage()
}

var sendMessage = function() {
    // execute test 30 times
    if (numTests < 30) {
        beginTime = +new Date()
        bench.send(payload, success, fail)
    } else {
        console.log('results', results)
    }
}

var createPayload = function(size) {
    size = size * 1000;
    var bufView = new Uint8Array(size);
    for (var i = 0; i < size; ++i) {
        bufView[i] = i % 20;
    }
    return bufView.buffer;
}

var startTest = function() {
    // create payload of 500kb
    payload = createPayload(500)
    sendMessage()
}
My Objective-C code.
The code does basically one thing: return the Unix epoch time (in ms) after the payload is received.
- (void)send:(CDVInvokedUrlCommand*)command
{
    NSTimeInterval time = ([[NSDate date] timeIntervalSince1970]); // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timestampString = [NSString stringWithFormat:@"%ld%d", digits, decimalDigits];

    NSString* callbackId = [command callbackId];
    CDVPluginResult* result = [CDVPluginResult
                               resultWithStatus:CDVCommandStatus_OK
                               messageAsString:timestampString];
    [self success:result callbackId:callbackId];
}

Arduino 'time out' function using a millis timer

I've not been programming for long and I just want to expand from electronic engineering with an Arduino UNO board.
I've started a new project based on the Secret Knock Detecting Door Lock by Steve Hoefer on Grathio and I'd like to implement the following:
(http://grathio.com/2009/11/secret_knock_detecting_door_lock/)
(http://grathio.com/assets/secret_knock_detector.pde)
Implementation
If the global value equals 0 and a valid knock pattern is true, then flash a yellow LED 4 times using millis rather than delay so that it can still 'listen'.
If another valid knock pattern is not heard within 6 seconds it will time out and reset global to 0 so that it can acknowledge the initial true pattern and flash the yellow LED.
If another valid knock pattern is heard within 6 seconds, increment a counter.
If the counter equals 1, wait for another valid knock pattern and if true within 6 seconds, increment the counter again and don't flash the yellow LED.
Otherwise, time out and reset all values.
And so on until if the counter is greater than or equal to 4 trigger the master LED array.
Once it gets to 4 successful knocks, I'd like it to trigger the master LED array I've built.
Problems
This project was inspired by the test panels used on passenger airplanes. I've seen them a lot and thought it would be a good place to start and learn about timing.
There are a few problems as I don't wish to reset millis() every time and I'm using a button rather than the boolean within the knock detection script so I don't get lost in the code.
I understand this won't respond 50 seconds later and it's a beginner's mistake, but it proves what I've got if I hold down the button. The code below also doesn't have a time out after the 1st digitalRead HIGH or true boolean (I am struggling with this).
Arduino sketch
int inPin = 2;           // input pin switch
int outPin = 3;          // output pin LED
long currentTime = 0;    // counter
long nextTime = 0;       // counter
long lastTime = 0;       // counter
int patternCounter = 0;  // build up
int globalValue = 0;     // lock out
int breakIn = 0;         // waste of time?

void setup()
{
  pinMode(inPin, INPUT);
  pinMode(outPin, OUTPUT);
  Serial.begin(9600);
  Serial.println("GO");
}

void loop() {
  // boolean true, switch just for testing
  if (digitalRead(inPin) == HIGH && globalValue == 0 && breakIn == 0) {
    Serial.println("CLEARED 1st");
    delay(500); // flood protection
    globalValue++;
    breakIn++;

    if (globalValue > 0 && breakIn > 0) {
      currentTime = millis(); // start a 'new' counter and 'listen'

      if (currentTime < 6000) {               // less than
        if (digitalRead(inPin) == HIGH) {     // and true
          Serial.println("CLEARED 2nd");      // cleared the stage
          delay(500);                         // flood protection
          patternCounter++;
        } // if counter less
      } // if true or high

      if (currentTime > 6000) {
        Serial.println("TIMEOUT waiting 2nd"); // timed out
        globalValue = 0;
        patternCounter = 0;
        breakIn = 0;
      } // if more than
    } // global master
  }

  // 3rd attempt
  if (globalValue == 1 && patternCounter == 1) { // third round
    nextTime = millis();                         // start a 'new' counter and 'listen'

    if (nextTime < 6000) {                       // less than
      if (digitalRead(inPin) == HIGH) {          // and true
        Serial.println("CLEARED 3rd");
        delay(500);                              // flood protection
        patternCounter++;
      } // if counter less
    } // if true or high

    if (nextTime > 6000) {
      Serial.println("TIMEOUT waiting 3rd");     // timed out
      globalValue = 0;
      patternCounter = 0;
    } // if more than
  } // global master

  // 4th attempt and latch
  if (globalValue == 1 && patternCounter == 2) { // last round
    lastTime = millis();                         // start a 'new' counter and 'listen'

    if (lastTime < 6000) {                       // less than
      if (digitalRead(inPin) == HIGH) {          // and true
        digitalWrite(outPin, HIGH);              // LED on
        Serial.println("CLEARED 4th ARRAY");     // cleared the stage
        delay(500);                              // flood protection
      } // true or high
    } // counter

    if (lastTime > 6000) {
      Serial.println("TIMEOUT waiting 4th");     // timed out
      globalValue = 0;
      patternCounter = 0;
    } // if more than
  } // global and alarm
} // loop end
That's the current sketch; I understand the counters I've used are nearly pointless.
Any help would be greatly appreciated!
That is a lot to wade through so I may not understand your question but the bit of code below stands out as a problem:
currentTime = millis();   // start a 'new' counter and 'listen'
if (currentTime < 6000) { // less than
    .....
}
Do you understand that there is no "resetting" of millis() possible, and that it is merely a function that returns the number of milliseconds since the program launched? It will continue to increase as long as the program is running (until it rolls over, but that is a separate problem). So in the above code 'currentTime' is only going to be < 6000 during the first six seconds after launch and then never again (except for the rollover condition where millis resets).
So a typical way millis() is used to track time is, in setup, to store its current value into a variable and add your timeout period value to it:
// timeoutAmount is defined at head of program. Let's say it is 6000 (6 seconds)
nextUpdate = millis() + timeoutAmount;
Then in loop you can do the check:
if (millis() >= nextUpdate) {
    nextUpdate = millis() + timeoutAmount; // set up the next timeout period
    // do whatever you want to do
}
Also be careful using delay() - it is easy to use for flow control but for any program with more than one thing going on it can lead to confusing and hard to solve problems.
Oh - there are more sophisticated ways of doing timing using the built-in timers on the chip to trigger interrupts but better to get the hang of things first.
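For reference only, here is a minimal sketch of that interrupt-driven approach using the third-party TimerOne library. The library choice and the LED pin are my own assumptions, not something from the original answer.

#include <TimerOne.h>

const int ledPin = 13;        // assumed: the board's built-in LED
volatile bool ledState = false;

// Runs in interrupt context, so keep it short.
void toggleLed() {
  ledState = !ledState;
  digitalWrite(ledPin, ledState);
}

void setup() {
  pinMode(ledPin, OUTPUT);
  Timer1.initialize(500000);          // period in microseconds (0.5 s)
  Timer1.attachInterrupt(toggleLed);  // call toggleLed once per period
}

void loop() {
  // Free to do other work; the timer interrupt handles the blinking.
}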
I've come up with the following sketch after playing around with your help.
The sketch will almost do everything I wanted...
When it times out (T/O) after the 1st, 2nd (inCount = 1) or 3rd (inCount = 2) button press, I'd like it to revert back to the start without having to press it again and loop triggerFlash twice.
Either that or implementing another 'wait and listen' within the time out to move it to the 2nd (inCount = 1) e.t.c. but I think that may cause problems.
I know there's delay used within the flashes but that will be changed to millis(), I'm just trying to get the basic function and understanding.
const int switchPin = 2;  // the number of the input pin
const int BswitchPin = 4; // the number of the input pin
const int outPin = 3;
const int thePin = 5;

long startTime;   // the value returned from millis when the switch is pressed
long escapeTime;  // the value returned from millis when in time out
long duration;    // variable to store the duration
int inCount = 0;
int dupe = 0;

void setup()
{
  pinMode(switchPin, INPUT);
  pinMode(outPin, OUTPUT);
  pinMode(thePin, OUTPUT);
  digitalWrite(switchPin, HIGH); // turn on pull-up resistor
  Serial.begin(9600);
  Serial.println("Go");
  digitalWrite(outPin, HIGH);
}

void loop()
{
  if (inCount == 0 && digitalRead(switchPin) == LOW)
  {
    // here if the switch is pressed
    startTime = millis();
    while (inCount == 0 && digitalRead(switchPin) == LOW)
      ; // wait while the switch is still pressed
    long duration = millis() - startTime;

    if (duration < 4000) {
      Serial.println("1");
      triggerFlash();
      inCount++;
    }
  } // master 1

  if (inCount > 0 && inCount < 4 && digitalRead(switchPin) == LOW)
  {
    // here if the switch is pressed
    startTime = millis();
    while (inCount > 0 && inCount < 4 && digitalRead(switchPin) == LOW)
      ; // wait while the switch is still pressed
    long duration = millis() - startTime;
    delay(500); // flood protection

    if (duration > 4000) { // script an escape here - formerly if (while will loop the condition)
      Serial.println("T/O");
      triggerFlash();
      inCount = 0;
    }
    if (duration < 4000) {
      dupe = inCount + 1;
      Serial.println(dupe);
      inCount++;
    }
  }

  if (inCount >= 4) {
    digitalWrite(thePin, HIGH);
  }
} // loop

void triggerFlash() {
  for (int i = 0; i < 8; i++) {
    digitalWrite(outPin, LOW);
    delay(100);
    digitalWrite(outPin, HIGH);
    delay(100);
  }
}
Any ideas are very appreciated! (edited with improved counting)
The above code is actually WRONG. Please be careful with millis(): it rolls over after some time, since it is only an unsigned long. So if millis() + timeout is near the maximum value of unsigned long and millis() rolls over and starts counting from zero, millis() >= nextUpdate will be false even though the timeout has actually occurred.
The correct way to do this is:
unsigned long start = millis();
unsigned long timeout = MY_TIMEOUT_HERE;
...
// check if timeout occurred
unsigned long now = millis();
unsigned long elapsed = now - start;
if (elapsed > timeout) {
    // do whatever you need to do when timeout occurs
}
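For context, a minimal self-contained sketch of that rollover-safe pattern might look like this; the 6-second timeout mirrors the question, and the serial message is just a placeholder.

const unsigned long TIMEOUT_MS = 6000; // 6 seconds, as in the question

unsigned long start;

void setup() {
  Serial.begin(9600);
  start = millis(); // remember when the wait began
}

void loop() {
  unsigned long elapsed = millis() - start; // unsigned subtraction stays correct across rollover
  if (elapsed > TIMEOUT_MS) {
    Serial.println("TIMEOUT");
    start = millis(); // restart the wait
  }
}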
I just implemented an Arduino library; I hope it helps with your problem. I made it work like setTimeout and setInterval in JavaScript.
You can download it here: GitHub.
This is an example of my code. You can see it in action in Tinkercad.
/*
Author : Meng Inventor
Contact : https://www.facebook.com/MLabpage
15 July 2022
*/
#include "simple_scheduler.h"
#define LED1_PIN 7
#define LED2_PIN 6
#define LED3_PIN 5
#define GREEN_LED_PIN 4
Task_list job_queue;
void setup()
{
  Serial.begin(115200);
  pinMode(LED1_PIN, OUTPUT);
  pinMode(LED2_PIN, OUTPUT);
  pinMode(LED3_PIN, OUTPUT);
  pinMode(GREEN_LED_PIN, OUTPUT);

  // setInterval will run repeatedly for every given time period (ms)
  job_queue.setInterval(blink_green, 1000);
  job_queue.setInterval(led1_on, 2000);
}

unsigned long timer = millis();

void loop()
{
  job_queue.update();
}

void led1_on() {
  digitalWrite(LED1_PIN, HIGH);
  job_queue.setTimeout(led1_off, 250); // setTimeout will run once after the given time period (ms)
}

void led1_off() {
  digitalWrite(LED1_PIN, LOW);
  job_queue.setTimeout(led2_on, 250); // setTimeout will run once after the given time period (ms)
}

void led2_on() {
  digitalWrite(LED2_PIN, HIGH);
  job_queue.setTimeout(led2_off, 250); // setTimeout will run once after the given time period (ms)
}

void led2_off() {
  digitalWrite(LED2_PIN, LOW);
  job_queue.setTimeout(led3_on, 250); // setTimeout will run once after the given time period (ms)
}

void led3_on() {
  digitalWrite(LED3_PIN, HIGH);
  job_queue.setTimeout(led3_off, 250); // setTimeout will run once after the given time period (ms)
}

void led3_off() {
  digitalWrite(LED3_PIN, LOW);
}

void blink_green() {
  digitalWrite(GREEN_LED_PIN, HIGH);
  job_queue.setTimeout(blink_green_off, 500);
}

void blink_green_off() {
  digitalWrite(GREEN_LED_PIN, LOW);
}

CocosDenshion: why is isPlaying always false (I can hear the music)?

I have this:
- (ALuint)ID
{
    return self.soundSource->_sourceId;
}

- (void)play
{
    [self.engine playSound:self.soundSource.soundId
             sourceGroupId:0
                     pitch:self.pitch
                       pan:1.0
                      gain:1.0
                      loop:NO];
}

- (NSTimeInterval)duration
{
    return durationOfSourceId(self.ID);
}

- (NSTimeInterval)offset
{
    return elapsedTimeOfSourceId(self.ID);
}

- (BOOL)isPlaying
{
    NSTimeInterval secondsRemaining = self.duration - self.offset;
    NSLog(@"<%.3f>", self.soundSource.durationInSeconds);
    NSLog(@"<%.3f> - <%.3f> = <%.3f> isPlaying <%i>", self.duration, self.offset, secondsRemaining, self.soundSource.isPlaying);
    return (secondsRemaining > 0.0);
}

#pragma mark - OpenAL addons

static NSTimeInterval elapsedTimeOfSourceId(ALuint sourceID)
{
    float result = 0.0;
    alGetSourcef(sourceID, AL_SEC_OFFSET, &result);
    return result;
}

static NSTimeInterval durationOfSourceId(ALuint sourceID)
{
    // Thanks to http://stackoverflow.com/a/8822347
    ALint bufferID, bufferSize, frequency, bitsPerSample, channels;
    alGetSourcei(sourceID, AL_BUFFER, &bufferID);
    alGetBufferi(bufferID, AL_SIZE, &bufferSize);
    alGetBufferi(bufferID, AL_FREQUENCY, &frequency);
    alGetBufferi(bufferID, AL_CHANNELS, &channels);
    alGetBufferi(bufferID, AL_BITS, &bitsPerSample);
    NSTimeInterval result = ((double)bufferSize) / (frequency * channels * (bitsPerSample / 8));
    return result;
}
Where engine is just an instance of CDSoundEngine. I really want to know when the music will stop. I've been at this for a whole day now, and I'm tired.
It logs:
[1445:707] <1.656>
[1445:707] <1.656> - <0.000> = <1.656> isPlaying <0>
So the OpenAL source ID is right (since I can get the duration).
The CDSoundSource is also right (since I can get the duration from that as well).
I can hear the sound playing.
But AL_SEC_OFFSET is always 0.0, isPlaying is always NO.
Why don't you get the state of the source and check whether it is really playing?
ALint state;
alGetSourcei(source, AL_SOURCE_STATE, &state);
return (state == AL_PLAYING);

Some problems with Arduino protothreads

I'm doing a project that controls two sensors (ultrasonic and infrared), managing them with an Arduino. The IR receiver has a filter system inside, so it receives at a frequency of 36 kHz. I use the SRF04 module to handle the ultrasonic stuff. If I write a program that controls only one sensor, it works, but I have to combine the two signals into one result, so I used protothreads! It doesn't work, though... What's the error?
Here is the code:
#include <pt.h>

int iro = 8, iri = 4, us = 12, distanza, us_vcc = 13, ir_vcc = 7;
long durata;

static struct pt pt1, pt2, pt3;

static int irthread(struct pt *pt) {
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, 1 > 0);
    digitalWrite(iro, HIGH);
    delayMicroseconds(9);
    digitalWrite(iro, LOW);
    delayMicroseconds(9);
  }
  PT_END(pt);
}

static int usthread(struct pt *pt) {
  static unsigned long timer = 0;
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer > 200);
    timer = millis();
    pinMode(us, OUTPUT);
    digitalWrite(us, LOW);
    delayMicroseconds(5);
    digitalWrite(us, HIGH);
    delayMicroseconds(10);
    digitalWrite(us, LOW);
    pinMode(us, INPUT);
    durata = pulseIn(us, HIGH);
    distanza = durata / 58;
  }
  PT_END(pt);
}

static int leggithread(struct pt *pt) {
  static unsigned long timer = 0;
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer > 200);
    timer = millis();
    Serial.print(distanza);
    Serial.print("cm ");
    if (digitalRead(iri) == LOW)
      Serial.println("ir si");
    else
      Serial.println("ir no");
  }
  PT_END(pt);
}

void setup() {
  pinMode(iro, OUTPUT);
  pinMode(iri, INPUT);
  pinMode(us_vcc, OUTPUT);
  digitalWrite(us_vcc, HIGH);
  pinMode(ir_vcc, OUTPUT);
  digitalWrite(ir_vcc, HIGH);
  Serial.begin(9600);
  PT_INIT(&pt1);
  PT_INIT(&pt2);
  PT_INIT(&pt3);
}

void loop() {
  irthread(&pt1);
  usthread(&pt2);
  leggithread(&pt3);
}
Each thread's code works on its own.
Update
I solved my problem (eliminated irthread()) and the code is now like this:
#include <pt.h>

int iro = 8, iri = 4, us = 12, distanza, us_vcc = 13, ir_vcc = 7;
long durata;

static struct pt pt1, pt2;

static int usthread(struct pt *pt) {
  static unsigned long timer = 0;
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer > 200);
    timer = millis();
    pinMode(us, OUTPUT);
    digitalWrite(us, LOW);
    delayMicroseconds(5);
    digitalWrite(us, HIGH);
    delayMicroseconds(10);
    digitalWrite(us, LOW);
    pinMode(us, INPUT);
    durata = pulseIn(us, HIGH);
  }
  PT_END(pt);
}

static int leggithread(struct pt *pt) {
  static unsigned long timer = 0;
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer > 200);
    timer = millis();
    distanza = durata / 58;
    Serial.print(distanza);
    Serial.print("cm ");
    if (digitalRead(iri) == LOW)
      Serial.println("ir si");
    else
      Serial.println("ir no");
  }
  PT_END(pt);
}

void setup() {
  pinMode(iro, OUTPUT);
  tone(iro, 36000);
  pinMode(iri, INPUT);
  pinMode(us_vcc, OUTPUT);
  digitalWrite(us_vcc, HIGH);
  pinMode(ir_vcc, OUTPUT);
  digitalWrite(ir_vcc, HIGH);
  Serial.begin(9600);
  PT_INIT(&pt1);
  PT_INIT(&pt2);
}

void loop() {
  usthread(&pt1);
  leggithread(&pt2);
}
Now the problem is the ultrasonic sensor. If I control it in a single program without protothreads, it can detect objects up to a distance of 3 meters. Now even if I put something at 1 meter, "distanza" is 15 cm at most. What is the error?
In irthread() the second argument to macro PT_WAIT_UNTIL always evaluates to true:
PT_WAIT_UNTIL(pt, 1>0);
Thus the program will be stuck in irthread()'s infinite loop, because what the PT_WAIT_UNTIL macro expands to in this case is something like if (!(1>0)) return 0; and that return 0 is never executed, so the protothread never yields.
It works for usthread() and leggithread() because there the second argument is false for the first 200 milliseconds, and the variables are set up so that it will be false again for another 200 milliseconds after having been true once.
Some background information is in How protothreads really work.
The timers in leggithread() and usthread() interfere with each other. They use the same variable, timer. When time is up, after about 200 milliseconds since the last time, in, say, leggithread(), the variable is reset. This means the condition in the other function, usthread() (which is called right after), will be false even though the condition there was about to become true. Thus at least another 200 milliseconds will pass before usthread() can do its work (outputting a 10 microsecond pulse on pin 12).
There is no guarantee that both functions will get their turn. If you are unlucky, only one of them may ever run its body, since it is a deterministic system (driven from the same clock, the microcontroller's crystal).
It could be random which one is called or there could be some aliasing between several frequencies (for instance, one frequency represented by the number of executed instructions for each loop - that frequency will change when the program is changed).
If you want both leggithread() and usthread() doing work five times per second then they should each have an independent timer, using separate variables, for example, timer1 and timer2.
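A minimal sketch of that suggestion follows. It keeps the shape of the posted code but elides the sensor and printing work; the file-scope timer1/timer2 names are just the example variables mentioned above, not code from the question.

#include <pt.h>

static struct pt pt1, pt2;
static unsigned long timer1 = 0, timer2 = 0; // one independent timer per protothread

static int usthread(struct pt *pt) {
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer1 > 200); // only usthread touches timer1
    timer1 = millis();
    // ... trigger the SRF04 and read durata here ...
  }
  PT_END(pt);
}

static int leggithread(struct pt *pt) {
  PT_BEGIN(pt);
  while (1) {
    PT_WAIT_UNTIL(pt, millis() - timer2 > 200); // only leggithread touches timer2
    timer2 = millis();
    // ... print distanza and the IR reading here ...
  }
  PT_END(pt);
}

void setup() {
  Serial.begin(9600);
  PT_INIT(&pt1);
  PT_INIT(&pt2);
}

void loop() {
  usthread(&pt1);
  leggithread(&pt2);
}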
Why have you put while(1) in your function? Since 1 is always true:
while (1) {
  // The code in it will repeat forever
}
// And the Arduino will never get here
Either put a condition instead of 1 (like while (x > 10) or while (task_finished)), or don't put your code in the while statement.
static int usthread(struct pt *pt) {
  static unsigned long timer = 0;
  PT_BEGIN(pt);
  while (1) {                                  // <<<<<<<<< Fault 1
    PT_WAIT_UNTIL(pt, millis() - timer > 200);
    PT_BEGIN(pt);
    while (1) {                                // <<<<<<<<< Fault 2
      PT_WAIT_UNTIL(pt, millis() - timer > 200);
      timer = millis();
