Execution is transferred to the second process while waiting for an event in Contiki

I have created two processes in Contiki. In the first process, I want to keep calling a function after a certain amount of time; for this I have set an event timer. But while waiting for the timer-expiration event, control of execution is transferred to the second process.
Code:
PROCESS(hello_world_process, "Hello world process");
PROCESS(hello_world_process2, "Hello world process2");
AUTOSTART_PROCESSES(&hello_world_process, &hello_world_process2);

#define TIMEOUT (CLOCK_SECOND / 4)

static struct etimer timer;
static int flag = 0;

void setFlag(void)
{
    int i;

    i = random_rand();
    if (i > 0 && i < 32767)
    {
        printf("Setting flag\n");
        flag = 1;
    }
}
PROCESS_THREAD(hello_world_process, ev, data)
{
    PROCESS_BEGIN();

    printf("1st process started\n");
    do
    {
        etimer_set(&timer, TIMEOUT);
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
        setFlag();
    } while (!flag);
    printf("1st process completed\n");

    PROCESS_END();
}

PROCESS_THREAD(hello_world_process2, ev, data)
{
    printf("2nd process started\n");
    PROCESS_BEGIN();

    printf("2nd process completed\n");

    PROCESS_END();
}
Output:
Contiki-list-1532-g2ca33d4 started with IPV6, RPL
Rime started with address 1.2.3.4.5.6.7.8
MAC nullmac RDC nullrdc NETWORK sicslowpan
Tentative link-local IPv6 address fe80:0000:0000:0000:0302:0304:0506:0708
1st process started
2nd process started
2nd process completed
Setting flag
1st process completed
I want the second process to be executed only after the first process has completed, i.e. while waiting for the event, control should not be transferred to the second process.

Try this:
AUTOSTART_PROCESSES(&hello_world_process);
// ...
PROCESS_THREAD(hello_world_process, ev, data)
{
    PROCESS_BEGIN();

    printf("1st process started\n");
    do
    {
        etimer_set(&timer, TIMEOUT);
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
        setFlag();
    } while (!flag);
    printf("1st process completed\n");

    process_start(&hello_world_process2, NULL); // start the second process

    PROCESS_END();
}
See https://github.com/contiki-os/contiki/wiki/Processes.
Also, don't put any executable code before PROCESS_BEGIN()! You will not get a warning, but the result may not be what you expect, as that code will be executed every time the process is resumed.
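For example, the second process above prints "2nd process started" before PROCESS_BEGIN(), so that line runs on every invocation of the process. A minimal corrected sketch keeps only declarations before the macro:

PROCESS_THREAD(hello_world_process2, ev, data)
{
    /* only variable declarations belong here, before PROCESS_BEGIN() */
    PROCESS_BEGIN();

    printf("2nd process started\n");
    printf("2nd process completed\n");

    PROCESS_END();
}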

Related

Dart - Detect non-completing futures

I have a Dart console app that is calling into a third-party library.
When my console app calls the third-party library, the call to the method returns; however, my CLI app then 'hangs' for 10 seconds or so before finally shutting down.
I suspect that the library has some type of resource that it has created but has not closed/completed.
My best guess is that it is a non-completed future.
So I'm looking for ways to detect resources that haven't been freed.
My first port of call would be looking for a technique to detect futures that haven't been completed but solutions for other resource types would be useful.
I'm currently using runZonedGuarded, passing in a ZoneSpecification to hook calls.
Edit: with some experimentation, I've found I can detect timers and cancel them. In a simple experiment, I've found that a non-cancelled timer will cause the app to hang. If I cancel the timers (during my checkLeaks method) the app will shut down; however, this isn't enough in my real-world app, so I'm still looking for ways to detect other resources.
Here is the experimental code I have:
#! /usr/bin/env dcli
import 'dart:async';

import 'package:dcli/dcli.dart';
import 'package:onepub/src/pub/global_packages.dart';
import 'package:onepub/src/pub/system_cache.dart';
import 'package:onepub/src/version/version.g.dart';
import 'package:pub_semver/pub_semver.dart';

void main(List<String> arguments) async {
  print(orange('OnePub version: $packageVersion '));
  print('');
  print(globals);
  // await globals.repairActivatedPackages();
  await runZonedGuarded(() async {
    Timer(Duration(seconds: 20), () => print('timer done'));
    unawaited(Future.delayed(Duration(seconds: 20)));
    var completer = Completer();
    unawaited(
        Future.delayed(Duration(seconds: 20), () => completer.complete()));
    // await globals.activateHosted(
    //   'dcli_unit_tester',
    //   VersionConstraint.any,
    //   null, // all executables
    //   overwriteBinStubs: true,
    //   url: null, // hostedUrl,
    // );
    print('end activate');
  }, (error, stackTrace) {
    print('Uncaught error: $error');
  }, zoneSpecification: buildZoneSpec());
  print('end');
  checkLeaks();
  // await entrypoint(arguments, CommandSet.ONEPUB, 'onepub');
}

late final SystemCache cache = SystemCache(isOffline: false);

GlobalPackages? _globals;
GlobalPackages get globals => _globals ??= GlobalPackages(cache);

List<void Function()> actions = [];
List<Source<Timer>> timers = [];
int testCounter = 0;
int timerCount = 0;
int periodicCallbacksCount = 0;
int microtasksCount = 0;

ZoneSpecification buildZoneSpec() {
  return ZoneSpecification(
    createTimer: (source, parent, zone, duration, f) {
      timerCount += 1;
      final result = parent.createTimer(zone, duration, f);
      timers.add(Source(result));
      return result;
    },
    createPeriodicTimer: (source, parent, zone, period, f) {
      periodicCallbacksCount += 1;
      final result = parent.createPeriodicTimer(zone, period, f);
      timers.add(Source(result));
      return result;
    },
    scheduleMicrotask: (source, parent, zone, f) {
      microtasksCount += 1;
      actions.add(f);
      final result = parent.scheduleMicrotask(zone, f);
      return result;
    },
  );
}

void checkLeaks() {
  print(actions.length);
  print(timers.length);
  print('testCounter $testCounter');
  print('timerCount $timerCount');
  print('periodicCallbacksCount $periodicCallbacksCount');
  print('microtasksCount $microtasksCount');
  for (var timer in timers) {
    if (timer.source.isActive) {
      print('Active Timer: ${timer.st}');
      timer.source.cancel();
    }
  }
}

class Source<T> {
  Source(this.source) {
    st = StackTrace.current;
  }

  T source;
  late StackTrace st;
}
In my real-world testing I can see that I do have hanging timers caused by HTTP connections. As I originally guessed, this does seem to point to some other problem with the HTTP connections not being closed down correctly.
Active Timer: #0 new Source (file:///home/bsutton/git/onepub/onepub/bin/onepub.dart:105:21)
#1 buildZoneSpec.<anonymous closure> (file:///home/bsutton/git/onepub/onepub/bin/onepub.dart:68:18)
#2 _CustomZone.createTimer (dart:async/zone.dart:1388:19)
#3 new Timer (dart:async/timer.dart:54:10)
#4 _HttpClientConnection.startTimer (dart:_http/http_impl.dart:2320:18)
#5 _ConnectionTarget.returnConnection (dart:_http/http_impl.dart:2381:16)
#6 _HttpClient._returnConnection (dart:_http/http_impl.dart:2800:41)
#7 _HttpClientConnection.send.<anonymous closure>.<anonymous closure>.<anonymous closure> (dart:_http/http_impl.dart:2171:25)
#8 _rootRunUnary (dart:async/zone.dart:1434:47)
In general, it's impossible to find things that don't happen.
There is no way to find all futures in the program.
With a zone, you might be able to intercept all the callbacks being "registered" in the zone, but you can't know which of them must be called. A future can have both value handlers and error handlers, and at most one of them will ever be called. So, just because a callback on a future isn't called, it doesn't mean the future didn't complete.
A future most likely won't keep the isolate alive, though.
An uncompleted future will just be garbage collected if nothing important is hanging on to it.
The most likely culprits for keeping an isolate alive are timers and receive ports.
(The VM internal implementation of timers, and I/O, and sockets, all use receive ports, so it's really just the ports.)
Again, there is no way to find all open ports programmatically.
You need a debugger with memory inspection tools for that.
I'd recommend using the developer tools to look for instances of ReceivePort or RawReceivePort that are not being garbage collected, and see whether they are still alive.
Also be careful with runZonedGuarded.
Since runZonedGuarded introduces a new error zone (because it introduces an uncaught error handler in the new zone), an error future created inside the zone will not be seen to complete outside the zone.
That means that the code:
await runZonedGuarded(() async {
will not work if the body throws. The error of the future is handled by the zone instead of the await, so the await just sees a future which never completes.

FreeRTOS Mutex Unexpected behaviour

#include "main.h"
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"
/* Private function prototypes -----------------------------------------------*/
void SystemClock_Config(void);
static void MX_GPIO_Init(void);
static void MX_USART2_UART_Init(void);
void Task1(void *argument);
void Task2(void *argument);
void PrintMsg(char *data);
/* USER CODE BEGIN PFP */
SemaphoreHandle_t hMutex = NULL;
TaskHandle_t hTask1 = NULL;
TaskHandle_t hTask2 = NULL;
UART_HandleTypeDef huart2;
int main(void) {
/* MCU Configuration--------------------------------------------------------*/
/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();
/* Configure the system clock */
SystemClock_Config();
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_USART2_UART_Init();
/* USER CODE BEGIN 2 */
xTaskCreate(Task1, "Task 1", configMINIMAL_STACK_SIZE, NULL, 2, &hTask1);
xTaskCreate(Task2, "Task 2", configMINIMAL_STACK_SIZE, NULL, 2, &hTask2);
/* USER CODE END 2 */
hMutex = xSemaphoreCreateMutex();
if (hMutex == NULL) {
PrintMsg("Mutex not created\r\n");
} else {
PrintMsg("Mutex created\r\n");
}
vTaskStartScheduler();
/* USER CODE BEGIN WHILE */
while (1) {
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
}
/* USER CODE END 3 */
}
void Task1(void *argument) {
/* USER CODE BEGIN 5 */
/* Infinite loop */
for (;;) {
if (xSemaphoreTake(hMutex,2000) == pdTRUE) {
PrintMsg("Shared Resource Start and Executing Task 1\r\n");
xSemaphoreGive(hMutex);
PrintMsg("Shared Resource End and Executing Task 1\r\n");
vTaskDelay(100);
} else {
PrintMsg("Task 1 Didn't get access to shared resource\r\n");
}
}
/* USER CODE END 5 */
}
void Task2(void *argument) {
/* USER CODE BEGIN 5 */
/* Infinite loop */
for (;;) {
if (xSemaphoreTake(hMutex,2000) == pdTRUE) {
PrintMsg("Shared Resource Start and Executing Task 2\r\n");
//xSemaphoreGive(hMutex);
PrintMsg("Shared Resource End and Executing Task 2\r\n");
vTaskDelay(100);
} else {
PrintMsg("Task 2 Didn't get access to shared resource\r\n");
}
}
/* USER CODE END 5 */
}
void PrintMsg(char *data) {
int i = 0;
while (*(data + i) != '\0') {
HAL_UART_Transmit(&huart2, (uint8_t*) &data[i], 1, 0xFF);
i++;
}
}
HW used: STM32F446RE
When I run this code, I get the following output:
//Output Start
Mutex created
Shared Resource Start and Executing Task 1
Shared Resource End and Executing Task 1
Task 2 Didn't get access to shared resource
Task 1 DiTask 2 Didn't get access to shared resource
Task 1 Didn't get access to shared resource
Task 2 Didn't get access to shared resource
.
.
//Output End
Question 1)
Considering Task1 was scheduled first, as the priorities of Task1 and Task2 are the same:
Task1 is executed correctly.
After this, Task2 is scheduled but was not able to take the mutex, which is why I got the output "Task 2 Didn't get access to shared resource". Why is this so?
Question 2)
In the line "Task 1 DiTask 2 Didn't get access to shared resource", it seems that Task1 was executing but got preempted by Task2, which should not have happened as both tasks have the same priority?
Q1: The likely explanation here is that you're being fooled by competing tasks overwriting each other's buffered UART output. UART is slow, task switching is fast :)
After Task1 prints "Shared Resource Start and Executing Task 1" and releases the mutex for the first time, the scheduler immediately switches to Task2 (maybe because Task1 has used up its time slot?). Task2 grabs the mutex, quickly spits both of its messages into the UART buffer and enters sleep for 100 ticks. The scheduler immediately switches back to Task1, but unfortunately the buffered messages from Task2 haven't hit the UART output yet. The UART buffer gets overwritten by the message "Shared Resource End and Executing Task 1" before a single byte of Task2's messages is printed.
After that the entire process stalls because you haven't released the mutex in Task2.
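For completeness, a sketch of Task2's critical section with the mutex given back again, mirroring what Task1 already does (i.e. simply re-enable the commented-out xSemaphoreGive() call):

if (xSemaphoreTake(hMutex, 2000) == pdTRUE) {
    PrintMsg("Shared Resource Start and Executing Task 2\r\n");
    xSemaphoreGive(hMutex);   /* release the mutex again, as Task1 does */
    PrintMsg("Shared Resource End and Executing Task 2\r\n");
    vTaskDelay(100);
} else {
    PrintMsg("Task 2 Didn't get access to shared resource\r\n");
}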
Q2: Tasks with equal priorities get preempted by each other when the scheduler decides so (usually when they enter sleep or have exhausted their time slot).
I would recommend this tutorial/book, chapter 3 for a great explanation of how tasks work.
USART is a resource, so it can't be shared by two tasks without a sync mechanism, like a mutex. Your usage of PrintMsg() violates this rule.
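For illustration, a minimal sketch of one way to serialize UART access: a dedicated mutex wrapped around PrintMsg(). The uartMutex handle and the SafePrintMsg() wrapper are hypothetical names, not part of the original code:

/* Hypothetical wrapper: serialize all UART output through one mutex.
 * uartMutex must be created with xSemaphoreCreateMutex() before the
 * scheduler is started. */
SemaphoreHandle_t uartMutex;

void SafePrintMsg(char *data) {
    if (xSemaphoreTake(uartMutex, portMAX_DELAY) == pdTRUE) {
        PrintMsg(data);            /* only one task transmits at a time */
        xSemaphoreGive(uartMutex);
    }
}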
The scheduler runs at configTICK_RATE_HZ frequency. If configUSE_TIME_SLICING is 1 (or undefined), the scheduler switches between equal-priority tasks. If you don't want this default behavior, set configUSE_TIME_SLICING to 0.
Keep in mind that your configTICK_RATE_HZ setting of 1000 gives each task at most ~1 ms of runtime, unless there are no other tasks ready to run. Higher-priority tasks can also preempt it before that 1 ms passes. With a 115200 baud rate, you can send ~10 bytes or so in this ~1 ms time period.
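For reference, a sketch of the FreeRTOSConfig.h lines being discussed (the values shown are the ones mentioned above, not necessarily what your project should use):

#define configTICK_RATE_HZ      ( ( TickType_t ) 1000 )  /* 1 ms tick */
#define configUSE_TIME_SLICING  0   /* no round-robin between equal-priority tasks */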

How to run parallel processes in Contiki

I want to create two processes that should be executed in parallel. I am trying with the following code:
PROCESS(hello_world_process, "Hello world process");
PROCESS(hello_world_process2, "Hello world process2");
AUTOSTART_PROCESSES(&hello_world_process, &hello_world_process2);
/*---------------------------------------------------------------------------*/
int i;

void program1(void)
{
    for (i = 0; i < 10; i++)
    {
        printf("%d from 1st process\n", i);
    }
}

void program2(void)
{
    for (i = 0; i < 10; i++)
    {
        printf("%d from IIund process\n", i);
    }
}

PROCESS_THREAD(hello_world_process, ev, data)
{
    PROCESS_BEGIN();

    program1();

    PROCESS_END();
}

PROCESS_THREAD(hello_world_process2, ev, data)
{
    PROCESS_BEGIN();

    program2();

    PROCESS_END();
}
But the second process only starts after the first process has completed.
Output:
Contiki-list-1532-g2ca33d4 started with IPV6, RPL
Rime started with address 1.2.3.4.5.6.7.8
MAC nullmac RDC nullrdc NETWORK sicslowpan
Tentative link-local IPv6 address fe80:0000:0000:0000:0302:0304:0506:0708
0 from 1st process
1 from 1st process
2 from 1st process
3 from 1st process
4 from 1st process
5 from 1st process
6 from 1st process
7 from 1st process
8 from 1st process
9 from 1st process
0 from IIund process
1 from IIund process
2 from IIund process
3 from IIund process
4 from IIund process
5 from IIund process
6 from IIund process
7 from IIund process
8 from IIund process
9 from IIund process
How can I execute both of these processes in parallel?
Contiki processes are based on Adam Dunkels's protothreads: http://dunkels.com/adam/pt/
As such, they have some specifics:
- They are a form of cooperative multithreading. That means the source code must explicitly say at which points to yield execution; this is not preemptive multithreading, where switching between threads happens automatically.
- They are implemented in C, a language that does not have good support for this out of the box, so all protothread operations are limited to the main process function. Functions that are called from the main process function cannot yield; only the main function can.
- The processes share a common stack. The concept of a thread-local (process-local) variable does not exist; static variables must be used instead to preserve values across multiple invocations of a single process.
Your code has all of these problems:
- There are no explicit yields in the code.
- You expect the (implicit) yields to take place from within a subfunction.
- The variable i is in fact global, so it will be shared between the multiple processes - probably not what you want.
Try this code instead:
PROCESS_THREAD(hello_world_process, ev, data)
{
    static int i;

    PROCESS_BEGIN();

    for (i = 0; i < 10; i++)
    {
        printf("%d from 1st process\n", i);
        process_poll(&hello_world_process2);
        PROCESS_YIELD();
    }

    PROCESS_END();
}

PROCESS_THREAD(hello_world_process2, ev, data)
{
    static int i;

    PROCESS_BEGIN();

    for (i = 0; i < 10; i++)
    {
        printf("%d from IIund process\n", i);
        process_poll(&hello_world_process);
        PROCESS_YIELD();
    }

    PROCESS_END();
}

How to make an ExecutorService wait until all threads finish

I use an ExecutorService to launch multiple threads that send requests to an API and get data back. Sometimes I see that some threads haven't finished their job yet, but the service has already killed those threads. How can I force the service to wait until the threads finish their job?
Here is my code:
ExecutorService pool = Executors.newFixedThreadPool(10);
List<Future<List<Book>>> futures = Lists.newArrayList();
final ObjectMapper mapper1 = new ObjectMapper();
for (final Author a : authors) {
    futures.add(pool.submit(new Callable<List<Book>>() {
        @Override
        public List<Book> call() throws Exception {
            String urlStr = "http://localhost/api/book?limit=5000&authorId=" + a.getId();
            List<JsonBook> Jsbooks = mapper1.readValue(
                    new URL(urlStr), BOOK_LIST_TYPE_REFERENCE);
            List<Book> books = Lists.newArrayList();
            for (JsonBook jsonBook : Jsbooks) {
                books.add(jsonBook.toAvro());
            }
            return books;
        }
    }));
}
pool.shutdown();
pool.awaitTermination(3, TimeUnit.MINUTES);
List<Book> bookList = Lists.newArrayList();
for (Future<List<Book>> future : futures) {
    if (!future.isDone()) {
        LogUtil.info("future " + future.toString()); // <-- future not finished yet
        throw new RuntimeException("Future to retrieve books: " + future + " did not complete");
    }
    bookList.addAll(future.get());
}
And I saw some exceptions at the (!future.isDone()) block. How can I make sure every future is done when the executor service shuts down?
I like to use a CountDownLatch.
Set the latch count to the number of items you're iterating over and pass the latch into your callables; then, in your run/call method, have a try/finally block that decrements the latch.
After everything has been enqueued to your executor service, just call your latch's await method, which will block until it's all done. At that time all your callables will be finished, and you can properly shut down your executor service.
This link has an example of how to set it up.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html

Abnormally disconnected TCP sockets and write timeout

I will try to explain the problem in the shortest possible words. I am using C++ Builder 2010.
I am using TIdTCPServer and sending voice packets to a list of connected clients. Everything works OK until a client is disconnected abnormally, for example due to a power failure. I can reproduce a similar disconnect by cutting the Ethernet connection of a connected client.
So now we have a disconnected socket, but as you know it is not yet detected on the server side, so the server will continue to try to send data to that client too.
But when the server tries to write data to that disconnected client, Write() or WriteLn() HANGS there trying to write. It is like it is waiting for some kind of write timeout. This hangs the whole packet distribution process, as a result creating a lag in data transmission to all other clients. After a few seconds a "Socket Connection Closed" exception is raised and data flow continues.
Here is the code
try
{
    EnterCriticalSection(&SlotListenersCriticalSection);
    for(int i = 0; i < SlotListeners->Count; i++)
    {
        try
        {
            //Here the process will HANG for several seconds on a disconnected socket
            ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA");
        }
        catch(Exception &e)
        {
            SlotListeners->Delete(i);
        }
    }
}
__finally
{
    LeaveCriticalSection(&SlotListenersCriticalSection);
}
OK, I already have a keep-alive mechanism which disconnects the socket after n seconds of inactivity. But as you can imagine, this mechanism can't sync exactly with this broadcasting loop, because the broadcasting loop is running almost all the time.
So is there any write timeout I can specify, maybe through the IOHandler or something? I have seen many threads about "detecting a disconnected TCP socket", but my problem is a little different: I need to avoid that hang-up of a few seconds during the write attempt.
So is there any solution?
Or should I consider using a different mechanism for such data broadcasting, for example the broadcasting loop puts the data packet in some kind of FIFO buffer and client threads continuously check for available data and pick and deliver it to themselves? This way if one thread hangs it will not stop/delay the overall distribution thread.
Any ideas please? Thanks for your time and help.
Regards
Jams
There are no write timeouts implemented in Indy. For that, you will have to use the TIdSocketHandle.SetSockOpt() method to set the socket-level timeouts directly.
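For reference, the relevant socket-level option is the send timeout (SO_SNDTIMEO). A minimal sketch in plain Winsock/BSD-socket terms, assuming a raw socket handle named sock; the exact arguments you pass to TIdSocketHandle.SetSockOpt() depend on your Indy version, but they map onto the same setsockopt() mechanism:

/* Sketch only: give the socket a 5-second send timeout.
 * On Windows the value is a DWORD in milliseconds; on POSIX systems
 * SO_SNDTIMEO takes a struct timeval instead. */
DWORD sendTimeoutMs = 5000;
setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO,
           (const char *)&sendTimeoutMs, sizeof(sendTimeoutMs));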
The FIFO buffer is a better option (and a better design in general). For example:
void __fastcall TForm1::IdTCPServer1Connect(TIdContext *AContext)
{
    ...
    AContext->Data = new TIdThreadSafeStringList;
    ...
}

void __fastcall TForm1::IdTCPServer1Disconnect(TIdContext *AContext)
{
    ...
    delete AContext->Data;
    AContext->Data = NULL;
    ...
}

void __fastcall TForm1::IdTCPServer1Execute(TIdContext *AContext)
{
    TIdThreadSafeStringList *Queue = (TIdThreadSafeStringList*) AContext->Data;
    TStringList *Outbound = NULL;

    TStringList *List = Queue->Lock();
    try
    {
        if( List->Count > 0 )
        {
            Outbound = new TStringList;
            Outbound->Assign(List);
            List->Clear();
        }
    }
    __finally
    {
        Queue->Unlock();
    }

    if( Outbound )
    {
        try
        {
            AContext->Connection->IOHandler->Write(Outbound);
        }
        __finally
        {
            delete Outbound;
        }
    }
    ...
}
...
try
{
    EnterCriticalSection(&SlotListenersCriticalSection);
    int i = 0;
    while( i < SlotListeners->Count )
    {
        try
        {
            TIdContext *Ctx = (TIdContext*) SlotListeners->Objects[i];
            TIdThreadSafeStringList *Queue = (TIdThreadSafeStringList*) Ctx->Data;
            Queue->Add("Some DATA");
            ++i;
        }
        catch(const Exception &e)
        {
            SlotListeners->Delete(i);
        }
    }
}
__finally
{
    LeaveCriticalSection(&SlotListenersCriticalSection);
}
