Arduino non-blocking WiFi connection

I'm putting together a system that monitors a few sensors and, based on them, turns a few lights on/off.
For further data analysis we also want to send that data to a central server, so we've added a WiFi shield. Keep in mind that the system should stay fully functional if there's no network. So what I've done is monitor the network status in loop() and connect again if it goes down.
Now, the problem is that WiFi.begin() blocks execution until it either connects or throws an error. This is not acceptable, since during that time the system would be unresponsive.
I've looked into using threads in Arduino, for example here, but then this shows up in the Limitations:
One of the major potential problems with this library is the fact that a single Thread that gets hung will lock the entire system up, since the next Thread can't be called until the current one finishes its loop() function.
So, does anyone have any pointers, ideas, or experience?
Thanks,
Juan

You could create a timer interrupt that, when it expires, kills the thread running the stalled blocking function.
Actually, you may want to use an interrupt, such as via the Timer1 library, to perform the critical updates. That way you don't need to worry about blocking code anywhere.
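Another approach is to keep loop() non-blocking and only attempt a reconnect at a fixed interval. A minimal sketch, assuming the standard Arduino WiFi API (WiFi.begin(), WiFi.status(), WL_CONNECTED); the SSID, password, and retry interval are placeholders:

    #include <WiFi.h>

    const char* ssid = "yourNetwork";      // placeholder
    const char* pass = "yourPassword";     // placeholder
    const unsigned long RETRY_MS = 10000;  // retry every 10 s
    unsigned long lastAttempt = 0;

    void setup() {
      WiFi.begin(ssid, pass);  // kick off the first attempt, don't wait for it
    }

    void loop() {
      // ... sensor reading and light control go here, unconditionally ...

      if (WiFi.status() == WL_CONNECTED) {
        // connected: safe to send data to the central server here
      } else if (millis() - lastAttempt >= RETRY_MS) {
        lastAttempt = millis();  // rate-limit reconnect attempts
        WiFi.begin(ssid, pass);
      }
    }

Note that whether WiFi.begin() itself returns quickly depends on the library: on ESP cores it is non-blocking, while the original WiFi shield library blocks inside begin(), in which case a timer/watchdog approach like the answers above suggest is still needed.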

Related

CAN J1939 device stops responding after communication timeout

I'm a higher-layer guy; I don't know much about CAN bus, J1939, or even particular ECUs, and I don't want to. I just don't like the software solution, so I'd like to ask whether the customer's requirements are legitimate.
If a particular ECU doesn't receive a CAN frame within a 300 ms timeout after power-up, it stops responding to any further frames and must be power cycled. This is information from the customer's technicians; I just have to believe it.
It is possible to power up the ECU after the CAN driver thread is ready, but it would require some extra wiring by end customers.
The software solutions are all bad or worse: running FreeRTOS before important checks, moving the CAN driver code into code shared with other products, or starting the CAN peripheral in the bootloader and leaving it running without software control until the driver starts.
The sensitive part is that we have no explicit requirement in the specification to start the CAN driver within such a short time. The customer says it's part of the J1939 specification.
Can someone confirm or disprove that J1939 allows devices to unrecoverably stop receiving after 300 ms of silence, or requires devices to start transmitting within 300 ms after power-up? Or at least point me to the parts of the J1939 standard that could possibly cover this?
Thank you
If a particular ECU doesn't receive a CAN frame within a 300 ms timeout after power-up, it stops responding to any further frames and must be power cycled. This is information from the customer's technicians; I just have to believe it.
This does of course entirely depend on what task it is performing.
Generally, an ECU, as in an automotive computer in a car/truck etc., is never allowed to hang up/latch up. The normal course of action would be for the ECU to either reboot/reset itself or revert to a fail-safe mode.
But in case of tractors and heavy machinery the normal safe mode is "stop everything".
It is possible to power up the ECU after the CAN driver thread is ready, but it would require some extra wiring by end customers.
I don't know what this is supposed to mean. What is "extra wiring"? Something to keep other nodes in common mode while one is rebooting? Terminating resistors? Some dirty power-up delay circuit?
The software solutions are all bad or worse: running FreeRTOS before important checks, moving the CAN driver code into code shared with other products, or starting the CAN peripheral in the bootloader and leaving it running without software control until the driver starts.
Generally speaking, it's customary to initialize critical hardware like clocks, watchdogs, prescalers, pull resistors, etc. very early on. Initializing hardware peripherals may or may not be critical. It's customary to do this after the CRT (C runtime startup) has executed, at the beginning of main(), and the order of initialization usually matters a lot.
If you have a delay longer than 300ms from power-on reset to the start of main(), something is terribly wrong with the program.
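As a rough illustration of that ordering (all function names here are hypothetical placeholders, since the real code is device-specific):

    // Hypothetical bare-metal startup order; names are illustrative only.
    static void wdt_init(void)   { /* start the watchdog */ }
    static void clock_init(void) { /* system clocks and prescalers */ }
    static void gpio_init(void)  { /* pin directions, pull resistors */ }
    static void can_init(void)   { /* bring the CAN peripheral up */ }

    int main(void)
    {
        wdt_init();    // watchdog first, so a hang during init still resets
        clock_init();
        gpio_init();
        can_init();    // early, to meet tight bus deadlines like the 300 ms one
        // ... remaining, less time-critical initialization ...
        for (;;) {
            // main loop
        }
    }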
The sensitive part is that we have no explicit requirement in the specification to start the CAN driver within such a short time. The customer says it's part of the J1939 specification.
I haven't worked much with J1939 and I don't remember what it says specifically, but 300ms is an eternity in a real-time system! It's not a "short time".
In general, correctly designed mission-/safety-critical CAN control systems in automotive/industrial settings work like this (a sketch of the receive-timeout logic follows the list):
- All data is sent repeatedly at fixed intervals, regardless of whether it has changed or not. Commonly once per 10 ms or once per 100 ms.
- A node which has not received new data will use the previously received data for now.
- There is a timeout from the point when the last valid data was received, after which the receiving node must stop using old data and revert to a fail-safe mode. This time is often relative to how fast the controlled object can move. It's common to have timeouts at some multiple of 100 ms.
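A minimal sketch of that receive-timeout logic, with hypothetical names (on_can_frame(), control_task(), and the payload are placeholders, not from any real stack):

    #include <stdint.h>

    #define DATA_TIMEOUT_MS 300u   // e.g. a multiple of the 100 ms send interval

    static uint32_t last_rx_ms;    // updated from the CAN receive path
    static uint16_t last_value;    // last valid payload, reused between frames

    // Called whenever a valid frame arrives.
    void on_can_frame(uint16_t value, uint32_t now_ms) {
        last_value = value;        // cache the newest data
        last_rx_ms = now_ms;       // restart the timeout window
    }

    // Called periodically by the control loop.
    void control_task(uint32_t now_ms) {
        if (now_ms - last_rx_ms > DATA_TIMEOUT_MS) {
            // Timeout expired: stop using stale data, revert to fail-safe
            // (for heavy machinery typically "stop everything").
        } else {
            // Within the timeout: old-but-valid data may still be used.
        }
    }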
I would say that your customer's requirements are very reasonable; it's nothing out of the ordinary.
My colleague answered that there's no such demand, only the vague 300 ms timeout.

Interrupting register write operation on LPC microcontroller

These days I have solved an issue of occasional false register writes. The problem was that I was writing a lot to the GPIO output register (LPC_GPIO_PORT->SET[1]) in the main loop. In the interrupt routine I was writing to these same registers, and when the interrupt happened just as these registers were being written in the main loop, upon return from the interrupt the changes to those registers were discarded and replaced with the value written to the register before entering the interrupt.
I am using an LPC1549 microcontroller. The register writes in the interrupts are used for BLDC motor control, so you could hear a loud bang from the motor every 10-30 seconds. By reducing the register writes in the main loop, I have completely eliminated the problem. The question is: is it the same with all registers in the microcontroller? I can't find anything describing this problem, which can be a serious issue, and is also hard to find once it starts causing trouble.
This sounds like "The Critical Section Problem". The topic pops up a lot more in literature about operating systems, but it exists on any interrupt-driven platform that has shared resources. It might help your searching to look at this problem.
In your case, you have 2 data accessors, the interrupt handler and the main loop, both accessing the same shared resource (memory-mapped I/O). This can lead to updates being overwritten immediately, based on the timing of the two events, as you describe above.
As for your second question: this can affect any shared resource in a concurrent system.
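On a Cortex-M part like the LPC1549, one common fix is to make the main-loop access atomic, either by briefly masking interrupts around a read-modify-write or by sticking to the write-only SET/CLR registers so no read-modify-write happens at all. A rough sketch, assuming the CMSIS intrinsics and the vendor device header:

    #include <stdint.h>
    #include "LPC15xx.h"  // vendor header defining LPC_GPIO_PORT (name may vary)

    // A read-modify-write (e.g. on the PIN register) can be torn by an ISR
    // that touches the same register, so mask interrupts around it.
    void update_pins_from_main_loop(uint32_t mask) {
        uint32_t primask = __get_PRIMASK();  // remember interrupt state
        __disable_irq();                     // enter critical section
        LPC_GPIO_PORT->PIN[1] |= mask;       // now safe against the ISR
        if (!primask) __enable_irq();        // restore interrupts
    }

    // Pure writes to the SET/CLR registers are single stores and cannot be
    // torn, so they need no locking at all.
    void set_pins_atomically(uint32_t mask) {
        LPC_GPIO_PORT->SET[1] = mask;
    }

If both the ISR and the main loop only ever do plain stores to SET/CLR, each write is atomic by itself; the clobbering described above points to a read-modify-write somewhere in the sequence.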

Using ISR functions in the Contiki Context

I am new to using the Contiki OS and I have a fundamental question: can I safely use a low-level ISR from within a Contiki process?
I am doing this as a quick test and it is functioning well. However, I am concerned that I may be undermining something in the OS that will fail later under different conditions.
In the context of a process which is fired periodically based upon an event timer, I am calling a function which sets up an LED to blink. The LED-blinking function itself is a callback from an ISR fired by a hardware timer on an Atmel SAMD21 MCU.
Can someone please clarify for me what constraints I should be concerned about in this particular case?
Thank you.
Basically you can, but you have to understand the context in which each part of the code runs.
A process has the context of a function; Contiki's scheduler runs in the main body, and timers enqueue process wake-ups in this scheduler. In fact, think of Contiki processes as functions called one after another; notice that those PROCESS_* macros do in fact return from the function.
When you are in an interrupt handler or callback, you are in a different context; here you can have race conditions if you share data with processes, just as in bare-metal firmware, where interrupts and main() are different contexts.
I strongly recommend you read about "protothreads": although they sound like threads, they are not; they are functions running in the main body. (I believe this link will enlighten you: http://dunkels.com/adam/pt/)
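To hand work from an ISR to a process, the usual pattern is process_poll(), which is one of the few Contiki calls that is safe from interrupt context; the process then does the real work in scheduler context. A minimal sketch using Contiki's C API (the ISR callback name is illustrative):

    #include "contiki.h"

    PROCESS(led_process, "LED handler");
    AUTOSTART_PROCESSES(&led_process);

    /* Called from the hardware timer ISR: do the minimum here and
     * defer everything else to process context. */
    void timer_isr_callback(void) {
        process_poll(&led_process);
    }

    PROCESS_THREAD(led_process, ev, data) {
        PROCESS_BEGIN();
        while (1) {
            PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);
            /* toggle the LED here, safely in process context */
        }
        PROCESS_END();
    }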
Regarding the problem you described, I see nothing wrong.
Contiki itself has some hardware abstraction modules, so you won't have to deal with the platform directly from your application code. I have written big firmwares using Contiki and found these abstractions not very usable, since they have limited applications. What I did in that case was to write my own low-level layer to touch the platform, so the application remains platform independent, but, from the OS's perspective, I had application code calling platform registers.

Unordered socket read & close notification using IOCP

Most server frameworks/examples using sockets and I/O completion ports deliver notifications in a way whose purpose I couldn't completely figure out.
Upon read, packets are processed; usually they are reordered to work around thread-scheduling issues that would otherwise process packets out of order, even though IOCP ensures a FIFO queue.
The problem is when a socket is closed, gracefully or by an error. I have seen, in both situations, that (again because of the OS thread scheduler) the close notification may be delivered to the application (e.g. an HTTP server using the framework) "before" the notification of data previously read.
I think that the close notification should be queued in such a way that the application receives it after the previous reads.
Is there some intended purpose in most of the code I saw, or may the behavior I expect be correct depending on the situation?
What you suggest makes sense, and I would imagine that any code that handles a graceful close (a read returning 0 bytes) would do so by processing it after any preceding successful read. Errors coming out of GetQueuedCompletionStatus(), such as connection-reset errors, are harder to integrate into the receive flow, as they occur out of band as far as the receive data is concerned. Your question's a bit vague and depends very much on the code you're using and how you (or the people who wrote that code) want to handle these things. There is no single correct way, IMHO.
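One way to get that sequencing (a sketch under stated assumptions, not from any particular framework; completions for a given connection are assumed to be processed by one thread at a time) is to track outstanding reads per connection and defer teardown until they have drained:

    #include <cstddef>

    // Hypothetical per-connection bookkeeping.
    struct Connection {
        int  pending_reads = 0;   // reads posted but not yet processed
        bool close_seen    = false;
    };

    // Called for each read completion dequeued from the IOCP.
    void on_read_complete(Connection& c, const char* data, std::size_t len) {
        --c.pending_reads;
        if (len > 0) {
            // process 'data' in order here
        } else {
            c.close_seen = true;          // graceful close: 0-byte read
        }
        if (c.close_seen && c.pending_reads == 0) {
            // All earlier reads are processed: now it's safe to tear down.
        }
    }

Error completions (e.g. connection reset) can set close_seen the same way, so the teardown still waits for already-queued data to be consumed.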

Why is my background worker thread blocking the UI thread?

I am working on an app which uploads the native contacts to a server and then gets a response (JSON, a list of contacts that have already installed the app). When the native contact list is large enough, the server response becomes slow and unstable, and the user cannot do anything else, so I put the network request into a background thread. Each time I upload 100 contacts, do some tasks, then the next 100 contacts, until the loop finishes.
But when running, the result is not as expected: the background thread is running and keeps sending requests to the server, yet the UI thread is blocked and I still cannot do anything.
Is this caused by the long loop in the background thread? Although I have 2 threads, they will compete for CPU resources (the test device is an iPod with 1 core, though I think this may not be related to the core count).
Could anyone give me hints on how to handle this kind of scenario? Thanks in advance!
Update:
I have found the root cause. A global variable in the app delegate was set to a wrong value, so the UI behaved strangely. I found this by commenting out all the network request methods. So this problem is not related to multithreading. Sorry for the bother.
I think there needs to be some clarification as to how you are performing the network operations.
1st, NSOperationQueue deals with NSOperations, so you are presumably wrapping your network code in an NSOperation subclass.
2nd, are you using NSURLConnection for your networking code?
3rd, is the blocking part the NSURLConnection, or your delegate callback for the NSURLConnection?
One thing to note is that plain old NSURLConnection is implemented multithreaded under the hood. The object is placed into your main thread's run loop by default (when run from the main thread), but the object is just a wrapper that handles callbacks to the delegate from the lower-level networking code (BSD sockets), which happens on another thread.
You really shouldn't be able to block your UI with NSURLConnection on the main thread, unless A) you are blocking the thread with expensive code in the delegate callback methods, or B) you are overwhelming your run loop with too many simultaneous URL connections (which is where NSOperationQueue's setMaxConcurrentOperationCount: comes into play).
