Is the Contiki scheduler preemptive?

Is the Contiki scheduler preemptive? TinyOS is not; there's Nano-RK, but I'm not sure what state its development is in.

Contiki supports preemptive threads. Refer to:
https://github.com/contiki-os/contiki/wiki/Multithreading

Contiki, an OS for the IoT, supports preemptive multi-threading. In Contiki, multi-threading is implemented as a library on top of the event-driven kernel, which supports dynamic loading and replacement of individual services. The library can be linked with applications that require multi-threading. The Contiki multi-threading library is divided into two parts: (i) a platform-independent part that interfaces with the event kernel, and (ii) a platform-specific part that implements the stack switching and preemption primitives.

For its so-called multi-threading of ordinary processes, Contiki uses protothreads. Protothreads are designed for severely memory-constrained devices because they are stack-less and lightweight. Their main features are: very small memory overhead (only two bytes per protothread), no extra stack per thread, and high portability (they are written entirely in C, with no architecture-specific assembly code).

Contiki does not allow interrupt handlers to post new events, and no process synchronization is provided in Contiki. An interrupt handler (when required) and the reader functions must therefore be synchronized to avoid race conditions. Please also have a look at the Ring Buffer Library: https://github.com/contiki-os/contiki/wiki/Libraries
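To make the distinction concrete, here is a minimal sketch of an ordinary Contiki process, which is a protothread scheduled cooperatively by the event kernel (this uses the standard PROCESS/etimer API; the process name and its body are made up for illustration). Because protothreads keep no stack, any local variable that must survive a wait point has to be declared static:

```c
#include "contiki.h"
#include <stdio.h>

/* A hypothetical periodic process.  Each Contiki process is a protothread:
 * it costs only a couple of bytes of state, but its stack is not preserved
 * across wait points, so persistent locals must be static. */
PROCESS(tick_process, "Periodic example process");
AUTOSTART_PROCESSES(&tick_process);

PROCESS_THREAD(tick_process, ev, data)
{
  static struct etimer timer;   /* static: survives the yields below */

  PROCESS_BEGIN();

  while(1) {
    etimer_set(&timer, CLOCK_SECOND);
    /* Cooperatively yield back to the event kernel until the timer fires;
     * nothing preempts this process, it must reach a wait point itself. */
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
    printf("tick\n");
  }

  PROCESS_END();
}
```

Preemption only comes into play when code is run as a thread through the separate multi-threading library linked on top of the event kernel.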

It might be worth noting that the port for the most widely used sensor node, the TelosB, does not support preemption.

Related

When to write a Custom Kernel Module

Problem Statement:
I have a very high-bandwidth data link that is UDP-based. The source of this data is not configurable and sends a stream of UDP datagrams. We have code that uses the standard methods for receiving data on the UDP socket, and it works adequately. I wanted to know:
Does there exist an interface to extract multiple UDP datagrams at a time, to improve efficiency?
If one doesn't exist, does it make sense to create a kernel module to provide that capability?
I am a novice, and I wanted to understand the thought process that should happen before deciding that writing your own kernel module is appropriate. I know that such a surgical procedure isn't meant to be done lightly, but there must be a set of criteria where that action is prudent. Maybe not in my case, but in general.
HW / Kernel Module Perspective
A typical network adapter these days is capable of distributing received packets across multiple hardware Rx queues, thus letting the host run multiple software Rx queues bound to different CPU cores, reading out packets in parallel. From a single HW/SW queue perspective, the host may poll it for new packets (see Linux NAPI), with each poll ideally yielding a batch of packets; alternatively, the host may still use an interrupt-driven approach for Rx signalling, with interrupt coalescing turned on for improved efficiency.
Existing NIC drivers in the Linux kernel strive to stick with the most performant techniques, and the kernel itself should be able to leverage all of that properly.
Userland / Application Perspective
There's the PACKET_MMAP interface provided by the Linux kernel for improved Rx/Tx efficiency on the application side. Long story short, an application can set up a memory buffer shared between kernel- and userspace and read incoming packets from it, ideally in batches, or blocks, thus avoiding the costly kernel-to-userspace copies and context switches that are so customary when using regular methods.
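To give a flavour of what that looks like, here is a rough sketch of an Rx ring mapped with PACKET_MMAP (TPACKET_V2); the ring sizes are arbitrary, and error handling and binding the socket to a specific interface are omitted for brevity:

```c
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <poll.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    int version = TPACKET_V2;
    setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version));

    struct tpacket_req req = {
        .tp_block_size = 4096,
        .tp_block_nr   = 64,
        .tp_frame_size = 2048,
        .tp_frame_nr   = 128,   /* block_size / frame_size * block_nr */
    };
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    size_t map_len = (size_t)req.tp_block_size * req.tp_block_nr;
    unsigned char *ring = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);

    unsigned int frame = 0;
    for (;;) {
        struct tpacket2_hdr *hdr =
            (struct tpacket2_hdr *)(ring + (size_t)frame * req.tp_frame_size);

        if (!(hdr->tp_status & TP_STATUS_USER)) {
            /* Nothing ready in this slot: wait for the kernel to fill it. */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
            continue;
        }

        /* Frame data sits in the shared ring: no copy into userspace. */
        unsigned char *pkt = (unsigned char *)hdr + hdr->tp_mac;
        printf("got %u bytes\n", hdr->tp_len);
        (void)pkt;

        /* Hand the slot back to the kernel and advance to the next frame. */
        hdr->tp_status = TP_STATUS_KERNEL;
        frame = (frame + 1) % req.tp_frame_nr;
    }
}
```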
For added efficiency, the application may have multiple sockets bound to the NIC in separate threads / processes and demand that packet reception be load balanced across these sockets (see AF_PACKET fanout mode description).
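Joining a fanout group is a single setsockopt() per socket. A sketch, assuming one AF_PACKET socket per worker thread (the group id is arbitrary):

```c
#include <linux/if_packet.h>
#include <sys/socket.h>

/* Join an AF_PACKET socket to fanout group `group_id` so the kernel
 * load-balances received packets across all sockets in the group.
 * PACKET_FANOUT_HASH keeps each flow on the same socket; other modes
 * (round robin, CPU, ...) exist as well. */
static int join_fanout_group(int fd, int group_id)
{
    int arg = group_id | (PACKET_FANOUT_HASH << 16);
    return setsockopt(fd, SOL_PACKET, PACKET_FANOUT, &arg, sizeof(arg));
}
```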
DPDK Perspective
DPDK is a kernel-bypass framework that allows an application to seize full control of a network adapter by means of a vendor-specific poll-mode driver, or PMD, which effectively runs in userspace as part of the application and by its very nature needs no kernel-to-userspace copies, context switches or, most likely, locking. Multi-queue receive operation, load balancing (round robin, RSS, you name it) and more cutting-edge offloads are likely to be available, too (it's vendor specific).
Summary
The short of it is that, given that multiple network acceleration techniques already exist, one need never write their own kernel module to solve the problem in question. By the looks of it, your application, which, as you say, uses standard methods, is not aware of the PACKET_MMAP technique, so I'd be tempted to suggest looking at that one closely. The DPDK approach might require that the application be re-implemented from scratch, so I would first go for PACKET_MMAP as the low-hanging fruit.

Using kqueue for simple async io

How does one actually use kqueue() for doing simple async r/w's?
Its inception seems to be as a replacement for epoll() and select(), and thus the problem it is trying to solve is scaling to listening on a large number of file descriptors for changes.
However, if I want to do something like: read data from descriptor X and let me know when the data is ready, how does the API support that? Unless there is a complementary API for kicking off non-blocking r/w requests, I don't see a way other than managing a thread pool myself, which defeats the purpose.
Is this simply the wrong tool for the job? Stick with aio?
Aside: I'm not savvy with how modern BSD-based OS internals work, but is kqueue() built on aio or vice versa? I would imagine it would depend on whether the OS I/O subsystem is fundamentally interrupt-driven or polling.
None of the APIs you mention, aside from aio itself, has anything to do with asynchronous IO, as such.
None of select(), poll(), epoll(), or kqueue() are helpful for reading from file systems (or "vnodes"). File descriptors for file system items are always "ready", even if the file system is network-mounted and there is network latency such that a read would actually block for a significant time. Your only choice there to avoid blocking is aio or, on a platform with GCD, dispatch IO.
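For completeness, a sketch of what the POSIX aio route looks like for a regular file; the file name is made up, and on Linux this typically needs -lrt at link time:

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);   /* hypothetical file name */
    char buf[4096];

    /* Describe the read: which fd, where to put the data, how much, offset. */
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                  /* kicks off the read, returns at once */

    while (aio_error(&cb) == EINPROGRESS) {
        /* Do other work here instead of blocking on the disk. */
    }

    ssize_t got = aio_return(&cb);  /* final byte count, or -1 on error */
    printf("read %zd bytes\n", got);

    close(fd);
    return 0;
}
```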
The use of kqueue() and the like is for other kinds of file descriptors such as sockets, pipes, etc. where the kernel maintains buffers and there's some "event" (like the arrival of a packet or a write to a pipe) that changes when data is available. Of course, kqueue() can also monitor a variety of other input sources, like Mach ports, processes, etc.
(You can use kqueue() for reads of vnodes, but then it only tells you when the file position is not at the end of the file. So, you might use it to be informed when a file has been extended or truncated. It doesn't mean that a read would not block.)
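As a sketch of that readiness model, assuming sock_fd is a socket created and connected elsewhere: kqueue() tells you when a read would not block, it does not perform the I/O for you.

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

void wait_and_read(int sock_fd)
{
    int kq = kqueue();

    /* Register interest in "readable" events on the socket. */
    struct kevent change;
    EV_SET(&change, sock_fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);

    for (;;) {
        /* Block until at least one registered event fires. */
        struct kevent event;
        int n = kevent(kq, NULL, 0, &event, 1, NULL);
        if (n <= 0)
            break;

        if (event.filter == EVFILT_READ) {
            /* For sockets, event.data is the number of bytes available,
             * so this read will not block. */
            char buf[4096];
            ssize_t got = read(sock_fd, buf, sizeof(buf));
            printf("read %zd bytes\n", got);
        }
    }

    close(kq);
}
```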
I don't think either kqueue() or aio is built on the other. Why would you think they were?
I used kqueues to adapt a Linux proxy server (based on epoll) to BSD. I set up separate GCD async queues, each using a kqueue to listen on a set of sockets. GCD manages the threads for you.

NIF to wrap my multi-threaded C++ code

I have C++ code that implements a special protocol over the serial port. The code is multi-threaded, internally polls the serial port, and does its own cyclic processing. I would like to call this driver from Erlang and also receive events from it. My concern is that this C++ code is multi-threaded and also stateful, meaning that when I call a certain function on the driver, it caches things internally that will be used/required on subsequent calls. My questions are:
1. Does a NIF run in the same OS process as the rest of my Erlang processes, or is it launched in a separate OS process?
2. Does it make sense to wrap this multi-threaded, stateful C++ code with a NIF?
3. If a NIF is not the right approach, what is a better way to make Erlang talk back and forth with this C++ code? I would also prefer my C++ code to be inside the same OS process as the rest of my Erlang processes, and it looks like linked-in drivers are an option, but I'm not sure whether the multi-threaded nature of my C++ code fits that model. Plus, I hear they can mess up the Erlang scheduler?
Unlike ports, NIFs are run within the Erlang VM process, similar to drivers. Because of that, any NIF crash will bring the VM down as well. And, answering your last question in advance: NIFs, like drivers, may block your scheduler.
That depends on the functionality you are implementing in this C++ code. Because of answer 1), you probably want to avoid concurrency in the C++ part, since it's a potential source of errors. It's not always possible, of course. But if you are implementing, say, some worker pool, go ahead and write single-threaded code, spawning it as many times as you need.
Drivers can be multi-threaded too, with the same potential problems and quite similar performance (well, still slightly faster than NIFs). If you are not completely sure about your C++ code's stability, use it as an Erlang port.
Speaking of the difference between NIFs and drivers: the former are natively synchronous, while the latter can be asynchronous (which can be a really huge advantage if you don't want to receive answers for most of the commands). Drivers are easier to mess up and harder to implement (but once you grasp the main patterns and problems, they actually seem okay).
Here's a good start for drivers:
http://www.erlang.org/doc/apps/erts/driver.html
And something similar (behold the difference in complexity) for NIFs:
http://www.erlang.org/doc/tutorial/nif.html
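As a point of reference, here is roughly what the C side of a NIF looks like; the module name serial_nif and the function poll_port are made up for illustration, and the dirty-scheduler flag assumes an OTP release that supports dirty NIFs:

```c
/* Hypothetical minimal NIF module "serial_nif".  It runs inside the Erlang
 * VM's OS process, so a crash here takes the whole VM down, and a long-
 * running call would stall a scheduler thread unless flagged as dirty. */
#include "erl_nif.h"

static ERL_NIF_TERM poll_port(ErlNifEnv* env, int argc,
                              const ERL_NIF_TERM argv[])
{
    (void)argc; (void)argv;
    /* Call into the stateful, multi-threaded C++ driver here.  Keep the
     * work short, or rely on the dirty flag set below. */
    return enif_make_atom(env, "ok");
}

static ErlNifFunc nif_funcs[] = {
    /* name, arity, C function, flags.  ERL_NIF_DIRTY_JOB_IO_BOUND moves the
     * call onto a dirty scheduler so it cannot block normal schedulers. */
    {"poll_port", 0, poll_port, ERL_NIF_DIRTY_JOB_IO_BOUND}
};

ERL_NIF_INIT(serial_nif, nif_funcs, NULL, NULL, NULL, NULL)
```

On the Erlang side, a module of the same name would load the shared object with erlang:load_nif/2 and export poll_port/0 as an ordinary function.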

0MQ with green threads?

I've grown to like Erlang, and it's a great (cough) architectural fit for my problem. Meanwhile, I still like to imagine that I can kludge Erlang processes and asynchronous message passing in Python (I am currently in therapy to rid myself of this obsession).

During a recent binge I came across 0MQ, and I like its messaging features. These may be self-evident to an Erlang/OTP expert, but I'm just a humble Python programmer (my shrink will no doubt get to read this clever argument). The 0MQ user guide states that it uses native OS threads, not virtual "green" threads.
Is there a way to make 0MQ work with say eventlet/gevent?
Or, should I avoid the green-eyed monster and stick to a single Python app thread, with non-blocking I/O handled by 0MQ's message queuing & its own (skilled) use of native threads?
Or, check out of rehab & go back to erlang?
Responding to a stale thread because I am kind of in the same boat. Thought I would share my thoughts.
1: It looks like all the heavy lifting has already been done: https://github.com/traviscline/gevent-zeromq has integrated the gevent loop with a non-blocking zmq socket, and even has some CPython speedups. It also seems to be (at the time of this writing) reasonably well maintained.
2: It depends; if you are writing something that can use zmq without a ton of external event logic, then you should just use zmq. If, OTOH, you need to integrate with other protocols, you may want to use gevent (or perhaps twisted, although it has no workable zmq support at all right now). My projects generally require multiple protocols (i.e. private queue manager, public http, public https, private memcache, etc.), so I am investigating switching to gevent for quicker project turnaround than my current favorite: twisted.
3: You may want to skip zmq entirely and integrate with an existing Erlang-based solution like RabbitMQ; the performance advantages of zmq may not be as important as you think, and then you have an Erlang message queue that easily integrates with Python through existing libraries.
Also see: the Message Queue comparison at the Second Life wiki.
ZeroMQ now works with Eventlet:
https://lists.secondlife.com/pipermail/eventletdev/2010-October/000907.html

Any successful profibus communications from .NET?

Has anyone successfully talked profibus from a .NET application?
If you did, what device/card did you use to accomplish this, what was the application, and did you use any kind of preexisting or available code?
We've not used Profibus, but we have used DeviceNet (a CAN-based protocol), Ethernet/IP and ControlNet, which all have similar challenges.
We've been doing this since the late 1990s and therefore rely mainly on our own code written against off-the-shelf hardware. The companies that have shown longevity during that period, as far as I remember, are:
AnyBus (HMS, www.anybus.com): we've recently started using their gateway products, as we can place fieldbus interfaces close to the hardware and then communicate over normal Ethernet (usually using Ethernet/IP, www.odva.org). This has the advantage of separating the hardware and the PC using only a network cable. The Ethernet/IP .NET classes were written by ourselves, as not much was on the market at the time; I'm sure a quick Google search would find suitable class libraries.
SST (www.mysst.com): they have had fieldbus interfaces for more than a decade. The last SST card we used for DeviceNet still only had VB6 sample code. A good selection of fieldbus support and different form factors, e.g. PC104, PCI, PCMCIA.
Beckhoff/Wago (www.beckhoff.com, www.wago.com): we typically use Beckhoff for the I/O more than for the interface cards, but again, a company that has been around a long time. They also have products that expose the I/O via OPC (another way for you to get I/O information without communicating directly with the hardware/device drivers).
I suggest not using OPC interfaces to the hardware directly (it's OK for communication along PC (.NET) -> PLC -> Profibus), as you need to ensure that the control system responds to loss of control from your .NET application. I'm assuming that you need a Profibus master here (not a slave), so as long as your control system is intrinsically fail-safe, loss of communication should mean the control system enters an "Idle" state and therefore most of the I/O will return to the fail-safe state.
We also try to ensure that we do not put safety-related code in .NET. Most of our .NET code is user interface on top of a PLC, but in some places we do control the fieldbus directly and ensure that hardware interlocks prevent unsafe operation, either using safety switches/relays or a small PLC whose only task is interlocking. And above all, make the system fail-safe! Loss of comms from the .NET code should shut down the automation to the fail-safe state.
We have used Steeplechase to connect over Profibus to our automated pick system.
http://www.phoenixcontact.com/automation/32131_31909.htm
Try this: http://libnodave.sourceforge.net
