Should you use tasks and a mutex/semaphore in multiple device drivers that share a bus?

I'm writing a set of device drivers for an RTOS that share a single hardware bus. It seems like a bad idea to have tasks running in each device driver that shares this resource because, hierarchically, they are on the same level, and they have no knowledge of when another driver may try to access the bus. So wouldn't it be cleaner and more robust to synchronize the functions that access the drivers at the application layer, to avoid this bus contention?

Yes, you must implement some form of resource synchronization to prevent tasks from accessing the resource simultaneously and interfering with another task's use of the resource.
Yes, you can use a mutex to protect the resource from multiple concurrent accesses. Every task should get the mutex before using the resource and then release the mutex when finished with the resource. This way a task that needs the resource will block waiting for the mutex if the resource is in use by another task. A mutex is a better tool than a semaphore for this use-case because a mutex is designed such that only the owner can release it.
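As a minimal sketch, assuming a FreeRTOS-style API (xSemaphoreCreateMutex / xSemaphoreTake / xSemaphoreGive); the bus_mutex name, spi_transfer() call, and timeout are hypothetical placeholders for your own bus access layer:

```c
/* Minimal sketch of mutex-guarded bus access, assuming a FreeRTOS-style API.
 * bus_mutex, spi_transfer(), and the register names are hypothetical. */
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "semphr.h"

extern int spi_transfer(uint8_t reg, uint8_t *buf, size_t len);  /* hypothetical low-level access */

static SemaphoreHandle_t bus_mutex;     /* created once at startup */

void bus_init(void)
{
    bus_mutex = xSemaphoreCreateMutex();
}

int driver_a_read_register(uint8_t reg, uint8_t *value)
{
    /* Block until the bus is free (or give up after 100 ms). */
    if (xSemaphoreTake(bus_mutex, pdMS_TO_TICKS(100)) != pdTRUE) {
        return -1;                      /* bus busy: caller decides how to retry */
    }
    int err = spi_transfer(reg, value, 1);
    xSemaphoreGive(bus_mutex);          /* only the owner releases the mutex */
    return err;
}
```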
An alternative design is to create one task that is responsible for all accesses to the resource. Other tasks send requests to the resource task via an inter-task communication mechanism such as a mailbox or queue, so resource usage is serialized by the resource task's queue.
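A sketch of that pattern, again assuming FreeRTOS; the bus_request_t layout and bus_execute() are hypothetical:

```c
/* Sketch of the "one task owns the bus" pattern using a FreeRTOS queue.
 * The request struct and bus_execute() are hypothetical. */
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

typedef struct {
    uint8_t       reg;
    uint8_t      *buffer;
    size_t        len;
    TaskHandle_t  requester;            /* notified when the transfer completes */
} bus_request_t;

extern void bus_execute(const bus_request_t *req);   /* hypothetical transfer */

static QueueHandle_t bus_queue;

static void bus_task(void *arg)
{
    bus_request_t req;
    for (;;) {
        /* Only this task ever touches the hardware, so no lock is needed. */
        if (xQueueReceive(bus_queue, &req, portMAX_DELAY) == pdTRUE) {
            bus_execute(&req);
            xTaskNotifyGive(req.requester);           /* wake the requester */
        }
    }
}

void bus_task_init(void)
{
    bus_queue = xQueueCreate(8, sizeof(bus_request_t));
    xTaskCreate(bus_task, "bus", 256, NULL, tskIDLE_PRIORITY + 1, NULL);
}

int bus_submit(bus_request_t *req)      /* called from any driver task */
{
    req->requester = xTaskGetCurrentTaskHandle();
    if (xQueueSend(bus_queue, req, portMAX_DELAY) != pdTRUE)
        return -1;
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);          /* wait for completion */
    return 0;
}
```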

Related

Specific processes sharing memory via mmap()

My question is simple: how do I share memory among processes, allowing both reads and writes? The main thing is that only specific processes (specific PIDs, for example) should have the ability to share that memory; not all processes should have access.
One option is to use the standard Sys V IPC shared memory. After the call to shmget(), use shmctl() to set the permissions. Give read/write permission to only one group/user and start the processes which are allowed to access the memory as that specific user. The shared memory key and IDs can be found using ipcs, and you need to trust the standard Unix user/group based security to do the job.
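A minimal sketch of that approach; the key, segment size, and the uid/gid values are purely illustrative:

```c
/* Sketch: create a System V shared memory segment and restrict it to one
 * user/group via shmctl(IPC_SET). Key, size, uid, and gid are illustrative. */
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int main(void)
{
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);   /* owner-only at creation */
    if (shmid == -1) { perror("shmget"); return 1; }

    struct shmid_ds ds;
    if (shmctl(shmid, IPC_STAT, &ds) == -1) { perror("shmctl STAT"); return 1; }

    ds.shm_perm.uid  = 1001;    /* illustrative uid of the trusted user  */
    ds.shm_perm.gid  = 1001;    /* illustrative gid of the trusted group */
    ds.shm_perm.mode = 0660;    /* read/write for that user and group only */
    if (shmctl(shmid, IPC_SET, &ds) == -1) { perror("shmctl SET"); return 1; }

    void *p = shmat(shmid, NULL, 0);   /* attach in a process running as that user */
    (void)p;
    return 0;
}
```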
Another option is implementing a shared memory driver, something similar to Android's ashmem. In the driver, you can validate the PID/UID of the caller who is trying to access the memory and allow/deny the request based on filters. You can also implement a sysfs entry to modify these filters; if the filters need to be configurable, you again need to trust the Unix user/group based security. If you are implementing a driver, you will have plenty of security options.
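A rough sketch of the driver-side check, assuming a Linux character driver; allowed_uid is a placeholder (it could be exposed via the sysfs entry mentioned above), and the actual page mapping is elided:

```c
/* Rough sketch of the UID filter inside a custom shared-memory driver's
 * mmap handler (Linux). allowed_uid is a placeholder filter value. */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/cred.h>
#include <linux/uidgid.h>
#include <linux/user_namespace.h>
#include <linux/errno.h>

static uid_t allowed_uid = 1001;        /* placeholder; could be set via sysfs */

static int my_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
{
    uid_t caller = from_kuid(&init_user_ns, current_uid());

    if (caller != allowed_uid)
        return -EPERM;                  /* reject callers that fail the filter */

    /* ...map the driver's backing pages into vma as the driver normally would... */
    return 0;
}
```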

Using kqueue for simple async io

How does one actually use kqueue() for doing simple async r/w's?
Its inception seems to be as a replacement for epoll() and select(), and thus the problem it is trying to solve is scaling to listening on a large number of file descriptors for changes.
However, if I want to do something like: read data from descriptor X, let me know when the data is ready - how does the API support that? Unless there is a complementary API for kicking off non-blocking r/w requests, I don't see a way other than managing a thread pool myself, which defeats the purpose.
Is this simply the wrong tool for the job? Stick with aio?
Aside: I'm not savvy with how modern BSD-based OS internals work - but is kqueue() built on aio or vice versa? I would imagine it would depend on whether the OS I/O subsystem is fundamentally interrupt-driven or polling.
None of the APIs you mention, aside from aio itself, has anything to do with asynchronous IO, as such.
None of select(), poll(), epoll(), or kqueue() are helpful for reading from file systems (or "vnodes"). File descriptors for file system items are always "ready", even if the file system is network-mounted and there is network latency such that a read would actually block for a significant time. Your only choice there to avoid blocking is aio or, on a platform with GCD, dispatch IO.
The use of kqueue() and the like is for other kinds of file descriptors such as sockets, pipes, etc. where the kernel maintains buffers and there's some "event" (like the arrival of a packet or a write to a pipe) that changes when data is available. Of course, kqueue() can also monitor a variety of other input sources, like Mach ports, processes, etc.
(You can use kqueue() for reads of vnodes, but then it only tells you when the file position is not at the end of the file. So, you might use it to be informed when a file has been extended or truncated. It doesn't mean that a read would not block.)
I don't think either kqueue() or aio is built on the other. Why would you think they were?
I used kqueues to adapt a Linux proxy server (based on epoll) to BSD. I set up separate GCD async queues, each using a kqueue to listen on a set of sockets. GCD manages the threads for you.
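A minimal sketch of waiting for readability on one socket with kqueue(); sockfd is assumed to be an already-connected socket, and a real server would of course register many descriptors and loop over the returned events:

```c
/* Minimal sketch: wait until a socket is readable with kqueue(), then read.
 * sockfd is assumed to be an already-connected socket. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>
#include <stdio.h>

int wait_and_read(int sockfd)
{
    int kq = kqueue();
    if (kq == -1) { perror("kqueue"); return -1; }

    struct kevent change;
    EV_SET(&change, sockfd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    struct kevent event;
    /* Register the interest and block for one event in a single call. */
    int n = kevent(kq, &change, 1, &event, 1, NULL);
    if (n <= 0) { perror("kevent"); close(kq); return -1; }

    char buf[4096];
    /* event.data reports how many bytes are available to read. */
    ssize_t got = read(sockfd, buf, sizeof buf);

    close(kq);
    return (int)got;
}
```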

iOS: what is the highest-level networking abstraction that is appropriate for handling bi-directional sync over HTTP?

I'm looking at the Apple networking guidelines that suggest that the user should try to work with the highest level of abstraction possible when dealing with networking.
I'm working on a client-server app, where the server is master, and an iOS device is slave. These communicate over HTTP, establishing a connection that lives for the lifetime of the app's usage session. The app and the server synchronize assets over this connection.
My question is - what level of abstraction is appropriate for implementing bi-directional sync over HTTP? Is it sockets, NSURLConnection, some AFNetworking subclass, input/output streams?
There are a lot of possible good answers to this. I think all I can do is offer one pattern which has worked well for me but it may not apply to your needs and use cases. To restate my comment above "whatever you do will be a tradeoff between responsiveness, power consumption, data consistency, and implementation cost."
The level of abstraction I aim for is a set of service objects which expose an interface in terms of the application's domain models. The rest of the app, primarily objects in the controller layer, should be able to communicate with these services by passing models to methods (e.g. "fetchUserWithId:userId" or "createUser:user") and without any awareness of the urls, paths, or HTTP verbs involved at the network layer.
Those service objects can map domain model operations into paths, HTTP verbs, and possibly request bodies or headers. In most cases I find that the services themselves can then share a lower level service which accepts those values and constructs the actual HTTP request. This provides a single location to configure host names, set global headers, and manage a request queue via NSURLRequest, NSURLSession, AFNetworking, or whatever library you prefer.
I'll include completion blocks on my service object methods so that controllers can be notified of success or failure but try not to use those blocks to pass models back up to the controller layer. Instead I prefer to have controllers monitor Core Data or some other persistence layer and react to changes. That way controllers remain flexible and respond to any update in the models they are concerned with and do not assume that they are aware of all possible sources of changes to those models.
So far none of this addresses how you should check for remote changes to your models. The best option may be to design a system which does not need to do so. What if your client obtained a set of recent changes only when posting data to the server, could it still provide a good user experience? Could the server use push notifications to occasionally notify clients of updates?
If you must check for changes, sockets or long polling are usually more responsive than short polling, but it may be hard for roaming mobile clients to keep those connections open. All of these approaches also tend to keep the client's radios active and consume lots of power in the process.
Without knowing more about the problem I'd default to short polling, but try to design interactions which allow this to be as infrequent as possible (e.g. one check when the app resumes). I also use HTTP features (ETags, If-Modified-Since, or custom content ranges) to limit the size of responses when there are no changes. If you have a good service layer managing network requests, that also gives you a good place to introduce rate limiting: allow controllers to express interest in fetching up-to-date information, but defer to the services to throttle or batch requests based on what the rest of the app is doing (e.g. don't repeat the same request if those models were updated recently, unless the user deliberately triggered the action).

Erlang inter-process lock mechanism (such as flock)

Does Erlang have an inter-process (I mean Linux or Windows process) lock mechanism such as flock?
The usage would be as follows:
an Erlang server starts serving a repository, and puts a file lock (or whatever)
if another OS process (another Erlang server or a command-line Erlang script) interacts with the repo, then the file lock warns about possible conflict
If you mean between Erlang processes, no, it has no inter-process lock mechanism. That is not the Erlang way of controlling access to a shared resource. Generally, if you want to control access to a resource, you have an Erlang process which manages the resource, and all access to the resource goes through this process. This means there is no need for inter-process locks or mutexes to control access. It is also safe, as you can't "cheat" and access the resource anyway, and the managing process can detect if clients die in the middle of a transaction.
In Erlang, you would probably use a different way of solving this. One thing that comes to mind is to keep a single Erlang node() which handles all the repositories. It has a lock_mgr process which does the resource lock management.
When another node or escript wants to run, it can connect to the running Erlang node over distribution and request the locking.
There is the module global, which could fit your needs.
global:set_lock/1,2,3
Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId.

When is it appropriate to mark services as not stoppable?

Services can be marked as "not stoppable":
In C/C++, by omitting the SERVICE_ACCEPT_STOP flag from dwControlsAccepted when calling SetServiceStatus (see SERVICE_STATUS for details; a sketch follows below).
In .NET, by setting ServiceBase.CanStop to false.
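A minimal Win32 sketch of toggling this at runtime; g_status_handle is assumed to have come from RegisterServiceCtrlHandlerEx, and the helper name is my own:

```c
/* Sketch: toggling whether a Win32 service accepts SERVICE_CONTROL_STOP.
 * g_status_handle is assumed to come from RegisterServiceCtrlHandlerEx. */
#include <windows.h>

static SERVICE_STATUS_HANDLE g_status_handle;

static void report_status(DWORD state, BOOL stoppable)
{
    SERVICE_STATUS status = {0};
    status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    status.dwCurrentState     = state;
    /* Omitting SERVICE_ACCEPT_STOP is what marks the service "not stoppable". */
    status.dwControlsAccepted = stoppable ? SERVICE_ACCEPT_STOP : 0;
    SetServiceStatus(g_status_handle, &status);
}

/* e.g. report_status(SERVICE_RUNNING, FALSE) while acquiring data,
 *      report_status(SERVICE_RUNNING, TRUE) once the acquisition completes. */
```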
However, is this good practice? It means that even a user with administrator privileges cannot stop such a service in a clean manner. (It would still be possible to kill it, but this would prevent proper cleanup. And if there are other services running in the same process, they would be killed as well.)
In my specific case, I have a service which controls a scientific instrument. It has been suggested to make the service not stoppable while it is acquiring data from the instrument in order to prevent losing the data being acquired. I should add that:
We grant non-administrator users the right to start/stop the service
We have a program that provides a UI to start/stop this service. This program will issue a warning if the user tries to stop the service during acquisition. However, it is of course also possible to stop it with SC.EXE or the "Services" snap-in; in that case, there is no warning.
Does Microsoft (or anyone else) provide guidance?
My own stance so far is that marking a service not stoppable is a drastic measure that should be used sparingly. Acceptable uses would be:
For brief periods of time, to protect critical operations that should not be interrupted
For services that are critical for the operation or security of the OS
My example would not fall under those criteria, so the service should remain stoppable.
