Specific processes sharing memory via mmap()

My question is simple: how do I share memory among processes, allowing both reads and writes? The key point is that only specific processes (specific PIDs, for example) should be able to share that memory; not every process should have access to it.

One option is to use standard Sys V IPC shared memory. After calling shmget(), use shmctl() to set the permissions. Give read/write permission to only one group/user and start the processes that are allowed access as that specific user. The shared memory keys and IDs can be found using ipcs, and you need to trust the standard Unix user/group based security to do the job.
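A minimal sketch of that approach in C, assuming a fixed key and placeholder uid/gid values:

    /* Minimal sketch: create a SysV shared memory segment and restrict it to
     * one user/group via shmctl(IPC_SET). The key, size, and uid/gid values
     * below are placeholders for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        /* Create a 4 KiB segment, initially owner read/write only. */
        int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
        if (shmid == -1) {
            perror("shmget");
            return EXIT_FAILURE;
        }

        struct shmid_ds ds;
        if (shmctl(shmid, IPC_STAT, &ds) == -1) {
            perror("shmctl(IPC_STAT)");
            return EXIT_FAILURE;
        }

        /* Grant read/write to one dedicated user/group only; processes must
         * run under this uid/gid to be able to attach the segment. */
        ds.shm_perm.uid  = 1000;    /* placeholder owner uid */
        ds.shm_perm.gid  = 1000;    /* placeholder group gid */
        ds.shm_perm.mode = 0660;    /* rw for owner and group, nothing for others */
        if (shmctl(shmid, IPC_SET, &ds) == -1) {
            perror("shmctl(IPC_SET)");
            return EXIT_FAILURE;
        }

        /* An allowed process attaches with shmat() and uses the memory. */
        void *addr = shmat(shmid, NULL, 0);
        if (addr == (void *)-1) {
            perror("shmat");
            return EXIT_FAILURE;
        }
        printf("segment %d attached at %p\n", shmid, addr);
        shmdt(addr);
        return EXIT_SUCCESS;
    }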
Another option is to implement a shared memory driver, something similar to Android's ashmem. In the driver, you can validate the PID/UID of the caller who is trying to access the memory and allow/deny the request based on filters. You can also implement a sysfs entry to modify these filters. If the filters need to be configurable, you again need to trust the Unix user/group based security. If you are implementing a driver, you will have plenty of security options.
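A hedged sketch of the driver idea on Linux, assuming a misc character device named shmfilter and a hard-coded placeholder UID; a real ashmem-style driver would also implement mmap() to hand out the shared pages and expose configurable filters:

    /* Illustrative kernel-module sketch: the open() handler rejects callers
     * whose UID is not on the allow-list. Names and the UID value are
     * placeholders, not part of any existing driver. */
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/cred.h>
    #include <linux/uidgid.h>
    #include <linux/miscdevice.h>

    static kuid_t allowed_uid;

    static int shmfilter_open(struct inode *inode, struct file *filp)
    {
        /* current_uid() gives the credentials of the calling process. */
        if (!uid_eq(current_uid(), allowed_uid))
            return -EPERM;          /* deny processes outside the filter */
        return 0;
    }

    static const struct file_operations shmfilter_fops = {
        .owner = THIS_MODULE,
        .open  = shmfilter_open,
        /* .mmap would map the shared pages for callers that passed the check */
    };

    static struct miscdevice shmfilter_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "shmfilter",
        .fops  = &shmfilter_fops,
    };

    static int __init shmfilter_init(void)
    {
        allowed_uid = KUIDT_INIT(1000);   /* placeholder allowed UID */
        return misc_register(&shmfilter_dev);
    }

    static void __exit shmfilter_exit(void)
    {
        misc_deregister(&shmfilter_dev);
    }

    module_init(shmfilter_init);
    module_exit(shmfilter_exit);
    MODULE_LICENSE("GPL");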


Can I write a file to a specific cluster location?

You know, when an application opens a file and writes to it, the system chooses in which clusters it will be stored. I want to choose them myself! Let me tell you what I really want to do... In fact, I don't necessarily want to write anything. I have an HDD with a BAD range of clusters in the middle, and I want to mark that space as if it were occupied by a file, and eventually set it as a hidden, unmovable, system file (like the page file in Windows) so that it won't be accessed anymore. Any ideas on how to do that?
Later Edit:
I think THIS is my last hope. I just found it, but I need to investigate... Maybe a file could be created anywhere and then relocated to the desired cluster. But that requires writing, and the function may fail if that cluster is bad.
I believe the answer to your specific question: "Can I write a file to a specific cluster location" is, in general, "No".
The reason for that is that the architecture of modern operating systems is layered, so the underlying disk storage is accessed at a lower level than application code can reach, and of course disks can be formatted in different ways, so there will be different kernel-mode drivers that support different formats. Even so, an intelligent disk controller can remap the addresses used by the kernel-mode driver anyway. In short, there are too many levels of possible redirection for you to be sure that your intervention is happening at the correct level.
If you are talking about Windows - which you haven't stated but which appears to be assumed - then you need to be looking at storage drivers in the kernel (see https://learn.microsoft.com/en-us/windows-hardware/drivers/storage/). I think the closest you could reasonably come would be to write your own Installable File System driver (see https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_ifsk/). This is really a 'filter', as it sits in the IO request chain and can intercept and change IO Request Packets (IRPs). Of course this would run in the kernel, not in userspace, and normally it would be written in C; I note your question is tagged for Delphi.
Your IFS driver can sit at different levels in the request chain. I have used this technique to intercept calls to specific file system locations (paths / file names) and alter the IRP so as to virtualise the request - even calling back to user space from the kernel to resolve how the request should be handled. Using the provided examples, implementing basic functionality with an IFS driver is not too involved, because it's a filter and not a complete storage system.
However the very nature of this approach means that another filter can also alter what you are doing in your driver.
You could look at replacing the file system driver that interfaces to the hardware, but I think that's likely to be an excessive task under the circumstances ... and as pointed out already by @fpiette, the disk controller hardware can remap your request anyway.
In the days of MSDOS the access to the hardware was simpler and provided by the BIOS which could be hooked to allow the requests to be intercepted. Modern environments aren't that simple anymore. The IFS approach does allow IO to be hooked, but it does not provide the level of control you need.
EDIT regarding suggestion by the OP of using FSCTL_MOVE_FILE
For a simple environment this may well do what you want; it is designed to support a defragmentation process.
However I still think there's no guarantee that this actually will do what you want.
You will note from the page you have linked to that it states it moves "one or more virtual clusters of a file from one logical cluster to another within the same volume".
This is a control code that's passed to the underlying storage drivers which I have referred to above. What the storage layer does is up to the storage layer and will depend on the underlying technology. With more advanced storage there's no guarantee this actually addresses the physical locations which I believe your question is asking about.
However, that's entirely dependent on the underlying storage system. For some types of storage, relocation by the OS may not be honoured in the same way. As an example, consider an enterprise storage array that has a built-in data-tiering function: without the OS being aware, data will be relocated within the array based on its tiering algorithms. Also consider that there are technologies which allow data to be accessed directly (like NVMe), and that you are working with 'virtual' and 'logical' clusters, not physical locations.
However, you may well find that in a simple case, with support in the underlying drivers and no remapping done outside the OS and kernel, this does what you need.
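By way of illustration, here is a hedged user-mode sketch of that control code; the file path, target LCN, and cluster count are placeholders, the call needs administrative rights, and, as noted above, lower layers may still remap or refuse the move:

    /* Hedged sketch of FSCTL_MOVE_FILE: ask the storage stack to move virtual
     * clusters of a file to a chosen logical cluster on the same volume. */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Handle to the volume that holds the file (requires admin rights). */
        HANDLE hVolume = CreateFileW(L"\\\\.\\C:", GENERIC_READ | GENERIC_WRITE,
                                     FILE_SHARE_READ | FILE_SHARE_WRITE,
                                     NULL, OPEN_EXISTING, 0, NULL);
        /* Handle to the file whose clusters should be relocated. */
        HANDLE hFile = CreateFileW(L"C:\\placeholder.bin", FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
        if (hVolume == INVALID_HANDLE_VALUE || hFile == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        MOVE_FILE_DATA move = {0};
        move.FileHandle = hFile;
        move.StartingVcn.QuadPart = 0;       /* first virtual cluster of the file */
        move.StartingLcn.QuadPart = 123456;  /* placeholder target logical cluster */
        move.ClusterCount = 16;              /* placeholder number of clusters */

        DWORD bytes = 0;
        if (!DeviceIoControl(hVolume, FSCTL_MOVE_FILE, &move, sizeof(move),
                             NULL, 0, &bytes, NULL)) {
            fprintf(stderr, "FSCTL_MOVE_FILE failed: %lu\n", GetLastError());
        }

        CloseHandle(hFile);
        CloseHandle(hVolume);
        return 0;
    }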
Since your problem is to mark bad clusters, you don't need to write any program. Use the CHKDSK command-line utility that Windows provides.
In an elevated command prompt (Run as administrator), run the command:
chkdsk /r c:
The check will be done on the next reboot.
Don't forget to read the documentation.

Device driver inside Intel SGX enclosure?

Is it possible to run a device driver inside an Intel SGX enclave? Or is it impossible for an enclave to access DMA memory and perform memory-mapped I/O?
I already have a device driver that has mapped all of the necessary memory but I don't know if it will be possible to create an enclave that shares these mappings. I am essentially confused about whether enclaves can only access their own private memory or whether they can also access arbitrary physical memory that I would map to them.
The documentation seems to say that the enclave cannot access code at arbitrary locations but I want to know the rules for data and MMIO.
Enclaves are statically linked libraries, so they share the process of the application they are loaded into. Multiple enclaves can be loaded into one process.
An enclave owns one or more page tables; these pages are encrypted and protected from outside access. This is better explained in https://software.intel.com/sites/default/files/332680-002.pdf, page 28.
Enclaves can access the memory of the process they run in, but their own memory can only be accessed by themselves. DMA access attempts are rejected/aborted, and it is not possible to map into an enclave's memory; however, you can map into the memory of the process and access that mapping from within the enclave.
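A hedged sketch of that pattern with the SGX SDK: the untrusted host passes a pointer to an untrusted buffer in the process (for example one the driver filled from the device) into the enclave through an ECALL declared with the [user_check] attribute. The function name and EDL snippet are illustrative, not from the question's code.

    /* Illustrative only. Assumes an EDL declaration roughly like:
     *   trusted {
     *       public uint32_t ecall_read_buffer([user_check] const uint32_t *buf,
     *                                         size_t count);
     *   };
     * [user_check] passes the raw pointer without copying, so buf points into
     * untrusted process memory and the enclave must validate it itself. */
    #include <stddef.h>
    #include <stdint.h>

    uint32_t ecall_read_buffer(const uint32_t *buf, size_t count)
    {
        /* The enclave may read (and write) ordinary, non-enclave process
         * memory; the reverse direction is what the hardware forbids. */
        uint32_t checksum = 0;
        for (size_t i = 0; i < count; i++)
            checksum ^= buf[i];
        return checksum;
    }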
Enclaves are by concept isolated from the outside world; they don't have I/O capabilities apart from the Protected File System Library. So, I don't think it's possible to run a driver inside SGX.

Load Balancing in ASP.NET MVC Web Application. What can/can't be done?

I'm in the middle of developing a web application and have been asked whether it will work behind a load balancer. My initial reaction is yes, since there is no state tracked between requests anywhere in the system. However, there is some application-specific state loaded on app start (mainly configuration settings from the database).
This data is all Read Only. Is it sufficient to rely on the normal cache dependency mechanisms to manage this and invalidate these objects across all the applications in the cluster or would I have to move to a shared cache system like App Fabric to ensure reliability/consistency?
With diagnostics enabled, I've got numerous logging calls using EventSource.Write and an out of process logger picking these up. I assume in this case, I'd need one logger installed on each of the servers in the cluster to pick up the events each one triggers. I'm not too fussed about that, but what is a good way to identify which server in the cluster serviced the request?
If you initialize the data on each server separately and it is read-only, there's no problem. The separate applications will each have a copy.
Yes, you'd need a logger on each instance. In order to identify the server, you could include the server's IP in the log. That way you can track the server (provided you have static IPs, which I assume you do).

When is it appropriate to mark services as not stoppable?

Services can be marked as "not stoppable":
In C/C++, by omitting the SERVICE_ACCEPT_STOP flag from dwControlsAccepted when calling SetServiceStatus (see SERVICE_STATUS for details), as sketched below.
In .NET, by setting ServiceBase.CanStop to false.
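For the Win32 case, a minimal sketch, assuming the status handle was obtained earlier from RegisterServiceCtrlHandler; the other fields shown are illustrative:

    #include <windows.h>

    static SERVICE_STATUS_HANDLE g_statusHandle;  /* from RegisterServiceCtrlHandler */

    static void ReportRunningNotStoppable(void)
    {
        SERVICE_STATUS status = {0};
        status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
        status.dwCurrentState     = SERVICE_RUNNING;
        /* Omitting SERVICE_ACCEPT_STOP here is what makes the service
         * "not stoppable"; report the status again with the flag set to
         * accept stop requests once the critical operation has finished. */
        status.dwControlsAccepted = SERVICE_ACCEPT_SHUTDOWN;
        SetServiceStatus(g_statusHandle, &status);
    }

Because the flag can be added back on a later SetServiceStatus call, this also supports blocking stops only for brief periods, as discussed below.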
However, is this good practice? It means that even a user with administrator privileges cannot stop such a service in a clean manner. (It would still be possible to kill it, but this would prevent proper cleanup. And if there are other services running in the same process, they would be killed as well.)
In my specific case, I have a service which controls a scientific instrument. It has been suggested to make the service not stoppable while it is acquiring data from the instrument in order to prevent losing the data being acquired. I should add that:
We grant non-administrator users the right to start/stop the service
We have a program that provides a UI to start/stop this service. This program will issue a warning if the user tries to stop the service during acquisition. However, it is of course also possible to stop it with SC.EXE or the "Services" snap-in, in which case there is no warning.
Does Microsoft (or anyone else) provide guidance?
My own stance so far is that marking a service not stoppable is a drastic measure that should be used sparingly. Acceptable uses would be:
For brief periods of time, to protect critical operations that should not be interrupted
For services that are critical for the operation or security of the OS
My example would not fall under those criteria, so it should be stoppable.

Sandboxing user code with Erlang

As far as I know Erlang provides advanced features for error handling and isolation of processes.
I'm building a system that allows users to submit their code to be executed in a shared server environment, and I need to make it safe.
Requirements are:
limit CPU and Memory usage individually for each user-process.
forbid user-process to communicate with other processes (except some processes specially designed for such purpose).
forbid access to all system resources (shell, file system, ...).
terminate user-process in case of errors or high resource consumption.
Is it possible to do all this with Erlang and keep it performance-efficient?
In general, Erlang doesn't provide means to sandbox code which a user can inject. You can try writing your own piece of protection code, but it is rather hard.
A better choice would probably be a language like "safe haskell":
http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/safe-haskell.html
which is specifically built to do this kind of thing.
The isolation provided by Erlang is not intended to protect against malicious modules being injected. In fact, there is no such protection in the distributed case either. As soon as two machines are connected, there is no limit to what you can do to the other machine.
There has been work done on Safe Erlang in the past and you can find several papers about it.
The ErlHive project addresses the problem in an interesting way.
