int core_depth = hwloc_get_type_depth(topology, HWLOC_OBJ_CORE);
hwloc_obj_t obj = hwloc_get_obj_by_depth(topology, core_depth, MyRank % 4);
hwloc_cpuset_t cpuset = hwloc_bitmap_dup(obj->cpuset);
hwloc_set_cpubind(topology, cpuset, 0);
hwloc_bitmap_free(cpuset);
Is there any way in hwloc to know whether a thread is already bound to that cpuset?
Reason why I need to know this:
Suppose I have a quad-core machine, but I launch 8 processes at runtime, so two processes end up bound to each core. However, I want to bind a process to a core only when the core is free. So is there any way I could know whether a core already has a process bound to it?
It seems that the only way to do so is to enumerate all processes and check whether any of them are bound to the specified core. You can get an idea of how to do that by examining the source code of the hwloc-ps utility. It reads through the /proc filesystem, extracts the process PIDs from there, and then uses hwloc_get_proc_cpubind() to obtain each binding mask. This should work on Linux and Solaris, as well as on *BSD systems with a mounted /proc. On Windows one should use the system-specific API from the Tool Help library to obtain the list of PIDs. OS X does not support processor affinity.
I'm looking for a way of letting many threads have read-access to different snapshots of a large file (10 GB) while another thread is writing changes randomly to the same file, without any of the changed bytes being visible to the readers and without making copies of the data.
I'm running this in a .NET 6 process with Docker on Linux (Synology DSM 7.1 to be specific) which is using Btrfs. Both Btrfs and Docker volumes seem to be using "copy-on-write" and use "snapshots" in various ways. I'm no expert but I think copy-on-write is the term to describe what I'm looking for here.
But the problem now is how to leverage this feature from within my .NET 6 process? My dream scenario would be something like this:
var read = File.Open("/file.txt", FileMode.Snapshot);
which of course doesn't work. Do you have any suggestions for how I could approach this use case using Btrfs, Docker volumes, or something else?
You know, when an application opens a file and writes to it, the system chooses which clusters the data will be stored in. I want to choose myself! Let me tell you what I really want to do... In fact, I don't necessarily want to write anything. I have an HDD with a BAD range of clusters in the middle, and I want to mark that space as if it were occupied by a file, and eventually set it as a hidden-unmoveable-system one (like the page file in Windows) so that it won't be accessed anymore. Any ideas on how to do that?
Later Edit:
I think THIS is my last hope. I just found it, but I need to investigate... Maybe a file could be created anywhere and then relocated to the desired cluster. But that requires writing, and the function may fail if that cluster is bad.
I believe the answer to your specific question: "Can I write a file to a specific cluster location" is, in general, "No".
The reason for that is that the architecture of modern operating systems is layered so that the underlying disk store is accessed at a lower level than you can access, and of course disks can be formatted in different ways so there will be different kernel mode drivers that support different formats. Even so, an intelligent disk controller can remap the addresses used by the kernel mode driver anyway. In short there are too many levels of possible redirection for you to be sure that your intervention is happening at the correct level.
If you are talking about Windows - which you haven't stated but which appears to be assumed - then you need to be looking at storage drivers in the kernel (see https://learn.microsoft.com/en-us/windows-hardware/drivers/storage/). I think the closest you could reasonably come would be to write your own Installable File System driver (see https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/_ifsk/). This is really a 'filter', as it sits in the IO request chain and can intercept and change IO Request Packets (IRPs). Of course this would run in the kernel, not in userspace, and normally it would be written in C - and I note your question is tagged for Delphi.
Your IFS driver can sit at different levels in the request chain. I have used this technique to intercept calls to specific file system locations (paths / file names) and alter the IRP so as to virtualise the request - even calling back to user space from the kernel to resolve how the request should be handled. Using the provided examples, implementing basic functionality with an IFS driver is not too involved, because it's a filter and not a complete storage system.
However the very nature of this approach means that another filter can also alter what you are doing in your driver.
You could look at replacing the file system driver that interfaces to the hardware, but I think that's likely to be an excessive task under the circumstances ... and as pointed out already by @fpiette, the disk controller hardware can remap your request anyway.
In the days of MSDOS the access to the hardware was simpler and provided by the BIOS which could be hooked to allow the requests to be intercepted. Modern environments aren't that simple anymore. The IFS approach does allow IO to be hooked, but it does not provide the level of control you need.
EDIT regarding suggestion by the OP of using FSCTL_MOVE_FILE
For a simple environment this may well do what you want; it is designed to support a defragmentation process.
However I still think there's no guarantee that this actually will do what you want.
You will note from the page you have linked to that it states it moves "one or more virtual clusters of a file from one logical cluster to another within the same volume".
This is a control code that's passed to the underlying storage drivers which I have referred to above. What the storage layer does with it is up to the storage layer and will depend on the underlying technology. With more advanced storage there's no guarantee this actually addresses the physical locations which I believe your question is asking about.
However that's entirely dependent on the underlying storage system. For some types of storage relocation by the OS may not be honoured in the same way. As an example consider an enterprise storage array that has a built in data-tiering function. Without the awareness of the OS data will be relocated within the storage based on the tiering algorithms. Also consider that there are technologies which allow data to be directly accessed (like NVMe) and that you are working with 'virtual' and 'logical' clusters, not physical locations.
However, you may well find that in a simple case, with support in the underlying drivers and no remapping done outside the OS and kernel, this does what you need.
Since your problem is to mark bad clusters, you don't need to write any program. Use the command-line utility CHKDSK that Windows provides.
In an elevated command prompt (Run as administrator), run the command:
chkdsk /r c:
The check will be done on the next reboot.
Don't forget to read the documentation.
Can atoms be removed from a running Erlang/Elixir system?
Specifically, I am interested in how I would create an application server where modules, representing applications, can be loaded and run on demand and later removed.
I suppose it's more complicated than just removing the atom representing the module in question, as it may be defining more atoms which may be difficult or impossible to track.
Alternatively, I wonder if a module can be run in isolation so that all references it produces can be effectively removed from a running system when it is no longer needed.
EDIT: Just to clarify, because SO thinks this question is answered elsewhere, the question does not relate to garbage collection of atoms, but manual management thereof. To further clarify, here is my comment on Alex's answer below:
I have also thought about spinning up separate instances (nodes?) but that would be very expensive for on-demand, per-user applications. What I am trying to do is imitate how an SAP ABAP system works. One option may be to pre-emptively have a certain number of instances running, then restart them each time a request is complete (again, pretty expensive though). Another may be to monitor the atom table of an instance and restart that instance when it is close to the limit.
The drawback I see with running several nodes/instances (although that is what an ABAP system has; several OS processes serving requests from users) is that you lose out on the ability to share cached bytecode between those instances. In an ABAP system, the cache of bytecode (which they call a "load") is accessible to the different processes so when a program is started, it checks the cache first before fetching it from storage.
Unfortunately not; atoms are never destroyed within the VM until the VM shuts down. The atom table (and its limit) is also shared across all processes in the VM, meaning that spawning a new process to handle atom allocation/deallocation won't work in your case.
You might have some luck spawning a completely separate VM instance by running a separate Erlang application and communicating to it through sockets, although I'm not sure how effective that will be.
Is there a Windows API to do memory mapping, just like mmap() on Linux?
Depends on what exactly you want to use it for. If you want to map existing files into memory, that's supported with memory-mapped files (CreateFileMapping / MapViewOfFile). They can also be used to share memory between processes (use a named mapping object with no underlying file).
If you want to map physical memory, that's generally not supported from user mode, although there are some tricks.
How can I get the CPU temperature info from the BIOS using C#? I gave a try to the code in CPU temperature monitoring,
but no luck: enumerator.Current threw an exception.
How can I achieve this? Thanks.
Error :
"This system doesn't support the required WMI objects(1) - check the exception file
Not supported
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
at CedarLogic.WmiLib.SystemStatistics.RefreshReadings() in D:\Downloads\TempMonitorSrc\TemperatureMonitorSln\WmiLib\SystemStatistics.cs:line 25
at CedarLogic.WmiLib.SystemStatistics.get_CurrentTemperature() in D:\Downloads\TempMonitorSrc\TemperatureMonitorSln\WmiLib\SystemStatistics.cs:line 87
at TemperatureMonitor.SystemTrayService.CheckSupport() in D:\Downloads\TempMonitorSrc\TemperatureMonitorSln\TemperatureMonitor\SystemTrayService.cs:line 260"
Have a look at OpenHardwareMonitor.
I'm having the exact same problem:
https://superuser.com/questions/183282/cant-query-cpu-temperature-msacpi-thermalzonetemperature-on-windows-embedded-7
The code in the link you cited is correct. My .exe works fine on Windows/XP and Windows/Vista (as long as I "run as Administrator" on Vista) ... but fails with the WMI error "not supported" on Windows Embedded 7.
At this point, I don't know if the problem is the OS (WES7) or my motherboard (an Intel DH57jg).
Although not ideal, the closest/best solution I have found is to use SpeedFan (free), which can expose its probe information to external applications via a memory map. Somebody has done the C# conversion:
Reading SpeedFan shared memory with C#
"Building on what I spoke about in my previous post, let's say we want to access the data that SpeedFan provides from a C# application. As a small aside, reading information from the SMBus and other low-level interfaces can only be done from the kernel, so applications like SpeedFan (HWMonitor, Everest, etc.) generally run a driver at kernel level and then a front-end GUI to present the information.
In the case of SpeedFan, shared memory (actually it's technically a memory-mapped file on Windows, I think) is used to communicate between the kernel driver and the userspace GUI application. Even better, the format of this file has been made public by the author of SpeedFan. So, enough talk, let's see some code!"