If I move a file within the same partition, should I update the reference count of the i-node to two while the move is in progress (since I am first copying) and then decrement it back to 1 (when I delete the directory entry)?
Or is the reference count untouched during the move?
No.
When a file is "moved", all that really happens is that the reference to the inode in the file naming layer is changed. The file's contents are never copied.
Realize that the file naming layer is purely for your convenience. The file system itself really only cares about the inodes. Even directories are, generally, a fiction of the file naming layer. (It is true that some file systems actually organize directories hierarchically at the file system layer, but this would not be true of ext2 and ext3, for instance).
While it is technically true that for an extremely brief period there are either no references or two references to the file in the naming layer, the operation is an atomic transaction from the point of view of the system call being made, so it effectively does not matter and will not be a race condition for you.
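To make that concrete, here is a minimal sketch of a same-filesystem move in C++ (the paths are invented for illustration):

#include <cstdio>
#include <cstdlib>

// A "move" within one filesystem boils down to a single rename call:
// the kernel atomically repoints the directory entry and never touches
// the file's data blocks.
int main() {
    if (std::rename("/tmp/old_name.txt", "/tmp/new_name.txt") != 0) {
        std::perror("rename");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}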
First, I understand the difference between value and reference types; this isn't that question. I am rewriting some of my code in Swift, and decided to also refactor some of the classes. Therefore, I thought I would see if some of the classes make sense as structs.
Memory: I have some model classes that hold very large arrays, that are constantly growing in size (unknown final size), and could exist for hours. First, are there any guidelines about a suggested or absolute size for a struct, since it lives on the stack?
Refactoring Use: Since I'm refactoring what is currently a mess with too much dependency, I wonder how I could improve on that. The views and view controllers are mostly easy; it's my model, and what it does, that has always left me wishing for better examples to follow.
WorkerManager: Singleton that holds one or two Workers at a time. One will always be recording new data from a sensor, and the other would be reviewing stored data. The view controllers get the Worker reference from the WorkerManager, and ask the Worker for the data to be displayed.
Worker: Does everything on a queue, to prevent memory access issues (C array pointers are constantly changing as they grow).
Listening: The listening Worker listens for new data, sends it to a Processor object (that it created) that cleans up the data and stores it in C arrays held by the Worker. Then, if there is valid data, the Worker tells the Analyzer (also owned by the Worker) to analyze the data and store it in other C arrays to be fed to views. Both the Processor and Analyzer need state to know what has happened in the past and what to process and analyze next. The pure raw data is stored in a separate Record NSManagedObject.
Reviewer: Takes a Record and uses the pure raw data to recreate all of the analyzed data so that it can be reviewed. (Analyzed data is massive, and I don't want to store it to disk.)
Now, my second question is: could/should Processor and Analyzer be replaced with structs? Or maybe protocols for the Worker? They aren't really "objects" in the normal sense, just convenient groups of related methods and the necessary state. The code is nearly a thousand lines for each, and I don't want to put it all in one class, or even the same file.
I just don't have a good sense of how to remove all of my state, use pure functions for all of the complex mathematical operations that are performed on the arrays, and where to put them.
While the struct itself lives on the stack, the array data lives on the heap so that the array can grow in size dynamically. So even if you have an array with a million items in it and pass it somewhere, none of the items are copied until you mutate one of the copies, thanks to the copy-on-write implementation. This is described in detail in the 2015 WWDC Session 414.
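If it helps to see the mechanism spelled out, here is a rough copy-on-write sketch in C++ terms; this illustrates the general idea only and is not Swift's actual implementation:

#include <cstddef>
#include <memory>
#include <vector>

// A minimal copy-on-write array wrapper. Copying the wrapper copies one
// shared_ptr; the element buffer is duplicated only on the first mutation
// of a shared instance.
template <typename T>
class CowArray {
    std::shared_ptr<std::vector<T>> buf_ = std::make_shared<std::vector<T>>();

    void ensureUnique() {
        if (buf_.use_count() > 1)                            // buffer is shared:
            buf_ = std::make_shared<std::vector<T>>(*buf_);  // copy it now
    }

public:
    void append(const T& v) { ensureUnique(); buf_->push_back(v); }
    const T& operator[](std::size_t i) const { return (*buf_)[i]; }
    std::size_t size() const { return buf_->size(); }
};

Passing a CowArray with a million elements around is as cheap as copying a pointer; the million-element copy only happens if one of the copies is later mutated.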
As for the second question, I think that the 2015 WWDC Session 414 again has the answer. The basic checks that Apple engineers recommend for choosing value types are:
Use a value type when:
Comparing instance data with == makes sense
You want copies to have independent state
The data will be used in code across multiple threads
Use a reference type (e.g. use a class) when:
Comparing instance identity with === makes sense
You want to create shared, mutable state
So from what you've described, I think that reference types fit Processor and Analyzer much better. It doesn't seem that copies of Processor and Analyzer would be valid objects unless you created new Processors and Analyzers explicitly. Wouldn't you want the changes to these objects to be shared?
I need to be extremely concerned with speed/latency in my current multi-threaded project.
Cache access is something I'm trying to understand better. And I'm not clear on how lock-free queues (such as the boost::lockfree::spsc_queue) access/use memory on a cache level.
I've seen queues used where the pointer of a large object that needs to be operated on by the consumer core is pushed into the queue.
If the consumer core pops an element from the queue, I presume that means the element (a pointer in this case) is already loaded into the consumer core's L2 and L1 cache. But to access the element, does it not need to dereference the pointer, finding and loading the pointed-to object either from the L3 cache or across the interconnect (if the other thread is on a different CPU socket)? If so, would it maybe be better to simply send a copy of the object that could be disposed of by the consumer?
Thank you.
C++ is principally a pay-for-what-you-need ecosystem.
Any regular queue will let you choose the storage semantics (by value or by reference).
However, this time you ordered something special: you ordered a lock free queue.
In order to be lock free, it must be able to perform all the observable modifying operations as atomic operations. This naturally restricts the types that can be used in these operations directly.
You might doubt whether it's even possible to have a lock-free queue over a value type that exceeds the system's native register size (say, int64_t).
Good question.
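You can even ask the standard library directly; a quick check along these lines (assuming C++17 for is_always_lock_free) shows why large value types are a problem:

#include <atomic>
#include <cstdint>

struct Big { char payload[128]; };  // much larger than a register

int main() {
    // Register-sized values are lock-free on mainstream targets...
    static_assert(std::atomic<std::int64_t>::is_always_lock_free,
                  "int64_t is expected to be lock-free here");

    // ...but a 128-byte payload almost certainly is not: the library
    // silently falls back to a lock, defeating a "lock free" queue.
    std::atomic<Big> big{};
    return big.is_lock_free() ? 0 : 1;
}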
Enter Ringbuffers
Indeed, any node based container would just require pointer swaps for all modifying operations, which is trivially made atomic on all modern architectures.
But does anything that involves copying multiple distinct memory areas, in non-atomic sequence, really pose an unsolvable problem?
No. Imagine a flat array of POD data items. Now, if you treat the array as a circular buffer, one just has to maintain the indices of the buffer's front and end positions atomically. The container can, at its leisure, update an internal 'dirty front index' while it copies ahead of the external front. (The copy can use relaxed memory ordering.) Only once the whole copy is known to have completed is the external front index updated. This update needs to be in acq_rel/cst memory order[1]. (A minimal sketch of this scheme appears after the list below.)
As long as the container is able to guard the invariant that the front never fully wraps around and reaches the back, this is a sweet deal. I think this idea was popularized in the Disruptor library (of LMAX fame). You get mechanical sympathy from:
linear memory access patterns while reading/writing
even better if you can make the record size aligned with (a multiple) physical cache lines
all the data is local unless the POD contains raw references outside that record
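Here is the promised sketch: a bare-bones single-producer/single-consumer ring along the lines described above (boost::lockfree::spsc_queue is the production-quality version of the same idea):

#include <array>
#include <atomic>
#include <cstddef>

// Bare-bones SPSC ring buffer: one atomic index per side, POD-ish slots
// in a flat array, and release/acquire pairs to publish completed copies.
template <typename T, std::size_t N>
class SpscRing {
    std::array<T, N> slots_{};
    std::atomic<std::size_t> read_{0};   // advanced only by the consumer
    std::atomic<std::size_t> write_{0};  // advanced only by the producer

public:
    bool push(const T& v) {  // producer thread only
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % N;
        if (next == read_.load(std::memory_order_acquire))
            return false;                    // full: never wrap onto read_
        slots_[w] = v;                       // the "dirty" copy, not yet visible
        write_.store(next, std::memory_order_release);  // publish the element
        return true;
    }

    bool pop(T& out) {       // consumer thread only
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                    // empty
        out = slots_[r];
        read_.store((r + 1) % N, std::memory_order_release);
        return true;
    }
};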
How Does Boost's spsc_queue Actually Do This?
Yes, spsc_queue stores the raw element values in a contiguous, aligned block of memory (e.g. from compile_time_sized_ringbuffer, which underlies spsc_queue when the maximum capacity is supplied statically):
typedef typename boost::aligned_storage<max_size * sizeof(T),
                                        boost::alignment_of<T>::value
                                       >::type storage_type;

storage_type storage_;

T * data()
{
    return static_cast<T*>(storage_.address());
}
(The element type T need not even be POD, but it needs to be both default-constructible and copyable).
Yes, the read and write pointers are atomic integral values. Note that the boost devs have taken care to apply enough padding to avoid false sharing of the cache line holding the read/write indices (from ringbuffer_base):
static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(size_t);
atomic<size_t> write_index_;
char padding1[padding_size]; /* force read_index and write_index to different cache lines */
atomic<size_t> read_index_;
In fact, as you can see, there is only a single "internal" index on each of the read and write sides. This is possible because there is only one writing thread and only one reading thread, which means the worst that can happen is that a side sees a slightly stale index and finds more free space at the end of a write operation than it anticipated.
Several other optimizations are present:
branch prediction hints for platforms that support it (unlikely())
it's possible to push/pop a range of elements at once. This should improve throughput in case you need to siphon from one buffer/ringbuffer into another, especially if the raw element size is not equal to (a whole multiple of) a cacheline
use of std::uninitialized_copy where possible
The calling of trivial constructors/destructors will be optimized out at instantiation time
the uninitialized_copy will be optimized into memcpy on all major standard library implementations (meaning that e.g. SSE instructions will be employed if your architecture supports them)
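For completeness, a minimal usage sketch of spsc_queue itself, with elements stored by value (the capacity and element counts here are arbitrary):

#include <boost/lockfree/spsc_queue.hpp>
#include <thread>

int main() {
    // Statically sized ring of 1024 ints, stored by value.
    boost::lockfree::spsc_queue<int, boost::lockfree::capacity<1024>> q;

    std::thread producer([&] {
        for (int i = 0; i < 100000; ++i)
            while (!q.push(i)) { /* spin: queue full */ }
    });

    std::thread consumer([&] {
        int v;
        for (int received = 0; received < 100000; )
            if (q.pop(v))        // spins while the queue is empty
                ++received;
    });

    producer.join();
    consumer.join();
    return 0;
}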
All in all, this is about as good an implementation of the ringbuffer idea as you will find.
What To Use
Boost has given you all the options. You can elect to make your element type a pointer to your message type. However, as you already raised in your question, this level of indirection reduces locality of reference and might not be optimal.
On the other hand, storing the complete message type in the element type could become expensive if copying is expensive. At the very least try to make the element type fit nicely into a cache line (typically 64 bytes on Intel).
So in practice you might consider storing frequently used data right there in the value, and referencing the less-often-used data via a pointer (the cost of the pointer is low unless it's traversed).
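For instance, a hypothetical element type that follows both pieces of advice: hot fields inline, bulk payload behind a pointer, and the whole thing sized to one 64-byte line (all the field names here are invented for illustration):

#include <cstdint>

struct Msg {
    std::uint64_t sequence;    // hot: read on every pop
    std::uint64_t timestamp;
    double        price;
    std::uint32_t quantity;
    std::uint32_t flags;
    const void*   attachment;  // cold bulk data, owned elsewhere
    char          pad[24];     // round sizeof(Msg) up to 64
};
static_assert(sizeof(Msg) == 64, "one element per cache line (assumed 64 bytes)");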
If you need that "attachment" model, consider using a custom allocator for the referred-to data so you get friendly memory access patterns there too.
Let your profiler guide you.
[1] I suppose that for SPSC, acq_rel should work, but I'm a bit rusty on the details. As a rule, I make it a point not to write lock-free code myself. I recommend everyone else follow my example :)
Why does a process's address space have to be divided into four segments (text, data, stack and heap)? What is the advantage? Is it possible to have only one big segment?
There are multiple reasons for splitting programs into parts in memory.
One of them is that instruction and data memories can be architecturally distinct and discontiguous, that is, read and written from/to using different instructions and circuitry inside and outside of the CPU, forming two different address spaces (i.e. reading code from address 0 and reading data from address 0 will typically return two different values, from different memories).
Another is reliability/security. You rarely want the program's code and constant data to change. Most of the time when that happens, it happens because something is wrong (either in the program itself or in its inputs, which may be maliciously constructed). You want to prevent that from happening and know if there are any attempts. Likewise you don't want the data areas that can change to be executable. If they are and there are security bugs in the program, the program can be easily forced to do something harmful when malicious code makes it into the program data areas as data and triggers those security bugs (e.g. buffer overflows).
Yet another is storage... In many programs a number of data areas aren't initialized at all or are initialized to one common predefined value (often 0). Memory has to be reserved for these data areas when the program is loaded and is about to start, but these areas don't need to be stored on the disk, because there's no meaningful data there.
On some systems you may have everything in one place (section/segment/etc). One notable example here is MSDOS, where .COM-style programs have no structure other than that they have to be less than about 64KB in size, and the first executable instruction must appear at the very beginning of the file and assume that its location corresponds to IP=0x100 (where IP is the instruction pointer register). How code and data are placed and interleaved in a .COM program is unimportant and up to the programmer.
There are other architectural artifacts such as x86 segments. Again, MSDOS is a good example of an OS that deals with them. .EXE-style programs in it may have multiple segments in them that correspond directly to the x86 CPU segments, to the real-mode addressing scheme, in which memory is viewed through 64KB-long "windows" known as segments. The position of these windows/segments is relative to the value of the CPU's segment registers. By altering the segment register values you can move the "windows". In order to access more than 64KB one needs to use different segment register values and that often implies having multiple segments in the .EXE (can be not just one segment for code and one for data, but also multiple segments for either of them).
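The windowing arithmetic behind those segments is simple enough to show directly: physical address = segment * 16 + offset (the values below are chosen arbitrarily):

#include <cstdint>
#include <cstdio>

int main() {
    // Real-mode x86: a 16-bit segment register is shifted left by 4 bits
    // and added to a 16-bit offset, yielding a 20-bit physical address.
    std::uint16_t segment = 0x1234;
    std::uint16_t offset  = 0x0010;
    std::uint32_t physical = (static_cast<std::uint32_t>(segment) << 4) + offset;
    std::printf("%04X:%04X -> %05X\n", segment, offset, physical);  // 1234:0010 -> 12350
    return 0;
}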
At least the text and data segments are separated to prevent malicious code that's stored inside a variable from being run.
Instructions (compiled code) are stored in the text segment, while the contents of your variables are stored in a data segment, the latter of which never gets executed, only read from and written to.
Isn't this distinction just a big, hacky workaround for patching security into the von Neumann architecture, where data and instructions share the same memory?
I'm developing a website which might grow up to a few thousand users, all of which would upload up to ten pictures on the server.
I'm wondering what would be the best way of storing pictures.
Let's assume that I have 5000 users with 10 pictures each, which gives us 50,000 pics. (I guess it wouldn't be a good idea to store them in the database as blobs ;) )
Would it be a good idea to dynamically create directories for every 100 users registered (50 dirs in total, assuming 5000 users), and upload their pictures there? Would the naming convention 'xxx_yy.jpg' (xxx being the user id and yy the picture number) be OK?
In this case, however, there would be 1000 (100 x 10) pictures in one folder; isn't that too many?
I would most likely store the images by a hash of their contents: a 128-bit digest, for instance (MD5-sized; note the smallest SHA variants are 160 bits or more). So I'd rename a user's uploaded image 'foo.jpg' to its 128-bit digest (probably in base64, for uniform 22-character names) and then store the user's name for the file and its digest in a database. I'd probably also add a reference count. Then if several folks all upload the same image, it only gets stored once, and you can delete it when all references vanish.
As for actual physical storage, now that you have a guaranteed uniform naming scheme, you can use your file system as a balanced tree. You can either decide how many files maximum you want in a directory, and have a balancer move files to maintain this, or you can imagine what a fully populated tree would look like, and store your files that way.
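As a sketch of that "filesystem as balanced tree" idea, here is one way to fan a hex digest out into fixed-width directory levels (the helper name and layout are my own invention; the digest comes from whatever hash you chose, computed elsewhere):

#include <filesystem>
#include <string>

// Map a hex content digest to root/ab/cd/<digest><ext>, so each directory
// level holds at most 256 entries.
std::filesystem::path imagePath(const std::filesystem::path& root,
                                const std::string& hexDigest,
                                const std::string& ext) {
    return root / hexDigest.substr(0, 2) / hexDigest.substr(2, 2)
                / (hexDigest + ext);
}

// e.g. imagePath("/srv/images", "d41d8cd98f00b204e9800998ecf8427e", ".jpg")
//      -> "/srv/images/d4/1d/d41d8cd98f00b204e9800998ecf8427e.jpg"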
The only real drawback to this scheme is that it decouples file names from contents so a database loss can mean not knowing what any file is called, but you should be careful to back up that kind of information anyway.
Different filesystems perform differently with directories holding large numbers of files. Some slow down tremendously. Some don't mind at all. For example, IBM JFS2 stores the contents of directory inodes as a B+ tree sorted by filename, so it probably provides log(n) access time even in the case of very large directories.
Getting ls or dir to read, sort, get size/date info, and print entries to stdout is a completely different task from accessing the file contents given the filename. So don't let the inability of ls to list a huge directory guide you.
Whatever you do, don't optimize too early. Just make sure your file access mechanism can be abstracted (make a FileStorage that you .getfile(id) from, or something similar).
That way you can put in whatever directory structure you like, or for example if you find it's better to store these items as a BLOB column in a database, you have that option...
Granted, I have never stored 50,000 images, but I usually just store all images in the same directory and name them in a way that avoids conflicts, then store the reference in the DB.
// take the extension after the last dot
$parts = explode( '.', $filename );
$ext = end( $parts );
$newName = md5( microtime() ) . '.' . $ext;
That way you will practically never get two identical filenames, since md5(microtime()) changes on every call; note, though, that two uploads in the same microsecond could still collide, so a truly unique id (or the content hash above) is safer.
Some questions about records in Delphi:
As records are almost like classes, why not use only classes instead of records?
In theory, memory is allocated for a record when it is declared as a variable; but how is that memory released afterwards?
I can understand the utility of pointers to records in a list object, but with generic containers (TList<T>), is there still a need to use pointers? If not, how do I delete/release each record in a generic container? And if I want to delete a specific record in a generic container, how do I do it?
There are lots of differences between records and classes; and no, "pointer to record" <> "class". Each has its own pros and cons; one of the important things about software development is to understand these so you can more easily choose the most appropriate one for a given situation.
This question is based on a false premise. Records are not almost like classes, in the same way that Integers are not almost like Doubles.
Classes must always be dynamically instantiated, whereas this is a possibility, but not a requirement for records.
Instances of classes (which we call objects) are always passed around by reference, meaning that multiple sections of code will share and act on the same instance. This is something important to remember, because you may unintentionally modify an object as a side-effect; although when done intentionally it's a powerful feature. Records on the other hand are passed by value; you need to explicitly indicate if you're passing them by reference.
Classes do not 'copy as easily as records'. When I say copy, I mean a separate instance duplicating a source. (This should be obvious in light of the value/reference comment above).
Records tend to work very nicely with typed files (because they're so easy to copy).
Records can overlay fields with other fields (case x of/unions)
These were comments on certain situational benefits of records; conversely, there are also situational benefits for classes that I'll not elaborate on.
Perhaps the easiest way to understand this is to be a little pedantic about it. Let's clarify: memory is not really allocated 'when it's declared', it's allocated when the variable is in scope, and deallocated when it goes out of scope. So for a local variable, it's allocated just before the start of the routine, and deallocated just after the end. For a class field, it's allocated when the object is created, and deallocated when it's destroyed.
Again, there are pros and cons...
It can be slower and require more memory to copy entire records (as with generics) than to just copy the references.
Passing records around by reference (using pointers) is a powerful technique whereby you can easily have something else modify your copy of the record. Without this, you'd have to pass your record by value (i.e. copy it), receive the changed record as a result, and copy it again into your own structures.
Are pointers to records like classes? No, not at all. Just two of the differences:
Classes support polymorphic inheritance.
Classes can implement interfaces.
For 1 and 2: records are value types, while classes are reference types. They're allocated on the stack, or directly in the memory space of any larger variable that contains them, instead of through a pointer, and automatically cleaned up by the compiler when they go out of scope.
As for your third question, a TList<TMyRecord> internally declares an array of TMyRecord for storage space. All the records in it will be cleaned up when the list is destroyed. If you want to delete a specific one, use the Delete method to delete by index, or the Remove method to find and delete. But be aware that since it's a value type, everything you do will be making copies of the record, not copying references to it.
One of the main benefits of records shows up when you have a large "array of record". This is created in memory by allocating space for all records in one contiguous RAM block, which is extremely fast. If you had used "array of TClass" instead, each object in the array would have to be allocated by itself, which is slow.
There has been a lot of work to improve the speed of allocating memory, in order to improve the speed of strings and objects, but it will never be as fast as replacing 100,000 memory allocations with 1 memory allocation.
However, if you use array of record, don't copy the record around in local variables. That may easily kill the speed benefit.
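The same trade-off can be seen in C++ terms (a rough analogy, not Delphi code): an array of values is one contiguous allocation, while an array of references costs one allocation per element:

#include <memory>
#include <vector>

struct PointRec { double x, y; };  // plain value, like a Delphi record

int main() {
    // Like "array of record": a single contiguous allocation for all
    // 100,000 elements, ideal for the CPU cache.
    std::vector<PointRec> records(100000);

    // Like "array of TClass": one separate heap allocation per element,
    // plus a pointer indirection on every access.
    std::vector<std::unique_ptr<PointRec>> objects;
    objects.reserve(100000);
    for (int i = 0; i < 100000; ++i)
        objects.push_back(std::make_unique<PointRec>());

    return 0;
}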
1) To allow for inheritance and polymorphism, classes have some overhead. Records do not allow them, and in some situations may be somewhat faster and simpler to use. Unlike classes, which are always allocated on the heap and managed through references, records can also be allocated on the stack, accessed directly, and assigned to each other without requiring a call to an "Assign" method.
Also records are useful to access memory blocks with a given structure, because their memory layout is exactly how you define it. A class instance memory layout is controlled by the compiler and has additional data to make objects work (i.e. the pointer to the Virtual Method Table).
2) Unless you allocate records dynamically, using New() or GetMem(), a record's memory is managed by the compiler just like that of ordinals, floats or static arrays: memory for global variables is allocated at startup and released when the program terminates, and local variables are allocated on the stack when entering a function/procedure/method and released on exit. Allocating/releasing memory on the stack is faster because it doesn't require calls to the memory manager; it's just a very few assembler instructions to change the stack registers. But be aware that allocating large structures on the stack may cause a stack overflow, because the maximum stack size is fixed and not very large (see the linker options).
If records are fields of a class, they are allocated when the class is created and released when the class is freed.
3) One of the advantages of generics is to eliminate the need of low-level pointer management - but be aware of the inner workings.
There are a few other differences between a class and a record. Classes can use polymorphism, and can expose interfaces. Records cannot implement destructors (although since Delphi 2006 they can implement constructors and methods).
Records are very useful for segmenting memory into a more logical structure, since the first data item in the record is at the same address as the pointer to the record itself. This is not the case for classes.