Does FileStream.Seek() transfer data across the wire? - network-programming

I need to access random parts of a very large file across the network. (SMB protocol)
Does FileStream.Seek() actually read data (copying down the entire file in the process) when I call fs.Seek()?
Is there anything I can do to reduce or buffer this IO?
My intent is to poll the end of a large file for changes, and if they exist read forward from a bookmark (fixed position) in .NET.

No. Seek() only updates the stream's current position, which is local state; no file data crosses the wire until you actually call Read(). The FileStream is not downloading the remote file to your machine as a stream: when you do read after seeking, only the requested bytes (plus any read-ahead buffering) are fetched from the remote filesystem. Please see this link (http://technet.microsoft.com/en-us/library/bb933993(v=sql.105).aspx) for more about FileStream over a network.
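As a rough sketch of the bookmark/polling approach from the question (the UNC path, buffer size, and bookmark persistence are illustrative assumptions, not a definitive implementation):

using System;
using System.IO;

class TailPoller
{
    static long bookmark = 0; // last position read up to; persist this between runs yourself

    static void PollOnce(string uncPath) // e.g. @"\\server\share\big.log" (hypothetical path)
    {
        using var fs = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
        if (fs.Length <= bookmark)
            return; // nothing new has been appended (or the file was truncated)

        fs.Seek(bookmark, SeekOrigin.Begin); // cheap: only the position changes, no bytes move
        var buffer = new byte[64 * 1024];
        int n;
        while ((n = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            // process buffer[0..n) -- only these bytes cross the wire
        }
        bookmark = fs.Length;
    }
}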

Related

Difference between DownloadTo(stream) vs DownloadTo(string) in working with Azure Storage blob

I am working with Azure Storage and I am using the functionality to download a blob from an Azure Storage container. I searched and found several download overloads.
I want to understand the difference between the method that takes a stream and the one that takes a string.
I currently use DownloadTo(string folderTodownLoad). However, if I want to use the stream overload, what should I pass as a parameter, and what would be the purpose or benefit, if any, over the DownloadTo(string) method?
BlobClient Class
DownloadTo(string) downloads directly to your file system and supports downloading multiple blocks at a time.
DownloadTo(stream) downloads a single block at a time to a stream; the advantage of this is that it gives you more flexibility.
A simple example could be downloading to a GZipStream so you can decompress a file while downloading it from blob storage.
Another example could be downloading to a MemoryStream, so you can process the result in memory right away, instead of having to load the file from disk.
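A hedged sketch of the stream overload in use (assuming the Azure.Storage.Blobs package; the connection string, container, and blob names are placeholders):

using System.IO;
using System.IO.Compression;
using Azure.Storage.Blobs;

string connectionString = "<your storage connection string>";
var blobClient = new BlobClient(connectionString, "my-container", "data.csv.gz"); // illustrative names

// Download straight into memory and process immediately, no file on disk.
using var memory = new MemoryStream();
blobClient.DownloadTo(memory);
memory.Position = 0;

// Decompress the in-memory copy on the fly.
using var gzip = new GZipStream(memory, CompressionMode.Decompress);
using var reader = new StreamReader(gzip);
string firstLine = reader.ReadLine();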

webmmux directshow seeking queues IStream

I am using the DirectShow filter for muxing VP8 and Vorbis.
And MOST IMPORTANTLY I am sending (trying to send actually) the webm file in real time.
So there is no file being created.
As data is packed into WebM after being encoded, I send it off to the socket.
The file-sink filter uses IStream to do the file I/O, and it heavily uses the seek operation,
which I cannot use, since I cannot seek on a socket.
Has anyone implemented, or does anyone know how to use, this muxer so that the seek operation is not called?
Or maybe there is a version of the muxer with queues, so that it supports fragmentation.
Thanks
I am using the DirectShow filter provided by www.webmproject.org
Implementing IStream on writers allows multiplexers to update cross-references in the written stream/file, so they don't have to write sequentially, which is impossible for most container formats without creating huge buffers or temporary files.
Now, if you are creating the file at runtime to progressively send it over the network, which I suppose is what you are trying to achieve, you don't know what, where, and when the multiplexer is going to update in order to close the file: whether it is going to revisit data at the beginning of the file and update references, headers, etc.
You are supposed to create the full file first, and then deliver it. Or you need to substitute the whole writer component and deliver every write onto the socket, including overwrites of already-written data. The most appropriate method to deliver real-time data over a network, however, is not to transfer files at all: the sender sends the individual streams, and receivers either use them as such or multiplex them into a file after receiving, if that is necessary.
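If you do substitute the writer, one possible shape for "deliver every write, including overwrites" is a stream that forwards each write as an (offset, length, payload) record for the receiver to apply to its local copy. A hedged C# sketch of the idea (the class and record format are invented for illustration; the real filter is C++ COM, so this only shows the approach):

using System;
using System.IO;

public class PatchForwardingStream : Stream
{
    private readonly BinaryWriter _wire; // e.g. wraps a NetworkStream
    private long _position;
    private long _length;

    public PatchForwardingStream(Stream wire) => _wire = new BinaryWriter(wire);

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Send the write as a patch record instead of requiring a seekable sink.
        _wire.Write(_position);            // where the bytes belong in the file
        _wire.Write(count);                // how many bytes follow
        _wire.Write(buffer, offset, count);
        _position += count;
        if (_position > _length) _length = _position;
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        // Seeking only moves the logical position; nothing is transferred.
        _position = origin switch
        {
            SeekOrigin.Begin => offset,
            SeekOrigin.Current => _position + offset,
            _ => _length + offset,
        };
        return _position;
    }

    public override bool CanRead => false;
    public override bool CanSeek => true;
    public override bool CanWrite => true;
    public override long Length => _length;
    public override long Position { get => _position; set => _position = value; }
    public override void Flush() => _wire.Flush();
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override void SetLength(long value) => _length = value;
}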

Loading leveldb from stream

Is there a way to load a leveldb store from a data stream?
If I were to take the stream of a leveldb instance and tuck it in a DLL as a manifest resource stream, will I have a way to just load that db from that stream later when I retrieve the manifest resource from my DLL? Essentially, I am looking for a way to build, save, and later load a leveldb without ever writing to a physical file on disk.
Thanks in advance for any useful info.
Raja.
You might have already figured this out since it's been a long time since you asked.
leveldb allows you to override the "Environment" such that reads and writes don't need to access a physical file.
You might want to look at this file:
http://code.google.com/p/leveldb/source/browse/helpers/memenv/memenv_test.cc
in particular the DBTest, for an example.

What is a stream? [duplicate]

I understand that a stream is a representation of a sequence of bytes. Each stream provides means for reading and writing bytes to its given backing store. But what is the point of the stream? Why isn't the backing store itself what we interact with?
For whatever reason this concept just isn't clicking for me. I've read a bunch of articles, but I think I need an analogy or something.
The word "stream" has been chosen because it represents (in real life) a very similar meaning to what we want to convey when we use it.
Let's forget about the backing store for a little, and start thinking about the analogy to a water stream. You receive a continuous flow of data, just like water continuously flows in a river. You don't necessarily know where the data is coming from, and most often you don't need to; be it from a file, a socket, or any other source, it doesn't (shouldn't) really matter. This is very similar to receiving a stream of water, whereby you don't need to know where it is coming from; be it from a lake, a fountain, or any other source, it doesn't (shouldn't) really matter.
That said, once you start thinking that you only care about getting the data you need, regardless of where it comes from, the abstractions other people talked about become clearer. You start thinking that you can wrap streams, and your methods will still work perfectly. For example, you could do this:
int ReadInt(StreamReader reader) { return Int32.Parse(reader.ReadLine()); }
// in another method:
Stream fileStream = new FileStream("My Data.dat", FileMode.Open);
Stream zipStream = new ZipDecompressorStream(fileStream);   // hypothetical decompression wrapper
Stream decryptedStream = new DecryptionStream(zipStream);   // hypothetical decryption wrapper
StreamReader reader = new StreamReader(decryptedStream);
int x = ReadInt(reader);
As you see, it becomes very easy to change your input source without changing your processing logic. For example, to read your data from a network socket instead of a file:
Stream stream = new NetworkStream(mySocket);
StreamReader reader = new StreamReader(stream);
int x = ReadInt(reader);
As easy as it can be. And the beauty continues, as you can use any kind of input source, as long as you can build a stream "wrapper" for it. You could even do this:
public class RandomNumbersStreamReader : StreamReader {
    private Random random = new Random();
    public RandomNumbersStreamReader() : base(Stream.Null) { } // StreamReader requires an underlying stream
    public override String ReadLine() { return random.Next().ToString(); }
}
// and to call it:
int x = ReadInt(new RandomNumbersStreamReader());
See? As long as your method doesn't care what the input source is, you can customize your source in various ways. The abstraction allows you to decouple input from processing logic in a very elegant way.
Note that the stream we created ourselves does not have a backing store, but it still serves our purposes perfectly.
So, to summarize, a stream is just a source of input, hiding away (abstracting) another source. As long as you don't break the abstraction, your code will be very flexible.
A stream represents a sequence of objects (usually bytes, but not necessarily so), which can be accessed in sequential order. Typical operations on a stream:
read one byte. Next time you read, you'll get the next byte, and so on.
read several bytes from the stream into an array
seek (move your current position in the stream, so that next time you read you get bytes from the new position)
write one byte
write several bytes from an array into the stream
skip bytes from the stream (this is like read, but you ignore the data. Or if you prefer it's like seek but can only go forwards.)
push back bytes into an input stream (this is like "undo" for read - you shove a few bytes back up the stream, so that next time you read that's what you'll see. It's occasionally useful for parsers, as is:
peek (look at bytes without reading them, so that they're still there in the stream to be read later)
A particular stream might support reading (in which case it is an "input stream"), writing ("output stream") or both. Not all streams are seekable.
Push back is fairly rare, but you can always add it to a stream by wrapping the real input stream in another input stream that holds an internal buffer. Reads come from the buffer, and if you push back then data is placed in the buffer. If there's nothing in the buffer then the push back stream reads from the real stream. This is a simple example of a "stream adaptor": it sits on the "end" of an input stream, it is an input stream itself, and it does something extra that the original stream didn't.
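A hedged C# sketch of such a push-back adaptor (the class is invented for illustration; Java's PushbackInputStream is the classic built-in equivalent):

using System;
using System.Collections.Generic;
using System.IO;

public class PushbackStream : Stream
{
    private readonly Stream _inner;
    private readonly Stack<byte> _pushedBack = new Stack<byte>();

    public PushbackStream(Stream inner) => _inner = inner;

    // "Undo" a read: the next Read will see this byte first.
    public void Unread(byte b) => _pushedBack.Push(b);

    public override int ReadByte()
    {
        if (_pushedBack.Count > 0) return _pushedBack.Pop();
        return _inner.ReadByte(); // buffer empty: fall through to the real stream
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int i = 0;
        while (i < count && _pushedBack.Count > 0)
            buffer[offset + i++] = _pushedBack.Pop();  // serve pushed-back bytes first
        if (i < count)
            i += _inner.Read(buffer, offset + i, count - i);
        return i;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position { get => throw new NotSupportedException(); set => throw new NotSupportedException(); }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}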
Stream is a useful abstraction because it can describe files (which are really arrays, hence seek is straightforward) but also terminal input/output (which is not seekable unless buffered), sockets, serial ports, etc. So you can write code which says either "I want some data, and I don't care where it comes from or how it got here", or "I'll produce some data, and it's entirely up to my caller what happens to it". The former takes an input stream parameter, the latter takes an output stream parameter.
Best analogy I can think of is that a stream is a conveyor belt coming towards you or leading away from you (or sometimes both). You take stuff off an input stream, you put stuff on an output stream. Some conveyors you can think of as coming out of a hole in the wall - they aren't seekable, reading or writing is a one-time-only deal. Some conveyors are laid out in front of you, and you can move along choosing whereabouts in the stream you want to read/write - that's seeking.
As IRBMe says, though, it's best to think of a stream in terms of the operations it offers (which vary from implementation to implementation, but have a lot in common) rather than by a physical analogy. Streams are "things you can read or write". When you start connecting up stream adaptors, you can think of them as a box with a conveyor in, and a conveyor out, that you connect to other streams and then the box performs some transformation on the data (zipping it, or changing UNIX linefeeds to DOS ones, or whatever). Pipes are another thorough test of the metaphor: that's where you create a pair of streams such that anything you write into one can be read out of the other. Think wormholes :-)
A stream is already a metaphor, an analogy, so there's really no need to provide another one. You can think of it basically as a pipe with a flow of water in it where the water is actually data and the pipe is the stream. I suppose it's kind of a 2-way pipe if the stream is bi-directional. It's basically a common abstraction that is placed upon things where there is a flow or sequence of data in one or both directions.
In languages such as C#, VB.Net, C++, Java etc., the stream metaphor is used for many things. There are file streams, in which you open a file and can read from the stream or write to it continuously; There are network streams where reading from and writing to the stream reads from and writes to an underlying established network connection. Streams for writing only are typically called output streams, as in this example, and similarly, streams that are for reading only are called input streams, as in this example.
A stream can perform transformation or encoding of data (an SslStream in .NET, for example, will eat up the SSL negotiation data and hide it from you; a TelnetStream might hide the Telnet negotiations from you but provide access to the data; a ZipOutputStream in Java allows you to write files in a zip archive without having to worry about the internals of the zip file format).
Another common thing you might find is textual streams that allow you to write strings instead of bytes, or some languages provide binary streams that allow you to write primitive types. A common thing you'll find in textual streams is a character encoding, which you should be aware of.
Some streams also support random access, as in this example. A network stream, on the other hand, for obvious reasons, wouldn't.
MSDN gives a good overview of streams in .Net.
Sun also have an overview of their general OutputStream class and InputStream class.
In C++, here is the istream (input stream), ostream (output stream) and iostream (bidirectional stream) documentation.
UNIX like operating systems also support the stream model with program input and output, as described here.
The point is that you shouldn't have to know what the backing store is - it's an abstraction over it. Indeed, there might not even be a backing store - you could be reading from a network, and the data is never "stored" at all.
If you can write code that works whether you're talking to a file system, memory, a network or anything else which supports the stream idea, your code is a lot more flexible.
In addition, streams are often chained together - you can have a stream which compresses whatever is put into it, writing the compressed form on to another stream, or one which encrypts the data, etc. At the other end there'd be the reverse chain, decrypting, decompressing or whatever.
The answers given so far are excellent. I'm only providing another to highlight that a stream is not a sequence of bytes or specific to a programming language, since the concept is universal (while its implementation may be unique). I often see an abundance of explanations online in terms of SQL, or C, or Java, which makes sense, as a file stream deals with memory locations and low-level operations. But they often address how to create a file stream and operate on the potential file in a given language, rather than discuss the concept of a stream.
The Metaphor
As mentioned a stream is a metaphor, an abstraction of something more complex. To get your imagination working I offer some other metaphors:
you want to fill an empty pool with water. one way to accomplish this is to attach a hose to a spigot, placing the end of the hose in the pool and turning on the water.
the hose is the stream
similarly, if you wanted to refill your car with gas, you would go to a gas pump, insert the nozzle into your gas tank and open the valve by squeezing the locking lever.
the hose, nozzle and associated mechanisms to allow the gas to flow into your tank is the stream
if you need to get to work you would start driving from your home to the office using the freeway.
the freeway is the stream
if you want to have a conversation with someone you would use your ears to hear and your mouth to speak.
your ears and mouth are streams
Hopefully you notice in these examples that the stream metaphors only exist to allow something to travel through them (or on them, in the case of the freeway) and do not themselves always possess the thing they are transferring. An important distinction. We don't refer to our ears as a sequence of words. A hose is still a hose if no water is flowing through it, but we have to connect it to a spigot for it to do its job correctly. A car is not the only kind of vehicle that can traverse a freeway.
Thus a stream can exist that has no data travelling through it as long as it is connected to a file.
Removing the Abstraction
Next, we need to answer a few questions. I'm going to use files to describe streams so... What is a file? And how do we read a file? I will attempt to answer this while maintaining a certain level of abstraction to avoid unneeded complexity and will use the concept of a file relative to a linux operating system because of its simplicity and accessibility.
What is a file?
A file is an abstraction :)
Or, as simply as I can explain, a file is one part data structure describing the file and one part data which is the actual content.
The data structure part (called an inode in UNIX/Linux systems) identifies important pieces of information about the content, but does not include the content itself (or a name of the file, for that matter). One of the pieces of information it keeps is a memory address where the content starts. So with a file name (or a hard link in Linux), a file descriptor (a numeric file name that the operating system cares about), and a starting location in memory, we have something we can call a file.
(the key takeaway is a 'file' is defined by the operating system since it is the OS that ultimately has to deal with it. and yes, files are much more complex).
So far so good. But how do we get the content of the file, say a love letter to your beau, so we can print it?
Reading a file
If we start from the result and move backwards, when we open a file on our computer its entire contents are splashed on our screen for us to read. But how? Very methodically, is the answer. The content of the file itself is another data structure. Suppose an array of characters. We can also think of this as a string.
So how do we 'read' this string? By finding its location in memory and iterating through our array of characters, one character at a time until reaching an end of file character. In other words a program.
A stream is 'created' when its program is called and it has a memory location to attach to or connect to. Much like our water hose example, the hose is ineffective if it is not connected to a spigot. In the case of the stream, it must be connected to a file for it to exist.
Streams can be further refined, e.g., a stream to receive input or a stream to send a file's contents to standard output. UNIX/Linux connects and keeps open 3 file streams for us right off the bat: stdin (standard input), stdout (standard output) and stderr (standard error). Streams can be built as data structures themselves, or as objects, which allows us to perform more complex operations on the data streaming through them, like opening the stream, closing the stream, or error-checking the file a stream is connected to. C++'s cin is an example of a stream object.
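As a small concrete illustration in C# (assuming .NET, where the same three standard streams are exposed as Stream objects):

using System;
using System.IO;

class StandardStreamsDemo
{
    static void Main()
    {
        using Stream stdin = Console.OpenStandardInput();
        using Stream stdout = Console.OpenStandardOutput();
        // Copy standard input to standard output, one buffer at a time,
        // much like `cat` does on UNIX-like systems.
        stdin.CopyTo(stdout);
    }
}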
Surely, if you so choose, you can write your own stream.
Definition
A stream is a reusable piece of code that abstracts the complexity of dealing with data while providing useful operations to perform on data.
The point of the stream is to provide a layer of abstraction between you and the backing store. Thus a given block of code that uses a stream need not care if the backing store is a disk file, memory, etc...
In addition to the things mentioned above, there is a different kind of stream, as defined in functional programming languages such as Scheme or Haskell: a possibly infinite data structure whose elements are generated by some function on demand.
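The same on-demand idea can be sketched in C# with a lazy iterator (not a byte stream, but the same principle of computing elements only when they are consumed):

using System;
using System.Collections.Generic;
using System.Linq;

class LazyStreamDemo
{
    // An infinite sequence of integers, computed only as they are consumed.
    static IEnumerable<int> Naturals()
    {
        for (int i = 0; ; i++)
            yield return i;
    }

    static void Main()
    {
        // Only the first five elements are ever computed.
        foreach (int n in Naturals().Take(5))
            Console.WriteLine(n);
    }
}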
The word "stream" has been chosen because it represents (in real life) a very similar meaning to what we want to convey when we use it.
Start thinking about the analogy to a water stream. You receive a continuous flow of data, just like water continuously flows in a river. You don't necessarily know where the data is coming from, and most often you don't need to; be it from a file, a socket, or any other source, it doesn't (shouldn't) really matter. This is very similar to receiving a stream of water, whereby you don't need to know where it is coming from; be it from a lake, a fountain, or any other source, it doesn't (shouldn't) really matter. source
To add to the echo chamber, the stream is an abstraction so you don't care about the underlying store. It makes the most sense when you consider scenarios with and without streams.
Files are uninteresting for the most part, because streams don't do much there above and beyond what the non-stream-based methods I'm familiar with already did. So let's start with internet files instead.
If I want to download a file from the internet, I have to open a TCP socket, make a connection, and receive bytes until there are no more bytes. I have to manage a buffer, know the size of the expected file, and write code to detect when the connection is dropped and handle this appropriately.
Let's say I have some sort of TcpDataStream object. I create it with the appropriate connection information, then read bytes from the stream until it says there aren't any more bytes. The stream handles the buffer management, end-of-data conditions, and connection management.
In this way, streams make I/O easier. You could certainly write a TcpFileDownloader class that does what the stream does, but then you have a class that's specific to TCP. Most stream interfaces simply provide a Read() and Write() method, and any more complicated concepts are handled by the internal implementation. Because of this, you can use the same basic code to read or write to memory, disk files, sockets, and many other data stores.
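To make that concrete, a small sketch: one routine consumes any Stream, whether it came from a socket or a file (the endpoint and file names are placeholders, and a real HTTP download would send a request before reading):

using System.IO;
using System.Net.Sockets;

static void SaveTo(Stream source, string path)
{
    using var file = File.Create(path);
    source.CopyTo(file); // same loop, regardless of where the bytes come from
}

// From the network (placeholder endpoint; illustrative only):
using (var client = new TcpClient("example.com", 80))
    SaveTo(client.GetStream(), "download.bin");

// From another file -- identical calling code:
using (var input = File.OpenRead("input.bin"))
    SaveTo(input, "copy.bin");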
The visualisation I use is conveyor belts, not in real factories because I don't know anything about that, but in cartoon factories where items move along lines and get stamped and boxed and counted and checked by a sequence of dumb devices.
You have simple components that do one thing, for example a device to put a cherry on a cake. This device has an input stream of cherryless cakes and an output stream of cakes with cherries. There are three advantages worth mentioning to structuring your processing in this way.
Firstly it simplifies the components themselves: if you want to put chocolate icing on a cake, you don't need a complicated device that knows everything about cakes, you can create a dumb device that sticks chocolate icing onto whatever is fed into it (in the cartoons, this goes as far as not knowing that the next item in isn't a cake, it's Wile E. Coyote).
Secondly you can create different products by putting the devices into different sequences: maybe you want your cakes to have icing on top of the cherry instead of cherry on top of the icing, and you can do that simply by swapping the devices around on the line.
Thirdly, the devices don't need to manage inventory, boxing, or unboxing. The most efficient way of aggregating and packaging things is changeable: maybe today you're putting your cakes into boxes of 48 and sending them out by the truckload, but tomorrow you want to send out boxes of six in response to custom orders. This kind of change can be accommodated by replacing or reconfiguring the machines at the start and end of the production line; the cherry machine in the middle of the line doesn't have to be changed to process a different number of items at a time, it always works with one item at a time and it doesn't have to know how its input or output is being grouped.
When I heard about streaming for the first time, it was in the context of live streaming with a webcam. So, one host is broadcasting video content, and the other host is receiving the video content. So is this streaming? Well... yes... but a live stream is a concrete concept, and I think that the question refers to the abstract concept of Streaming. See https://en.wikipedia.org/wiki/Live_streaming
So let's move on.
Video is not the only resource that can be streamed. Audio can be streamed too. So we are talking about streaming media now. See https://en.wikipedia.org/wiki/Streaming_media . Audio can be delivered from source to target in numerous ways. So let's compare some data delivery methods to each other.
Classic file downloading
Classic file downloading doesn't happen real-time. Before taking the file to use, you'll have to wait until the download is complete.
Progressive download
Progressive download fetches chunks of data from the streamed media file into a temporary buffer. The data in that buffer is workable: audio/video data in the buffer is playable. Because of that, users can watch or listen to the streamed media file while it is downloading. Fast-forwarding and rewinding are possible, of course, within the buffer. Still, progressive download is not live streaming.
Streaming
Streaming happens in real time, and the data arrives in chunks. Streaming is used in live broadcasts. Clients listening to the broadcast can't fast-forward or rewind. In video streams, data is discarded after playback.
A streaming server keeps a two-way connection with its client, while a web server closes the connection after a server response.
Audio and video are not the only thing that can be streamed. Let's have a look at the concept of streams in the PHP manual.
a stream is a resource object which exhibits streamable behavior. That
is, it can be read from or written to in a linear fashion, and may be
able to fseek() to an arbitrary location within the stream.
Link: https://www.php.net/manual/en/intro.stream.php
In PHP, a resource is a reference to an external source like a file or a database connection. So, in other words, a stream is a source that can be read from or written to. So, if you have worked with fopen(), then you have already worked with streams.
An example of a text file subjected to streaming:
// Let's say that cheese.txt is a file that contains this content:
// I like cheese, a lot! My favorite cheese brand is Leerdammer.
$fp = fopen('cheese.txt', 'r');
$str8 = fread($fp, 8); // read first 8 characters from stream.
fseek($fp, 22); // move the position indicator of the stream to the 22nd byte (0 = first position)
$str30 = fread($fp, 30); // read 30 characters from stream
echo $str8; // Output: I like c
echo $str30; // Output: My favorite cheese brand is Le
Zip files can be streamed too. On top of that, streaming is not limited to files. HTTP, FTP, SSH connections and Input/Output can be streamed as well.
What does wikipedia say about the concept of Streaming?
In computer science, a stream is a sequence of data elements made
available over time. A stream can be thought of as items on a conveyor
belt being processed one at a time rather than in large batches.
See: https://en.wikipedia.org/wiki/Stream_%28computing%29 .
Wikipedia links to this: https://srfi.schemers.org/srfi-41/srfi-41.html
and the writers have this to say about streams:
Streams, sometimes called lazy lists, are a sequential data structure
containing elements computed only on demand. A stream is either null
or is a pair with a stream in its cdr. Since elements of a stream are
computed only when accessed, streams can be infinite.
So a Stream is actually a data structure.
My conclusion: a stream is a source of data that can be read from or written to in a sequential way. A stream does not read everything the source contains at once; it reads/writes sequentially.
Useful links:
http://www.slideshare.net/auroraeosrose/writing-and-using-php-streams-and-sockets-zendcon-2011 Provides a very clear presentation
https://www.sk89q.com/2010/04/introduction-to-php-streams/
http://www.netlingo.com/word/stream-or-streaming.php
http://www.brainbell.com/tutorials/php/Using_PHP_Streams.htm
http://www.sitepoint.com/php-streaming-output-buffering-explained/
http://php.net/manual/en/wrappers.php
http://www.digidata-lb.com/streaming/Streaming_Proposal.pdf
http://www.webopedia.com/TERM/S/streaming.html
https://en.wikipedia.org/wiki/Stream_%28computing%29
https://srfi.schemers.org/srfi-41/srfi-41.html
It's just a concept, another level of abstraction that makes your life easier. And streams all have a common interface, which means you can combine them in a pipe-like manner. For example, encode to base64, then zip, and then write this to disk, all in one line!
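In .NET, for instance, that "one line" is only slight hyperbole; a sketch of the base64-then-zip-then-disk pipeline, built purely by nesting streams (file names are placeholders):

using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

// data -> base64 -> gzip -> disk
byte[] data = File.ReadAllBytes("input.bin");
using (var pipeline = new CryptoStream(
           new GZipStream(File.Create("output.b64.gz"), CompressionMode.Compress),
           new ToBase64Transform(),
           CryptoStreamMode.Write))
{
    pipeline.Write(data, 0, data.Length); // each wrapper transforms the bytes on the way down
}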
The best explanation of streams I've seen is chapter 3 of SICP. (You may need to read the first 2 chapters for it to make sense, but you should anyway. :-)
They don't use streams for bytes at all, but rather integers. The big points that I got from it were:
Streams are delayed lists
The computational overhead [of eagerly computing everything ahead of time, in some cases] is outrageous
We can use streams to represent sequences that are infinitely long
Another point (for the file-reading situation):
A stream can allow you to do something else before it has finished reading all the content of the file.
You can save memory, because you do not need to load all the file content at once.
Think of streams as of an abstract source of data (bytes, characters, etc.). They abstract actual mechanics of reading from and writing to the concrete datasource, be it a network socket, file on a disk or a response from the web server.
I think you need to consider that the backing store itself is often just another abstraction. A memory stream is pretty easy to understand, but a file is radically different depending on which file system you're using, never mind what hard drive you are using. Not all streams do in fact sit on top of a backing store: network streams pretty much just are streams.
The point of a stream is that we restrict our attention to what is important. By having a standard abstraction, we can perform common operations. Even if you don't want to, for instance, search a file or an HTTP response for URLs today, doesn't mean you won't wish to tomorrow.
Streams were originally conceived when memory was tiny compared to storage. Just reading a C file could be a significant load. Minimizing the memory footprint was extremely important. Hence, an abstraction in which very little needed to be loaded was very useful. Today, it is equally useful when performing network communication and, it turns out, rarely that restrictive when we deal with files. The ability to transparently add things like buffering in a general fashion makes it even more useful.
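In .NET, for instance, that transparent buffering is literally a one-line wrap (a sketch; the file name is a placeholder):

using System.IO;

// Wrap any stream to get buffering; callers still just see a Stream.
using Stream raw = File.OpenRead("big.dat");
using Stream buffered = new BufferedStream(raw, 64 * 1024); // 64 KB buffer
int firstByte = buffered.ReadByte(); // reads ahead into the buffer behind the scenes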
A stream is an abstraction over a sequence of bytes. The idea is that you don't need to know where the bytes come from, just that you can read them in a standardized manner.
For example, if you process data via a stream then it doesn't matter to your code if the data comes from a file, a network connection, a string, a blob in a database, etc.
There's nothing wrong per se with interacting with the backing store itself, except for the fact that it ties you to the backing store implementation.
A stream is an abstraction that provides a standard set of methods and properties for interacting with data. By abstracting away from the actual storage medium, your code can be written without total reliance on what that medium is or even the implementation of that medium.
A good analogy might be to consider a bag. You don't care what a bag is made of or what it does when you put your stuff in it, as long as the bag performs the job of being a bag and you can get your stuff back out. A stream defines for storage media what the concept of a bag defines for different instances of bags (such as a trash bag, handbag, or rucksack): the rules of interaction.
I'll keep it short, I was just missing the word here:
Streams are queues, usually stored in a buffer, containing any kind of data.
(Now, since we all know what queues are, there's no need to explain this any further.)
A stream is a highly abstracted metaphor and a strict contract. It means that you can manipulate objects in sequence without concern about gaps. A stream must have no vacuum or gaps: the objects in it are arranged in sequence, one by one, continuously. As a result, we don't have to worry about abruptly encountering a vacuum in the midst of processing a stream, nor can we deliberately leave a vacuum when producing one. In other words, we never have to consider the case of a void while processing or producing a stream. If you are constructing a stream, you must not leave any gaps in it.
Put another way, if there is a gap, it is not a stream. When you refer to a sequence as a stream, either you are guaranteed there are no gaps in it, or you have to keep the promise that there are no gaps in the sequence you produce.
To recap, think about a water stream. What is the most prominent characteristic of it?
Continuous!
That is the whole spirit of the stream abstraction.

Techniques for writing critical text data

We take text/CSV-like data over long periods (~days) from costly experiments, and so file corruption is to be avoided at all costs.
Recently, a file was copied from Explorer on XP while an experiment was in progress, and the data was partially lost, presumably due to a multiple-access conflict.
What are some good techniques to avoid such loss? - We are using Delphi on Windows XP systems.
Some ideas we came up with are listed below - we'd welcome comments as well as your own input.
Use a database as a secondary data storage mechanism and take advantage of the atomic transaction mechanisms
How about splitting the large file into separate files, one for each day.
If these machines are on a network: send a HTTP post with the logging data to a webserver.
(sending UDP packets would be even simpler).
Make sure you only copy old data. If you have a timestamp on the filename with a 1 hour resolution, you can safely copy the data older than 1 hour.
If a write fails, cache the result for a later write - so if the file is opened externally, the data is still stored internally, or could even be written to disk somewhere else
I think what you're looking for is the Win32 CreateFile API, with these flags:
FILE_FLAG_WRITE_THROUGH : Write operations will not go through any intermediate cache; they will go directly to disk.
FILE_FLAG_NO_BUFFERING : The file or device is being opened with no system caching for data reads and writes. This flag does not affect hard disk caching or memory mapped files.
There are strict requirements for successfully working with files opened with CreateFile using the FILE_FLAG_NO_BUFFERING flag, for details see File Buffering.
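The question is Delphi, but for illustration the same write-through flag is also reachable from .NET without P/Invoke (a sketch; the path and data are placeholders, and FILE_FLAG_NO_BUFFERING has no supported FileOptions equivalent, so it is only noted in a comment):

using System.IO;
using System.Text;

// FileOptions.WriteThrough maps to FILE_FLAG_WRITE_THROUGH.
// FILE_FLAG_NO_BUFFERING (0x20000000) can be cast into FileOptions, but it is
// unsupported in .NET and requires sector-aligned writes, so it is omitted here.
using var fs = new FileStream(@"C:\data\experiment.csv", FileMode.Append,
    FileAccess.Write, FileShare.Read, 4096, FileOptions.WriteThrough);
byte[] line = Encoding.ASCII.GetBytes("12.5,0.003\r\n"); // placeholder data point
fs.Write(line, 0, line.Length);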
Each experiment must use a 'work' file and a 'done' file. The work file is opened exclusively, and the done file is copied to a place on the network. An application on the receiving machine would feed those files into a database. If Explorer tries to move or copy the work file, it will receive an 'Access denied' error.
The 'work' file becomes 'done' after a certain period (say, 6/12/24 hours, or whatever period). Then the application creates another work file (the name must contain the timestamp) and sends the 'done' file through the network (or a human can do that, which is what you are doing now, if I understand your text correctly).
Copying a file while it is in use is asking for it to be corrupted.
Write data to a buffer file in an obscure directory and copy the data to the 'public' data file periodically (every 10 points for instance), thereby reducing writes and also providing a backup
Write data points discretely, i.e. open and close the file handle for every data-point write - this reduces the amount of time the file is being held open, provided the data points arrive infrequently
