How can I extend the length of a memory-mapped file? - delphi

In Delphi 7, I open a file with CreateFileMapping then get a pointer by using MapViewOfFile.
How can I expand the mapped memory, add some characters in the middle, and have them saved to that file?
I have already opened the file with the appropriate modes (fmOpenReadWrite, PAGE_READWRITE), and if I overwrite existing characters they get saved to the file, but I need to insert extra values in the middle of the file.

If the file mapping is backed by an actual file and not a block of memory, then you can resize the file in one of two ways (see the sketch below):
- call CreateFileMapping() with a size that exceeds the current file size. The file will be resized to match the new mapping.
- use SetFilePointer() and SetEndOfFile() to resize the file directly, then call CreateFileMapping() with the new size.
Both behaviors are described in the documentation for CreateFileMapping().
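For instance, a minimal Delphi sketch of the first approach (hFile is the handle from CreateFile(); OldSize, ExtraBytes, InsertPos and the string NewData are assumptions standing in for your own variables; error checking omitted):

// Unmap and close the current mapping; a mapping cannot grow in place.
UnmapViewOfFile(pView);
CloseHandle(hMapping);

// Recreate the mapping larger than the file; the file grows to match.
hMapping := CreateFileMapping(hFile, nil, PAGE_READWRITE,
  0, OldSize + ExtraBytes, nil);
pView := MapViewOfFile(hMapping, FILE_MAP_WRITE, 0, 0, 0);

// Shift the tail up by ExtraBytes to open a gap in the middle
// (System.Move handles overlapping regions), then fill the gap.
Move((PChar(pView) + InsertPos)^,
  (PChar(pView) + InsertPos + ExtraBytes)^,
  OldSize - InsertPos);
Move(NewData[1], (PChar(pView) + InsertPos)^, ExtraBytes);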

You cannot resize a file mapping once it has been created with CreateFileMapping(). See this earlier discussion on the topic: Windows: Resize shared memory.

Related

How to include a photo in Moderncv Casual

Well, to start with, I don't know much about LaTeX. I am failing to include a picture in the document using "Moderncv Casual". A lot of the CV and cover letter templates use:
\photo[64pt][0.4pt]{filename}
What's the deal with this? Isn't it just a matter of typing the picture's filename, compiling, and the picture gets added to the document?
That's exactly it. The \photo macro is set up in such a way that it stores your input and makes it part of the CV title (set with \makecvtitle).
The reasoning behind this is to provide the end user with a generic command that captures a picture. However, depending on the template used, this picture may appear on the left/right/middle (or wherever). The generic input abstracts this placement away from the rest of the code.
Specific to the command \photo: it is defined inside the class file moderncv.cls as:
\NewDocumentCommand{\photo}{O{64pt}O{0.4pt}m}
  {\def\@photowidth{#1}\def\@photoframewidth{#2}\def\@photo{#3}}
An input like
\photo[64pt][0.4pt]{filename}
defines the photo to be kept in \@photo (it references the image file filename, with an image extension), to have a width of 64pt (stored in \@photowidth) and a frame width of 0.4pt (stored in \@photoframewidth).
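As a usage sketch (the file name jane and the personal details are placeholders), note that the photo only shows up once \makecvtitle is issued:

\documentclass[11pt,a4paper]{moderncv}
\moderncvstyle{casual}
\name{Jane}{Doe}          % moderncv requires \name for the title
\photo[64pt][0.4pt]{jane} % stores jane.jpg (or .png) for later use
\begin{document}
\makecvtitle              % the stored photo is typeset here
\end{document}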

How to overwrite, or delete, the file used by writefile() calls?

I use the following to save screen output to a file:
writefile("file.txt"),
tex(expression),
closefile()
The above sends the output of tex() to the file automatically, which is all well and good and what I want. (Side point: it also sends an annoying NIL line to the file each time, which I had to parse out later.)
Now, when running the above code again, the file is appended to, which is not what I want. I want to either overwrite the file each time or, if there is a way to delete the file, delete it beforehand.
I looked at the help and was not able to find a command to delete a file, and I also see no option to tell writefile() to overwrite the file.
Is there an option or way around this? I am on Windows 7, Maxima version 5.36.1, Lisp: SBCL 1.2.7.
I guess you are trying to capture the output of tex into a file. If so, here are a couple of other ways to do it:
tex (expr, destination);
where destination is either a file name (to which output is appended) or a stream, as created by opena or openw and closed by close. By the way, destination could also be false, in which case tex returns a string.
with_stdout (destination, tex (expr));
where again destination is either a file name (which is appended to or clobbered, as determined by the global flag file_output_append) or a stream.
with_stdout could be useful if you want to mix in some output not generated by tex, e.g., print("% some commentary");.
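For example, a sketch that overwrites file.txt on every run (the expression x^2 + 1 is a stand-in for your own):

/* explicit stream: openw truncates an existing file, opena appends */
s : openw("file.txt")$
tex(x^2 + 1, s)$
close(s)$

/* or let with_stdout clobber the file via the global flag */
file_output_append : false$
with_stdout("file.txt", tex(x^2 + 1))$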

HDFS Flume sink - Roll by File

Is it possible for HDFS Flume sink to roll whenever a single file (from a Flume source, say Spooling Directory) ends, instead of rolling after certain bytes (hdfs.rollSize), time (hdfs.rollInterval), or events (hdfs.rollCount)?
Can Flume be configured so that a single file is a single event?
Thanks for your input.
Regarding your first question, it is not possible, because the sink's logic is disconnected from the source's logic. I mean, a sink only sees events being put into the channel that it must process; the sink does not know whether an event is the first or the last one belonging to a file.
Of course, you could try to create your own source (or extend an existing one) in order to add a header to the event with a value meaning "this is the last event". Then another custom sink could behave depending on such a header: for instance, if the header is not set, the events are not persisted but stored in memory until the header is seen; then all the information is persisted in the final backend as a batch. Another possibility is that the custom sink persists the data to a file until the header is seen; then the file is closed and another one is opened.
Regarding your second question, it depends on the sink. The spooldir source behaves based on the deserializer parameter; by default its value is LINE, which means:
Specify the deserializer used to parse the file into events. Defaults to parsing each line as an event. The class specified must implement EventDeserializer.Builder.
But other custom Java classes can be configured, as said above; for instance, a deserializer for the whole file.
You can set rollSize to a small number combined with BlobDeserializer to load files one by one instead of combining them into blocks. This is really helpful when you have unsplittable binary files such as PDF or gz files.
This is the relevant part of the configuration:
# Set the deserializer to BlobDeserializer and the maximum blob size to 1 GB.
# Note that blobs have to fit in memory, so this does not work for files that cannot fit in memory.
agent.sources.spool.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
agent.sources.spool.deserializer.maxBlobLength = 1000000000
# Set rollSize to 1024 to avoid combining multiple small files into one part.
agent.sinks.hdfsSink.hdfs.rollSize = 1024
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.rollInterval = 0
The answer to the question "Can Flume be configured so that a single file is a single event?" is yes.
You only have to configure the following property to be 1:
hdfs.rollCount = 1
I'm still looking for a solution to your first question, because sometimes the file is too big and needs to be split into several chunks.
You can use any event header in hdfs.path (https://flume.apache.org/FlumeUserGuide.html#hdfs-sink).
If you are using the Spooling Directory Source, you can enable putting the file name in the events using fileHeaderKey or basenameHeaderKey (https://flume.apache.org/FlumeUserGuide.html#spooling-directory-source), as in the sketch below.
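For example, a configuration sketch (the agent, source, and sink names are assumptions) that tags each event with the source file's base name and substitutes it into the HDFS path, so each input file lands under its own directory:

# Spooling Directory Source: put the file's base name into a header
agent.sources.spool.basenameHeader = true
agent.sources.spool.basenameHeaderKey = basename
# HDFS sink: use that header when building the output path
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events/%{basename}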
Can Flume be configured so that a single file is a single event?
It could be; however, it is not recommended. The underlying implementation (protobuf) limits file sizes to 64 MB. Flume events are meant to be small in size due to its architecture and design (fault tolerance, etc.).

Delphi overwrite file and wrong modified date time

I'd like to get a file last modified time in Delphi.
Normally something like FileAge() would do the trick; the only problem is: if I overwrite File A with File B using CopyFile, File A's modified date is not updated with the time of the overwrite, as it should be(?)
I get that CopyFile also copies file attributes, but I really need to get a modified date that also works when a file is overwritten.
Is there such a function? My whole application relies on modification times to decide whether or not I should proceed with files!
EDIT: Just to clarify, I'm only monitoring the files. It's not my application that's modifying them.
The documentation for CopyFile says:
File attributes for the existing file are copied to the new file.
Which means that you cannot base your program on the last modified attribute of the file, or indeed on any attribute of the file. Indeed, there are all sorts of ways for the last modified attribute of a file to change; it can in fact go backwards in time.
Instead, I suggest that you use ReadDirectoryChangesW to keep track of modifications. That will allow you to receive notifications whenever a file is modified, and you can write your program in an event-driven manner on top of the ReadDirectoryChangesW API.
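A minimal synchronous sketch (the watched directory C:\Watched is an assumption; production code would run the loop in a worker thread and check for errors):

type
  // FILE_NOTIFY_INFORMATION from the Windows SDK, declared here because
  // older Delphi versions do not ship it in Windows.pas
  PFileNotifyInformation = ^TFileNotifyInformation;
  TFileNotifyInformation = record
    NextEntryOffset: DWORD;            // 0 on the last entry
    Action: DWORD;
    FileNameLength: DWORD;             // length in bytes
    FileName: array[0..0] of WideChar; // not null-terminated
  end;
var
  hDir: THandle;
  Buf: array[0..4095] of Byte;
  Returned: DWORD;
  Info: PFileNotifyInformation;
begin
  hDir := CreateFile('C:\Watched', FILE_LIST_DIRECTORY,
    FILE_SHARE_READ or FILE_SHARE_WRITE or FILE_SHARE_DELETE,
    nil, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, 0);
  while ReadDirectoryChangesW(hDir, @Buf, SizeOf(Buf), False,
    FILE_NOTIFY_CHANGE_LAST_WRITE or FILE_NOTIFY_CHANGE_SIZE,
    @Returned, nil, nil) do
  begin
    // walk the entries: each Info^.FileName names a changed file, and
    // Info^.NextEntryOffset chains to the next entry in Buf
    Info := @Buf;
  end;
  CloseHandle(hDir);
end;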
If you can't use ReadDirectoryChangesW, and the file attributes are unreliable, then you'll have to base your decisions on the contents of the files.

How do I create a zip file of a given compressed size in Rails

I have a pile of records that need to be converted to XML and then zipped up into a file, so I can send it on to a server that is expecting said records.
The problem I have is that the server can only accept files that are smaller than a given size. Let's say, for argument's sake, 10 MB.
require 'zip/zip'
Zip::ZipOutputStream.open("tmp/myfile_#{Process.pid}.zip") do |zos|
  i_xml.each_with_index do |xml, index|
    zos.put_next_entry("#{index}.xml")
    zos << xml
  end
end
The code above creates the zip file perfectly, but I don't see how I can get the compressed size.
I can give some leeway for the zip header and such, so once I can tell how big my output is, I could tinker. It's just that getting that size seems not to be in the cards for this class.
Note: I've tried installing zipRuby because it has a compressed-size method, but that just leads me down another rabbit hole: native extensions and such.
I can't see anything in the Zip library to do this, sorry.
Consider, if you can:
- pushing further with getting zipRuby to compile
- breaking the finished zip file into fixed-size chunks with simple File.read calls and putting the chunks back together at the server (see the sketch after this list)
- limiting the size of the zip file by limiting the number of files added, e.g. add files until the size limit is exceeded, then remove the last file added and add it to a new zip file instead
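A sketch of the chunking option (the chunk size and file names are illustrative); the parts can be reassembled on the server by simple concatenation:

CHUNK_BYTES = 10 * 1024 * 1024  # the server's 10 MB limit

zip_path = "tmp/myfile_#{Process.pid}.zip"
File.open(zip_path, 'rb') do |zip|
  part = 0
  while (data = zip.read(CHUNK_BYTES))
    # zero-padded suffix so shell globs list the parts in order
    File.open("#{zip_path}.part#{format('%03d', part)}", 'wb') { |f| f.write(data) }
    part += 1
  end
end
# reassemble later with: cat myfile_<pid>.zip.part* > restored.zip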
