I found a way to write an AVI from BMP files:
http://www.delphi3000.com/articles/article_2770.asp?SK=
How can I write an AVI from an array or TList of TBitmaps?
The key portion of the code you linked to is below, where IList is a TStrings with the names of all the files to include in the animation.
for i := 0 to IList.Count - 1 do begin
  AssignFile(BFile, IList[i]);
  Reset(BFile, 1);
  Seek(BFile, m_bfh.bfOffBits);
  BlockRead(BFile, m_MemBits[0], m_Bih.biSizeImage);
  Seek(BFile, SizeOf(m_Bfh));
  BlockRead(BFile, m_MemBitMapInfo[0], Length(m_MemBitMapInfo));
  CloseFile(BFile);
  if AVIStreamWrite(psCompressed, i, 1, @m_MemBits[0],
       m_Bih.biSizeImage, AVIIF_KEYFRAME, 0, 0) <> 0 then begin
    ShowMessage('Error during Write AVI File');
    break;
  end;
end;
It reads portions of the file from disk and writes them to the AVI stream. The important part is that it reads from the files. The in-memory representation of a TBitmap doesn't necessarily match the representation of a file. However, it's easy to adapt the given code to temporarily store the bitmaps in a memory stream; the stream's contents will match what the layout of the file would be. Suppose IList is now an array of TBitmap, as you suggested. Then we could use this:
var
  i: Integer;
  ms: TMemoryStream;
begin
  ms := TMemoryStream.Create;
  try
    for i := 0 to Length(IList) - 1 do begin
      IList[i].SaveToStream(ms);
      ms.Position := m_bfh.bfOffBits;
      ms.ReadBuffer(m_MemBits[0], m_Bih.biSizeImage);
      ms.Position := SizeOf(m_Bfh);
      ms.ReadBuffer(m_MemBitMapInfo[0], Length(m_MemBitMapInfo));
      ms.Clear;
      if AVIStreamWrite(psCompressed, i, 1, @m_MemBits[0],
           m_Bih.biSizeImage, AVIIF_KEYFRAME, 0, 0) <> 0 then begin
        ShowMessage('Error during Write AVI File');
        break;
      end;
    end;
  finally
    ms.Free;
  end;
end;
There's code earlier in your cited example that reads the first file in the list to populate the various records and size the arrays used here, but you should be able to make the same changes there as I have done to the code shown here.
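For instance, the setup code might be adapted along these lines. This is a minimal sketch only; it assumes the same record and buffer names as the article (m_Bfh, m_Bih, m_MemBits, m_MemBitMapInfo, with the buffers being dynamic arrays of Byte), and that IList contains at least one bitmap:

var
  ms: TMemoryStream;
begin
  ms := TMemoryStream.Create;
  try
    // Use the first bitmap to populate the file and info headers.
    IList[0].SaveToStream(ms);
    ms.Position := 0;
    ms.ReadBuffer(m_Bfh, SizeOf(m_Bfh));
    ms.ReadBuffer(m_Bih, SizeOf(m_Bih));
    // Size the buffers from the headers, as the article does with the first file.
    SetLength(m_MemBits, m_Bih.biSizeImage);
    SetLength(m_MemBitMapInfo, m_Bfh.bfOffBits - SizeOf(m_Bfh));
  finally
    ms.Free;
  end;
end;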
With the example below I am counting the number of lines in a .dfm file, and the count comes out wrong because the .dfm is saved in binary format.
If I open the .dfm file, right-click, and set 'Text DFM' to checked, the count comes out correctly. Below is the code:
function TForm1.FindNumberOfLinesInFile(FileName: String): Integer;
var
  contents: TStringList;
  fileStream: TFileStream;
  outStream: TMemoryStream;
begin
  contents := nil;
  fileStream := nil;
  try
    try
      Result := 0;
      contents := TStringList.Create;
      if edtFileToSearch.Text = '.dfm' then
      begin
        contents.LoadFromFile(FileName);
        // i am binary
        if Pos('OBJECT', UpperCase(contents[0])) = 0 then // count is coming wrong with this
        begin
          contents.Clear;
          fileStream := TFileStream.Create(FileName, fmShareDenyNone);
          outStream := TMemoryStream.Create;
          try
            ObjectResourceToText(fileStream, outStream);
            outStream.Position := 0;
            contents.LoadFromStream(outStream);
          finally
            FreeAndNil(outStream);
          end;
        end
        else
        begin
          fileStream := TFileStream.Create(FileName, fmShareDenyNone);
          contents.LoadFromStream(fileStream);
        end;
      end
      else
      begin
        fileStream := TFileStream.Create(FileName, fmShareDenyNone);
        contents.LoadFromStream(fileStream);
      end;
      Result := contents.Count;
    finally
      FreeAndNil(fileStream);
      FreeAndNil(contents);
    end;
  except
    on E: Exception do
      Result := -1;
  end;
end;
I have two questions:
1) How do I set the 'Text DFM' value to checked in all DFM files (I have around 1000 DFM files)?
2) How do I load a binary file correctly and count the number of lines?
Delphi comes with a command line tool to do this, named convert. Open up a command prompt and ensure that your Delphi bin directory is in the PATH. Then type:
C:\projects\myproject> convert
The output will be something like this:
Delphi Form Conversion Utility Version 5.0
Copyright (c) 1995,99 Inprise Corporation
Usage: convert.exe [-i] [-s] [-t | -b]
-i Convert files in-place (output overwrites input)
-s Recurse subdirectories
-t Convert to text
-b Convert to binary
So, you should be able to write:
C:\projects\myproject> convert -i -s -t *.dfm
to effect the change required.
David's answer addresses the first of your questions: You can convert all of your existing binary DFM's to text using the command line tool provided with Delphi.
As well as addressing your immediate problem this is also highly recommended as it will make it much easier (i.e. possible at all!) to visually diff changes to your DFM files in version control.
As for the second part, if for some reason you still want or need to handle binary DFM files in your code, the approach is to use the TestStreamFormat() function to determine whether a stream is a valid resource stream and whether it is in binary or text format, and to call the ObjectResourceToText() function only if required.
This helper procedure, which returns the contents of a specified DFM file into a supplied TStrings (e.g. a TStringList), demonstrates this and might simplify things for you:
procedure GetDfmIntoStrings(aFilename: String; aStrings: TStrings);
var
  istrm, ostrm: TStream;
begin
  ostrm := nil;
  istrm := TFileStream.Create(aFilename, fmOpenRead or fmShareDenyNone);
  try
    case TestStreamFormat(istrm) of
      sofBinary: begin
        ostrm := TStringStream.Create('');
        ObjectResourceToText(istrm, ostrm);
      end;
      sofText: ostrm := istrm;
    else
      raise EFilerError.Create(aFilename + ' is not a valid resource stream (DFM)');
    end;
    ostrm.Position := 0;
    aStrings.LoadFromStream(ostrm);
  finally
    if ostrm <> istrm then
      ostrm.Free;
    istrm.Free;
  end;
end;
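With that helper in place, the line-counting function from the question reduces to something like this (a sketch, keeping the question's method name):

function TForm1.FindNumberOfLinesInFile(FileName: String): Integer;
var
  contents: TStringList;
begin
  contents := TStringList.Create;
  try
    try
      // GetDfmIntoStrings handles both binary and text DFMs.
      GetDfmIntoStrings(FileName, contents);
      Result := contents.Count;
    except
      on E: Exception do
        Result := -1;
    end;
  finally
    contents.Free;
  end;
end;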
I would like to use SaveToStream to save a ClientDataSet ALONG WITH OTHER MATERIAL. Here is a short sample:
filename := ChangeFileExt(Application.ExeName, '.dat');
FS := TFileStream.Create(filename, fmCreate);
CDS.SaveToStream(FS);
ShowMessage('After save, position is ' + IntToStr(FS.Position));
{now write a longint}
L := 1234;
siz := SizeOf(L);
FS.Write(L, siz);
FS.Free;
But when I try to load this back in using LoadFromStream, and I again display the position after the ClientDataSet has been loaded, I see that the position is now 4 bytes AFTER the clientdataset was originally saved. It seems that CDS.LoadFromStream just plows ahead and consumes whatever follows it. As a result, when I then try to read the longint, I get an end of file error.
It is not sufficient to just use the CDS.SaveToStream at the end of creating a file, because what I'd really like to do is to save TWO clientdatasets to the file, one after the other, plus other material.
Ideas? Thanks.
[NB: this solution essentially duplicates work that ReadDataPacket/WriteDataPacket (TLama's suggestion) already do internally. I would use TLama's approach, i.e. sub-class TClientDataSet to expose those protected methods, and use the WriteSize parameter.]
Save the datasets to a temporary stream and then copy that to your destination stream with size information:
procedure InternalSaveToStream(AStream: TStream);
var
  ATempStream: TMemoryStream;
  ASize: Int64;
begin
  ATempStream := TMemoryStream.Create;
  try
    // Save first dataset:
    DataSet1.SaveToStream(ATempStream, dfBinary);
    ASize := ATempStream.Size;
    AStream.WriteData(ASize);
    ATempStream.Position := 0;
    AStream.CopyFrom(ATempStream, ASize);
    ATempStream.Clear;
    // Save second dataset:
    DataSet2.SaveToStream(ATempStream, dfBinary);
    ASize := ATempStream.Size;
    AStream.WriteData(ASize);
    ATempStream.Position := 0;
    AStream.CopyFrom(ATempStream, ASize);
    ATempStream.Clear;
  finally
    ATempStream.Free;
  end;
end;
To read back, first read the size and then copy that section of your source to a temporary stream again and load your dataset from that:
procedure InternalLoadFromStream(AStream: TStream);
var
  ATempStream: TMemoryStream;
  ASize: Int64;
begin
  ATempStream := TMemoryStream.Create;
  try
    // Load first dataset:
    AStream.Read(ASize, SizeOf(ASize));
    ATempStream.CopyFrom(AStream, ASize);
    ATempStream.Position := 0;
    DataSet1.LoadFromStream(ATempStream);
    // ...etc.
  finally
    ATempStream.Free;
  end;
end;
The overall goal is to use a part of each file to compute a checksum, in order to find duplicated movie and MP3 files.
For this I have to take a part of the file and generate the MD5 from it, because whole files can be up to 25 GB in some cases. If I find duplicates, I will then do a complete MD5 to avoid deleting the wrong file by mistake.
I don't have any problem generating the MD5 from a stream; that is done with Indy components.
So, for the first part, I have to copy the first 1 MB of a file, and I made this function, but the memory stream is empty for all checks!
function splitFile(FileName: string): TMemoryStream;
var
  fs: TFileStream;
  ms: TMemoryStream;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  ms := TMemoryStream.Create;
  fs.Position := 0;
  ms.CopyFrom(fs, 1048576);
  Result := ms;
end;
How can I fix this? Or where is my problem?
Update 1 (dirty test):
This code returns a 'stream read error'; also, Memo2 shows some text but Memo3 is empty!
function splitFile(FileName: string): TMemoryStream;
var
  fs: TFileStream;
  ms: TMemoryStream;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  ms := TMemoryStream.Create;
  fs.Position := 0;
  form1.Memo2.Lines.LoadFromStream(fs);
  ms.CopyFrom(fs, 1048576);
  ms.Position := 0;
  form1.Memo3.Lines.LoadFromStream(ms);
  Result := ms;
end;
The complete code:
function splitFile(FileName: string): TMemoryStream;
var
  fs: TFileStream;
  ms: TMemoryStream;
  i, BytesToRead: integer;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  ms := TMemoryStream.Create;
  fs.Position := 0;
  BytesToRead := Min(fs.Size - fs.Position, 1024 * 1024);
  ms.CopyFrom(fs, BytesToRead);
  Result := ms;
  // fs.Free;
  // ms.Free;
end;

function streamFile(FileName: string): TFileStream;
var
  fs: TFileStream;
  ms: TMemoryStream;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  Result := fs;
end;

function GetFileMD5(const Stream: TStream): String; overload;
var
  MD5: TIdHashMessageDigest5;
begin
  MD5 := TIdHashMessageDigest5.Create;
  try
    Result := MD5.HashStreamAsHex(Stream);
  finally
    MD5.Free;
  end;
end;

function getMd5HashString(value: string): string;
var
  hashMessageDigest5: TIdHashMessageDigest5;
begin
  hashMessageDigest5 := nil;
  try
    hashMessageDigest5 := TIdHashMessageDigest5.Create;
    Result := IdGlobal.IndyLowerCase(hashMessageDigest5.HashStringAsHex(value));
  finally
    hashMessageDigest5.Free;
  end;
end;

procedure TForm1.Button1Click(Sender: TObject);
var
  Path, hash: String;
  SR: TSearchRec;
begin
  if od1.Execute then
  begin
    Path := ExtractFileDir(od1.FileName); // get the path of the selected file
    DirList := TStringList.Create;
    try
      if FindFirst(Path + '\*.*', faArchive, SR) = 0 then
      begin
        repeat
          if (SR.Size > 10240) then
          begin
            hash := GetFileMD5(splitFile(Path + '\' + SR.Name));
          end
          else
          begin
            hash := GetFileMD5(streamFile(Path + '\' + SR.Name));
          end;
          memo1.Lines.Add(hash + ' | ' + SR.Name + ' | ' + IntToStr(SR.Size));
          Application.ProcessMessages;
        until FindNext(SR) <> 0;
        FindClose(SR);
      end;
    finally
      DirList.Free;
    end;
  end;
end;
Output:
D41D8CD98F00B204E9800998ECF8427E | eslahat.docx | 13338
D41D8CD98F00B204E9800998ECF8427E | EXT-3000-Data-Sheet.pdf | 682242
D41D8CD98F00B204E9800998ECF8427E | faktor khate ekhtesasi firoozpoor.pdf | 50091
D41D8CD98F00B204E9800998ECF8427E | FileZilla_3.9.0.5_win32-setup.exe | 6057862
D41D8CD98F00B204E9800998ECF8427E | FileZilla_3.9.0.6_win32-setup.exe | 6126536
11210486C9E54E12DA9DF687792257EA | get_stats_of_all_members_of_mu(1).php | 6227
11210486C9E54E12DA9DF687792257EA | get_stats_of_all_members_of_mu.php | 6227
D41D8CD98F00B204E9800998ECF8427E | GOMAUDIOGLOBALSETUP.EXE | 6855616
D41D8CD98F00B204E9800998ECF8427E | harvester-master(1).zip | 54255
D41D8CD98F00B204E9800998ECF8427E | harvester-master.zip | 54180
Here is a procedure that I quickly wrote for you which allows you to read part of a file (a chunk) into a memory stream.
The reason why I made this a procedure and not a function is so that it is possible to reuse the same memory stream for different chunks. This way you avoid all those memory allocations/deallocations and also reduce the chance of introducing a memory leak.
In order to be able to do so, you need to pass the memory stream to the procedure as a var parameter.
I also added two more parameters: one for specifying the chunk size (the amount of data that you want to read from the file) and one for the chunk number.
I also added some rudimentary safeguards to tell you when you try to read a chunk that is beyond the end of the file, plus the ability to automatically reduce the size of the last chunk, since not all file sizes are multiples of your chunk size (in your case, not all files are exactly X megabytes in size, where X is any valid integer).
procedure readFileChunk(FileName: string; var MS: TMemoryStream; ChunkNo: Integer; ChunkSize: Int64);
var
  fs: TFileStream;
begin
  fs := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    if ChunkSize * (ChunkNo - 1) <= fs.Size then
    begin
      fs.Position := ChunkSize * (ChunkNo - 1);
      if fs.Position + ChunkSize <= fs.Size then
        MS.CopyFrom(fs, ChunkSize)
      else
        MS.CopyFrom(fs, fs.Size - fs.Position);
    end
    else
      MessageBox(Form2.WindowHandle, 'File does not have so many chunks', 'WARNING!', MB_OK);
  finally
    fs.Free;
  end;
end;
You use this procedure by calling:
readFileChunk(FileName,MemoryStream,ChunkNumber,ChunkSize);
Make sure you have already created the memory stream before calling this procedure.
Also, if you want to reuse the same memory stream multiple times, don't forget to set its position to 0 before calling this procedure; otherwise new data will be appended to the end of the stream, which in turn keeps increasing the memory stream's size.
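For example (a sketch; the file name is hypothetical):

var
  MS: TMemoryStream;
begin
  MS := TMemoryStream.Create;
  try
    // read the first 1 MB chunk of the file into MS
    readFileChunk('C:\movies\sample.avi', MS, 1, 1024 * 1024);
    MS.Position := 0; // rewind before hashing or reading from MS
    // ... hash or otherwise process MS here ...
  finally
    MS.Free;
  end;
end;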
UPDATE:
After doing some trials, I found out that the problem resides in your GetFileMD5 method.
I can't explain exactly why this is happening, but when a TMemoryStream is passed to the TStream parameter, the MD5 hashing algorithm treats it as if it were empty.
When I changed the parameter type to TMemoryStream instead, the code worked, but you could no longer pass a TFileStream to the GetFileMD5 method, so it broke the hash generation from entire files that worked before.
SOLUTION:
After doing some more digging, I have GREAT news for you.
You don't even need to use TMemoryStreams. The HashStreamAsHex function can accept two optional parameters which allow you to define the starting point of your data and the size of the data block from which you want to generate the MD5 hash string. This also works with TFileStream.
So, in order to generate an MD5 hash string from just a small part of your file, call this:
MD5.HashStreamAsHex(Stream, StartPosition, DataSize);
StartPosition specifies the initial offset into the stream for the hashing operation. When StartPosition contains a positive non-zero value, the stream position is moved to the specified offset prior to calculating the hash value. When StartPosition contains the value -1, the current position of the stream is used as the initial offset into the specified stream.
DataSize indicates the number of bytes from the stream to include in the hashing operation. When DataSize contains a negative value (<0), the bytes remaining from the current stream position are used for the hashing operation. Otherwise, the number of bytes in DataSize is used. If DataSize is larger than the size of the stream, the smaller of the two values is used for the operation.
In your case, for getting the MD5 hash of the first megabyte, you would call:
MD5.HashStreamAsHex(Stream, 0, 1024*1024);
Now I believe you can modify the rest of your code to get this working as you want. If not, do tell where it stopped and I will help you.
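Putting it together, a partial-file hashing helper might look like this. This is a minimal sketch; the function name is mine, and it assumes Indy's IdHashMessageDigest unit as already used elsewhere in your code:

uses
  Classes, SysUtils, IdHashMessageDigest;

function GetPartialFileMD5(const FileName: string): string;
var
  FS: TFileStream;
  MD5: TIdHashMessageDigest5;
begin
  FS := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    MD5 := TIdHashMessageDigest5.Create;
    try
      // Hash only the first megabyte; for smaller files HashStreamAsHex
      // uses the remaining stream size instead.
      Result := MD5.HashStreamAsHex(FS, 0, 1024 * 1024);
    finally
      MD5.Free;
    end;
  finally
    FS.Free;
  end;
end;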
I'm assuming that your code does not raise an exception. If it did you surely would have mentioned that. I also assume that the file is large enough for your attempted read.
Your code does copy. If the call to CopyFrom does not raise an exception, then the memory stream contains the first 1048576 bytes of the file.
However, after the call to CopyFrom, the memory stream's pointer is at the end of the stream so if you read from it you will not be able to read anything. Perhaps you need to move the stream pointer to the beginning:
ms.Position := 0;
And then read from the memory stream.
1MB = 1024*1024, FWIW.
Update
Probably my assumptions above were incorrect. It seems likely that your code raises an exception because you attempt to read beyond the end of the file.
What you really seem to want to do is read as much of the first part of the file as possible. That's a two-liner.
BytesToRead := Min(Source.Size-Source.Position, 1024*1024);
Dest.CopyFrom(Source, BytesToRead);
I want to process a text file line by line. In the olden days I loaded the file into a TStringList:

slFile := TStringList.Create;
slFile.LoadFromFile(filename);
for i := 0 to slFile.Count - 1 do
begin
  oneLine := slFile.Strings[i];
  // process the line
end;

The problem with that is that once the file gets to be a few hundred megabytes, I have to allocate a huge chunk of memory, when really I only need enough memory to hold one line at a time. (Plus, you can't really indicate progress while the system is locked up loading the file in step 1.)
Then I tried using the native, and recommended, file I/O routines provided by Delphi:

var
  f: TextFile;
begin
  AssignFile(f, filename);
  Reset(f);
  while not Eof(f) do
  begin
    ReadLn(f, oneLine);
    // process the line
  end;
  CloseFile(f);
end;
The problem with AssignFile is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former TStringList example doesn't support no-lock either, unless you change it to LoadFromStream:
slFile := TStringList.Create;
stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone);
slFile.LoadFromStream(stream);
stream.Free;
for i := 0 to slFile.Count - 1 do
begin
  oneLine := slFile.Strings[i];
  // process the line
end;
So now, even though I've gained no locks being held, I'm back to loading the entire file into memory.
Is there some alternative to AssignFile/ReadLn where I can read a file line by line without taking a sharing lock?
I'd rather not get directly into Win32 CreateFile/ReadFile and have to deal with allocating buffers and detecting CR, LF, and CRLF line endings.
I thought about memory-mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly.
I just want Reset with fmShareDenyNone!
With recent Delphi versions, you can use TStreamReader. Construct it with your file stream, and then call its ReadLine method (inherited from TTextReader).
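A minimal sketch, assuming a Delphi version (2009 or later) where TStreamReader is available:

var
  FS: TFileStream;
  Reader: TStreamReader;
  oneLine: string;
begin
  FS := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone);
  try
    Reader := TStreamReader.Create(FS);
    try
      while not Reader.EndOfStream do
      begin
        oneLine := Reader.ReadLine;
        // process the line
      end;
    finally
      Reader.Free;
    end;
  finally
    FS.Free;
  end;
end;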
An option for all Delphi versions is to use Peter Below's StreamIO unit, which gives you AssignStream. It works just like AssignFile, but for streams instead of file names. Once you've used that function to associate a stream with a TextFile variable, you can call ReadLn and the other I/O functions on it just like any other file.
You can use this sample code:
type
  TTextStream = class(TObject)
  private
    FHost: TStream;
    FOffset, FSize: Integer;
    FBuffer: array[0..1023] of Char;
    FEOF: Boolean;
    function FillBuffer: Boolean;
  protected
    property Host: TStream read FHost;
  public
    constructor Create(AHost: TStream);
    destructor Destroy; override;
    function ReadLn: string; overload;
    function ReadLn(out Data: string): Boolean; overload;
    property EOF: Boolean read FEOF;
    property HostStream: TStream read FHost;
    property Offset: Integer read FOffset write FOffset;
  end;
{ TTextStream }

constructor TTextStream.Create(AHost: TStream);
begin
  FHost := AHost;
  FillBuffer;
end;

destructor TTextStream.Destroy;
begin
  FHost.Free;
  inherited Destroy;
end;

function TTextStream.FillBuffer: Boolean;
begin
  FOffset := 0;
  FSize := FHost.Read(FBuffer, SizeOf(FBuffer));
  Result := FSize > 0;
  // EOF is reached once no more data can be read from the host stream
  FEOF := not Result;
end;
function TTextStream.ReadLn(out Data: string): Boolean;
var
  Len, Start: Integer;
  EOLChar: Char;
begin
  Data := '';
  Result := False;
  repeat
    if FOffset >= FSize then
      if not FillBuffer then
        Exit; // no more data to read from stream -> exit
    Result := True;
    Start := FOffset;
    while (FOffset < FSize) and (not (FBuffer[FOffset] in [#13, #10])) do
      Inc(FOffset);
    Len := FOffset - Start;
    if Len > 0 then
    begin
      SetLength(Data, Length(Data) + Len);
      Move(FBuffer[Start], Data[Succ(Length(Data) - Len)], Len);
    end;
  until FOffset <> FSize; // EOL char found
  EOLChar := FBuffer[FOffset];
  Inc(FOffset);
  if FOffset = FSize then
    if not FillBuffer then
      Exit;
  if FBuffer[FOffset] in ([#13, #10] - [EOLChar]) then
  begin
    Inc(FOffset);
    if FOffset = FSize then
      FillBuffer;
  end;
end;

function TTextStream.ReadLn: string;
begin
  ReadLn(Result);
end;
Usage:
procedure ReadFileByLine(Filename: string);
var
  sLine: string;
  tsFile: TTextStream;
begin
  tsFile := TTextStream.Create(
    TFileStream.Create(Filename, fmOpenRead or fmShareDenyWrite));
  try
    while tsFile.ReadLn(sLine) do
    begin
      // sLine is your line
    end;
  finally
    tsFile.Free;
  end;
end;
If you need support for ANSI and Unicode in older Delphis, you can use my GpTextFile or GpTextStream.
It seems the FileMode variable is not valid for text files, but my tests showed that opening the file for reading multiple times is no problem. You didn't mention it in your question, but if you are not going to write to the text file while it is being read, you should be good.
What I do is use a TFileStream but I buffer the input into fairly large blocks (e.g. a few megabytes each) and read and process one block at a time. That way I don't have to load the whole file at once.
It works quite quickly that way, even for large files.
I do have a progress indicator. As I load each block, I increment it by the fraction of the file that has additionally been loaded.
Reading one line at a time, without something to do your buffering, is simply too slow for large files.
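A minimal sketch of that block-buffered approach (the processing step, including carrying a partial trailing line over to the next block, is left as a comment):

procedure ProcessFileInBlocks(const FileName: string);
const
  BlockSize = 4 * 1024 * 1024; // a few megabytes per block
var
  FS: TFileStream;
  Buffer: array of Byte;
  BytesRead: Integer;
begin
  FS := TFileStream.Create(FileName, fmOpenRead or fmShareDenyNone);
  try
    SetLength(Buffer, BlockSize);
    repeat
      BytesRead := FS.Read(Buffer[0], BlockSize);
      if BytesRead > 0 then
      begin
        // Process Buffer[0..BytesRead - 1] here, keeping any incomplete
        // trailing line for the next iteration.
        // Progress: FS.Position / FS.Size is the fraction loaded so far.
      end;
    until BytesRead < BlockSize;
  finally
    FS.Free;
  end;
end;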
I had the same problem a few years ago, especially the problem of locking the file. What I did was use the low-level ReadFile from the Windows API. I know the question is old (my answer came two years later), but perhaps my contribution can help someone in the future.
const
  BUFF_SIZE = $8000;
var
  dwread: LongWord;
  hFile: THandle;
  apos: Integer;
  myEOF: Boolean;
  datafile: array [0..BUFF_SIZE - 1] of char;
begin
  hFile := CreateFile(PChar(filename), GENERIC_READ, FILE_SHARE_READ or FILE_SHARE_WRITE,
    nil, OPEN_EXISTING, FILE_ATTRIBUTE_READONLY, 0);
  SetFilePointer(hFile, 0, nil, FILE_BEGIN);
  myEOF := false;
  try
    ReadFile(hFile, datafile, BUFF_SIZE, dwread, nil);
    while (dwread > 0) and (not myEOF) do
    begin
      if dwread = BUFF_SIZE then
      begin
        // rewind to just after the last complete line in the buffer
        apos := LastDelimiter(#10#13, datafile);
        if apos = BUFF_SIZE then
          Inc(apos);
        SetFilePointer(hFile, apos - BUFF_SIZE, nil, FILE_CURRENT);
      end
      else
        myEOF := true;
      // process the complete lines in datafile here
      ReadFile(hFile, datafile, BUFF_SIZE, dwread, nil);
    end;
  finally
    CloseHandle(hFile);
  end;
end;
For me the speed improvement appeared to be significant.
Why not simply read the lines of the file directly from the TFileStream itself, one at a time?
i.e. (in pseudocode):
readline:
  while NOT EOF and (readchar <> EOL) do
    appendchar to result

while NOT EOF do
begin
  s := readline
  process s
end;
One problem you may find with this is that, IIRC, TFileStream is not buffered, so performance over a large file is going to be sub-optimal. However, there are a number of solutions to the problem of unbuffered streams, including this one, that you may wish to investigate if this approach solves your initial problem.
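For reference, that pseudocode might translate to something like this unbuffered sketch (the helper names are mine):

function ReadLine(Stream: TStream): string;
var
  C: AnsiChar;
begin
  Result := '';
  // read one character at a time until a line feed or end of stream
  while Stream.Read(C, SizeOf(C)) = SizeOf(C) do
  begin
    if C = #10 then
      Break;
    if C <> #13 then
      Result := Result + Char(C);
  end;
end;

procedure ProcessByLine(Stream: TStream);
var
  s: string;
begin
  while Stream.Position < Stream.Size do
  begin
    s := ReadLine(Stream);
    // process s
  end;
end;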
My application opens files, does transformations, and saves the data out to another file, or possibly the same file. The file size changes, but I don't know how big or small it's going to be until I see the data inside the first file.
At the moment I load the file into a dynamic array, do all that I need to do there, then save it back. This was looking good until I got to my testing stage, where I found that transforming multi-gigabyte files on a system with 128 MB of RAM caused some issues... LOL.
Here is my code:
procedure openfile(fname: string);
var
  myfile: file;
  filesizevalue: integer;
begin
  AssignFile(myfile, fname);
  filesizevalue := GetFileSize(fname);
  Reset(myfile, 1);
  SetLength(dataarray, filesizevalue);
  BlockRead(myfile, dataarray[0], filesizevalue);
  CloseFile(myfile);
end;
What I need is direct file access to minimise RAM usage; that's what I think I need.
Is this what I need, and can it be done in Delphi?
I'd look at using a TFileStream, perhaps one with buffering, but you need to show what you are doing with the data, really, because it is hard to determine the best strategy. As gabr says, one option is memory-mapped files, the code for which is in his link, but since it is my code, I'll add it here too!
procedure TMyReader.InitialiseMapping(szFilename: string);
var
  // nError : DWORD;
  bGood: boolean;
begin
  bGood := False;
  m_hFile := CreateFile(PChar(szFilename), GENERIC_READ, 0, nil, OPEN_EXISTING, 0, 0);
  if m_hFile <> INVALID_HANDLE_VALUE then
  begin
    m_hMap := CreateFileMapping(m_hFile, nil, PAGE_READONLY, 0, 0, nil);
    if m_hMap <> 0 then
    begin
      m_pMemory := MapViewOfFile(m_hMap, FILE_MAP_READ, 0, 0, 0);
      if m_pMemory <> nil then
      begin
        htlArray := Pointer(Integer(m_pMemory) + m_dwDataPosition);
        bGood := True;
      end
      else
      begin
        // nError := GetLastError;
      end;
    end;
  end;
  if not bGood then
    raise Exception.Create('Unable to map token file into memory');
end;
You can also map parts of file directly into memory. That's definitely the most direct way. See What is the fastest way to Parse a line in Delphi for an example.
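For instance, mapping only a window of the file might look roughly like this. A sketch with assumed names; note that the view offset must be a multiple of the system allocation granularity (typically 64 KB) and the requested view must not extend past the end of the file:

const
  ViewOffset = 0;               // must be a multiple of the allocation granularity
  ViewSize   = 4 * 1024 * 1024; // map 4 MB at a time
var
  hFile, hMap: THandle;
  pView: Pointer;
begin
  hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, 0, 0);
  if hFile = INVALID_HANDLE_VALUE then
    RaiseLastOSError;
  try
    hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
    if hMap = 0 then
      RaiseLastOSError;
    try
      pView := MapViewOfFile(hMap, FILE_MAP_READ, 0, ViewOffset, ViewSize);
      if pView = nil then
        RaiseLastOSError;
      try
        // read up to ViewSize bytes of the file through pView^ here
      finally
        UnmapViewOfFile(pView);
      end;
    finally
      CloseHandle(hMap);
    end;
  finally
    CloseHandle(hFile);
  end;
end;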
If the problem permits, you can use BlockRead and BlockWrite to read a chunk of the input file, process it, and then write that chunk to the output file. Something like this:
AssignFile(inFile, inFname);
Reset(inFile, 1);
AssignFile(outFile, outFname);
Rewrite(outFile, 1);
repeat
  BlockRead(inFile, buff, SizeOf(buff), bytesRead);
  ProcessBuffer(buff);
  BlockWrite(outFile, buff, bytesRead, bytesWritten);
until (bytesRead = 0) or (bytesWritten <> bytesRead);
CloseFile(inFile);
CloseFile(outFile);

The code presumes that you won't change the size of the buffer while processing it. If the size of the data changes, you should adjust the BlockWrite call and the loop condition accordingly.
I prefer to use TFileStream for this kind of processing. In this example I am assuming there is a constant ArraySize which is set to the size of a single array element. For example, if your "array" is an array of Integer then it would be set to:
ArraySize := SizeOf( Integer );
which would set the ArraySize to 4.
function LoadPos(inFilename: string; ArrayPos: Int64; var ArrayBuff): boolean;
var
  fs: TFileStream;
begin
  result := false;
  fs := TFileStream.Create(inFilename, fmOpenRead);
  try
    // seek to the array position
    fs.Seek(ArrayPos * ArraySize, soFromBeginning);
    // load the element
    result := fs.Read(ArrayBuff, ArraySize) = ArraySize;
  finally
    fs.Free;
  end;
end;
The only problem with this approach is that it only works for fixed-size structures; variable-length strings require a different approach.
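One common approach for variable-length strings (a sketch of my own, not from the original answer) is to length-prefix each string:

procedure WriteStr(FS: TFileStream; const S: AnsiString);
var
  Len: Integer;
begin
  // store the length first, then the character data
  Len := Length(S);
  FS.Write(Len, SizeOf(Len));
  if Len > 0 then
    FS.Write(S[1], Len);
end;

function ReadStr(FS: TFileStream): AnsiString;
var
  Len: Integer;
begin
  FS.Read(Len, SizeOf(Len));
  SetLength(Result, Len);
  if Len > 0 then
    FS.Read(Result[1], Len);
end;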
I don't think you'll get "more direct" file access. Do you use all the data in the file? Otherwise you could perhaps use a stream and load only the data needed into memory. But if you use all the data, there's only one solution IMHO: read the file in chunks. That, however, highly depends on the kind of transformation you want to apply. If the transformation is not local (i.e. combined data elements are not all in the same chunk), you are going to have problems.