I'm trying to work out whether I can persist the data stored in a TFDMemTable after closing the dataset without having to save it to a file.
I checked TResourceOptions.Persistent, but this will only save to the file name specified in TResourceOptions.PersistentFileName at runtime. You can save the data into the dfm at design time if you leave the file name blank, but this isn't useful here.
I also looked at SaveToStream/LoadFromStream, but again that appeared to save/load only via the file specified in TResourceOptions.PersistentFileName; I was hoping I could just keep the data in a local memory stream.
I have the DevExpress components, whose datasets I know can persist their data, but I'm trying to use the FireDAC REST examples, which have built-in functionality for transferring the tables as JSON.
Am I missing a setting somewhere that will allow me to persist the data, or does anyone have a method to do it?
The following works fine for me:
procedure TForm1.Button5Click(Sender: TObject);
var
  MS: TMemoryStream;
begin
  // Requires a TFDStanStorageBinLink on the form/datamodule
  MS := TMemoryStream.Create;
  try
    FDMemTable1.SaveToStream(MS);
    FDMemTable1.Close;
    // sometime later
    MS.Position := 0;
    FDMemTable1.LoadFromStream(MS);
  finally
    MS.Free;
  end;
end;
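As an aside, if you want control over the stream format, my understanding is that SaveToStream/LoadFromStream also take an optional format parameter, provided the matching storage link component is present (e.g. TFDStanStorageJSONLink for JSON). A hypothetical variation:

  // JSON instead of binary (assumes a TFDStanStorageJSONLink on the
  // form/datamodule and FireDAC.Stan.Intf in the uses clause)
  FDMemTable1.SaveToStream(MS, sfJSON);
  MS.Position := 0;
  FDMemTable1.LoadFromStream(MS, sfJSON);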
Environment: Delphi 2010, Indy 10
I know I am not alone with this problem...
"File is Still in Use" (Error 32) - how can I free it?
But after some days of fighting it, I give up.
My code:
var
  Multi: TIdMultipartFormDataStream;
  ss: TStringStream;
  FHTTP: TIdHTTP;

FHTTP := TIdHTTP.Create(nil);
Multi := TIdMultipartFormDataStream.Create;
Multi.AddFormField('eventid', TSettingsManager.GetAppSettings('EventID'));
Multi.AddFormField('password', TSettingsManager.GetAppSettings('EventPassword'));
Multi.AddFile('file', filename, 'image/jpeg'); //GetMIMETypeFromFile(fileName));
try
  FHTTP.Post(TSettingsManager.GetAppSettings('WEBServer') + '/upload', Multi, ss);
finally
  Multi.Clear;
  FreeAndNil(Multi);
  FreeAndNil(FHTTP);
end;
IOUtils.TFile.Delete(filename);
I get the exception "The file is in use" when I try to delete the file.
How should I proceed to get the file freed so that I can delete it?
There is no error in the code shown.
There are some potential memory leaks, which can be fixed easily with proper use of try..finally.
Also, if I recall correctly, you can create an instance of TIdHTTP with no owner in a shorter way. Instead of:
  HTTP := TIdHTTP.Create(nil);
you can simply omit the argument:
  HTTP := TIdHTTP.Create;
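For illustration, a minimal sketch of how the upload code from the question could be arranged so that nothing leaks even if a constructor or the Post call raises. Note that ss is never created in the original snippet; since the response body isn't used, I simply drop it here and call the Post overload without a destination stream:

  FHTTP := TIdHTTP.Create; // no owner, per the shorthand above
  try
    Multi := TIdMultipartFormDataStream.Create;
    try
      Multi.AddFormField('eventid', TSettingsManager.GetAppSettings('EventID'));
      Multi.AddFormField('password', TSettingsManager.GetAppSettings('EventPassword'));
      Multi.AddFile('file', filename, 'image/jpeg');
      // response body not needed, so no destination stream is passed
      FHTTP.Post(TSettingsManager.GetAppSettings('WEBServer') + '/upload', Multi);
    finally
      Multi.Free;
    end;
  finally
    FHTTP.Free;
  end;
  IOUtils.TFile.Delete(filename);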
I have a service application to which I will soon be adding a log file. Before I start writing the code that saves the log, I have another requirement: a small, simple viewer application should be able to show the log in real time. In other words, if the service writes something to the log, not only should it be saved to the file, but the other application should immediately know about it and display what was logged.
A dirty solution would be for this app to constantly open the file, check for recent changes, and load anything new. But this is very sloppy and heavy. On the other hand, I could write a server/client socket pair and monitor it that way, but using TCP/IP to send one string feels like overkill. I'm leaning towards the file method, but how would I make it in a way that isn't so heavy? In other words, suppose the log file grows to a million lines: I don't want to load the entire file, I just need to check the end of the file for new data. I'm also OK with a delay of up to 5 seconds, although that would contradict "real-time".
The only methods of reading/writing a file that I am familiar with consist of keeping the file open/locked and reading its entire contents. I have no clue how to read only portions from the end of a file, or how to protect the file from both applications attempting to access it at once.
What you are asking for is exactly what I do in one of my company's projects.
It has a service that hosts an out-of-process COM object so all of our apps can write messages to a central log file, and then a separate viewer app that uses that same COM object to receive notifications directly from the service whenever the log file changes. The COM object lets the viewer know where the log file is physically located so the viewer can open the file directly when needed.
For each notification that is received, the viewer checks the new file size and then reads only the new bytes that have been written since the last notification (the viewer keeps track of the previous file size). In an earlier version, I had the service push each individual log entry to the viewer directly, but under heavy load that is a lot of traffic to sift through, so I ended up taking that feature out and letting the viewer read the data instead; that way it can read multiple log entries at one time more efficiently.
Both the service and the viewer have the log file open at the same time. When the service creates/opens the log file, it sets the file to read/write access with read-only sharing. When the viewer opens the file, it sets the file to read-only access with read/write sharing (so the service can still write to it).
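That sharing arrangement maps directly onto TFileStream open modes. A minimal sketch of the idea (the file name, variable names, and the tail-read are my own illustration, not code from the actual project):

  var
    LogWriter, LogReader: TFileStream;
    Buf: TBytes;
    PrevSize: Int64;

  // Service side: read/write access, read-only sharing
  // (others may read the file but not write to it)
  LogWriter := TFileStream.Create('C:\Logs\app.log',
    fmOpenReadWrite or fmShareDenyWrite);

  // Viewer side: read-only access, read/write sharing
  // (so the service can keep writing)
  LogReader := TFileStream.Create('C:\Logs\app.log',
    fmOpenRead or fmShareDenyNone);

  // On each change notification: read only the newly appended bytes
  if LogReader.Size > PrevSize then
  begin
    LogReader.Seek(PrevSize, soBeginning);
    SetLength(Buf, LogReader.Size - PrevSize);
    LogReader.ReadBuffer(Buf[0], Length(Buf));
    PrevSize := LogReader.Size;
  end;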
Needless to say, the service and the viewer run on the same machine so they can access the same local file (no remote files are used). The service does, however, have a feature that forwards log entries via TCP/IP to a remote instance of the service running on another machine (the viewer running on that machine can then see them).
Our open-source TSynLog class matches most of your needs - it's already stable and proven (used in real-world applications, including services).
Its main features are fast logging (with a set of levels rather than a hierarchy of levels), exception interception with stack traces, and custom logging (including serialization of objects as JSON within the log).
You even get some additional features, like a client-side method profiler and a log viewer.
Log files are locked during generation: you can read them, but not modify them.
It works from Delphi 5 up to XE2, is fully open source, and receives daily updates.
This may sound like a completely nutty answer, but...
I use Gurock Software's SmartInspect: http://www.gurock.com/smartinspect/
It's great because you can send pictures, variables, whatever, and it logs them all. So while you only want text at the moment, it's great for watching your app in real time, even on remote machines, and it can send everything to a local file.
It may be a useful answer to your problem, or a red herring - it's a little unconventional, but it has additional features you may find worth incorporating later (for instance, it's great for capturing info should something go horribly wrong).
Years ago I wrote a circular buffer binary-file trace logging system, that avoided the problem of an endlessly growing file, while giving me the capabilities that I wanted, such as being able to see a problem if I wanted to, but otherwise, being able to just ignore the trace buffer.
However, if you want a continuous online system, then I would not use files at all.
I used files because I really did want file-like persistence without a listener app having to be running. I wanted the file solution because I wanted the logging to happen whether or not anybody was around to "listen" right then, but I didn't use an endlessly growing text log because I was worried about using up hundreds of megs on log files and filling up my 250-megabyte hard drive. One hardly has concerns like that in the era of 1 TB hard disks.
As David says, the client server solution is best, and is not complex really.
But you might prefer files, as I did back then. I only launched my viewer app as a post-mortem tool that I ran AFTER a crash. This was before MadExcept or anything like it existed, so I had some apps that just died, and I wanted to know what had happened.
Before my circular buffer, I would use a debug-view tool like Sysinternals DebugView together with OutputDebugString, but that didn't help me when the crash happened before I launched DebugView.
File-based logging (binary) is one of the few times I allowed myself to create binary files. I normally hate, hate, hate binary files. But just try to make a circular buffer without using a fixed-length binary record.
Here's a sample unit. If I were writing this now instead of in 1997, I would not have used a "file of record", but hey, there it is.
To extend this unit so it could drive the real-time viewer, I would suggest that you simply check the date/time stamp on the binary file every 1-5 seconds (your choice) and refresh only when the stamp has changed; a sketch of this follows the unit below. Not hard, and not exactly a heavy load on the system.
This unit is used by both the logger and the viewer; it is a class that can read from, and write to, a circular-buffer binary file on disk.
unit trace;

{$Q-}
{$I-}

interface

uses
  Classes;

const
  traceBinMsgLength = 255; // binary record message length
  traceEOFMARKER = $FFFFFFFF;

type
  TTraceRec = record
    index: Cardinal;
    tickcount: Cardinal;
    msg: array[0..traceBinMsgLength] of AnsiChar;
  end;

  PTraceBinRecord = ^TTraceRec;

  TTraceFileOfRecord = file of TTraceRec;

  TTraceBinFile = class
    FFilename: string;
    FFileMode: Integer;
    FTraceFileInfo: string;
    FStorageSize: Integer;
    FLastIndex: Integer;
    FHeaderRec: TTraceRec;
    FFileRec: TTraceRec;
    FAutoIncrementValue: Cardinal;
    FBinaryFileOpen: Boolean;
    FBinaryFile: TTraceFileOfRecord;
    FAddTraceMessageWhenClosing: Boolean;
  public
    procedure InitializeFile;
    procedure CloseFile;
    procedure Trace(msg: string);
    procedure OpenFile;
    procedure LoadTrace(traceStrs: TStrings);
    constructor Create;
    destructor Destroy; override;
    property Filename: string read FFilename write FFilename;
    property TraceFileInfo: string read FTraceFileInfo write FTraceFileInfo;
    // Default 1000 rows.
    // Change StorageSize to the size you want your circular file to be
    // before you create and write it. Remember to set the value to the
    // same number before trying to read it back, or you'll have trouble.
    property StorageSize: Integer read FStorageSize write FStorageSize;
    property AddTraceMessageWhenClosing: Boolean
      read FAddTraceMessageWhenClosing write FAddTraceMessageWhenClosing;
  end;

implementation

uses
  SysUtils;
procedure SetMsg(pRec: PTraceBinRecord; msg: AnsiString);
var
  n: Integer;
begin
  n := Length(msg);
  if n >= traceBinMsgLength then
  begin
    msg := Copy(msg, 1, traceBinMsgLength);
    n := traceBinMsgLength;
  end;
  StrCopy({Dest} pRec^.msg, {Source} PAnsiChar(msg));
  pRec^.msg[n] := Chr(0); // ensure nul char termination
end;

function IsBlank(var aRec: TTraceRec): Boolean;
begin
  Result := (aRec.msg[0] = Chr(0));
end;

procedure TTraceBinFile.CloseFile;
begin
  if FBinaryFileOpen then
  begin
    if FAddTraceMessageWhenClosing then
    begin
      Trace('*END*');
    end;
    System.CloseFile(FBinaryFile);
    FBinaryFileOpen := False;
  end;
end;
constructor TTraceBinFile.Create;
begin
  FLastIndex := 0; // lastIndex=0 means blank file.
  FStorageSize := 1000; // default.
end;

destructor TTraceBinFile.Destroy;
begin
  CloseFile;
  inherited;
end;

procedure TTraceBinFile.InitializeFile;
var
  eofRec: TTraceRec;
  t: Integer;
begin
  Assert(FStorageSize > 0);
  Assert(Length(FFilename) > 0);
  AssignFile(FBinaryFile, Filename);
  FFileMode := fmOpenReadWrite;
  Rewrite(FBinaryFile);
  FBinaryFileOpen := True;
  FillChar(FHeaderRec, SizeOf(TTraceRec), 0);
  FillChar(FFileRec, SizeOf(TTraceRec), 0);
  FillChar(eofRec, SizeOf(TTraceRec), 0);
  FLastIndex := 0;
  FHeaderRec.index := FLastIndex;
  FHeaderRec.tickcount := StorageSize;
  SetMsg(@FHeaderRec, FTraceFileInfo);
  Write(FBinaryFile, FHeaderRec);
  for t := 1 to StorageSize do
  begin
    Write(FBinaryFile, FFileRec);
  end;
  SetMsg(@eofRec, 'EOF');
  eofRec.index := traceEOFMARKER;
  Write(FBinaryFile, eofRec);
end;
procedure TTraceBinFile.Trace(msg: string);
// Write a trace message into the circular file.
begin
  if not FBinaryFileOpen then
    Exit;
  if FFileMode = fmOpenRead then
    Exit; // not open for writing!
  Inc(FLastIndex);
  if FLastIndex > FStorageSize then
    FLastIndex := 1; // wrap around to 1, not zero! Very important!
  Seek(FBinaryFile, 0);
  FHeaderRec.index := FLastIndex;
  Write(FBinaryFile, FHeaderRec);
  FillChar(FFileRec, SizeOf(TTraceRec), 0);
  Seek(FBinaryFile, FLastIndex);
  Inc(FAutoIncrementValue);
  if FAutoIncrementValue = 0 then
    FAutoIncrementValue := 1;
  FFileRec.index := FAutoIncrementValue;
  SetMsg(@FFileRec, msg);
  Write(FBinaryFile, FFileRec);
end;

procedure TTraceBinFile.OpenFile;
begin
  if FBinaryFileOpen then
  begin
    System.CloseFile(FBinaryFile);
    FBinaryFileOpen := False;
  end;
  if FileExists(FFilename) then
  begin
    // System.FileMode := fmOpenRead;
    FFileMode := fmOpenRead;
    AssignFile(FBinaryFile, FFilename);
    System.Reset(FBinaryFile); // open in current mode
    System.Seek(FBinaryFile, 0);
    Read(FBinaryFile, FHeaderRec);
    FLastIndex := FHeaderRec.index;
    FTraceFileInfo := string(FHeaderRec.msg);
    FBinaryFileOpen := True;
  end
  else
  begin
    InitializeFile; // Creates the file.
  end;
end;
procedure TTraceBinFile.LoadTrace(traceStrs: TStrings);
var
  ReadAtIndex: Integer;
  Safety: Integer;

  procedure NextReadIndex;
  begin
    Inc(ReadAtIndex);
    if ReadAtIndex > FStorageSize then
      ReadAtIndex := 1; // wrap around to 1, not zero! Very important!
  end;

begin
  Assert(Assigned(traceStrs));
  traceStrs.Clear;
  if not FBinaryFileOpen then
  begin
    OpenFile;
  end;
  ReadAtIndex := FLastIndex;
  NextReadIndex;
  Safety := 0; // prevents endless looping.
  while True do
  begin
    if (ReadAtIndex = FLastIndex) or (Safety > FStorageSize) then
      Break;
    Seek(FBinaryFile, ReadAtIndex);
    Read(FBinaryFile, FFileRec);
    if FFileRec.msg[0] <> Chr(0) then
    begin
      traceStrs.Add(string(FFileRec.msg));
    end;
    Inc(Safety);
    NextReadIndex;
  end;
end;

end.
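To illustrate the refresh-on-change polling mentioned above, here is a rough sketch of a viewer-side poll, assuming a TTimer with an interval of 1-5 seconds and fields FTraceFile: TTraceBinFile and FLastStamp: TDateTime (all names here are mine, not part of the unit; the two-argument FileAge overload is available in modern Delphi versions):

  procedure TViewerForm.RefreshTimerTimer(Sender: TObject);
  var
    Stamp: TDateTime;
  begin
    // Reload only when the file's date/time stamp has changed
    if FileAge(FTraceFile.Filename, Stamp) and (Stamp <> FLastStamp) then
    begin
      FLastStamp := Stamp;
      FTraceFile.OpenFile;               // re-reads the header record
      FTraceFile.LoadTrace(Memo1.Lines); // repopulate the display
    end;
  end;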
Have a look at this article:
TraceTool 12.4: The Swiss-Army Knife of Trace
My suggestion would be to implement your logging in such a way that the log file "rolls over" on a daily basis: at midnight, your logging code renames the log file (e.g. MyLogFile.log) to a dated archive version (e.g. MyLogFile-30082012.log) and starts a new, empty "live" log (again MyLogFile.log); a small sketch of this roll-over follows below.
Then it's simply a question of using something like BareTail to monitor your "live"/daily log file.
I accept this may not be the most network-efficient approach, but it's reasonably simple and meets your "live" requirement.
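A minimal sketch of the midnight roll-over, assuming the logging code calls this once it notices the date has changed (the names and date format are just examples; needs SysUtils and DateUtils in the uses clause):

  procedure RollOverLogIfNeeded(const LogFile: string);
  var
    ArchiveName: string;
  begin
    // e.g. MyLogFile.log -> MyLogFile-30082012.log
    ArchiveName := ChangeFileExt(LogFile, '') + '-' +
      FormatDateTime('ddmmyyyy', Yesterday) + '.log';
    if FileExists(LogFile) and not FileExists(ArchiveName) then
      RenameFile(LogFile, ArchiveName);
    // the next write then simply starts a new, empty "live" log file
  end;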
I have Delphi code that downloads a file (using Delphi 2010 + Indy 10.5.8 r4743). Everything works just fine, except that when I download 100 MB (for example), Indy seems to assign the full size up front (i.e. a file with 100 MB of dummy content(*) is created instantly) and then downloads the file.
In the end the 100 MB is downloaded correctly, but since the download process runs in the background in a hidden executable, I based my code on the temporary file's size to update the main UI.
with IdHTTP do
begin
  if FileExists(LocalFile) then
    iLength := FileSize2(LocalFile)
  else
    iLength := 0;
  DoExit := False;
  try
    try
      repeat
        if ExitApp then
          Exit;
        if not FileExists(LocalFile) then
          AFileStream := TFileStream.Create(LocalFile, fmCreate)
        else
        begin
          // File already exists, resume download
          AFileStream := TFileStream.Create(LocalFile, fmOpenReadWrite);
          DoExit := (AFileStream.Size >= iLength);
          if not DoExit then
            AFileStream.Seek(Max(0, AFileStream.Size - 4096), soFromBeginning);
        end;
        iRangeEnd := AFileStream.Size + 50000;
        if iRangeEnd < iLength then
          Request.Range := IntToStr(AFileStream.Position) + '-' + IntToStr(iRangeEnd)
        else
        begin
          Request.Range := IntToStr(AFileStream.Position) + '-';
          DoExit := True;
        end;
        PostTime := Now;
        Get(TheURL, AFileStream);
        IsError := not (ResponseCode in [200, 206]);
      until DoExit;
      Disconnect;
    except
      IsError := True;
    end; // try/except
  finally
    FreeAndNil(AFileStream);
  end; // try/finally
end; // with
My question is: is there a way to avoid this behavior in Indy? I know I can use the OnWork event, but then I would need to keep track of file names.
Ideally I'd also like to avoid IPC (kind of overkill, and I don't want to use it, say, every second for multiple downloads; I much prefer using the file size as the indication of download progress, as it gives more freedom when updating the UI).
(*) I assume it's dummy content since I need between 60 and 100 seconds on my current connection to really download the file.
Yes, TIdHTTP pre-allocates the full file size if it knows the size ahead of time, based on the HTTP response headers. This is an optimization: pre-allocating avoids unnecessary file-system overhead when writing larger files, since the file does not have to keep growing over time, hunting for available sectors on the HDD and slowing down the writing process. There is currently no way to disable the pre-allocation; it is hard-coded behavior in Indy's internals.

So relying on the file's actual size as a progress indicator will not work for you. You are going to have to communicate the actual progress information from your background app to the main app.
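If a full IPC channel really is unwanted, one possible workaround (my suggestion, not built-in Indy behavior) is to have the hidden executable write its OnWork byte count to a small sidecar file next to the download, and let the main UI poll that file instead of the data file's size. In practice you would throttle the writes, e.g. to once per second:

  uses
    Classes, SysUtils, IdComponent;

  procedure TDownloader.HttpWork(ASender: TObject; AWorkMode: TWorkMode;
    AWorkCount: Int64);
  var
    Progress: TStringList;
  begin
    if AWorkMode <> wmRead then
      Exit;
    Progress := TStringList.Create;
    try
      Progress.Add(IntToStr(AWorkCount)); // bytes received so far
      // FLocalFile is the download target; '.progress' is my own convention
      Progress.SaveToFile(FLocalFile + '.progress');
    finally
      Progress.Free;
    end;
  end;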
I'm using the TIdHTTP component to download a file from the internet. I'm wondering if it is possible to pause and resume the download using this component, or maybe another Indy component.
This is my current code; it works fine for downloading a file (without resume). But now I want to be able to pause the download, close my app, and when my app restarts, resume the download from the last saved position.
var
  Http: TIdHTTP;
  MS: TMemoryStream;
begin
  Result := True;
  Http := TIdHTTP.Create(nil);
  MS := TMemoryStream.Create;
  try
    try
      Http.OnWork := HttpWork; // this event gives me the actual progress of the download
      Http.Head(Url);
      FSize := Http.Response.ContentLength;
      AddLog('Downloading File ' + GetURLFilename(Url) + ' - ' + FormatFloat('#,', FSize) + ' Bytes');
      Http.Get(Url, MS);
      MS.SaveToFile(LocalFile);
    except
      on E: Exception do
      begin
        Result := False;
        AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
    MS.Free;
  end;
end;
The following code worked for me; it downloads the file in chunks:
procedure Download(Url, LocalFile: string;
  WorkBegin: TWorkBeginEvent; Work: TWorkEvent; WorkEnd: TWorkEndEvent);
var
  Http: TIdHTTP;
  quit: Boolean;
  FLength, aRangeEnd: Integer;
  fFileStream: TFileStream; // declaration was missing from the original var block
begin
  Http := TIdHTTP.Create(nil);
  fFileStream := nil;
  try
    try
      Http.OnWork := Work;
      Http.OnWorkEnd := WorkEnd;
      Http.Head(Url);
      FLength := Http.Response.ContentLength;
      quit := False;
      repeat
        if not FileExists(LocalFile) then
        begin
          fFileStream := TFileStream.Create(LocalFile, fmCreate);
        end
        else
        begin
          fFileStream := TFileStream.Create(LocalFile, fmOpenReadWrite);
          quit := fFileStream.Size >= FLength;
          if not quit then
            fFileStream.Seek(Max(0, fFileStream.Size - 4096), soFromBeginning);
        end;
        try
          aRangeEnd := fFileStream.Size + 50000;
          if aRangeEnd < FLength then
          begin
            Http.Request.Range := IntToStr(fFileStream.Position) + '-' + IntToStr(aRangeEnd);
          end
          else
          begin
            Http.Request.Range := IntToStr(fFileStream.Position) + '-';
            quit := True;
          end;
          Http.Get(Url, fFileStream);
        finally
          fFileStream.Free;
        end;
      until quit;
      Http.Disconnect;
    except
      on E: Exception do
      begin
        //Result := False;
        //AddLog(E.Message);
      end;
    end;
  finally
    Http.Free;
  end;
end;
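A quick usage sketch (assuming the handlers are methods of your form; the WorkBegin parameter is accepted but unused by the procedure above, so nil is fine there):

  procedure TForm1.HttpWork(ASender: TObject; AWorkMode: TWorkMode;
    AWorkCount: Int64);
  begin
    if AWorkMode = wmRead then
      Caption := Format('Received %d bytes', [AWorkCount]);
  end;

  procedure TForm1.Button1Click(Sender: TObject);
  begin
    Download('http://example.com/big.zip', 'C:\Temp\big.zip',
      nil, HttpWork, nil);
  end;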
Maybe the HTTP Range header can help you here. Have a look at archive.org's copy of http://www.west-wind.com/Weblog/posts/244.aspx for more info on resuming HTTP downloads:
(2004-02-07) A couple of days ago somebody on the Message Board asked an interesting question about how to provide resumable HTTP downloads. My first response was that this isn't possible, since HTTP is a stateless protocol that has no concept of file pointers and thus can't resume an HTTP download.
However, it turns out HTTP 1.1 does have the ability to specify ranges in downloads by using the Range: header sent from the client. You can do things like:
Range: 0-100000
Range: 100000-
Range: -100000
which download the first 100000 bytes, everything after the first 100000 bytes, or the last 100000 bytes, respectively. There are more combinations, but the first two are the ones of interest for a resumable download.
To demonstrate this feature, I used wwHTTP (in Web Connection/VFP) to download a first 400K chunk of a file into a file with HTTPGetEx, which is meant to simulate an aborted download. Next, I do a second request to pick up the existing file and download the remainder:
#INCLUDE wconnect.h
CLEAR
CLOSE DATA
DO WCONNECT
LOCAL o as wwHTTP
lcDownloadedFile = "d:\temp\wwipstuff.zip"
*** Simulate partial output
lcOutput = ""
Text=""
tnSize = 0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",#Text,#tnSize,"Range: bytes=0-400000"+CRLF,lcDownloadedFile)
o.Httpclose()
lcOutput = Text
? LEN(lcOutput)
*** Figure out how much we downloaded
lnOpenAt = FILESIZE(lcDownloadedFile)
*** Do a partial download starting at this byte count
Text=""
tnSize =0
o = CREATEOBJECT("wwHTTP")
o.HttpConnect("www.west-wind.com")
? o.httpgetex("/files/wwipstuff.zip",#Text,#tnSize,"Range: bytes=" + TRANSFORM(lnOpenAt) + "-" + CRLF)
o.Httpclose()
? LEN(Text)
*** Read the existing partial download and append current download
lcOutput = FILETOSTR(lcDownloadedFile) + TEXT
? LEN(lcOutput)
STRTOFILE(lcOutput,lcDownloadedFile)
RETURN
Note that this approach uses a file on disk, so you have to use HTTPGetEx (with Web Connection). The second download can also be done to disk if you choose, but things will get tricky if you have multiple aborts and need to piece them together. In that case you might want to keep track of each file, adding a number to it, and combine the results at the very end.
If you download to memory using WinInet (which is what wwHTTP uses behind the scenes), you can also try to pull the file out of the Temporary Internet Files cache. Although this works, I suspect the process will become very convoluted very quickly, so if you plan on providing the ability to resume, I would highly recommend that you write your output to a file yourself using the approach above.
Some additional information on WinInet and some of the requirements for this approach to work with it are described here: http://www.clevercomponents.com/articles/article015/resuming.asp.
The same can be done with wwHTTP for .Net by adding the Range header to the wwHTTP:WebRequest.Headers object.
(Randy Pearson) Say you don't know what the file size is at the server. Is there a way to find this out, so you can know how many chunks to request, for example? Would you send a HEAD request first, or does the header of the GET response also tell you the total size?
(Rick Strahl) You have to read the Content-Length: header to get the size of the file being downloaded. If you're resuming, this shouldn't matter - you just use Range: (existingsize)- to get the rest. For chunked downloads you can read the content length and only download the first x bytes. This gets tricky with wwHTTP - you have to make individual calls with HTTPGetEx and set the tnBufferSize parameter to the chunk size, so that it stops after that size is reached.
(Randy Pearson) Follow-up: It looks like a compliant server would send you enough to know the size. If it provides chunks it should reply with something like:
Content-Range: 0-10000/85432
so you could (if desired) extract that and use it in a loop to continue with intelligent chunk requests.
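Translating that back to TIdHTTP: a rough sketch of discovering the total size from the response headers (the header names are standard HTTP; the URL and the rest are my own illustration):

  var
    Http: TIdHTTP;
    ContentRange: string;
    TotalSize: Int64;
  begin
    Http := TIdHTTP.Create(nil);
    try
      // A HEAD request reports the full size up front...
      Http.Head('http://example.com/big.zip');
      TotalSize := Http.Response.ContentLength;
      // ...and after a ranged GET, the total appears after the '/'
      // in a header such as 'Content-Range: bytes 0-10000/85432'
      ContentRange := Http.Response.RawHeaders.Values['Content-Range'];
      if Pos('/', ContentRange) > 0 then
        TotalSize := StrToInt64Def(
          Copy(ContentRange, Pos('/', ContentRange) + 1, MaxInt), TotalSize);
    finally
      Http.Free;
    end;
  end;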
Also look here: https://forums.embarcadero.com/message.jspa?messageID=219481 for a TIdHTTP-related discussion on the same topic:
(at least partly as per TFileStream.Seek and offset confusion)
if FileExists(dstFile) then
begin
  Fs := TFileStream.Create(dstFile, fmOpenReadWrite);
  try
    Fs.Seek(Max(0, Fs.Size - 1024), soFromBeginning);
    // alternatively:
    // Fs.Seek(-1024, soFromEnd);
    Http.Request.Range := IntToStr(Fs.Position) + '-';
    Http.Get(Url, Fs);
  finally
    Fs.Free;
  end;
end;