How to get the content of a file that is being written by an application? - Delphi

The application always creates a file when you activate a certain function (let's say, a log file). This file cannot be opened while the application is running - but I need its content before the application closes (another process uses it, so I can't even view it). Is there a way to "hook" it somehow?
I'm working with Delphi, but I'll accept any other solution.
So, in summary, I need to know which file the application created (it always creates a different one, but in the same directory) and the content it wrote. Any help is appreciated.
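For the "which file did it create" part, watching the directory is one option. A minimal C# sketch follows (C# only for consistency with the other snippets collected here; in Delphi the rough equivalents are ReadDirectoryChangesW or a periodic directory scan), with the watched path as a placeholder:

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Placeholder path -- point this at the directory the application writes into.
        var watcher = new FileSystemWatcher(@"C:\path\to\log\dir")
        {
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite,
            EnableRaisingEvents = true
        };

        // Fires when the application creates its new log file, giving us its name.
        watcher.Created += (s, e) => Console.WriteLine("New file: " + e.FullPath);

        Console.ReadLine(); // keep the watcher alive
    }
}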

I found a workaround:
copy the file, and operate on the cloned one:
http://www.howtogeek.com/howto/windows-vista/backupcopy-files-that-are-in-use-or-locked-in-windows/
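The same workaround, sketched in C# (the question is Delphi, where a TFileStream opened with fmOpenRead or fmShareDenyNone plays the same role). This only succeeds if the writing process opened the file with sharing enabled; otherwise a shadow-copy tool like the one in the linked article is needed:

using System.IO;

static void CloneLockedFile(string sourcePath, string clonePath)
{
    // Ask only for read access and allow the writer to keep its handle open.
    using (var source = new FileStream(sourcePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var clone = new FileStream(clonePath, FileMode.Create, FileAccess.Write, FileShare.None))
    {
        // Copy whatever has been flushed to disk so far into the clone.
        source.CopyTo(clone);
    }
}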

Related

Writing file changes made when opening a file through Windows Explorer on a mapped drive

I'm implementing a WebDAV file server using the IT Hit WebDAV engine, and I have the following problem.
When I list the files and open one of them, the ReadAsync method is called; I provide the content and the file opens correctly.
However, any changes I make to the file can't be saved; I get an error saying:
A device attached to the system is not working
I looked at the file system samples and implemented support based on the FileSystemStorage.AspNetCore sample.
From what I can understand, the WriteAsync method is used when creating new files. Should I expect WriteAsync to also be called for edits to existing files?
Am I wrong in assuming that DavFile.WriteAsync will be called with a stream containing the updated content?
If WriteAsync is not the right place to save updates to a file, could you provide some guidance on how changes to existing files should be saved?
Edited to add:
Now I can see that after I dismiss the first error about the device not working, I get the standard save dialog. If I click Save, it asks whether I want to overwrite the existing file; after I accept the overwrite, WriteAsync is called and I can update the file contents.
I'm not quite sure why it first reports an error and then still lets me write the file, but only as a replacement for the original.
Thanks for your help
Fixed: I found that there were issues with my ILockAsync implementation. Reviewing the FileSystemStorage sample helped fix the problem with locking files before writing or updating properties.
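For reference, in the FileSystemStorage samples the heart of a WriteAsync implementation is just persisting the incoming stream to the backing file. A rough sketch of that pattern is below; the helper name and parameters are assumptions, not the engine's API, so check the sample for the exact WriteAsync signature in your engine version:

using System.IO;
using System.Threading.Tasks;

static class DavFileHelper
{
    // Hypothetical helper mirroring what WriteAsync typically does in the sample:
    // write the request body into the file that backs the DAV item.
    public static async Task SaveContentAsync(string targetPath, Stream content,
                                              long startIndex, long totalFileSize)
    {
        using (var fileStream = new FileStream(targetPath, FileMode.OpenOrCreate,
                                               FileAccess.Write, FileShare.Read))
        {
            fileStream.SetLength(totalFileSize);            // final size of the upload
            fileStream.Seek(startIndex, SeekOrigin.Begin);  // support segmented PUTs
            await content.CopyToAsync(fileStream);
        }
    }
}

As the fix above suggests, WebDAV clients such as Microsoft Office typically issue LOCK requests before writing, so a correct ILockAsync implementation matters as much as the write path itself.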

Trying to open an application with a parameter via an Application Protocol Handler

I am currently trying to figure out an issue with an Application Protocol Handler I've created. Following the directions listed on MSDN (http://msdn.microsoft.com/en-us/library/aa767914%28v=vs.85%29.aspx), I was able to register my application, PDF Annotator, to open via a URL. The issue I am experiencing is when I try to pass a parameter along with the call. The application will open, but the file parameter that gets passed is not opening within the application.
My registry key is verbatim as dictated by MSDN. My HTML code is as follows:
PDFAnnotator:C:\path\to\file\file.pdf
The way I understand the protocol handler, it takes the URL and tries to launch it via the command line. That said, I am able to open my PDF file in PDF Annotator with the following command at the prompt:
PDFAnnotator.exe C:\path\to\file\file.pdf
I've also tried formatting the file path in the HTML differently, thinking that might be the issue. Has anyone else come across this issue or something similar?
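For reference, the registration that MSDN article describes has roughly this shape as a .reg file (the install path below is an assumption). Note that %1 is replaced with the entire URI - "PDFAnnotator:" prefix included - which the launched application has to be able to cope with:

Windows Registry Editor Version 5.00

; Install path is an assumption -- adjust to the real PDF Annotator location.
[HKEY_CLASSES_ROOT\PDFAnnotator]
@="URL:PDF Annotator Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\PDFAnnotator\shell\open\command]
@="\"C:\\Program Files\\PDF Annotator\\PDFAnnotator.exe\" \"%1\""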
Obligatory Update for future generations (http://xkcd.com/979/):
The reason I was doing this is because half of the PDFs my application handled would be editable while the other half were read-only. I was trying to keep the read-only ones in browser with the Acrobat plugin (I'm targeting chrome only) while the protocol would allow me to set the links of the editable ones to open with Annotator. I tried, on whim, to reverse this (setting the default to Annotator and creating a protocol for Acrobat). I did this, first by trying Acrobat's URI Scheme (acrobat://), which didn't work outside of opening Acrobat. Then, I tried creating a protocol for Acrobat. When that fired off, it gave me an error stating the path was wrong for the file name, path name, or volume. So, progress? I'm giving up on this for now as other priorities have come up, but hopefully this helps somebody down the road.

How to avoid intermittent Errno::ETXTBSY exceptions?

During part of a request in a Rails application, I copy a directory from one place to another; think of it as a working area. Sometimes this copy operation results in Errno::ETXTBSY exceptions being thrown. I can't seem to pin down what causes it - any tips for detecting the case or avoiding it altogether?
I've made sure the destination directory is uniquely named, so it shouldn't be a case of two processes attempting to write to the same place. Beyond that I'm out of ideas.
ETXTBSY means that you're trying to open for writing a file which is currently being executed as a program, or that you're trying to execute a file which is currently open for writing. Since you say you're copying files, not executing them, it seems likely it's the former, not the latter.
You say you're targeting a unique new destination, but my guess is that's not entirely true and you're actually targeting an existing directory and one of the files you're attempting to overwrite is currently open as an executable text segment of a running process.
You haven't posted any code, so it's hard to comment specifically. I suggest you add enough logging so you know exactly what file(s) are being processed and specifically, the source and destination path that throws the exception. Then you could use lsof to see what process may have that file open.
One way to avoid the problem if you are overwriting a currently open executable, is to first unlink the target file. The running process will still have the old inode mapped and proceed merrily using the deleted file, but your open for write will then create a new file which won't conflict.

Need help opening printer spool shadow file (.SHD) that is locked

I'm interested in some information inside a shadow file (.shd) located in the Windows print spooling directory "C:\Windows\System32\spool\PRINTERS". Every time a print job is started, a spool file (.spl) and a shadow file (.shd) are created in that directory. So far I have been successful in detecting when a print job has started, and I have been able to pause that print job. If you don't pause the job, the files eventually make their way to the printer and are then deleted by Windows.
My problem is that I cannot open the .SHD files, because they are locked in such a way that you cannot read them while they are open by the print spooler. I've even tried going to the file in Windows Explorer and simply copying it to another file, and that didn't work either. The .SPL spool files I can open, though: I simply wait, and fairly quickly the spooler releases that file. The shadow file, though, it holds on to permanently. Unfortunately, it's the one I need.
The line of code I'm using specifically to open the file is as follows:
m_spoolJobStream = new FileStream(spoolFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
The IOException I get is:
The process cannot access the file 'C:\Windows\system32\spool\PRINTERS\FP00083.SHD' because it is being used by another process.
So yes, it is being used by another process: it's being used by the Windows print spooler service. But I don't think there is anything I can do about that. All I want to do is read the file; I don't want to make any changes to it. Is there anything I can do here, or am I just screwed?
Check the option "Keep printed documents" (if you have an HP printer) and then look at your spool file folder; both the shadow and spool files will be there.
Well, I did not find a way around this problem. I suspect there is no solution and that it is by design. However, I did find another way to get the information I wanted (at least it seems so thus far).
I'm using the FindNextPrinterChangeNotification() routine from the winspool.drv library. It returns a pointer to a PRINTER_NOTIFY_INFO structure, which in turn contains an array of PRINTER_NOTIFY_INFO_DATA structures. Within that array, there is an element with its "Field" member set to "JOB_NOTIFY_FIELD_DEVMODE". That element contains a fairly large structure of type DEVMODE, which Microsoft documents here: http://msdn.microsoft.com/en-us/library/dd183565%28v=vs.85%29.aspx . This structure looks like it contains what I'm looking for, and apparently it is wrapped up in the .SHD file anyway, according to this page: http://www.undocprint.org/formats/winspool/shd. I'd like to know what else is in that .SHD file, but I still can't open it because it's locked while the job is paused, and I suspect it stays locked until the job completes. Oh well, I think my new solution is more elegant anyway.
Just make sure you pause the job in the spooler on BOTH your box and the server; then you should be able to copy/open/move the .shd file just like you can the .spl file. Worked for me, anyway...
This works for me (a sketch of the last two steps follows the list):
- Hang your printer (e.g. jam the paper)
- Print and observe .SHD and .SPL being created
- Stop Print Spooler
- Open the file
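A minimal C# sketch of the last two steps (assumes the process runs elevated; "Spooler" is the standard service name for the Windows print spooler, and the .SHD path is a placeholder):

using System;
using System.IO;
using System.ServiceProcess; // add a reference to System.ServiceProcess

static byte[] ReadShadowFileWithSpoolerStopped(string shdPath)
{
    var spooler = new ServiceController("Spooler");
    spooler.Stop();
    spooler.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
    try
    {
        // The spooler's handle is gone, so a plain read now works.
        return File.ReadAllBytes(shdPath);
    }
    finally
    {
        spooler.Start(); // bring printing back up afterwards
    }
}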
The problem might be the FileShare.ReadWrite parameter. You're asking to read and write on the file and maybe that's why you get an error. You should try asking for read-only permission.

How can I make a server log file available via my ASP.NET MVC website?

I have an ASP.NET MVC website that works in tandem with a Windows Service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows Service to be accessible (to me, only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that?
At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way?
Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (Logs folder in my website), but when I try to access the file via HTTP I get an error:
The process cannot access the file 'foo' because it is being used by another process.
Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening).
How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch?
PS. I'm using IIS7 if that helps.
Unfortunately, I'm afraid you'll have to write a handler that opens the file and returns it to the client.
I've written an IIS Manager extension that displays server log files, and what I've noticed is that even a simple
System.IO.File.OpenRead("")
can still run into the same problem and return the same error. It was kind of confusing.
In the end I used
System.IO.File.Open("", FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
and I could easily open the file while the server was writing logs to it :)
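A minimal sketch of such a handler as an MVC controller action, using the same share flags as above (the controller name, route, role and log path are assumptions):

using System.IO;
using System.Web.Mvc;

[Authorize(Roles = "Admin")] // restrict the logs to yourself; the role name is an assumption
public class LogsController : Controller
{
    public ActionResult MyService()
    {
        // Wherever the Windows Service writes its log file.
        string path = Server.MapPath("~/Logs/myservice.log");

        // FileShare.ReadWrite lets us read while the service keeps the file open for writing.
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

        // MVC disposes the stream once the response has been written.
        return new FileStreamResult(stream, "text/plain");
    }
}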
I think the virtual directory is an "okay" solution if you add the directory (application) with READ ONLY rights, and perhaps "browse directory" too (so you can see the folder contents rendered by IIS).
(But once you do that, consider that you may also be granting anonymous access to that folder - unless you enable authentication - so watch out for any "secret" contents of the log files that you might expose. Just a thought.)
Another approach, which I prefer myself, is to make an MVC/ASP.NET page that reads the folder in normal code, so that you have full control over what data is shown in the HTML; a sketch of that kind of page is below.
You can open the files as text streams in read-only mode.
If gaining access to the log folder is a problem, I would use the virtual directory with READ ONLY access and then write something that renders the log files as HTML with whatever level of detail I need. Perhaps even add some sort of "login" first. But it all depends on your security requirements and the contents of the log files.
Is this meaningful to you? If not, please explain more, as I've been through this thought process a few times already in similar situations.
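A small sketch of that kind of page as an MVC controller (the folder path, controller name and view are assumptions):

using System.IO;
using System.Linq;
using System.Web.Mvc;

[Authorize] // keep the listing private
public class LogBrowserController : Controller
{
    // Folder the Windows Service writes to -- an assumption, adjust as needed.
    private const string LogFolder = @"D:\Logs\MyService";

    public ActionResult Index()
    {
        // Newest log files first; the corresponding Index view renders the names as links.
        var files = new DirectoryInfo(LogFolder)
            .GetFiles("*.log")
            .OrderByDescending(f => f.LastWriteTimeUtc)
            .Select(f => f.Name)
            .ToList();
        return View(files);
    }
}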
