I want to extract the process image name in a minifilter file system driver.
I am doing something like this, but I get a BSOD. Maybe there is a mistake in the buffer allocation?
status = ZwQueryInformationProcess( NtCurrentProcess(),
                                    ProcessImageFileName,
                                    buffer,
                                    returnedLength,
                                    &returnedLength );
Using ZwQueryInformationProcess is not a recommended way to get process information.
The better method is to use PsSetCreateProcessNotifyRoutine and PsSetLoadImageNotifyRoutine and maintain your own list of processes.
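A minimal sketch of that approach, assuming the Ex variant of the callback (which hands you the image name directly); TrackProcess and UntrackProcess are hypothetical helpers for whatever list structure you keep, and a load-image callback would be registered analogously:

#include <ntddk.h>

// Hypothetical helpers for your own process list.
VOID TrackProcess(HANDLE ProcessId, PCUNICODE_STRING ImageFileName);
VOID UntrackProcess(HANDLE ProcessId);

VOID CreateProcessNotifyEx(
    PEPROCESS Process,
    HANDLE ProcessId,
    PPS_CREATE_NOTIFY_INFO CreateInfo)
{
    UNREFERENCED_PARAMETER(Process);

    if (CreateInfo != NULL) {
        // Process creation: CreateInfo->ImageFileName is the full NT path
        // of the image; copy it into a list keyed by ProcessId so the
        // minifilter can look it up cheaply on the I/O path.
        TrackProcess(ProcessId, CreateInfo->ImageFileName);
    } else {
        // Process exit: drop the entry again.
        UntrackProcess(ProcessId);
    }
}

NTSTATUS RegisterProcessCallback(VOID)
{
    // Note: the driver must be linked with /INTEGRITYCHECK for the Ex
    // variant to succeed.
    return PsSetCreateProcessNotifyRoutineEx(CreateProcessNotifyEx, FALSE);
}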
I started using a TFileStream and TStreamWriter to write simple text logfiles (instead of the old Writeln(T,....)), and I have multiple applications writing to the same logfile.
Each application has its own TFileStream, of course, and they each open the file like this:
FFileStream:=TFileStream.Create(LogName, fmOpenReadWrite+fmShareDenyNone);
FExporter:=TStreamWriter.Create(FFilestream, TEncoding.UTF8);
FExporter.NewLine:=#$0A;
FExporter.AutoFlush:=TRUE;
and write to the file with
FExporter.BaseStream.Seek(0, soFromEnd);
FExporter.Write('['+DateToStr(Now, FDateTimeFormat)+'] ['+TimeToStr(Now, FDateTimeFormat)+'] [#'+Lead0(GetCurrentThreadId, 5)+']: '+EntryText);
FExporter.WriteLine;
The result is somewhat "unsatisfactory": lines are displaced, there are empty lines in between, and it does not seem to work reliably.
How would I do this correctly?
Writing multiple lines at the same time from multiple processes may produce interleaved output, because execution is parallel.
You should make sure each entry is written as one contiguous block, so the line break should be sent inside the same Write call by appending sLineBreak at the end, instead of calling WriteLine separately.
So the write should look like this:
FExporter.BaseStream.Seek(0, soFromEnd);
FExporter.Write('['+DateToStr(Now, FDateTimeFormat)+'] ['+TimeToStr(Now, FDateTimeFormat)+'] [#'+Lead0(GetCurrentThreadId, 5)+']: '+EntryText + System.sLineBreak);
//FExporter.WriteLine;
Update 1:
As the link Oliver posted explains, this can still fail when the message being written is larger than the OS file sector size and, at that very moment, another process also tries to write a message; in that case the resulting content may be mixed.
So the approach I first proposed increases the probability of getting the desired result, but it may not be the solution in 100% of cases.
To be 100% sure of writing a continuous log to a single file from multiple processes, you should create a dedicated log process that receives messages from the others and is solely responsible for writing the log, synchronized across threads.
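Short of a dedicated log process, a named (cross-process) mutex can serialize the writers. This is a minimal sketch, not the original code; the mutex name, helper signature, and log format are illustrative:

uses
  System.Classes, System.SysUtils, System.SyncObjs;

procedure WriteLogEntry(Exporter: TStreamWriter; LogMutex: TMutex;
  const EntryText: string);
begin
  LogMutex.Acquire;  // blocks until no other process holds the mutex
  try
    // Re-seek to the end: another process may have appended meanwhile.
    Exporter.BaseStream.Seek(0, soFromEnd);
    Exporter.Write('[' + DateTimeToStr(Now) + ']: ' + EntryText + sLineBreak);
    Exporter.Flush;  // push the whole entry out while we still hold the mutex
  finally
    LogMutex.Release;
  end;
end;

// Created once per application, with a name shared by all of them:
//   LogMutex := TMutex.Create(nil, False, 'Global\MyAppLogMutex');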
I'm trying to pipe the output (logs) of a program to a Go program that aggregates/compresses the output and uploads it to S3. The command to run the program is "/program1 | /logShipper". The logShipper is written in Go and simply reads from os.Stdin and writes to a local file. The local file is processed by another goroutine and uploaded to S3 periodically. There are some existing Docker log drivers, but we are running the container on a fully managed provider and the log-processing charge is pretty expensive, so we want to bypass the existing solutions and just upload to S3.
The main logic of the logShipper simply reads from os.Stdin and writes to a file. It works correctly when running on the local machine, but when running in Docker the goroutine blocks at reader.ReadString('\n') and never returns.
go func() {
    reader := bufio.NewReader(os.Stdin)
    mu.Lock()
    output = openOrCreateOutputFile(&uploadQueue, workPath)
    mu.Unlock()
    for {
        text, err := reader.ReadString('\n')
        if len(text) > 0 {
            now := time.Now().Format("2006-01-02T15:04:05.000000000Z")
            mu.Lock()
            output.file.Write([]byte(fmt.Sprintf("%s %s", now, text)))
            mu.Unlock()
        }
        if err != nil { // io.EOF once the writing end of the pipe closes
            break
        }
    }
}()
I did some research online but did not find out why it's not working. One possibility I'm considering is that Docker redirects stdout somewhere, so the pipe does not work the same way as it does on a plain Linux box (it looks like nothing can be read from program1). Any help or suggestion on why it is not working is welcome. Thanks.
Edit:
After doing more research I realized it's bad practice to handle logs this way. I should rely on Docker's log driver to handle log aggregation and shipping. However, I'm still interested in finding out why nothing is read from the pipe's source program.
I'm not sure about the way Docker handles output, but I suggest that you extract the file descriptor with os.Stdin.Fd() and then resort to the golang.org/x/sys/unix package as follows:
// Long way; for the short way, jump
// straight down to the marked line.
// (Requires: import "os" and
// "golang.org/x/sys/unix".)

// Retrieve the file descriptor and
// cast it to int, because the Fd method
// returns uintptr.
fd := int(os.Stdin.Fd())
// Extract the file descriptor flags.
// It's safe to drop the error: if it's
// non-nil, you won't be able to read from
// Stdin anyway, unless it's a notice to
// try again, which should mostly not be
// the case. Note that FcntlInt takes a
// uintptr.
flags, _ := unix.FcntlInt(uintptr(fd), unix.F_GETFL, 0)
// Check whether nonblocking reading is enabled.
nb := flags&unix.O_NONBLOCK != 0
// If it is not, enable it with
// unix.SetNonblock, which is also the
// -- SHORT WAY HERE --
if !nb {
    err := unix.SetNonblock(fd, true)
    if err != nil {
        // Handle the error; reading from
        // Stdin is unlikely to work now.
    }
}
The difference between the long and the short way is that the long way will definitely tell you whether the problem is the absence of the nonblocking state or not.
If it is not, then I have no other ideas personally.
I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, as soon as I set the voltage (or current), it does so, but when I then send the command to perform a measurement, I'm not able to read the data back; I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I can no longer read the *IDN? output either.
The source meter is connected to the PC via a National Instrument GPIB-USB-HS adaptor.
EDIT: I forgot to add that this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command; this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. I'm not sure why such a command is needed, but that's that. Thank you all for your time answering me.
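For reference, here is the same sequence as a Python sketch using PyVISA (an assumption on my part: the original uses LabVIEW subVIs, the GPIB address here is illustrative, and smua.source.levelv / smua.measure.v() are the Series 2600 TSP names):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource('GPIB0::26::INSTR')  # address is illustrative
print(inst.query('*IDN?'))                   # identification works as before
inst.write('smua.source.levelv = 1')         # set 1 V; no response to read
# Wrapping the measurement in print() is what makes it readable:
reading = inst.query('print(smua.measure.v())')
print(reading)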
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both a command and a query: you write the command *IDN? and then read the result. Some commands do not have any response to read. Here's a quick test to see whether you should be reading from the instrument. The following code is in Python; I made up the GPIB command to set the voltage.
sm = SourceMonitor()
# Prints out IDN
sm.query('*IDN?')
# Prints out current voltage (change this to your actual command)
sm.query('SOUR:VOLT?')
# Set a new voltage
sm.write('SOUR:VOLT 1V')
# Read the new voltage
sm.query('SOUR:VOLT?')
Note that question-marked GPIB commands and query are used when you expect to get a response from the instrument. The instrument won't give a response for a write command. Query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to perform the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments support the following common commands (used in the sketch after this list):
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
Add a question mark ? to the end of the GPIB command used to set the voltage to read the setting back.
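A short sketch building on the Python example above (the voltage command name is still made up):

sm.write('SOUR:VOLT 1V')      # set the voltage; no response to read
print(sm.query('*OPC?'))      # returns '1' once the operation is complete
print(sm.query('SYST:ERR?'))  # e.g. '0,"No error"' if the command was accepted
print(sm.query('SOUR:VOLT?')) # read the setting back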
I am using Verilator to incorporate an algorithm written in SystemVerilog into an executable utility that manipulates I/O streams passed via stdin and stdout. Unfortunately, when I use the SystemVerilog $display() function, the output goes to stdout. I would like it to go to stderr so that stdout remains uncontaminated for my other purposes.
How can I make this happen?
Thanks to #toolic for pointing out the existence of $fdisplay(), which can be used thusly...
$fdisplay(STDERR,"hello world"); // also supports formatted arguments
IEEE Std 1800-2012 states that STDERR should be pre-opened, but it did not seem to be known to Verilator. A workaround for this is:
integer STDERR = 32'h8000_0002;
Alternatively, you can create a log file handle for use with $fdisplay() like so...
integer logfile;
initial begin
  $system("echo 'initial at ['$(date)']'>>temp.log");
  logfile = $fopen("temp.log","a"); // or open with "w" to start fresh
end
It might be nice if you could create a custom wrapper that works like $display but uses your selected file descriptor (without specifying it every time). Unfortunately, that doesn't seem to be possible within the language itself -- but maybe you can do it with the DPI; see DPI Display Functions (I haven't gotten this to work so far). A partial, task-based workaround is sketched below.
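The sketch: a task that forwards one pre-formatted string to $fdisplay. It cannot reproduce $display's variadic arguments, but $sformatf at the call site covers most uses (this reuses the STDERR workaround above; untested under Verilator):

module log_demo;
  integer STDERR = 32'h8000_0002;

  // Forwards one pre-formatted string to stderr.
  task automatic log_err(input string msg);
    $fdisplay(STDERR, msg);
  endtask

  initial begin
    log_err("hello world");
    log_err($sformatf("value = %0d", 42)); // format at the call site
  end
endmodule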
I generate a very large .csv file from a database using the method outlined in
https://stackoverflow.com/a/13456219/141172
It works fine, up to a point. When the exported file is too large, I get an OutOfMemoryException.
If I turn off output buffering by modifying that code like this:
protected override void WriteFile(System.Web.HttpResponseBase response)
{
    response.BufferOutput = false; // <--- Added this
    this.Content(response.OutputStream);
}
the file download completes. However, it is several orders of magnitude slower than when output buffering was enabled (measured for the same file with buffering true/false, on localhost).
I understand that it will be slower, but why would it slow to a relative crawl? Is there anything I can do to improve processing speed?
UPDATE
It would also be an option to use File(Stream stream, String contentType) as suggested in the comments. However, I'm not sure how to create stream. The data is dynamically assembled based on a DB query, and a MemoryStream would run out of contiguous physical memory. Suggestions are welcome.
UPDATE 2
It was suggested in the comments that alternately reading from the database and writing to the stream is causing a degradation. I modified the code to perform the stream writing in a separate thread (using the producer/consumer pattern). There is no appreciable difference in performance.
I don't know what ASP.NET and IIS are doing exactly with output streaming, but maybe too-small chunks are being used. Hook in a BufferedStream with a very big buffer, like 4 MB.
According to your comments, it worked. Now tune down the buffer size to save memory and get a smaller working set; that is good for the cache.
As a subjective comment, I'm disappointed that this is even necessary. IIS should use the right buffers automatically, which is extremely easy with TCP connections.
EDIT FROM OP
Here is the code derived from this answer
public ActionResult Export()
{
    // Domain-specific stuff here
    return new FileGeneratingResult("MyFile.txt", "text/text",
        stream => this.StreamExport(stream), false);
}

private void StreamExport(Stream stream)
{
    using (BufferedStream bs = new BufferedStream(stream, 256 * 1024))
    using (StreamWriter sw = new StreamWriter(bs))
        foreach (var stuff in MyData())
        {
            sw.Write(stuff);
        }
}
In Eric's latest update, he mentioned using another thread. I too had this problem when implementing database exports. Here is some example code for the solution I used:
Handling with temporary file stream
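The linked code isn't reproduced above, but a minimal sketch of the temporary-file approach might look like this (MyData() is the enumerable from the earlier code; error handling is trimmed, and the method name is made up):

private void StreamExportViaTempFile(Stream responseStream)
{
    // Write the export to a temp file first, so the slow client download
    // never interleaves with the database read.
    string tempPath = Path.GetTempFileName();
    try
    {
        using (var temp = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
        using (var sw = new StreamWriter(temp))
        {
            foreach (var stuff in MyData())
                sw.Write(stuff);
        }

        // Then stream the finished file to the response.
        using (var temp = new FileStream(tempPath, FileMode.Open, FileAccess.Read))
        {
            temp.CopyTo(responseStream);
        }
    }
    finally
    {
        File.Delete(tempPath);
    }
}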