I've developed a Delphi service that writes a log to a file. Each entry is written on a new line. Once this log file reaches a specific size limit, I'd like to trim the first X lines from the beginning of the file to keep its size below that limit. I've found some code here on SO that demonstrates how to delete fixed-size chunks of data from the beginning of a file, but how do I go about deleting whole lines of varying length rather than fixed-size chunks?
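In case it helps, here is a minimal sketch of that approach in Python rather than Delphi (the size limit, line count and temporary-file name are made up): read the file as lines, drop the first X of them, write the rest to a temporary file, and swap it over the original so a crash mid-rewrite cannot destroy the log.

import os

MAX_BYTES = 1_000_000    # hypothetical size limit
TRIM_LINES = 100         # hypothetical number of lines to drop

def trim_log(path):
    # Nothing to do while the log is under the limit.
    if os.path.getsize(path) <= MAX_BYTES:
        return
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    # Rewrite everything except the first TRIM_LINES lines to a
    # temporary file, then atomically swap it over the original.
    tmp = path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        f.writelines(lines[TRIM_LINES:])
    os.replace(tmp, path)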
The Spyder IDE logs the commands from the console in ~/.config/spyder-py3/history.py. However, it only stores about 1000 lines of history. How do I increase this limit?
I followed the advice in this post and increased the buffer size to 5000. However, that does not increase the length of the history file (I think it only changes the console buffer size).
In short, I am looking for the equivalent of HISTFILESIZE in .bashrc. I do not mind if the buffer size remains short (say 500, the default), i.e. I don't care about the HISTSIZE equivalent.
P.S. There is another, empty file in the config path: ~/.config/spyder-py3/history_internal.py. I don't know if that matters.
I have a NetLogo model for which a run takes about 15 minutes but goes through a lot of ticks, because not much happens per tick. I want to do quite a few runs in a BehaviorSpace experiment. The output (table output only) will be all the output and input variables per tick. However, not all of this data is relevant: it is only relevant once a day (day is variable, and a run lasts 1095 days).
The result is that the model becomes very slow when running experiments via BehaviorSpace. Not only would it be nicer to have output data with just 1095 rows, the per-tick output probably also slows the experiment down tremendously.
How can I fix this?
It is possible to write your own output file in a BehaviorSpace experiment. Program your code to create and open an output file that contains only the results you want.
The problem is to keep BehaviorSpace from trying to open the same output file from different model runs running on different processors, which causes a runtime error. I have tried two solutions.
Tell BehaviorSpace to use only one processor for the experiment. Then you can use the same output file for all model runs. If you want the output lines to include which model run they came from, use the primitive behaviorspace-run-number.
Have each model run create its own output file with a unique name. Open the file using something like:
file-open (word "Output-for-run-" behaviorspace-run-number ".csv")
so the output files will be named Output-for-run-1.csv etc.
(If you are not familiar with it, the CSV extension is very useful for writing output files. You can put everything you want to output on a big list, and then when the model finishes write the list into a CSV file with:
csv:to-file (word "Output-for-run-" behaviorspace-run-number ".csv") the-big-list
)
I'm only outputting my parsed data into MongoDB from Logstash, but is there any way to tell when the logs are finished parsing, so that I can kill Logstash? Since a lot of logs are being processed, I can't just send my data to stdout.
Since you are using a file input, there should be a .sincedb file somewhere. That file keeps track of how many lines have already been parsed. As far as I understand it, it is structured this way:
INODE_NUMBER CURRENT_LINE_NUMBER
The inode number identifies a file (so if you are parsing several files, or if your file is being rolled over, there will be several lines). The other number is like a bookmark that Logstash uses to remember what it has already read (in case it processes the same file in several passes). So basically, when this number stops moving up, Logstash should be done parsing the file.
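If you want to automate that check, a rough sketch in Python (the path and timing values are made up) could poll the .sincedb file and treat Logstash as finished once the file has stopped changing for a while:

import time

def wait_until_stable(path, idle_seconds=30, poll_every=5):
    # Keep re-reading the file; once its contents have not changed
    # for idle_seconds, assume Logstash has finished parsing.
    last_content = None
    unchanged_since = time.time()
    while True:
        with open(path) as f:
            content = f.read()
        if content != last_content:
            last_content = content
            unchanged_since = time.time()
        elif time.time() - unchanged_since >= idle_seconds:
            return
        time.sleep(poll_every)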
Alternatively, if you have no multiline filter set up, you could simply compare the number of lines in the file to the number of records in MongoDB.
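For example, a sketch in Python using pymongo (the paths, database and collection names are placeholders):

from pymongo import MongoClient

def counts_match(log_path, mongo_uri, db_name, coll_name):
    # Count the lines in the source log ...
    with open(log_path) as f:
        line_count = sum(1 for _ in f)
    # ... and compare with the number of documents Logstash wrote.
    coll = MongoClient(mongo_uri)[db_name][coll_name]
    return line_count == coll.count_documents({})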
A third possibility: you can set up another output, not necessarily stdout. It could be, for example, a pipe to a script that simply drops the data and prints a message when it has received nothing new for some time, or some other alternative; see the docs.
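Such a drain script could be as simple as this sketch in Python (the quiet-period threshold is arbitrary): it discards whatever Logstash pipes in and prints a message once nothing new has arrived for a while.

import select
import sys

IDLE_SECONDS = 60   # how long to wait without new data

while True:
    # Wait up to IDLE_SECONDS for new data on stdin.
    ready, _, _ = select.select([sys.stdin], [], [], IDLE_SECONDS)
    if not ready:
        print("No new events for %d seconds - parsing looks finished." % IDLE_SECONDS)
        break
    if not sys.stdin.readline():   # pipe closed: Logstash exited
        break
    # Otherwise the line is simply dropped.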
I have a file that contains:
"AAAAAAA"
I want to insert "11111" at the beginning of the file above. I tried two different calls, both with seekToFileOffset:0:
fileHandleForWritingAtPath:
"11111AA"
The characters at the front of the file are overwritten (gone).
I also tried:
fileHandleForUpdatingAtPath:
It also ended with:
"11111AA"
You have two choices, depending on your skill level:
rewrite the file with a new name, delete the original file, and rename the newly written file to the original name.
rewrite the file in place. For example, using a 1K buffer, start at the end: read the last 1K and rewrite it at the same location plus an offset equal to the number of bytes you want to insert. Repeat for all the preceding data. When you reach the front of the file, you will have shifted all the data forward by the desired offset and can then write the new data at the beginning (see the sketch below).
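Here is a minimal sketch of that second approach, written in Python rather than Objective-C just to show the mechanics (the function name and buffer size are arbitrary):

import os

def prepend_in_place(path, new_bytes, buf_size=1024):
    offset = len(new_bytes)
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()            # start from the end of the file
        while pos > 0:
            chunk = min(buf_size, pos)
            pos -= chunk
            f.seek(pos)
            data = f.read(chunk)
            f.seek(pos + offset)  # rewrite the chunk shifted forward
            f.write(data)
        f.seek(0)
        f.write(new_bytes)        # finally write the new data at the front

Working from the end backwards is what keeps each chunk from being overwritten before it has been read.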
I am writing a Tcl script that will be used on an embedded device. The value of a variable in this script will come from a text file on the system. My concern is that if the source file is too big, this may crash the device, as there may not be enough memory to store the entire file. I wonder whether the size of the variable can be limited, so that filling it does not exhaust all of the available memory.
Also, if it is possible to limit the size of the variable, will it still be filled with as much of the source file as possible, even if the entire file cannot fit into the variable?
You can limit the size of the variable by specifying the number of characters to read from the file. For example:
set f [open file.dat r]
set var [read $f 1024]
close $f
This code will read up to 1024 characters from the file (you'll get less than 1024 characters if the file is shorter than that, naturally).
I seem to recall that the string representation of any variable in Tcl 8.5 is limited to 2 GiB. But as Eric already said, in your situation you should not blindly read the whole file into a variable; rather, either process the contents of the file in chunks, or at least first check its size using file stat and only read it if the size is OK. (Note that this approach contains a race condition, since the file can grow between the check and the read, but in your case that may or may not be a problem.)
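To make the chunked approach concrete, here is the shape of that loop as a Python sketch (in Tcl the same pattern is a while loop around read $f $chunkSize); the chunk size and the handler are placeholders:

CHUNK = 4096   # arbitrary chunk size

def process_in_chunks(path, handle_chunk):
    # Only CHUNK characters are ever held in memory at once,
    # instead of the whole file.
    with open(path) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            handle_chunk(data)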