log4j2 - keep last 7 days of log files

To keep the last 3 days of log files, with each file up to 10 MB in size, how do I configure this in the log4j2.yml file?
I tried:
filePattern: "${log}/${app}-archive/${app}-%d{MM-dd-yyyy}-%i.log"
...
Policies:
  TimeBasedTriggeringPolicy:
    interval: 1
    modulate: true
  SizeBasedTriggeringPolicy:
    size: 10 MB
DefaultRolloverStrategy:
  delete:
    basePath: "${log}/${app}-archive"
    maxDepth: 1
    IfFileName:
      glob: "*.log"
    IfLastModified:
      age: 3d
With that, it creates only up to 7 archives on the same day and deletes the older files even though they are today's logs. Is there a way to keep as many files as needed, as long as their lastModified is < 3d?
For example: app-04-09-2021-8.log, app-04-09-2021-9.log, ... app-04-09-2021-39.log, and so on.
Please, guide me.

By default, DefaultRolloverStrategy will keep at most the value of the max configuration attribute (7 by default) per time-based rollover interval, which is daily in your use case as indicated by your file pattern, ${app}-%d{MM-dd-yyyy}-%i.log. The max attribute applies only to the %i counter.
Provide a bigger value for that attribute, whatever you consider appropriate for your logging volume. For example:
DefaultRolloverStrategy:
  max: 100
  delete:
    basePath: "${log}/${app}-archive"
    maxDepth: 1
    IfFileName:
      glob: "*.log"
    IfLastModified:
      age: 3d
Please see the relevant documentation.
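For context, a complete RollingFile appender section in log4j2.yml might look roughly like the sketch below. The appender name, fileName, and PatternLayout pattern are illustrative placeholders, not values taken from the question:

Configuration:
  Appenders:
    RollingFile:
      name: rolling                        # illustrative appender name
      fileName: "${log}/${app}.log"        # assumed active log file
      filePattern: "${log}/${app}-archive/${app}-%d{MM-dd-yyyy}-%i.log"
      PatternLayout:
        pattern: "%d %p %c{1.} [%t] %m%n"  # illustrative layout
      Policies:
        TimeBasedTriggeringPolicy:
          interval: 1
          modulate: true
        SizeBasedTriggeringPolicy:
          size: 10 MB
      DefaultRolloverStrategy:
        max: 100
        delete:
          basePath: "${log}/${app}-archive"
          maxDepth: 1
          IfFileName:
            glob: "*.log"
          IfLastModified:
            age: 3d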

Related

Is there any way to convert a .log file generated by BUS master (CAN tool) to a .asc file (Vector format) without BUS master?

I am trying to make a script which converts BUS master .log files into .asc files without BUS master. The conversion is mostly just column adjustment, but I am stuck on the different time formats used by the two files.
Example: 11:29:26:2229 (.log) = 41366.222900 (.asc)
Please help me understand the .asc file time format.
In the header of the .asc file, the start date and time are given in the very first line. E.g.:
date Mon Okt 24 10:01:03 2022
The timestamps of the individual CAN messages (and other events) are just offsets from that start time, in seconds. I.e. given the header above, the timestamp
123.456
would be 123 seconds and 456 milliseconds after 10:01:03, which is 10:03:06.456.
Details about the .asc format can be found in the Doc folder in CANoe's installation directory.
.asc time format = hour * 3600 + min * 60 + sec.
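Putting both pieces together, a small JavaScript sketch of the conversion could look like the following. Note the assumptions (mine, inferred from the example above, so verify them against your files) that the last .log field is in units of 0.0001 s and that the .asc offset here equals seconds since midnight:

// Hypothetical helper: convert a BUS master .log timestamp "hh:mm:ss:ffff"
// into the .asc seconds format.
function logToAscTime(logTime) {
  const [hh, mm, ss, frac] = logTime.split(':').map(Number);
  const seconds = hh * 3600 + mm * 60 + ss + frac / 10000; // ffff assumed to be 0.1 ms units
  return seconds.toFixed(6);
}

console.log(logToAscTime('11:29:26:2229')); // "41366.222900"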

Measure the duration of x amount of requests while using K6

I would like to use K6 to measure the time it takes to process 1.000.000 requests (in total) against an API.
Scenario
Execute 1.000.000 (1 million in total) GET requests with 50 concurrent users/threads, so every user/thread executes 20.000 requests.
I've managed to create such a scenario with Artillery.io, but I'm not sure how to create the same one using K6. Could you point me in the right direction to create this scenario? (Most examples use a pre-defined duration, but in this case I don't know the duration; that is exactly what I want to measure.)
Artillery yml
config:
  target: 'https://localhost:44000'
  phases:
    - duration: 1
      arrivalRate: 50
scenarios:
  - flow:
      - loop:
          - get:
              url: "/api/Test"
        count: 20000
K6 js
import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  iterations: 1000000,
  vus: 50
};

export default function () {
  let res = http.get('https://localhost:44000/api/Test');
  check(res, { 'success': (r) => r.status === 200 });
}
The iterations + vus you've specified in your k6 script options would result in a shared-iterations executor, where VUs will "steal" iterations from the common pile of 1m iterations. So, the faster VUs will complete slightly more than 20k requests, while the slower ones will complete slightly less, but overall you'd still get 1 million requests. And if you want to see how quickly you can complete 1m requests, that's arguably the better way to go about it...
However, if having exactly 20k requests per VU is a strict requirement, you can easily do that with the aptly named per-vu-iterations executor:
export let options = {
  discardResponseBodies: true,
  scenarios: {
    'million_hits': {
      executor: 'per-vu-iterations',
      vus: 50,
      iterations: 20000,
      maxDuration: '2h',
    },
  },
};
In any case, I strongly suggest setting maxDuration to a high value, since the default value is only 10 minutes for either executor. And discardResponseBodies will probably help with the performance, if you don't care about the response body contents.
By the way, you can also do in k6 what you've done in Artillery: have 50 VUs run a single iteration each and just loop the http.get() call 20000 times inside that one iteration, as in the sketch below. You won't get a very nice UX that way, since the k6 progress bars will be frozen until the very end (k6 has no idea of your actual progress inside each iteration), but it will also work.
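A rough sketch of that looping variant, reusing the URL and check from the original script and pinning each VU to exactly one long iteration via the per-vu-iterations executor, could look like this:

import http from 'k6/http';
import { check } from 'k6';

export let options = {
  discardResponseBodies: true,
  scenarios: {
    million_hits_looped: {
      executor: 'per-vu-iterations',
      vus: 50,
      iterations: 1,      // a single long iteration per VU
      maxDuration: '2h',  // raise the 10-minute default so the run can finish
    },
  },
};

export default function () {
  // each VU fires its 20000 requests inside its one iteration
  for (let i = 0; i < 20000; i++) {
    const res = http.get('https://localhost:44000/api/Test');
    check(res, { 'success': (r) => r.status === 200 });
  }
}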

How can I read a directory on iso9660 from the path table when the table does not include size?

According to the spec for the structure of an iso9660 / ecma119, the path table contains records for each path, including the location of the starting sector and its name, but not its size. I can find the directory entry, but don't know how many sectors (normally 2048 bytes) it contains. Is it one? Two? Six?
If I "walk the directory tree", each directory entry includes the referenced location and size, so I can know how many bytes (essentially, how many sectors, since a directory must use entire sectors) to read. However, the path table only includes the starting location, and not the size, leaving me not knowing how many bytes to read.
In an example iso I have (ubuntu-18.04.1-live-server-amd64.iso fwiw), the root directory entry in the primary volume descriptor shows:
Root Directory:
Directory Record Length: 34
Extended Attribute Length: 0
Location of Extent: 20 $00000014 00:00:20
Data Length: 2048 $00000800
Recording Date and Time: 23:39:04 07/25/2018 GMT 0
File Flags: $02 visible regular dir non-record no-perms single-extent
File Unit Size: 0
Interleave Gap Size: 0
Volume Sequence Number: 1
File Identifier: . (current directory)
Since it says the Data Length is 2048, I know to read just one sector.
However, the root directory entry in the path table shows:
Path Record Length: 10 $0A
Extended Attribute Length: 0 $00
Location of Extent: 20 $00000014 00:00:20
Parent Directory Number: 1 $0001
File Identifier: . (current directory)
It also points to sector 20, but doesn't tell me how many sectors it uses, leaving me guessing.
Yes, unused bytes in a sector should be all 0x00, so if I read in a sector, read records, and then come to one whose first byte (length) is 0x00, then I know I have reached the end of records, but that has three issues:
If that were the canonical way, why bother including size in the directory entry?
If it includes 2 or 3 sectors, it is more efficient for me to read them all at once than one at a time.
If I have a directory whose records precisely fill a sector, without some size attribute, I don't know if the next sector is supposed to be read as an entry, or if the directory ended here.
Basically, I know how to read the ordered path table to get the directory entry, but don't know how to use that to know how many sectors to read for the directory itself. I could, in theory, read the parent to get the entry for this directory to know the size, but that adds a seek and read and pretty much defeats the purpose of the path table.
Ah, I figured it out. Because a directory's extent always starts with a directory record for the directory itself, and the data length is always stored in bytes 10-17 of that record (10-13 little-endian, 14-17 big-endian), you can just read those bytes from the beginning of the sector and get the size. Still not as efficient as putting it in the path table itself - no idea why they did not - but it works.
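A minimal Node.js sketch of that trick (the ISO file name and the extent number 20 are just taken from the example above, and the 2048-byte logical sector size is assumed):

const fs = require('fs');

const SECTOR_SIZE = 2048; // assumed logical sector size

// Read the "." record at the start of a directory's extent and return its data
// length: bytes 10-13 hold the little-endian value, 14-17 the big-endian copy.
function directorySizeAt(fd, extentLba) {
  const record = Buffer.alloc(18);
  fs.readSync(fd, record, 0, 18, extentLba * SECTOR_SIZE);
  return record.readUInt32LE(10);
}

const fd = fs.openSync('ubuntu-18.04.1-live-server-amd64.iso', 'r');
const bytes = directorySizeAt(fd, 20); // extent 20 comes from the path table record
console.log(`root directory: ${bytes} bytes = ${bytes / SECTOR_SIZE} sector(s)`);
fs.closeSync(fd);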

how to form fields section in xfd file

I'm having a problem forming the field section's structure in the .xfd file after analysing the file by issuing the command "vutil32.exe -i -kx pogl.dad". I hope somebody can help me work out the field structure, as highlighted below. I've uploaded a sample of my file, "pglc.dad"; I hope someone with expert knowledge can guide me on how to form the .xfd file. Thanks.
Result from vutil32.exe
file size: 250880
record size (min/max): 121/1024 compressed(80%)
# of keys: 4
key size: 16:02 31:03 56:03 15
key offset: 0 0 0 1
duplicates okay: N N N N
block size: 512
blocks per granule: 1
tree height: 4/2/2.7
# of nodes: 200
# of deleted nodes: 1
total node space: 101800
node space used: 67463 (66%)
user count: 0
Key  Dups  Seg-1    Seg-2    Seg-3    Seg-4    Seg-5    Seg-6
           (sz/of)  (sz/of)  (sz/of)  (sz/of)  (sz/of)  (sz/of)
 0    N    1/0      15/1
 1    N    1/0      15/66    15/1
 2    N    1/0      40/81    15/1
 3    N    15/1
Here is my construction of the .xfd file so far.
XFD,02,PGLC,PGLC
00300,00041,004
1,0,013,00000
01
PGSTAT
3,0,004,00004,020,00021,004,00000
3
PGSTAT
PGDESC
PGLINE
3,0,004,00004,008,00013,004,00000
03
PGSTAT
PGDESC
PGLINE
1,0,012,00021
01
PGSTAT
000
0150,00150,00003 =================>> How can I form this field section?
00000,00013,16,00016,+00,000,000,PGSTAT
00000,00001,16,00001,+00,000,000,PGDESC
00001,00015,16,00015,+00,000,000,PGLINE
here is the link for my pglc.dad : http://files.engineering.com/getfile.aspx?folder=080fdad6-b1d5-4a37-8dd0-b89f9a985c69&file=PGLC.DAD
Thanks in advance to anyone who can help.
I know the XFD format intimately as I have written a couple of parsers of this file format in both Perl and Cobol.
Having said that, I would strongly recommend that you do not try to hand craft an XFD file from scratch.
If you have an AcuCobol (MicroFocus) compiler, and the source of the file's SELECT and FD definitions, then you can create a very small Cobol program that has just the SELECT and FD definitions and then compile the program using:
ccbl32.exe -Fx <program>
That will create an XFD file for the indexed file definition. Note, you can specify a directory for the created XFD file using the -Fo <directory> option.
If you don't have the source of the file definitions, then you are just going to be guessing what and where the fields are. The indexed file by itself will not tell you that information. I can see from extracting the data in your file (using the vutil -e option) that the file contains binary data as well as text, so without knowing exactly what PICture those fields are (COMP-?) you will be struggling to figure out the structure of those fields.

Make Sphinx quiet (non-verbose)

I'm using Sphinx, through Thinking Sphinx, in a Ruby on Rails project. When I create seed data, and pretty much all the time, it's quite verbose, printing this:
using config file '/Users/pupeno/projectx/config/development.sphinx.conf'...
indexing index 'user_delta'...
collected 7 docs, 0.0 MB
collected 0 attr values
sorted 0.0 Mvalues, 100.0% done
sorted 0.0 Mhits, 99.6% done
total 7 docs, 159 bytes
total 0.042 sec, 3749.29 bytes/sec, 165.06 docs/sec
Sphinx 0.9.8.1-release (r1533)
Copyright (c) 2001-2008, Andrew Aksyonoff
for more or less every record that is created. Is there a way to suppress that output?
There is actually a setting to stop this - you'll want to set it at the end of your environment.rb file:
ThinkingSphinx.suppress_delta_output = true
In Thinking Sphinx v3 and newer, this has changed, and this setting is now managed through config/thinking_sphinx.yml. Repeat for each appropriate environment:
development:
  quiet_deltas: true
Run Sphinx with the --quiet flag. I'm not using TS, though, so I don't know how to make TS use this flag. HTH.
