Serilog - File sink fails to roll over based on size

I am using Serilog.Sinks.File version 3.2.0 and I would like to roll over logs based on size. Currently, my 'fileSizeLimitBytes' is set to 2000 bytes. When the log file reaches the limit set in 'fileSizeLimitBytes', it does not roll over and fails to log further messages. How can I roll over the log file based on its size?
logging.json
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "[{Timestamp:HH:mm:ss} {Level}][{ThreadId}] {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}{NewLine}"
}
},
{
"Name": "File",
"Args": {
"path": "Logs\\Test.log",
"formatter":"Serilog.Formatting.Json.JsonFormatter, Serilog",
"rollingInterval": "Day",
"restrictedToMinimumLevel": "Debug",
"retainedFileCountLimit": 5 ,
"fileSizeLimitBytes": 2000
}
}

I believe you also need to specify rollOnFileSizeLimit: true.
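For example, the File sink's Args could look like this (a sketch based on the question's configuration; rollOnFileSizeLimit is the Serilog.Sinks.File setting that enables rolling by size):

{
  "Name": "File",
  "Args": {
    "path": "Logs\\Test.log",
    "formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog",
    "rollingInterval": "Day",
    "restrictedToMinimumLevel": "Debug",
    "retainedFileCountLimit": 5,
    "fileSizeLimitBytes": 2000,
    "rollOnFileSizeLimit": true
  }
}

With this in place, once a file reaches 2000 bytes the sink keeps logging to a new file with a _001, _002, ... suffix instead of failing.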

Related

Can't read data from Firebase Realtime Database by issuing a GET request - it always returns null

The task is simple, but I couldn't find a solution.
Here is the request I'm sending: https://graf-24561-default-rtdb.firebaseio.com/graf-24561-default-rtdb.json
Rules for reading and writing:
{
  "rules": {
    ".read": "now < 1651165200000",   // 2022-4-29
    ".write": "now < 1651165200000"   // 2022-4-29
  }
}
Data in code:
[
  {
    "name": "0002 М ( мрамор) 8м пленка с\/м\/20 DEKORON ",
    "price": 209.7
  },
  {
    "name": "0007 М ( мрамор) 8м пленка с\/м\/20 DEKORON ",
    "price": 209.7
  },
  {
    "name": "0008-2 А (дуб темный) 8м пленка с\/м \/20",
    "price": 232.84
  },
  {
    "name": "0008-3 А (темн.махагон) 8м пленка с\/м \/20 ",
    "price": 209.7
  }
]
The graf-24561-default-rtdb that you see in the root of the JSON is the name of your database, and is not part of the data structure.
So to get the entire database, the URL would be:
https://graf-24561-default-rtdb.firebaseio.com/.json
It may read a bit weird with that .json at the end, but it is the correct syntax.
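For example, with curl (the first request reads the entire database; the second reads a single child node, here the first array element, purely as an illustration):

$ curl 'https://graf-24561-default-rtdb.firebaseio.com/.json'
$ curl 'https://graf-24561-default-rtdb.firebaseio.com/0.json'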

Artifactory and Jenkins - get file with newest/biggest custom property

I have a generic repository "my_repo". I upload files there from Jenkins to paths like my_repo/branch_buildNumber/package.tar.gz, with a custom property "tag" such as "1.9.0", "1.10.0", etc. I want to get the item/file with the latest/newest tag.
I tried to modify Example 2 from this link ...
https://www.jfrog.com/confluence/display/JFROG/Using+File+Specs#UsingFileSpecs-Examples
... and add sorting and limit the way it was done here ...
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language#ArtifactoryQueryLanguage-limitDisplayLimitsandPagination
But I'm getting an "unknown property desc" error.
The Jenkins Artifactory Plugin, like most of the JFrog clients, supports File Specs for downloading and uploading generic files.
The File Specs schema is described here. When creating a File Spec for downloading files, you have the option of using the "pattern" property, which can include wildcards. For example, the following spec downloads all the zip files from the my-local-repo repository into the local froggy directory:
{
  "files": [
    {
      "pattern": "my-local-repo/*.zip",
      "target": "froggy/"
    }
  ]
}
Alternatively, you can use "aql" instead of "pattern". The following spec provides the same result as the previous one:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-local-repo",
          "$or": [
            {
              "$and": [
                {
                  "path": { "$match": "*" },
                  "name": { "$match": "*.zip" }
                }
              ]
            }
          ]
        }
      },
      "target": "froggy/"
    }
  ]
}
The AQL syntax allowed inside File Specs does not include everything the Artifactory Query Language allows. For example, you can't use the "include" or "sort" clauses. These limitations were put in place to keep the response structure known and constant.
Sorting, however, is still available with File Specs, regardless of whether you choose to use "pattern" or "aql". It is supported through the "sortBy", "sortOrder", "limit" and "offset" File Spec properties.
For example, the following File Spec will download only the 3 largest zip files:
{
  "files": [
    {
      "aql": {
        "items.find": {
          "repo": "my-local-repo",
          "$or": [
            {
              "$and": [
                {
                  "path": { "$match": "*" },
                  "name": { "$match": "*.zip" }
                }
              ]
            }
          ]
        }
      },
      "sortBy": ["size"],
      "sortOrder": "desc",
      "limit": 3,
      "target": "froggy/"
    }
  ]
}
And you can do the same with "pattern", instead of "aql":
{
  "files": [
    {
      "pattern": "my-local-repo/*.zip",
      "sortBy": ["size"],
      "sortOrder": "desc",
      "limit": 3,
      "target": "local/output/"
    }
  ]
}
You can read more about File Specs here.
(After answering this question here, we also updated the File Specs documentation with these examples).
After a lot of testing and experimenting I found that there are many ways to solve my main problem (getting the latest version of a package), but each of them requires some feature that is only available in the paid version, like sort() in AQL or [RELEASE] in the REST API. However, I found that I can still get a JSON listing of all files and their properties, and I can also download each individual file. This led me to a solution with a simple Python script. I can't publish the whole script, only the core, which should be fairly obvious:
import requests, argparse
from packaging import version

...

# AQL query: find all files in the given repository/path and include their properties
query = """
items.find({
    "type" : "file",
    "$and":[{
        "repo" : {"$match" : \"""" + args.repository + """\"},
        "path" : {"$match" : \"""" + args.path + """\"}
    }]
}).include("name","repo","path","size","property.*")
"""
auth = (args.username, args.password)

def clearVersion(ver: str):
    # Strip everything except digits and dots so the value can be parsed as a version
    new = ''
    for letter in ver:
        if letter.isnumeric() or letter == ".":
            new += letter
    return new

def lastestArtifact(response: requests.Response):
    # Walk all results and remember the index of the item with the highest "tag" property
    response = response.json()
    latestVer = "0.0.0"
    currentItemIndex = 0
    chosenItemIndex = 0
    for results in response["results"]:
        for prop in results['properties']:
            if prop["key"] == "tag":
                if version.parse(clearVersion(prop["value"])) > version.parse(clearVersion(latestVer)):
                    latestVer = prop["value"]
                    chosenItemIndex = currentItemIndex
        currentItemIndex += 1
    return response["results"][chosenItemIndex]

req = requests.post(url, data=query, auth=auth)
if args.verbose:
    print(req.text)
latest = lastestArtifact(req)
...
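For completeness, here is a minimal sketch of the pieces elided above (argument parsing and the request URL). The /api/search/aql endpoint is Artifactory's standard AQL search API; the argument names are only assumptions chosen to match the args.* references in the snippet:

# Hypothetical setup for the parts left out ("..." above); only the args.* names
# are taken from the snippet, everything else is an assumption.
parser = argparse.ArgumentParser()
parser.add_argument("--artifactory-url", required=True)  # e.g. https://artifactory.example.com/artifactory
parser.add_argument("--repository", required=True)
parser.add_argument("--path", required=True)
parser.add_argument("--username", required=True)
parser.add_argument("--password", required=True)
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args()

# AQL queries are POSTed as plain text to Artifactory's AQL search endpoint
url = args.artifactory_url + "/api/search/aql"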
I just want to point out that THIS IS NOT a permanent solution. We just didn't want to buy a license yet because of one single problem. But if there are more problems like this, we will definitely buy the Pro subscription.

With Google Cloud Speech-to-Text, why do I get different results for the same audio file, depending on which bucket I put it into?

I am trying to use Google Cloud Speech-to-Text via the client libraries from a Node.js environment, and I see something I don't understand: I get a different result for the same example audio file, with the same configuration, depending on whether I use it from the original sample bucket or from my own bucket.
Here are the requests and responses:
The baseline is Google's own test data file, available here: https://storage.googleapis.com/cloud-samples-tests/speech/brooklyn.flac
Request:
{
  "config": {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "sampleRateHertz": 16000,
    "enableAutomaticPunctuation": true
  },
  "audio": {
    "uri": "gs://cloud-samples-tests/speech/brooklyn.flac"
  }
}
Response:
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "How old is the Brooklyn Bridge?",
          "confidence": 0.9831430315971375
        }
      ]
    }
  ]
}
So far, so good. But, if I download this audio file, re-upload it to my own bucket, and do the same, then:
Request:
{
  "config": {
    "encoding": "FLAC",
    "languageCode": "en-US",
    "sampleRateHertz": 16000,
    "enableAutomaticPunctuation": true
  },
  "audio": {
    "uri": "gs://goe-transcript-creation/brooklyn.flac"
  }
}
Response:
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "how old is",
          "confidence": 0.8902621865272522
        }
      ]
    }
  ]
}
As you can see, this is the same request. The re-uploaded audio data is here: https://storage.googleapis.com/goe-transcript-creation/brooklyn.flac
This is the exact same file as in the first example... not a bit of difference.
Still, the results are different; I only get half of the sentence.
What am I missing here? Thanks.
Update 1:
The same thing happens with the CLI tool, too:
$ gcloud ml speech recognize gs://cloud-samples-tests/speech/brooklyn.flac --language-code=en-US
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.98314303,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad encoding..
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad sample rate hertz.
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC --sample-rate=16000
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.8902483,
          "transcript": "how old is"
        }
      ]
    }
  ]
}
It's also interesting that when pulling the audio from the other bucket, I need to specify encoding and sample rate, otherwise it doesn't work... but it's not necessary when I am using the original test bucket.
Update 2:
If I don't use Google Cloud Storage, but upload the data directly in the speech-to-text request, it works as intended:
$ gcloud ml speech recognize brooklyn.flac --language-code=en-US
{
  "results": [
    {
      "alternatives": [
        {
          "confidence": 0.98314303,
          "transcript": "how old is the Brooklyn Bridge"
        }
      ]
    }
  ]
}
So the problem doesn't seem to be with the recognition itself, but with accessing the audio data. The obvious guess would be that maybe it's the fault of the upload, and the data is somehow corrupted along the way?
We can verify that by pulling the data from the cloud and comparing it with the original. It doesn't seem to be broken.
So maybe it's a problem with how the S-T-T service accesses the storage service? But why with one bucket only? Or is it some kind of file metadata problem?
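One way to check both the raw bytes and the object metadata from the command line (a sketch; it assumes the gsutil CLI and uses the bucket and file names from the question):

$ gsutil cp gs://goe-transcript-creation/brooklyn.flac ./brooklyn-reuploaded.flac
$ cmp brooklyn.flac brooklyn-reuploaded.flac          # no output means the bytes are identical
$ gsutil stat gs://cloud-samples-tests/speech/brooklyn.flac
$ gsutil stat gs://goe-transcript-creation/brooklyn.flac

gsutil stat prints the Content-Type and the CRC32C/MD5 hashes, so it shows whether the two objects differ in metadata even when the bytes match.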

Get values from JSON in Ruby

I am trying to get the VolumeId and State of the volumes attached to the machines using the AWS API.
Code
#!/usr/local/bin/ruby
require "aws-sdk"
require "rubygems"

list = Aws::EC2::Client.new(region: "us-east-1")
volume = list.describe_volumes()                               # SDK response (currently unused)
volumes = %x( aws ec2 describe-volumes --region='us-east-1' )  # CLI output captured as a JSON string
puts volumes
Below is the sample output of the command aws ec2 describe-volumes --region='us-east-1'. Please help me get the VolumeId and State from it.
Sample output of the API (JSON):
{
  "Volumes": [
    {
      "AvailabilityZone": "us-east-1d",
      "Attachments": [
        {
          "AttachTime": "2015-02-02T07:31:36.000Z",
          "InstanceId": "i-bca66353",
          "VolumeId": "vol-892a2acd",
          "State": "attached",
          "DeleteOnTermination": true,
          "Device": "/dev/sda1"
        }
      ],
      "Encrypted": false,
      "VolumeType": "gp2",
      "VolumeId": "vol-892a2acd",
      "State": "in-use",
      "Iops": 100,
      "SnapshotId": "snap-df910966",
      "CreateTime": "2015-02-02T07:31:36.380Z",
      "Size": 8
    }
  ]
}
To get just the volume IDs (note that JSON.parse needs require "json"):
JSON.parse(volumes)['Volumes'].map { |v| v["VolumeId"] }
To get just the states (the keys are case-sensitive, so it is "State", not "state"):
JSON.parse(volumes)['Volumes'].map { |v| v["State"] }
To get a hash with volume IDs as keys and their states as values:
JSON.parse(volumes)['Volumes'].map { |v| [v["VolumeId"], v["State"]] }.to_h
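Alternatively, since the question already creates an Aws::EC2::Client, the same values can be read straight from the SDK response instead of shelling out to the CLI (a sketch against the aws-sdk gem for Ruby; not part of the original answer):

require "aws-sdk"

ec2  = Aws::EC2::Client.new(region: "us-east-1")
resp = ec2.describe_volumes

# The SDK returns typed structs, so the JSON keys become snake_case methods
resp.volumes.map { |v| [v.volume_id, v.state] }.to_h
# => {"vol-892a2acd"=>"in-use", ...}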

TimeBasedTriggeringPolicy in Log4j2 not rolling over when a new day starts

I have configured Log4j2 with the following configuration, but the TimeBasedTriggeringPolicy is not working: I am getting a new day's logs in the previous day's log file. Can you please help? Rolling over by size works correctly, and it also creates any number of files, since I have specified the "nomax" attribute for DefaultRolloverStrategy; only the TimeBasedTriggeringPolicy is not working. It does create a new log file with the latest date, but some logs still end up in the previous day's file. Maybe the problem is in how the files are named, I do not know.
{
  "Configuration": {
    "Properties": {
      "Property": [
        {
          "name": "application",
          "value": "myapp"
        }
      ]
    },
    "Appenders": {
      "Console": {
        "name": "Console-Appender",
        "target": "SYSTEM_OUT",
        "PatternLayout": {
          "pattern": "%d{yyyy-MM-dd HH:mm:ss,SSS} ${application} %-5level %marker %t %c{5} %msg%n"
        },
        "ThresholdFilter": { "level": "error" }
      },
      "RollingFile": [
        {
          "name": "File-Appender",
          "fileName": "${sys:log.path}/${application}.log",
          "filePattern": "${sys:log.path}/${application}-%d{yyyy-MM-dd}-%i.log",
          "PatternLayout": {
            "pattern": "%d{yyyy-MM-dd HH:mm:ss,SSS} ${application} %-5level %marker %t %c{5} %msg%n"
          },
          "Policies": {
            "TimeBasedTriggeringPolicy": { "interval": "1", "modulate": "true" },
            "SizeBasedTriggeringPolicy": { "size": "5 KB" }
          },
          "DefaultRolloverStrategy": { "fileIndex": "nomax" }
        }
      ]
    },
    "loggers": {
      "logger": {
        "name": "com.mycompany",
        "level": "${sys:log.level}",
        "AppenderRef": { "ref": "File-Appender" }
      },
      "root": {
        "level": "error",
        "AppenderRef": { "ref": "Console-Appender" }
      }
    }
  }
}
How about the CronTriggeringPolicy? For example, this will trigger a rollover every day at midnight and at noon:
{ "CronTriggeringPolicy": { "schedule": "0 0 0,12 * * ?" } }
That way, it won't rely on the file pattern and should definitely roll your logs over per your cron schedule. I also don't know how you determined that the previous day's logs were included in the rollover, but make sure your system time is in sync.
Adding:
"OnStartupTriggeringPolicy": {"minSize":"0"}
to the policies solved the problem.
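In the configuration from the question, the Policies element would then look like this (a sketch of just that element):

"Policies": {
  "TimeBasedTriggeringPolicy": { "interval": "1", "modulate": "true" },
  "SizeBasedTriggeringPolicy": { "size": "5 KB" },
  "OnStartupTriggeringPolicy": { "minSize": "0" }
}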
From log4j2 documentation:
The OnStartupTriggeringPolicy policy causes a rollover if the log file is older than the current JVM's start time and the minimum file size is met or exceeded.
OnStartupTriggeringPolicy Parameters:
minSize: long: The minimum size the file must have to roll over. A size of zero will cause a roll over no matter what the file size is. The default value is 1, which will prevent rolling over an empty file.
