Serilog LogEntries sink deprecated - serilog

The LogEntries sink for Serilog has been deprecated, but we are still using it in our production applications. Does anyone have any suggestions for how to resolve this?
https://github.com/serilog-archive/serilog-sinks-logentries

Related

How to forward logs from docker container to Graylog server without pre-formatting?

I have a Docker container that sends its logs to Graylog via UDP.
Previously I just used it to output raw messages, but now I've come up with a solution that logs in GELF format.
However, Docker just puts the whole thing into the "message" field (screenshot from the Graylog web interface):
Or in plain text:
{
  "version": "1.1",
  "host": "1eefd38079fa",
  "short_message": "Content root path: /app",
  "full_message": "Content root path: /app",
  "timestamp": 1633754884.93817,
  "level": 6,
  "_contentRoot": "/app",
  "_LoggerName": "Microsoft.Hosting.Lifetime",
  "_threadid": "1",
  "_date": "09-10-2021 04:48:04,938",
  "_level": "INFO",
  "_callsite": "Microsoft.Extensions.Hosting.Internal.ConsoleLifetime.OnApplicationStarted"
}
The GELF driver is configured in the docker-compose file:
logging:
  driver: "gelf"
  options:
    gelf-address: "udp://sample-ip:port"
How to make Docker just forward these already formatted logs?
Is there any way to process these logs and append them as custom fields to docker logs?
The perfect solution would be to somehow enable gelf log driver, but disable pre-processing / formatting since logs are already GELF.
P.S. For logging I'm using the NLog library, C# .NET 5, and its NuGet package https://github.com/farzadpanahi/NLog.GelfLayout
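For reference, the format above is easy to reproduce by hand: a GELF UDP message is just a JSON document, optionally gzip-compressed, sent as a single datagram. A minimal Python sketch of what ends up on the wire (the target address is a placeholder; the host and logger field mirror the example above):

```python
import gzip
import json
import socket
import time

def build_gelf_message(short_message, host, level=6, **extra_fields):
    """Build a gzip-compressed GELF 1.1 payload.

    Custom fields must be prefixed with an underscore, which is why the
    layout-added fields in the example all start with '_'.
    """
    message = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,
    }
    for key, value in extra_fields.items():
        message["_" + key] = value
    return gzip.compress(json.dumps(message).encode("utf-8"))

def send_gelf_udp(payload, address):
    # One datagram per message; Graylog detects gzip automatically.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, address)

payload = build_gelf_message("Content root path: /app", "1eefd38079fa",
                             LoggerName="Microsoft.Hosting.Lifetime")
send_gelf_udp(payload, ("127.0.0.1", 12201))  # placeholder address
```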
In my case, there was no need to use NLog at all. It was just a logging framework that nobody had attempted to dive into.
So a better alternative is to use GELF logger provider for Microsoft.Extensions.Logging: Gelf.Extensions.Logging - https://github.com/mattwcole/gelf-extensions-logging
Don't forget to disable GELF for docker container if it is enabled.
It supports additional fields, parameterization of the formatted string (parameters in curly braces {} become the graylog fields) and is easily configured via appsettings.json
Some might consider this not to be an answer since I was using NLog, but for me this is a neat way to send customized logs without much trouble. As for NLog, I could not come up with a solution.
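For reference, a minimal appsettings.json sketch for Gelf.Extensions.Logging might look like the following; the "GELF" section and option names are taken from the project's README and should be verified against the version in use, and the host and field values are placeholders:

{
  "Logging": {
    "GELF": {
      "Host": "graylog-host",
      "Port": 12201,
      "LogSource": "my-service",
      "AdditionalFields": {
        "machine_name": "my-machine",
        "app_version": "1.0.0"
      }
    }
  }
}

Entries under AdditionalFields become Graylog fields on every message, which covers the "append custom fields" part of the question.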

Does Docker Splunk logging driver support indexer ack

The Docker Splunk driver is used in my application. Here is the configuration:
splunk-url: "https://splunk-server:8088"
splunk-token: "token-uuid"
splunk-index: "my_index"
My Splunk token has indexer acknowledgement enabled, so the HTTP Event Collector (HEC) requires an X-Splunk-Request-Channel header.
I am sure that events can be sent to HEC with that header via an HTTP client like Postman, but I cannot find a configuration option in the Docker Splunk driver to set it.
Given that Splunk indexer acknowledgement is required by my organisation, is there any workaround?
cheers
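One possible workaround, until the driver exposes such an option, is to bypass the logging driver and post events to HEC directly, supplying the channel header yourself. A minimal sketch using only the Python standard library (the URL, token, and index mirror the configuration above; the channel is any client-chosen GUID):

```python
import json
import urllib.request
import uuid

def build_hec_request(url, token, event, index=None, channel=None):
    """Build an HEC POST request carrying the X-Splunk-Request-Channel
    header required when indexer acknowledgement is enabled on the token."""
    body = {"event": event}
    if index:
        body["index"] = index
    headers = {
        "Authorization": "Splunk " + token,
        "Content-Type": "application/json",
        # Any client-generated GUID identifies the acknowledgement channel.
        "X-Splunk-Request-Channel": channel or str(uuid.uuid4()),
    }
    return urllib.request.Request(url, data=json.dumps(body).encode("utf-8"),
                                  headers=headers, method="POST")

req = build_hec_request("https://splunk-server:8088/services/collector/event",
                        "token-uuid", {"message": "hello"}, index="my_index")
# urllib.request.urlopen(req) would send it; omitted here because the
# placeholder host is not reachable.
```

The response to such a request contains an ackId, which can then be polled via the HEC ack endpoint if delivery confirmation is needed.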

Wildfly Swarm: Environment specific configuration of Keycloak Backend

Given is a Java EE application on WildFly that uses Keycloak as its authentication backend, configured in project-stages.yml:
swarm:
  deployment:
    my.app.war:
      web:
        login-config:
          auth-method: KEYCLOAK
The application will be deployed to different environments using a GitLab CD pipeline; therefore the Keycloak specifics must be configured per environment.
So far the only working configuration I have found is adding a keycloak.json like this (the same file in every environment):
{
  "realm": "helsinki",
  "bearer-only": true,
  "auth-server-url": "http://localhost:8180/auth",
  "ssl-required": "external",
  "resource": "backend"
}
According to the Wildfly-Swarm Documentation it should be possible to configure keycloak in project-stages.yml like:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth
But when I deploy the application, no configuration is read:
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) KeycloakServletException initialization
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) using /WEB-INF/keycloak.json
2018-03-08 06:29:03,542 WARN [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) No adapter configuration. Keycloak is unconfigured and will deny all requests.
2018-03-08 06:29:03,545 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) Keycloak is using a per-deployment configuration.
If you take a look at the source of the above class, it looks like the only way to get around is to provide a KeycloakConfigResolver. Does Wildfly-Swarm provide a resolver that reads the project-stages.yml?
How can I configure environment-specific auth-server-urls?
A workaround would be to have different keycloak.json files, but I would rather use the project-stages.yml.
I have a small WildFly Swarm project which configures Keycloak exclusively via project-defaults.yml here: https://github.com/Ladicek/swarm-test-suite/tree/master/wildfly/keycloak
From the snippets you post, the only thing that looks wrong is this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
The my-deployment name needs to be the actual name of the deployment, same as what you have in
swarm:
  deployment:
    my.app.war:
If you already have that, then I'm afraid I'd have to start speculating: which WildFly Swarm version are you using? Which Keycloak version?
Also you could specify the swarm.keycloak.json.path property in your yml:
swarm:
  keycloak:
    json:
      path: path-to-keycloak-config-files-folder/keycloak-prod.json
and you can dynamically select a yml stage configuration during startup of the application with the -Dswarm.project.stage option.
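For completeness, the environment-specific values can then live in one project-stages.yml as multiple YAML documents, each introduced by a project: stage: header. A sketch only: the stage name and URLs are placeholders, and the exact syntax should be checked against the Swarm version in use:

swarm:
  keycloak:
    secure-deployments:
      my.app.war:
        auth-server-url: http://localhost:8180/auth
---
project:
  stage: production
swarm:
  keycloak:
    secure-deployments:
      my.app.war:
        auth-server-url: https://keycloak.example.com/auth

Starting with -Dswarm.project.stage=production would then pick the second document's auth-server-url over the default.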
Further references:
cheat sheet: http://design.jboss.org/redhatdeveloper/marketing/wildfly_swarm_cheatsheet/cheat_sheet/images/wildfly_swarm_cheat_sheet_r1v1.pdf
using multiple swarm project stages (profiles) example: https://github.com/thorntail/thorntail/tree/master/testsuite/testsuite-project-stages/src/test/resources
https://docs.thorntail.io/2018.1.0/#_keycloak

JMX output in graph format

Is it possible to show JMX bean output in a graph?
I am currently using JavaMelody, but it doesn't show JMX bean output as a graph. I tried Splunk, but I have no idea how it integrates with the Tomcat server.
Why not use VisualVM? It will chart JMX variables.
Integration is offered by the Splunk Add-on for Java Management Extensions.
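As for the Tomcat integration: both VisualVM (remotely) and the Splunk add-on rely on remote JMX being enabled on the JVM, which is typically done with JVM flags, e.g. in Tomcat's setenv.sh. A sketch with illustrative settings only; the port is arbitrary, and authentication should not be disabled outside a trusted network:

```shell
# setenv.sh -- enable remote JMX so VisualVM or the Splunk add-on can connect.
# Unauthenticated and unencrypted: suitable for a trusted network only.
CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```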

ruby socket log server

We use the default Ruby logging module to log errors. We also use delayed_job, which runs many worker processes, so we cannot manage the log files.
We need a Ruby-based log server with a rolling file appender and an archive facility, so that we can push logs to the log server and let it manage the logging task.
Is there a Ruby-based solution, or another recommended solution, to this problem?
Have you looked at Ruby's syslog in the standard library? Normally the docs are non-existent, but the Rails docs seem to cover it, kind of: http://stdlib.rubyonrails.org/libdoc/syslog/rdoc/index.html
Otherwise you can find out some info by looking through the syslog files themselves and reading http://glu.ttono.us/articles/2007/07/25/ruby-syslog-readme which is what I did when I started using it.
You don't say which OS you are on, but macOS and Linux have a very powerful syslog built in, which Ruby's syslog piggybacks on. So you should be able to send to the system's syslog and have it split out your stream, roll the files, forward them, and do lots of other stuff with them.
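A minimal sketch of that approach with Ruby's standard library (the program name "myapp" is a placeholder): each delayed_job worker opens its own syslog connection, and the OS's syslog daemon takes care of merging, rolling, and archiving centrally:

```ruby
require 'syslog'

# LOG_PID tags each line with the worker's pid, so interleaved output
# from many worker processes stays attributable.
Syslog.open('myapp', Syslog::LOG_PID, Syslog::LOG_USER) do |log|
  log.info('worker %d started', Process.pid)
  log.warning('%s', 'an example warning')
end
```

Rotation and archiving are then configured once in the daemon (e.g. newsyslog or logrotate) instead of in every Ruby process.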
