Docker GELF logging additional fields

I am trying to make my docker-compose file write its logging to a Graylog server, using the GELF protocol. This works fine, using the following configuration (snippet of docker-compose.yml):
logging:
  driver: gelf
  options:
    gelf-address: ${GELF_ADDRESS}
The Graylog server receives the messages I log in the JBoss instance in my Docker container. It also adds some extra GELF fields, like container_name and image_name.
My question is: how can I add extra GELF fields myself? I want it to pass _username as an extra field; I have this field available in my MDC context.
I could add the information to the message by using a formatter (Conversion Pattern) in my CONSOLE logger, by adding the following to this logger:
%X{_user_name}
But this is not what I want, as it ends up inside the GELF message field instead of being added as a separate extra field.
Any thoughts?

It does seem impossible in the current docker-compose version (1.8.0) to include the extra fields.
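For reference, later versions of the Docker gelf log driver added labels and env options, which attach container labels or environment variables as extra GELF fields (a sketch, assuming a Docker version newer than the compose 1.8.0 setup above; note these still cannot reach into the application's MDC):

```
logging:
  driver: gelf
  options:
    gelf-address: ${GELF_ADDRESS}
    # comma-separated names; each becomes an extra GELF field on every message
    env: "DEPLOY_ENV"
    labels: "com.example.service"
```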
I ended up removing the logging configuration from the docker-compose file and integrating GELF logging into the container's application instead. Since I am using JBoss AS 7, I followed the steps described here: http://logging.paluch.biz/examples/jbossas7.html
To log the container id, I have added the following configuration:
<custom-handler name="GelfLogger" class="biz.paluch.logging.gelf.jboss7.JBoss7GelfLogHandler" module="biz.paluch.logging">
    <level name="INFO" />
    <properties>
        <property name="host" value="udp:${GRAYLOG_HOST}" />
        <property name="port" value="${GRAYLOG_PORT}" />
        <property name="version" value="1.1" />
        <property name="additionalFields" value="dockerContainer=${HOSTNAME}" />
        <property name="includeFullMdc" value="true" />
    </properties>
</custom-handler>
The dockerContainer field is substituted with the HOSTNAME environment variable of the Docker container, which contains the container id. The other placeholders are substituted with docker-compose environment variables.
By including the full MDC, I was able to add the username (and some other fields) as additional GELF fields. (For more information about MDC, see http://logback.qos.ch/manual/mdc.html)
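As an alternative to includeFullMdc, logstash-gelf can also forward only selected MDC entries via its mdcFields property (a sketch; the field names here are placeholders, and the exact property name should be checked against the library version in use):

```
<!-- forward only specific MDC entries instead of the full MDC -->
<property name="mdcFields" value="username,requestId" />
```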


For keycloak-19.0.3-legacy docker image, how can I enable redirect_uri legacy functionality?

I need to set the flags to enable the default redirect_uri behavior for keycloak 19.0.3-legacy.
However, nothing I've tried so far has worked.
We're using the standalone-ha.xml configuration file (not sure if this is the right place to configure this).
I need to set the following flags:
spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true
spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
https://www.keycloak.org/docs/19.0.3/upgrading/#openid-connect-logout-prompt
https://www.keycloak.org/docs/latest/upgrading/#openid-connect-logout
However, I run a standalone instance and don't run using kc.sh.
I've tried setting environment variables without success:
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
KEYCLOAK_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
KEYCLOAK_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
LEGACY_LOGOUT_REDIRECT_URI=true
SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
I've also tried adding it to a config file, but it doesn't seem to be picked up from the location used in the Dockerfile.
Dockerfile:
COPY conf.d/keycloak.conf /opt/jboss/keycloak/conf/keycloak.conf
and
COPY conf.d/keycloak.conf /opt/keycloak/conf/keycloak.conf
keycloak.conf
spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true
spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
and
suppress-logout-confirmation-screen=true
legacy-logout-redirect-uri=true
I also tried adding them to the docker-entrypoint.sh parameters:
exec /opt/jboss/tools/docker-entrypoint.sh $@ -Dspi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true -Dspi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
and the following, which won't even start up; it fails stating that the parameters are invalid:
exec /opt/jboss/tools/docker-entrypoint.sh $@ --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
Update 1/24/23
Tried updating standalone-ha.xml, but it seems to have been ignored:
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
  <web-context>auth</web-context>
  <providers>
    <provider>classpath:${jboss.home.dir}/providers/*</provider>
    <provider>module:org.keycloak.storage.ldap.LDAPSyncOnly</provider>
  </providers>
  <master-realm-name>master</master-realm-name>
  <scheduled-task-interval>900</scheduled-task-interval>
  <theme>
    <staticMaxAge>2592000</staticMaxAge>
    <cacheThemes>false</cacheThemes>
    <cacheTemplates>false</cacheTemplates>
    <welcomeTheme>${env.KEYCLOAK_WELCOME_THEME:keycloak}</welcomeTheme>
    <default>${env.KEYCLOAK_DEFAULT_THEME:keycloak}</default>
    <dir>${jboss.home.dir}/themes</dir>
  </theme>
  <!-- ... bunch of other spi tags ... -->
  <spi name="login-protocol">
    <provider name="openid-connect" enabled="true">
      <properties>
        <property name="suppress-logout-confirmation-screen" value="true"/>
        <property name="legacy-logout-redirect-uri" value="true"/>
      </properties>
    </provider>
  </spi>
</subsystem>
Useful links:
https://github.com/keycloak/keycloak/blob/10b7475b0431ed380d45b840578bc666ecb3263a/services/src/main/java/org/keycloak/protocol/oidc/OIDCLoginProtocolFactory.java#L106-L121
Shows the warning message that will print to the logs if this is set correctly.
https://www.keycloak.org/server/configuration#_example_configuring_the_db_url_host_parameter
Shows alternate ways to configure keycloak.
https://github.com/keycloak/keycloak-containers/tree/19.0.3
https://quay.io/repository/keycloak/keycloak?tab=tags
We figured it out.
By adding the following CLI commands we can properly update the high availability config file to enable the legacy flag.
embed-server --server-config=standalone-ha.xml --std-out=echo
/subsystem=keycloak-server/spi=login-protocol:add
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:add(enabled=true)
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:write-attribute(name=properties.legacy-logout-redirect-uri,value=true)
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:write-attribute(name=properties.suppress-logout-confirmation-screen,value=true)
stop-embedded-server
I don't know why this worked but manually editing the standalone-ha.xml config didn't.
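To bake those CLI commands into the legacy image, one option is to run them through jboss-cli.sh at build time (a sketch; the paths and the legacy-logout.cli file name are assumptions based on the keycloak-legacy image layout):

```
# legacy-logout.cli contains the embed-server commands shown above
COPY legacy-logout.cli /opt/jboss/
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --file=/opt/jboss/legacy-logout.cli
```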

Log4j2 environment lookup does not find variable in AWS Linux EC2 instance

I have a Tomcat WAR project running in AWS Elastic Beanstalk EC2 instances. I have configured the instances to ensure that they have an environment variable CLUSTER_NAME. I can verify that the variable is available in the EC2 instance.
[ec2-user@ip-10* ~]$ cat /etc/environment
export CLUSTER_NAME=sandbox
[ec2-user@ip-10* ~]$ echo $CLUSTER_NAME
sandbox
This variable is looked up in a Log4j2 XML file like this:
<properties>
  <property name="env-name">${env:CLUSTER_NAME}</property>
</properties>
The env-name property is used in a Coralogix appender like this:
<Coralogix name="Coralogix" companyId="--" privateKey="--"
           applicationName="--" subSystemName="${env-name}">
  <PatternLayout>
    <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}{GMT+0}\t%p\t%c\t%m%n</pattern>
  </PatternLayout>
</Coralogix>
I see that this lookup is not working, as the env-name is just shown as ${env:CLUSTER_NAME} in Coralogix dashboard. The value works if I hardcode it.
What can be done to fix this lookup? There are several related questions, but they seem to refer to log4j 1.x (e.g. https://stackoverflow.com/a/22296362); I have ensured that this project uses log4j2.
The solution was to add the CLUSTER_NAME variable in the /etc/profile.d/env.sh. This variable is available in the log4j2.xml with the following lookup.
<property name="env-name">
${env:CORALOGIX_CLUSTER_NAME}
</property>
I am still not clear on the difference between adding a variable to /etc/environment and adding it to /etc/profile.d/env.sh.
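The usual explanation is that /etc/environment is read by PAM for login sessions, while /etc/profile.d/*.sh is sourced by login shells; a service such as Tomcat may see neither unless the variable is injected into its own environment. On Elastic Beanstalk, the supported way is to set it as an environment property (a sketch; the .ebextensions file name is an assumption):

```
# .ebextensions/env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    CLUSTER_NAME: sandbox
```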

accessing plugin variables in td-agent/fluentd config

I've got a fluentd config that I'm using to push logs to CloudWatch Logs, and I'd like to extract a certain piece of the container name to use as the log group name. I see that I can embed Ruby expressions in my config, but I can't figure out how to access the container_name variable from inside that embedded expression. Is this possible?
This is my config, which works but uses the raw container_name value as the log group name:
<match container.**>
  @type cloudwatch_logs
  region us-west-2
  log_group_name container_name
  log_stream_name "#{File.open('/etc/machine-id').read.strip}"
  auto_create_stream true
  retention_in_days 7
</match>
This is what I want to do, but container_name is not defined in the embedded Ruby expression:
<match container.**>
  @type cloudwatch_logs
  region us-west-2
  log_group_name "#{container_name.match(/\w+/)[0]}"
  log_stream_name "#{File.open('/etc/machine-id').read.strip}"
  auto_create_stream true
  retention_in_days 7
</match>
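One approach (a sketch, assuming the log_group_name_key option of fluent-plugin-cloudwatch-logs and the built-in record_transformer filter are available in your versions): copy the extracted value into the record first, then point the output at that key. Embedded "#{...}" Ruby runs once at config-load time, whereas record_transformer's ${...} expressions run per record, which is what gives access to container_name.

```
<filter container.**>
  @type record_transformer
  enable_ruby true
  <record>
    # extract the leading word of container_name into a new record key
    log_group ${record["container_name"].match(/\w+/)[0]}
  </record>
</filter>

<match container.**>
  @type cloudwatch_logs
  region us-west-2
  log_group_name_key log_group
  remove_log_group_name_key true
  log_stream_name "#{File.open('/etc/machine-id').read.strip}"
  auto_create_stream true
  retention_in_days 7
</match>
```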

How do I use environment variables in an attribute name in jboss' standalone.xml?

I understand that within a field I can pull an environment variable with the syntax ${env.VARIABLE_NAME}; however, whenever I try to do so within an attribute name, JBoss throws an error.
What I have done, and works
<datasource jndi-name="java:/jdbc/database" pool-name="database" enabled="true" use-java-context="true">
  <connection-url>${env.DS_CONNECTION_URL}</connection-url>
  <driver>${env.DS_DRIVER}</driver>
</datasource>
What I want to do, which is failing
<console-handler name="CONSOLE">
  <formatter>
    <named-formatter name="${env.FORMATTER}"/>
  </formatter>
</console-handler>
I have also tried starting without the surrounding quotes, and I have tried creating a child XML element with the value of name and the environment variable, but that has also failed.
I expect the environment variable FORMATTER to be used as the name, but instead I get the following error on attempting to start jboss.
java.lang.IllegalArgumentException: Formatter "${env.FORMATTER}" is not found
Expressions are not allowed for the named-formatter attribute. In most cases it doesn't make much sense as the formatter would have to be defined and cannot have a dynamic name.
If you look at the model description documentation you can see which attributes support expressions.
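Since the attribute itself can't hold an expression, a common workaround is to let the shell substitute the variable into a generated CLI script and apply it offline before the server starts (a sketch; the CONSOLE handler name, the default formatter name, and the jboss-cli.sh path are assumptions):

```shell
#!/bin/sh
# Render $FORMATTER into a JBoss CLI script; the unquoted heredoc expands the variable.
FORMATTER="${FORMATTER:-PATTERN}"

cat > /tmp/set-formatter.cli <<EOF
embed-server --server-config=standalone.xml
/subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=${FORMATTER})
stop-embedded-server
EOF

# Apply it before starting the server (path is an assumption):
# "$JBOSS_HOME"/bin/jboss-cli.sh --file=/tmp/set-formatter.cli
```

This keeps standalone.xml itself static while still letting the formatter name vary per environment.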
How can I pass env variables to the attributes of other formatters' properties?
<meta-data>
  <property name="ENV" value="${env.MY_ENVIRONMENT}"/>
</meta-data>
is not working when running
docker run -p 8081:9990 -p 8080:8080 -e MY_ENVIRONMENT="DEV" --name c1 c1img:1.1.2
I still see
..."ENV":"${env.MY_ENVIRONMENT}"...

Messages from Gelf4Net are not stored in Graylog2

I have an Ubuntu server with Elasticsearch, MongoDB, and Graylog2 running in Azure, and an ASP.NET MVC4 application I am trying to send logs from (I am using Gelf4Net / Log4Net as the logging component). To cut to the chase: nothing is being logged.
(skip to the update to see what is wrong)
The setup
1 Xsmall Ubuntu VM running the needed software for graylog2
everything is running as a daemon
1 Xsmall cloud service with the MVC4 app (2 instances)
A virtual network set up so they can talk.
So what have I tried?
From the Linux box, the following command will cause a message to be logged:
echo "<86>Dec 24 17:05:01 foo-bar CRON[10049]: pam_unix(cron:session):" | nc -w 1 -u 127.0.0.1 514
I can change the IP address to use the public IP and it works fine as well.
Using this PowerShell script I can log the same message from my dev machine as well as the production web server.
Windows Firewall is turned off and it still doesn't work.
I can log to a Log4Net FileAppender, so I know Log4Net is working.
Tailing graylog2.log shows nothing of interest, just a few warnings about my plugin directory.
So I know everything is working, but I can't get the Gelf4Net appender to work. I'm at a loss here. Where can I look? Is there something I am missing?
GRAYLOG2.CONF
#only showing the connection stuff here. If you need something else let me know
syslog_listen_port = 514
syslog_listen_address = 0.0.0.0
syslog_enable_udp = true
syslog_enable_tcp = false
web.config/Log4Net
//application_start() has log4net.Config.XmlConfigurator.Configure();
<log4net>
  <root>
    <level value="ALL" />
    <appender-ref ref="GelfUdpAppender" />
  </root>
  <appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
    <remoteAddress value="public.ip.of.server" />
    <remotePort value="514" />
    <layout type="Gelf4net.Layout.GelfLayout, Gelf4net">
      <param name="Facility" value="RandomPhrases" />
    </layout>
  </appender>
</log4net>
update
For some reason it didn't occur to me to run Graylog in debug mode. :) Doing so shows these messages:
2013-04-09 03:00:56,202 INFO : org.graylog2.inputs.syslog.SyslogProcessor - Date could not be parsed. Was set to NOW because allow_override_syslog_date is true.
2013-04-09 03:00:56,202 DEBUG: org.graylog2.inputs.syslog.SyslogProcessor - Skipping incomplete message.
So it is sending an incomplete message. How can I see what is wrong with it?
I was using the wrong port (DOH!). I should have been using the port specified in graylog2.conf (gelf_listen_port = 12201), so my web.config/log4net GELF appender should have had:
<appender name="GelfUdpAppender" type="Gelf4net.Appender.GelfUdpAppender, Gelf4net">
...
<remotePort value="12201" />
...
</appender>
For anyone who may have the same problem, make sure Log4Net reloads the configuration after you change it. I don't have it set to watch the config file for changes, so it took me a few minutes to realize that I was using the wrong port. When I changed it from 514 to 12201 the first time, messages still weren't getting through. I had to restart the server for Log4Net to pick up the new config, and then it started to work.
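To rule out the appender entirely, a GELF UDP datagram can be hand-built and sent with nothing but the standard library (a sketch; the host name, port, and field values are assumptions, and extra fields get the leading underscore Graylog expects):

```python
import json
import socket
import zlib

def build_gelf_message(short_message, host="myapp", **extra):
    """Build a zlib-compressed GELF 1.1 payload; extra fields are prefixed with '_'."""
    msg = {"version": "1.1", "host": host, "short_message": short_message}
    for key, value in extra.items():
        msg["_" + key] = value
    return zlib.compress(json.dumps(msg).encode("utf-8"))

def send_gelf(payload, address=("127.0.0.1", 12201)):
    """Fire one UDP datagram at a Graylog GELF input."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, address)

payload = build_gelf_message("test message", username="jdoe")
# send_gelf(payload)  # uncomment once a GELF UDP input is listening on 12201
```

If the message shows up in Graylog, the input and network path are fine and the problem is in the appender configuration.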
