I have a Tomcat WAR project running in AWS Elastic Beanstalk EC2 instances. I have configured the instances to ensure that they have an environment variable CLUSTER_NAME. I can verify that the variable is available in the EC2 instance.
[ec2-user@ip-10* ~]$ cat /etc/environment
export CLUSTER_NAME=sandbox
[ec2-user@ip-10* ~]$ echo $CLUSTER_NAME
sandbox
This variable is looked up in a Log4j2 XML file like this:
<properties>
<property name="env-name">${env:CLUSTER_NAME}</property>
</properties>
The env-name property is used in a Coralogix appender like this:
<Coralogix name="Coralogix" companyId="--" privateKey="--"
applicationName="--" subSystemName="${env-name}">
<PatternLayout>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}{GMT+0}\t%p\t%c\t%m%n</pattern>
</PatternLayout>
</Coralogix>
I can see that this lookup is not working: env-name shows up literally as ${env:CLUSTER_NAME} in the Coralogix dashboard. The value works if I hardcode it.
What can be done to fix this lookup? There are several related questions, but they seem to refer to log4j 1.x (e.g. https://stackoverflow.com/a/22296362). I have made sure this project uses log4j2.
The solution was to add the CLUSTER_NAME variable to /etc/profile.d/env.sh. The variable is then available in log4j2.xml with the following lookup.
<property name="env-name">
${env:CORALOGIX_CLUSTER_NAME}
</property>
I am still not clear on the difference between adding a variable to /etc/environment and adding it to /etc/profile.d/env.sh.
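For reference, this is roughly what the profile script looks like (a sketch; the value is illustrative). As far as I understand, /etc/environment is parsed by pam_env as plain KEY=value pairs rather than executed as a shell script, whereas files under /etc/profile.d/ are sourced by login shells:
# /etc/profile.d/env.sh (sketch; sourced by login shells)
export CLUSTER_NAME=sandbox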
I need to set the flags to enable the default redirect_uri behavior for keycloak 19.0.3-legacy.
However, nothing I've tried so far has worked.
We're using the standalone-ha.xml configuration file. (Not sure if this is the right place to configure this.)
I need to set the following flags:
spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true
spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
https://www.keycloak.org/docs/19.0.3/upgrading/#openid-connect-logout-prompt
https://www.keycloak.org/docs/latest/upgrading/#openid-connect-logout
However, I run a standalone instance and don't run using kc.sh.
I've tried setting environment variables without success:
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
KC_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
KEYCLOAK_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
KEYCLOAK_SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
SPI_LOGIN_PROTOCOL_OPENID_CONNECT_LEGACY_LOGOUT_REDIRECT_URI=true
SPI_LOGIN_PROTOCOL_OPENID_CONNECT_SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
and
LEGACY_LOGOUT_REDIRECT_URI=true
SUPPRESS_LOGOUT_CONFIRMATION_SCREEN=true
I've also tried adding it to a config file, but it doesn't seem to be picked up from the location the Dockerfile copies it to.
Dockerfile:
COPY conf.d/keycloak.conf /opt/jboss/keycloak/conf/keycloak.conf
and
COPY conf.d/keycloak.conf /opt/keycloak/conf/keycloak.conf
keycloak.conf
spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true
spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
and
suppress-logout-confirmation-screen=true
legacy-logout-redirect-uri=true
I also tried adding it to the docker-entrypoint.sh parameters:
exec /opt/jboss/tools/docker-entrypoint.sh $@ -Dspi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true -Dspi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
and (this one won't even start up; it fails stating that the parameters are invalid):
exec /opt/jboss/tools/docker-entrypoint.sh $@ --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
Update 1/24/23
Tried updating standalone-ha.xml, but it seems to have been ignored:
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
<web-context>auth</web-context>
<providers>
<provider>
classpath:${jboss.home.dir}/providers/*
</provider>
<provider>
module:org.keycloak.storage.ldap.LDAPSyncOnly
</provider>
</providers>
<master-realm-name>master</master-realm-name>
<scheduled-task-interval>900</scheduled-task-interval>
<theme>
<staticMaxAge>2592000</staticMaxAge>
<cacheThemes>false</cacheThemes>
<cacheTemplates>false</cacheTemplates>
<welcomeTheme>${env.KEYCLOAK_WELCOME_THEME:keycloak}</welcomeTheme>
<default>${env.KEYCLOAK_DEFAULT_THEME:keycloak}</default>
<dir>${jboss.home.dir}/themes</dir>
</theme>
... Bunch of other spi tags. ...
<spi name="login-protocol">
<provider name="openid-connect" enabled="true">
<properties>
<property name="suppress-logout-confirmation-screen" value="true"/>
<property name="legacy-logout-redirect-uri" value="true"/>
</properties>
</provider>
</spi>
</subsystem>
Useful links:
https://github.com/keycloak/keycloak/blob/10b7475b0431ed380d45b840578bc666ecb3263a/services/src/main/java/org/keycloak/protocol/oidc/OIDCLoginProtocolFactory.java#L106-L121
Shows the warning message that will print to the logs if this is set correctly.
https://www.keycloak.org/server/configuration#_example_configuring_the_db_url_host_parameter
Shows alternate ways to configure keycloak.
https://github.com/keycloak/keycloak-containers/tree/19.0.3
https://quay.io/repository/keycloak/keycloak?tab=tags
We figured it out.
By running the following JBoss CLI commands we can properly update the high-availability config file to enable the legacy flags.
embed-server --server-config=standalone-ha.xml --std-out=echo
/subsystem=keycloak-server/spi=login-protocol:add
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:add(enabled=true)
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:write-attribute(name=properties.legacy-logout-redirect-uri,value=true)
/subsystem=keycloak-server/spi=login-protocol/provider=openid-connect:write-attribute(name=properties.suppress-logout-confirmation-screen,value=true)
stop-embedded-server
I don't know why this worked but manually editing the standalone-ha.xml config didn't.
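For completeness, a sketch of how such a CLI script can be applied at image build time; the script file name here is ours, and the paths assume the legacy WildFly-based Keycloak image used elsewhere in this question:
# Dockerfile (sketch)
COPY conf.d/enable-legacy-logout.cli /tmp/enable-legacy-logout.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --file=/tmp/enable-legacy-logout.cli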
I want to know the syntax to refer to a branch, set as an environment variable, for a Jenkins shared library; the variable will be provided to the Docker container.
For Example:
@Library(['my-shared-library', BRANCH_NAME])
I tried using ${BRANCH_NAME} and ${env.BRANCH_NAME}.
I will provide BRANCH_NAME as an environment variable in docker-compose.yml.
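For reference, the imperative library step accepts a runtime string, unlike the annotation, so a sketch of the dynamic form could look like this (the library name is a placeholder):
// Jenkinsfile (sketch; 'my-shared-library' is a placeholder)
library "my-shared-library@${env.BRANCH_NAME}"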
I also want the environment variable to take effect in org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml, for example if I set PIPELINE_VERSION as an environment variable:
<?xml version='1.1' encoding='UTF-8'?>
<org.jenkinsci.plugins.workflow.libs.GlobalLibraries plugin="workflow-cps-global-lib@2.15">
<libraries>
<org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
<name>XXXXXXXXXXXX</name>
<retriever class="org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever">
<scm class="jenkins.plugins.git.GitSCMSource" plugin="git#3.12.0">
<id>XXXXXXXXXXXXXXXXXXXXXXXX</id>
<remote>XXXXXXXXXXXXXXXXXXX</remote>
<credentialsId>jXXXXXXXXXXXXXXXXXXXX</credentialsId>
<traits>
<jenkins.plugins.git.traits.BranchDiscoveryTrait/>
</traits>
</scm>
</retriever>
<defaultVersion>${PIPELINE_RELEASE_VERSION}</defaultVersion>
<implicit>true</implicit>
<allowVersionOverride>true</allowVersionOverride>
<includeInChangesets>false</includeInChangesets>
</org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
</libraries>
</org.jenkinsci.plugins.workflow.libs.GlobalLibraries>
Thanks,
Kusuma
I don't think there is any way to make the environment variable available to org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml. But if you use the Jenkins Configuration as Code (JCasC) plugin, you can pass the variable from docker-compose and reference it in the config file; it gets resolved when Jenkins loads the configuration.
An example can be found here
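For illustration, a JCasC snippet along these lines should pick up the variable (the library name, remote URL, and credentials ID are placeholders):
# jenkins.yaml (sketch)
unclassified:
  globalLibraries:
    libraries:
      - name: "my-shared-library"
        defaultVersion: "${PIPELINE_RELEASE_VERSION}"
        implicit: true
        allowVersionOverride: true
        includeInChangesets: false
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://example.com/org/shared-library.git"
                credentialsId: "git-credentials"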
I understand that within a field, I can pull an environment variable with the syntax of ${env.VARIABLE_NAME}, however, whenever I try to do so within an attribute name, jboss throws an error.
What I have done, and works
<datasource jndi-name="java:/jdbc/database" pool-name="database" enabled="true" use-java-context="true">
<connection-url>${env.DS_CONNECTION_URL}</connection-url>
<driver>${env.DS_DRIVER}</driver>
</datasource>
What I want to do, which is failing
<console-handler name="CONSOLE">
<formatter>
<named-formatter name="${env.FORMATTER}"/>
</formatter>
</console-handler>
I have also tried it without the surrounding quotes, and I have tried creating a child XML element whose value is the environment variable, but that also failed.
I expect the environment variable FORMATTER to be used as the name, but instead I get the following error on attempting to start jboss.
java.lang.IllegalArgumentException: Formatter "${env.FORMATTER}" is not found
Expressions are not allowed for the named-formatter attribute. In most cases it doesn't make much sense as the formatter would have to be defined and cannot have a dynamic name.
If you look at the model description documentation you can see which attributes support expressions.
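One way to check this besides the model reference is the management CLI's read-resource-description operation, whose output carries an expressions-allowed flag per attribute; a sketch against the console handler from the question:
/subsystem=logging/console-handler=CONSOLE:read-resource-description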
How do I pass env variables to the attributes of other properties, such as a formatter's?
<meta-data>
<property name="ENV" value="${env.MY_ENVIRONMENT}"/>
</meta-data>
is not working when running
docker run -p 8081:9990 -p 8080:8080 -e MY_ENVIRONMENT="DEV" --name c1 c1img:1.1.2
I still see
..."ENV":"${env.MY_ENVIRONMENT}"...
I am trying to make my docker-compose file write its logging to a Graylog server, using the GELF protocol. This works fine, using the following configuration (snippet of docker-compose.yml):
logging:
driver: gelf
options:
gelf-address: ${GELF_ADDRESS}
The Graylog server receives the messages I log in the JBoss instance in my Docker container. It also adds some extra GELF fields, like container_name and image_name.
My question is, how can I add extra GELF fields myself? I want it to pass _username as an extra field. I have this field available in my MDC context.
I could add the information to the message by using a formatter (Conversion Pattern) in my CONSOLE logger, by adding the following to this logger:
%X{_user_name}
But this is not what I want, as it will end up inside the GELF message field rather than being added as a separate extra field.
Any thoughts?
It does seem impossible in the current docker-compose version (1.8.0) to include the extra fields.
I ended up removing any logging configuration from the docker-compose file and instead integrating GELF logging into the application inside the Docker container. Since I am using JBoss AS 7, I followed the steps described here: http://logging.paluch.biz/examples/jbossas7.html
To log the container id, I have added the following configuration:
<custom-handler name="GelfLogger" class="biz.paluch.logging.gelf.jboss7.JBoss7GelfLogHandler" module="biz.paluch.logging">
<level name="INFO" />
<properties>
<property name="host" value="udp:${GRAYLOG_HOST}" />
<property name="port" value="${GRAYLOG_PORT}" />
<property name="version" value="1.1" />
<property name="additionalFields" value="dockerContainer=${HOSTNAME}" />
<property name="includeFullMdc" value="true" />
</properties>
</custom-handler>
The dockerContainer field is substituted with the HOSTNAME environment variable on the Docker container and contains the container ID. The other placeholders are substituted with docker-compose environment variables.
By including the full MDC, I was able to put the username (and some other fields) as an additional GELF field. (For more information about MDC, see http://logback.qos.ch/manual/mdc.html)
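Note that the custom handler also has to be referenced by a logger before anything is sent; a sketch of attaching it to the root logger in the logging subsystem (the existing CONSOLE handler shown here is an assumption):
<root-logger>
<level name="INFO"/>
<handlers>
<handler name="CONSOLE"/>
<handler name="GelfLogger"/>
</handlers>
</root-logger>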
I am attempting to set the location of the system property java.io.tmpdir to something other than the default "/tmp" in my standalone.xml file.
I have added the following element after the <extensions> element:
<system-properties>
<property name="java.io.tmpdir" value="/tmp/wildfly"/>
</system-properties>
However, when I start up wildfly, I see this in the log file:
java.io.tmpdir = /tmp
I don't see anything in any of the bin/*.conf or bin/*.sh files that sets this... what am I missing?
Thanks.
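A sketch of one alternative, assuming a standard zip install: hand the property to the JVM itself in bin/standalone.conf, so it is already set before the server parses standalone.xml:
# bin/standalone.conf (sketch)
JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=/tmp/wildfly"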