Set Quarkus Logging Category Level via Environment Variables

In Spring, it is possible to set the Logging Category Level via environment variables. I've tried the same in a Quarkus application with the following logger declaration:
package org.my.group.resteasyjackson;

import java.util.Set;
import javax.ws.rs.GET;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class JacksonResource {

    private static final Logger LOGGER = LoggerFactory.getLogger(JacksonResource.class);

    @GET
    public Set<Quark> list() {
        LOGGER.info("Hello");
        return quarks; // quarks field elided in the question
    }
}
Executing the build artifact with
QUARKUS_LOG_CATEGORY_ORG_MY_LEVEL=WARN java -jar my-artifactId-my-version-runner.jar
will still log everything at INFO level (since that is the default), so the "Hello" message appears.
However, inserting
quarkus.log.category."org.my".level=WARN
in the application.properties file works as desired. Are environment variables in this use case not usable for Quarkus applications?

Just tried with Quarkus 1.13.1, and adding extra underscores in place of the quotes seems to work. Try:
QUARKUS_LOG_CATEGORY__ORG_MY__LEVEL=WARN

You can do this:
quarkus.log.category."org.my".level=${LOG_LVL:INFO}
This simply means: use the log level from the environment variable LOG_LVL, and if it is not present, fall back to INFO.
You can set this value either as an environment variable or pass it as a system property at startup, although I'm not sure about the exact syntax for the system-property form.
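If it helps, the usual way to pass it at startup would presumably be a plain environment variable or a -D system property; a sketch, reusing the runner jar name from the question:
# as an environment variable
LOG_LVL=WARN java -jar my-artifactId-my-version-runner.jar
# or as a system property
java -DLOG_LVL=WARN -jar my-artifactId-my-version-runner.jar
Both should be picked up when ${LOG_LVL:INFO} is resolved, since system properties and environment variables are default MicroProfile Config sources.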

You should be able to use a system property (!= environment variable) like this:
java -Dquarkus.log.category.\"org.my\".level=WARN ...
Note: system properties will overwrite the values in application.properties, except for quarkus.profile due to a bug.
I have no idea whether environment variables can overwrite them too; maybe the quotes need to be escaped. (I find environment variables brittle; I prefer system properties.)

Related

How to use environment variables in CloudFlare Worker in local development environment

I have a Cloudflare Worker where I have environment variables set in the CF Settings > Environment Variables interface. I also have this wrangler.toml
In my worker's index.js I have code reading the variable REGISTRATION_API_URL. If the code is running in a deployed environment then it injects the value from the CF Settings into REGISTRATION_API_URL just fine.
But if I run
wrangler dev
or
wrangler dev --env local
then REGISTRATION_API_URL is undefined.
Originally I expected the variable to be populated from the CF Settings values, but it isn't. So I tried the two vars settings in the wrangler.toml shown here, but it made no difference. I have also spent a lot of time searching the docs and the wider web.
Are environment variables supported in a local dev environment? Are there any workarounds people have come up with? Currently I check for undefined and define the variable with a hard-coded value, but this is not a great answer.
Using wrangler 1.16.0
Thanks.
The docs could be clearer, but if you are using the newer module syntax, the variables will not be available as global variables.
Environmental variables with module workers
When deploying a Module Worker, any bindings will not be available as global runtime variables. Instead, they are passed to the handler as a parameter – refer to the FetchEvent documentation for further comparisons and examples.
Here's an example.
export default {
  async fetch(request, env, context) {
    return new Response(env.MY_VAR);
  },
};
KV namespaces are also available in the same object.
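For example, with a KV namespace bound under the (made-up) name MY_KV, the handler reads it from the same env parameter:
export default {
  async fetch(request, env, context) {
    // MY_KV is an illustrative KV binding name; get() returns a Promise
    const value = await env.MY_KV.get("some-key");
    return new Response(value ?? "not found");
  },
};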
Maybe a bit late, but: no, I don't think you can.
But: you can always use self["YOUR_ENV_VARIABLE"] to get the value and then go from there (unfortunately the docs don't mention that).
Here is what I personally do in my Workers Site project to get the release version (usually set via a pipeline/action and then injected via HTMLRewriter into the index.html):
const releaseVersion = self["RELEASE_VERSION"] || 'unknown'

Can a workflow step access environment variables provided by an EnvironmentContributingAction?

A custom plugin we wrote for an older version of Jenkins uses an EnvironmentContributingAction to provide environment variables to the execution so they could be used in future build steps and passed as parameters to downstream jobs.
While attempting to convert our build to workflow, I'm having trouble accessing these variables:
node {
    // this step queries an API and puts the results in
    // environment variables called FE1|BE1_INTERNAL_ADDRESS
    step([$class: 'SomeClass', parameter: foo])
    // this ends up echoing 'null and null'
    echo "${env.FE1_INTERNAL_ADDRESS} and ${env.BE1_INTERNAL_ADDRESS}"
}
Is there a way to access the environment variable that was injected? Do I have to convert this functionality to a build wrapper instead?
EnvironmentContributingAction is currently limited to AbstractBuilds, which WorkflowRuns are not, so pending JENKINS-29537 which I just filed, your plugin would need to be modified somehow. Options include:
Have the builder add a plain Action instead, then register an EnvironmentContributor whose buildEnvironmentFor(Run, …) checks for its presence using Run.getAction(Class) (see the sketch after this list).
Switch to a SimpleBuildWrapper which defines the environment variables within a scope, then invoke it from Workflow using the wrap step.
Depend on workflow-step-api and define a custom Workflow Step with comparable functionality but directly returning a List<String> or whatever makes sense in your context. (code sample)
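A rough, illustrative sketch of the first option; the class and field names below are made up for the example, not taken from the original plugin, and each class would live in its own source file:
import hudson.EnvVars;
import hudson.Extension;
import hudson.model.EnvironmentContributor;
import hudson.model.InvisibleAction;
import hudson.model.Run;
import hudson.model.TaskListener;

// Plain action the builder attaches to the run to carry the discovered addresses.
public class AddressesAction extends InvisibleAction {
    public final String fe1InternalAddress;
    public final String be1InternalAddress;

    public AddressesAction(String fe1InternalAddress, String be1InternalAddress) {
        this.fe1InternalAddress = fe1InternalAddress;
        this.be1InternalAddress = be1InternalAddress;
    }
}

// Copies the values from the action into the environment of any Run,
// so env.FE1_INTERNAL_ADDRESS resolves in later steps.
@Extension
public class AddressesEnvironmentContributor extends EnvironmentContributor {
    @Override
    public void buildEnvironmentFor(Run r, EnvVars envs, TaskListener listener) {
        AddressesAction action = r.getAction(AddressesAction.class);
        if (action != null) {
            envs.put("FE1_INTERNAL_ADDRESS", action.fe1InternalAddress);
            envs.put("BE1_INTERNAL_ADDRESS", action.be1InternalAddress);
        }
    }
}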
Since PR-2975 was merged, you are able to use the new interface:
void buildEnvVars(@Nonnull Run<?, ?> run, @Nonnull EnvVars env, @CheckForNull Node node)
It will be used by the old type of builds as well.

Optional Environment Arguments to SCons Builders

I've noticed that calls to Object and Library builders sometimes take optional arguments at the end such as
Object('hello.c', CCFLAGS='-DHELLO')
Object('goodbye.c', CCFLAGS='-DGOODBYE')
Can Object, Library, and SharedLibrary all take an arbitrary set of them, or are they limited to a specific set of variables? If so, I hope this will save our current, very large SCons build at work some time.
The C/C++ builders recognize a specific set of arguments, called Construction Variables.
These variables can either be set on the environment or when calling the builder, as you do in your question. It's often easier to set them on the environment, thus making the calls to the builders simpler, and then only modify the variables when necessary.
Here is an example:
env = Environment()
# Notice that CPPPATH, CPPDEFINES, LIBS, and LIBPATH don't include the
# compiler flags -I, -D, -l, and -L respectively; SCons will add those
# in a platform-independent manner
env.Append(CCFLAGS=['-g', '-O2'])
env.Append(CPPPATH=['some/include/path'])
env.Append(CPPDEFINES=['YOUR_DEFINE'])
env.Append(LIBS=['pthread'])
env.Append(LIBPATH=['some/lib/path'])
# All of these builder calls use the construction
# variables set on the environment above
env.Object('hello.c')
env.Object('goodbye.c')
env.Program('main.cc')
If you want to override a specific variable, you can do the following
env.Object('hello.c', CPPDEFINES='HELLO')
Or, if you want to append to a specific variable, with just one call, you can do the following:
env.Object('hello.c', CPPDEFINES=[env['CPPDEFINES'], 'HELLO'])
What Brady said is mostly correct.
However, you can append any (number of) Environment() variables to the end of any builder call. These create an OverrideEnvironment(), which is then what is used to run the builder.
If you were to change the value of CCCOM, and/or any variable which feeds into the command line for running the compiler, then adding those variables to the builder call would also have an effect.
If you specify a variable which has no impact on the current builder, or even one which is not defined anywhere in SCons or in any builders you may have created, SCons will not issue a warning or an error.
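For instance, a small sketch of that behavior (the file name and the unknown variable are placeholders):
env = Environment()
# CPPDEFINES is a recognized construction variable, so it changes the compile line;
# NOT_A_REAL_VAR means nothing to the Object builder and SCons silently accepts it.
env.Object('hello.c', CPPDEFINES=['HELLO'], NOT_A_REAL_VAR='ignored')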

Erlang application: different environment

What is the recommended way to have different values for application environment variables in an erlang application?
What I mean is: how do you support different environments (e.g. development, staging, production) in your Erlang application? For example, I would like tests to use a specific fake service on a known host and production code to use the real server on a different host.
You can use an application config file as well. You can also pass the config file as a parameter when starting an Erlang console, which helps with setting up environment variables. That way you pass test.config or production.config depending on the environment, so there is no need to recompile the code before starting.
You can find more info here
http://www.erlang.org/doc/man/config.html
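As a sketch (the application name and key below are placeholders), a test.config might look like this:
%% test.config -- "myapp" and "service_host" are illustrative names
[
 {myapp, [
   {service_host, "http://fake-service.local"}
 ]}
].
Start the node with erl -config test (or erl -config production for a production.config) and read the value in code with:
{ok, Host} = application:get_env(myapp, service_host).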
Dependency injection.
test_setup() -> [ {host,"http://..."}, ... ].
prod_setup() -> [ {host,"http://..."}, ... ].
test_start() -> start(test_setup()).
prod_start() -> start(prod_setup()).
start(Config) -> ... .
Alternatively, policy modules. Make a policy whose interface matches the stuff you need, then pass in the name of the module containing the policy you want. Think ETS/DETS.
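A minimal illustration of the policy-module idea (module and function names are made up, and each module would live in its own file):
-module(test_policy).
-export([host/0]).
host() -> "http://fake-service.local".

-module(prod_policy).
-export([host/0]).
host() -> "http://real-service.example.com".

%% the caller receives the policy module name and calls through it
start(Policy) -> Host = Policy:host(), ... .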

Groovy, STS, and debug info, information, or symbols

I'm trying to include the debug information or symbols in my Groovy code so that I can use the Spring Security annotations with SpEL to access an annotated method's arguments by name. For example:
@PreAuthorize("hasPermission(#id, 'View')")
public void doSomething(Integer id)
{
    ....
}
Everything works fine when I use the STS 'run-test' command, which uses the Groovy RunTest script. By that I mean I can access a method's argument by name. However, whenever I try to use the 'run-app' command, the debug information is not included.
I looked at the RunTest script and the script explicitly calls the Java Compiler with the debug option set to true.
How can I enable debug information for my development and production environments? Do I need to modify the Groovy script to call the Java compiler on the Groovy code or is there any easier way?
I never found an elegant solution to this. Instead I just used filters, since the parameters being passed to my methods were being extracted from the URL by Grails.
