Custom logger not being used in Jenkins pipeline

The methods of the Groovy class below are invoked by other pipeline script classes that I don't have visibility into. All the println statements have been replaced by logger.info.
class ConfigurationPluginInitBase implements Plugin<Project> {
private static final Logger logger = LoggerFactory.getLogger(ConfigurationPluginInitBase.class)
...
protected void configureDependenciesResolution(Project project) {
...
logger.info("Configuring Dependencies Resolution")
logger.info('Does the buildInfo.json exist? {}', file.exists())
logger.info('The list of dependencies should be rewritten: {}', rewriteDependency)
/*Added this as there was no other way to see what happened to the logger instance*/
println 'Is the logger instance created at all???' + logger
...
logger.info('List: {}', listToUpdate)
}
}
log4j2-test.properties
status = error
name = PropertiesConfig
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = console
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c:- %m%n
loggers = console
logger.console.name = ConsoleLog
logger.console.level = debug
logger.console.additivity = false
logger.console.appenderRef.console.ref = STDOUT
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = STDOUT
The output (only the relevant part is shown below) on the Jenkins job console:
...
Download http://artifactory.net:8081/artifactory/Migration_R148_VR/tools.gradle.plugin/BuildPublishReleasePlugin/v4.0.0.37af2ff/ivy-v4.0.0.37af2ff.xml
Download http://artifactory.net:8081/artifactory/Migration_R148_VR/tools.gradle.plugin/BuildPublishReleasePlugin/v4.0.0.37af2ff/BuildPublishReleasePlugin-v4.0.0.37af2ff.jar
// Printed way before the actual logger statements, when the above artifact is downloaded from Artifactory for further testing in the pipeline
Is the logger instance created at all???org.gradle.internal.logging.slf4j.OutputEventListenerBackedLogger@efbec93c
apache-commons:commons-collections:null
apache-commons:commons-lang:null
DAP_Framework:DAP_FrameworkExt:null
esapi:esapi:null
opensaml:opensaml:null
openws:openws:null
slf4j:slf4j:null
spring-framework:spring-framework:null
TDE_Ark_Framework:TDE_Ark_Framework:null
TDE_Ark_Infrastructure:TDE_Ark_Infrastructure_CLI:null
velocity:velocity:null
wurfl:wurfl:null
xmlsec:xmlsec:null
xmltooling:xmltooling:null
...
[Ripple AlfaClient] Configuring Dependencies Resolution
[Ripple AlfaClient] Does the buildInfo.json exist? true
[Ripple AlfaClient] The list of dependencies should be rewritten: DAP_Framework:DAP_Framework_CLI:1.2.2-integration.adcb14d
[Ripple AlfaClient] List: [DAP_Framework:DAP_Framework_CLI:1.2.2-integration.adcb14d]
...
The logger that I have configured is probably not being invoked.
The run-time instance is of OutputEventListenerBackedLogger.
Even if I change the logger statements, the changes don't show up in the output, but the new println that I added does. This is confusing, i.e. some changes are reflected while others aren't!
I referred to the Gradle logging page and threads like this and this, but I am unclear about the root cause.
Note: I am new to Jenkins pipeline, Gradle and Groovy :)

I assume you would like to use Gradle's logging system for your log output from a Gradle plugin?
In that case I would suggest creating/getting the logger instance differently. Either use project.logger.info(…) or create a new Logger like so:
private static final Logger logger = Logging.getLogger(ConfigurationPluginInitBase.class)
Having said that, the reason why your log messages might not show up currently could be that Gradle's default log level is LIFECYCLE, but you seem to be logging only at INFO. You can try running Gradle with the --info option to see your messages. Also note that inside a Gradle build SLF4J is bound to Gradle's own logging backend (which is why your println shows an OutputEventListenerBackedLogger), so your log4j2-test.properties is never consulted.
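For illustration, here is a minimal sketch of that approach using Gradle's own logging API; the apply body and its messages are made up for the example, not taken from the original plugin:
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.logging.Logger;
import org.gradle.api.logging.Logging;

public class ConfigurationPluginInitBase implements Plugin<Project> {
    // Gradle's logger, not the SLF4J/Log4j one configured in log4j2-test.properties
    private static final Logger logger = Logging.getLogger(ConfigurationPluginInitBase.class);

    @Override
    public void apply(Project project) {
        // LIFECYCLE is Gradle's default threshold, so this always shows up
        logger.lifecycle("Applying the configuration plugin");
        // INFO is below the default threshold; run gradle with --info (or --debug) to see it
        logger.info("Configuring Dependencies Resolution");
    }
}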

Related

printing test container stdout into a file

I am using Testcontainers in my project. I am getting the stdout of each container in the console by using:
container.withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("container")));
and I am getting output something like this:
[docker-java-stream--1578738495] INFO container - STDOUT: at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
[docker-java-stream--1578738495] INFO container - STDOUT: at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
[docker-java-stream--1578738495] INFO container - STDOUT: at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
[docker-java-stream--1578738495] INFO container - STDOUT: at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
but I am trying to write stdout to a separate file. I was trying something like this, but it's not working:
PrintStream o = new PrintStream(new File("file.txt"));
PrintStream console = System.out;
System.setOut(o);
System.out.println((container.withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("container")))));
System.setOut(console);
I cannot use log4j because this project will be used as a dependency in another project and log4j might create conflicts, so I need some way to print stdout to a file if possible. Thank you
You can use any Slf4j Logger as the parameter of the Slf4jLogConsumer constructor, for example, a logger that writes to a file:
// A plain SLF4J Logger has no addAppender(); this assumes Logback as the SLF4J backend
ch.qos.logback.classic.Logger logger =
        (ch.qos.logback.classic.Logger) LoggerFactory.getLogger("container");
// create and start a Logback FileAppender
// ...
logger.addAppender(fileAppender);
Slf4jLogConsumer logConsumer = new Slf4jLogConsumer(logger);
container.followOutput(logConsumer);
You can find more information regarding the programmatic configuration of Slf4j loggers and appenders in this SO answer.
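For instance, with Logback on the classpath (the usual SLF4J backend in Testcontainers projects), the file-backed logger could be wired up programmatically along these lines. This is a sketch under that assumption; the logger name and file path are simply the ones from the question:
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;
import org.testcontainers.containers.output.Slf4jLogConsumer;

// Grab the running Logback context behind the SLF4J facade
LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

// The encoder controls the line format written to the file
PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(context);
encoder.setPattern("%d{HH:mm:ss.SSS} %-5level %logger - %msg%n");
encoder.start();

// An appender that writes to file.txt instead of the console
FileAppender<ILoggingEvent> fileAppender = new FileAppender<>();
fileAppender.setContext(context);
fileAppender.setFile("file.txt");
fileAppender.setEncoder(encoder);
fileAppender.start();

// Attach the appender to the "container" logger and feed it the container output
Logger logger = context.getLogger("container");
logger.addAppender(fileAppender);
container.followOutput(new Slf4jLogConsumer(logger));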

What should be the logger.rolling name in log4j2.properties when we have multiple packages?

I have created multiple packages in my maven project and I'm using JUnit and Cucumber. I was using log4j before and now I want to migrate to log4j2. I searched for the log4j2 properties file format and found the below configuration in the file:
logger.rolling.name = com.example.my.app
logger.rolling.level = debug
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile
What package should I give in the logger.rolling.name when I have multiple packages in my project?
You can use the Root logger as the catch-all for packages you don't want to specify, and then create loggers for any prefixes you do want. For example, if you have classes in the packages com.example.my.app, com.example.your.stuff, and com.example.my.stuff, you can configure loggers for each of them; or, if you configure a logger for com.example.my, then both com.example.my.app and com.example.my.stuff will use it. If you configure a logger for com.example, then all three packages would use it (unless a more specific logger matches).
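As a sketch in the same properties format (the logger keys my and yourstuff are arbitrary names, and RollingFile is assumed to be an appender you have already defined):
loggers = my, yourstuff

# Prefix logger: matches com.example.my.app and com.example.my.stuff
logger.my.name = com.example.my
logger.my.level = debug
logger.my.additivity = false
logger.my.appenderRef.rolling.ref = RollingFile

# A more specific logger wins for com.example.your.stuff
logger.yourstuff.name = com.example.your.stuff
logger.yourstuff.level = info
logger.yourstuff.additivity = false
logger.yourstuff.appenderRef.rolling.ref = RollingFile

# Root catches everything that has no more specific logger
rootLogger.level = warn
rootLogger.appenderRef.rolling.ref = RollingFile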

How to capture logs from workers from a Dask-Yarn job?

I have tried using the following in ~/.config/dask/distributed.yaml and ~/.config/dask/yarn.yaml,
logging-file-config: "/path/to/config.ini"
or
logging:
  version: 1
  disable_existing_loggers: false
  root:
    level: INFO
    handlers: [consoleHandler]
  handlers:
    consoleHandler:
      class: logging.StreamHandler
      level: INFO
      formatter: sample_formatter
      stream: ext://sys.stderr
  formatters:
    sample_formatter:
      format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
and then in my function that gets evaluated at the worker:
import logging
from distributed.worker import logger
import dask
from dask.distributed import Client
from dask_yarn import YarnCluster

log = logging.getLogger(__name__)

@dask.delayed
def worker_func(args):
    logger.info("This will show up in the worker logs")
    log.info("This does not show up in worker logs")
    return

if __name__ == "__main__":
    dag_1 = {'worker_func': (worker_func, arg_1)}
    tasks = dask.get(dag_1, 'load-1')
    log.info("This also shows up in logs, and custom formatted")
    cluster = YarnCluster()
    client = Client(cluster)
    dask.compute(tasks)
When I try to view the yarn logs using:
yarn logs -applicationId {application_id}
I do not see the log from log.info inside worker_func, but I do see the logs from distributed.worker.logger and from outside that function on the console. I also tried using client.get_worker_logs(), but that returned an empty dictionary. Is there a way to see customized logs from inside the function that gets evaluated at a worker?
There's a lot going on in this question, so I'm going to answer "How do I configure logging for dask-yarn workers" and hope everything else becomes clear by answering that.
Dask's configuration system is loaded locally on the machine you start a dask cluster from (usually the edge node). This configuration is not distributed to the workers automatically; you're responsible for doing that yourself. You have a few options here:
Have admin/IT put configuration in /etc/dask/ on every node, which will affect all users.
Bundle configuration with your packaged environment. Dask will load configuration from {prefix}/etc/dask/, where prefix is sys.prefix.
For example, if you have a conda environment at /path/to/environment, you'd do the following to bundle the configuration
# Create the configuration directory in the environment
mkdir -p /path/to/environment/etc/dask/
# Add your configuration to this directory
mv config.yaml /path/to/environment/etc/dask/config.yaml
# Package the environment
conda pack -p /path/to/environment -o environment.tar.gz
Any configuration values set in config.yaml will now be available on all the worker nodes. An example configuration file setting some logging configuration would be:
logging:
  version: 1
  root:
    level: INFO
    handlers: [consoleHandler]
  handlers:
    consoleHandler:
      class: logging.StreamHandler
      level: INFO
      formatter: sample_formatter
      stream: ext://sys.stderr
  formatters:
    sample_formatter:
      format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
Logs from completed dask-yarn applications can be retrieved using the YARN CLI:
yarn logs -applicationId <application-id>
Logs for running dask-yarn applications can be retrieved using client.get_worker_logs(). Note that these logs will only contain messages written to the distributed.worker logger. You cannot write to your own logger and have those messages appear in the output of client.get_worker_logs(). To write to this logger, get it via:
import logging
logger = logging.getLogger("distributed.worker")
logger.info("Writing with the worker logger")
Any logger appropriately configured to log to stdout or stderr will appear in the logs accessed via the yarn CLI, but only the distributed.worker logger output will also be available to get_worker_logs().
Side note
I have tried using the following in ~/.config/dask/distributed.yaml and ~/.config/dask/yarn.yaml
The names of the config files don't matter; dask loads all yaml files in all config directories and merges their contents. For more information, please read the configuration docs.

symfony/yaml backed symfony/config not parsing environment variables

I have recreated a simple example in this tiny github repo. I am attempting to use symfony/dependency-injection to configure monolog/monolog to write logs to php://stderr. I am using a yaml file called services.yml to configure dependency injection.
This all works fine if my yml file looks like this:
parameters:
    log.file: 'php://stderr'
    log.level: 'DEBUG'

services:
    stream_handler:
        class: \Monolog\Handler\StreamHandler
        arguments:
            - '%log.file%'
            - '%log.level%'
    log:
        class: \Monolog\Logger
        arguments: [ 'default', ['@stream_handler'] ]
However, my goal is to read the path of the log file and the log level from the environment variables APP_LOG and LOG_LEVEL, respectively. According to the Symfony documentation on external parameters, the correct way to do that in the services.yml file is like this:
parameters:
    log.file: '%env(APP_LOG)%'
    log.level: '%env(LOGGING_LEVEL)%'
In my sample app I verified PHP can read these environment variables with the following:
echo "Hello World!\n\n";
echo 'APP_LOG=' . (getenv('APP_LOG') ?? '__NULL__') . "\n";
echo 'LOG_LEVEL=' . (getenv('LOG_LEVEL') ?? '__NULL__') . "\n";
That writes the following to the browser when I use my original services.yml with hard-coded values:
Hello World!
APP_LOG=php://stderr
LOG_LEVEL=debug
However, if I use the %env(VAR_NAME)% syntax in services.yml, I get the following error:
Fatal error: Uncaught UnexpectedValueException: The stream or file "env_PATH_a61e1e48db268605210ee2286597d6fb" could not be opened: failed to open stream: Permission denied in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php:107 Stack trace: #0 /var/www/vendor/monolog/monolog/src/Monolog/Handler/AbstractProcessingHandler.php(37): Monolog\Handler\StreamHandler->write(Array) #1 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(337): Monolog\Handler\AbstractProcessingHandler->handle(Array) #2 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(532): Monolog\Logger->addRecord(100, 'Initialized dep...', Array) #3 /var/www/html/index.php(17): Monolog\Logger->debug('Initialized dep...') #4 {main} thrown in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 107
What am I doing wrong?
Ok, you need a few things here. First of all, you need version 3.3 of Symfony, which is still in beta (3.2 was the released version when I encountered this). Second, you need to "compile" the environment variables.
Edit your composer.json with the following values and run composer update. You might need to update other dependencies. You can substitute ^3.3 with dev-master.
"symfony/config": "^3.3",
"symfony/console": "^3.3",
"symfony/dependency-injection": "^3.3",
"symfony/yaml": "^3.3",
You will likely have to do this for symfony/__WHATEVER__ if you have other symfony components.
Now in your code, after you load your yaml configuration into your dependency container, you compile it.
So after your lines here (perhaps in bin/console):
$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . DIRECTORY_SEPARATOR . '..'));
$loader->load('services.yml');
Do this:
$container->compile(true);
Your IDE's intellisense might tell you compile() takes no parameters. That's OK; compile() grabs its args indirectly via func_get_arg():
public function compile(/*$resolveEnvPlaceholders = false*/)
{
    if (1 <= func_num_args()) {
        $resolveEnvPlaceholders = func_get_arg(0);
    } else {
        // ...
    }
    // ...
}
References
Github issue where this was discussed
Pull request to add compile(true)
Using this call after loading your services.yml file should help:
$containerBuilder->compile(true);
Your file also gets validated by the proper-configuration checks this method performs. The parameter is $resolveEnvPlaceholders, which makes environment variables accessible to the yaml services configuration.

HHVM+Hacklang: errors/warnings output into browser

Is there any way to tell HHVM to output Hacklang warnings and errors into the browser? Something like PHP does with display_errors and display_startup_errors enabled and error_reporting set to E_ALL.
HHVM version:
$ php -v
HipHop VM 3.1.0-dev+2014.04.09 (rel)
Compiler: heads/master-0-g4fc811c64c23a3686f66a2bea80ba47f3eaf9f3d
Repo schema: 79197c935790c0b9c9cb13566c3e727ace368117
I've tried the following config:
$ cat /etc/hhvm/php.ini
; php options
display_startup_errors = On
error_reporting = E_ALL
display_errors = On
; hhvm specific
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.mysql.typed_results = false
And:
$ cat /etc/hhvm/server.ini
; php options
pid = /var/run/hhvm/pid
; hhvm specific
hhvm.server.port = 9000
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
hhvm.mysql.typed_results = false
hhvm.debug.full_backtrace = true
hhvm.debug.server_stack_trace = true
hhvm.debug.server_error_message = true
hhvm.debug.translate_source = true
tl;dr: You can't.
The thing to keep in mind here is that the typechecker does a static analysis of your code, while the PHP errors you talk about show up at runtime. If this were C++, you could compare Hack typechecker errors to errors during the compile step: Hack tells you things that are wrong before the code even runs.
The trick is to use either the vim or emacs plugins, which warn you of errors as you save the file, or use hh_client from the terminal, or build a plugin for your favorite IDE (feel free to send pull requests!). hh_client --json gives easy-to-parse output if you want to build a plugin for Sublime Text, Eclipse, or whatever you want.
Note that some errors are runtime errors, while some aren't. Function args as well as return types should throw exceptions at runtime on the latest HHVM builds, for example. The problem there is that you only see those errors when you hit a certain code path. The beauty of Hack is that it reports all the problems in your code, even on code paths you may not exercise at runtime.
