My containerized tomcat web application doesn't see configured log4j2.xml - docker

My web application works fine with its log4j2.xml file on an AWS EC2 instance, but now I've containerized it and it's running in ECS Fargate. I can see the Catalina logs in CloudWatch, but not the application-specific logs that I configured in the log4j2.xml file. The log4j2.xml lives at a specific path, /var/webapp/conf, and I've put that path in catalina.properties as shared.loader=/var/webapp/conf. I also see this ERROR in my Catalina logs:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
Note: I don't want to change Tomcat's default logging. I'm just trying to send my application logs to the console as well, so I can see all the logs in one CloudWatch log stream.
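For reference, the classpath addition described above is a single line in catalina.properties (the path is taken verbatim from the question):

shared.loader=/var/webapp/conf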

The logging configuration is not being recognised by your Fargate task because, with Fargate tasks, only certain logging drivers can be set up via the task definition.
Amazon ECS task definitions for Fargate support the awslogs, splunk, firelens, and fluentd log drivers for the log configuration.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
I recommend using the awslogs log driver, which sends the container's stdout and stderr to CloudWatch Logs (see the documentation linked above).
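For illustration, here is a minimal sketch of the logConfiguration block in a Fargate task definition using the awslogs driver. The log group name, region, and stream prefix below are placeholders, not values from the question.

"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/my-webapp",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "tomcat"
  }
}

With this in place, whatever the container writes to stdout and stderr lands in the configured CloudWatch log group, which is why sending the application logs to the console, as the asker intends, makes them visible there.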

Related

Docker Tomcat logging to catalina.out and to console

The Docker Tomcat container saves the startup log of the Tomcat application in the catalina.out file (the last line would be INFO: Server startup in 136607 ms), but the rest of the app's activity is logged to the console and can be viewed with docker logs container_id.
Is there a way to log to a file and to the console as well? I need the activity log inside catalina.out.2021.log in the Tomcat container so I can run a script inside the container that collects and analyzes the logs, processes them, and sends an email.
Tomcat is started with a custom logging properties file, /usr/local/tomcat/conf/logging.properties, but the output ends up on the console and not in the file.
In the image you are using (hobsonspipe/centos-tomcat7:latest) the server is started with:
-Djava.util.logging.config.file=/etc/tomcat/logging.properties
So you need to modify the /etc/tomcat/logging.properties file instead of the one used in your question. That file already does what you want: it sends all Tomcat logs to standard output and to /var/log/catalina.<date>.log, except log messages from the applications (if they use the ServletContext#log method), which go to /var/log/localhost.<date>.log.
If your application uses neither ServletContext#log nor java.util.logging, you must configure its specific logging framework separately; a sketch of the dual-output idea follows.
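For reference, a minimal logging.properties sketch that duplicates every record to both a dated file and the console. The directory, prefix, and levels here are illustrative assumptions, not values copied from the image above.

# Send all records to both a dated file and the console.
handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

# Dated file, e.g. /var/log/catalina.<date>.log
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = /var/log
1catalina.org.apache.juli.FileHandler.prefix = catalina.

# Console output, visible via docker logs container_id
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter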

Unable to access datasource mbeans via jmx in wildfly swarm/thorntail

I'm trying to enable JMX for my WildFly Swarm component. I'm used to seeing several MBeans for a variety of WildFly subsystems; I'm specifically interested in the datasource MBeans.
I've pasted a snippet below: I've got the jmx fraction, and I have statistics-enabled set to true. When Thorntail is running I can connect to the JVM via JMX, but I cannot see any datasource MBeans. Is there something else that needs to be enabled for them to show up?
The app is currently on swarm 2018.2.0.Final
swarm:
  jmx:
    expression-expose-model.domain-name: RemoteJMX
    jmx-remoting-connector:
      use-management-endpoint: true
    resolved-expose-model.domain-name: RemoteJMX
    show-model: true
  datasources:
    data-sources:
      MyDataSourceName:
        driver-name: com.microsoft.sqlserver
        connection-url: jdbc:xyz
        statistics-enabled: true
First of all, WildFly Swarm 2018.2.0.Final is very old. In the meantime, WildFly Swarm got renamed to Thorntail; you can automatically migrate by running mvn io.thorntail:thorntail-maven-plugin:2.5.0.Final:migrate-from-wildfly-swarm.
And then: if you connect to JMX, do you see any WildFly MBeans at all? I mean, is the problem with datasources only, or is it more general?
During boot, you should see JMX-related log messages, such as JMX not configured for remote access or JMX configured for remote connector: implicitly using ... interface. Do you see any of them?
Finally, it seems you want JMX exposed on the management port. Do you have a dependency on the management fraction?
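If that dependency is missing, a minimal sketch of what it might look like (assuming Thorntail coordinates after the migration above; under WildFly Swarm the group id was org.wildfly.swarm):

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>management</artifactId>
</dependency>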

log file handling with docker syslog logging driver

Is there a way to pick up the log messages which are logged to a log file, when using the syslog log driver of Docker?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a log file does not. I don't see any option for the syslog driver that would let me indicate a log file to be picked up.
Thanks
Docker's logging interface is defined as stdout and stderr, so the best approach is to modify the log settings of your process to send all log data to stdout and stderr.
Some applications can configure logging to go directly to syslog. Java processes using log4j are a good example of this.
If logging to a file is the only option available, scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can be done either inside the container with an additional service, or by using a shared, standardised logging area on each Docker host and running the ingestion from there.
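As a sketch of the "log directly to syslog" approach with log4j2, using its built-in Syslog appender (the host, port, protocol, and facility below are assumptions for illustration):

<Configuration>
  <Appenders>
    <Syslog name="Syslog" host="localhost" port="514" protocol="UDP" facility="LOCAL0"/>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Syslog"/>
    </Root>
  </Loggers>
</Configuration>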

Airflow: Could not send worker log to S3

I deployed the Airflow webserver, scheduler, worker, and flower on my Kubernetes cluster using Docker images. The Airflow version is 1.8.0.
Now I want to send worker logs to S3, so I:
Created an S3 connection in Airflow from the Admin UI (just set S3_CONN as the conn id and s3 as the type; because my Kubernetes cluster is running on AWS and all nodes have S3 access roles, this should be sufficient).
Set the Airflow config as follows:
remote_base_log_folder = s3://aws-logs-xxxxxxxx-us-east-1/k8s-airflow
remote_log_conn_id = S3_CONN
encrypt_s3_logs = False
First I tried creating a DAG that simply raises an exception immediately after it starts running. This works: the log can be seen on S3.
Then I modified the DAG so that it creates an EMR cluster and waits for it to be ready (waiting status). To pick up the change, I restarted all four Airflow Docker containers.
Now the DAG appears to work: a cluster is started and, once it's ready, the DAG is marked as success. But I can see no logs on S3.
There is no related error in the worker or webserver logs, so I can't even tell what may be causing this issue. The logs are simply not sent.
Does anyone know whether there is some restriction on Airflow's remote logging, other than this description in the official documentation?
https://airflow.incubator.apache.org/configuration.html#logs
In the Airflow Web UI, local logs take precedence over remote logs. If local logs can not be found or accessed, the remote logs will be displayed. Note that logs are only sent to remote storage once a task completes (including failure). In other words, remote logs for running tasks are unavailable.
I didn't expect it, but even on success, are the logs not sent to remote storage?
The boto version that is installed with Airflow is 2.46.1, and that version doesn't use IAM instance roles.
Instead, you will have to add an access key and secret for an IAM user that has access in the extra field of your S3_CONN configuration.
Like so:
{"aws_access_key_id":"123456789","aws_secret_access_key":"secret12345"}

How do I view the Jenkins server console output on the local filesystem?

I'm using the Jenkins Active Directory plug-in and can't log in after several attempts.
The error message says:
If you are a system administrator and suspect this to be a configuration problem, see the server console output for more details.
Where can I find the server console output (on the local filesystem)?
I presume that it is accessible from the Jenkins web pages, but since I can't log in, that's not much use. I can log in to the (Windows) server where Jenkins is installed - where are the logs on the server?
The console output you are looking for is not accessible from Jenkins. If you have installed Jenkins as a service, three files are created in JENKINS_HOME when that service starts: jenkins.err.log, jenkins.out.log, and jenkins.wrapper.log. The relevant ones for you are jenkins.err.log and jenkins.out.log.
If you used the default location you can find them in C:\Program Files (x86)\Jenkins.
In more recent versions you can tail the logs under $JENKINS_HOME/support/all_[date].log, which captures the relevant information as you modify settings in the web console and so on.
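For example, to follow the error log live from PowerShell (assuming the default install path mentioned above; adjust if Jenkins lives elsewhere):

Get-Content 'C:\Program Files (x86)\Jenkins\jenkins.err.log' -Tail 50 -Wait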
