I have placed the Solace jar files in the following location:
C:\oracle10.3.6\Middleware\user_projects\domains\solace_domain\lib (solace jar files)
If I have the log4j.properties at the application level, how will this affect the logging?
When I start the WebLogic server, it starts printing INFO-level logging in the server console.
Here is my log4j.properties file:
The log4j.properties file needs to be somewhere on the WebLogic server's CLASSPATH. You can enable the Java system property "-Dlog4j.debug" to see log4j's internal debug output, which will tell you where the properties file is being read from.
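For example (a minimal sketch, assuming a standard Windows domain layout; the exact file and variable name are my assumption, not from the original post), you could append the flag to JAVA_OPTIONS in the domain's bin\setDomainEnv.cmd:

set JAVA_OPTIONS=%JAVA_OPTIONS% -Dlog4j.debug=true

On restart, log4j prints its initialization details, including which log4j.properties it loaded, to the server console.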
If the following is contained in the correct log4j.properties file, you should only be seeing ERROR and FATAL level Solace JMS and Solace JCSMP logs. You should not be seeing INFO level Solace JMS or Solace JCSMP logs on startup.
log4j.category.com.solacesystems.jms=ERROR
log4j.category.com.solacesystems.jcsmp=ERROR
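For context, a minimal sketch of a complete log4j.properties built around those two lines (the appender name and pattern are illustrative, since the poster's file isn't shown): root logging stays at INFO on the console, while the Solace loggers are capped at ERROR.

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
log4j.category.com.solacesystems.jms=ERROR
log4j.category.com.solacesystems.jcsmp=ERROR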
A Docker Tomcat container saves the startup log of the Tomcat application in the catalina.out file (the last line would be INFO: Server startup in 136607 ms). But the rest of the Tomcat app's activity is logged to the console and can be viewed with docker logs container_id.
Is there a way to log to a file and the console as well? I need the activity log inside catalina.out.2021.log in the Tomcat container so I can run a script that collects the logs, analyzes and processes them, and sends an email; it needs to run inside the container.
Tomcat is started with a custom logging properties file, /usr/local/tomcat/conf/logging.properties, but the output ends up on the console and not in the file.
In the image you are using (hobsonspipe/centos-tomcat7:latest) the server is started with:
-Djava.util.logging.config.file=/etc/tomcat/logging.properties
So you need to modify the /etc/tomcat/logging.properties file instead of the one used in your question. This file already does what you want: it sends all Tomcat logs to standard output and to /var/log/catalina.<date>.log, except log messages from the applications (if they use the ServletContext#log method), which go to /var/log/localhost.<date>.log.
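For reference, a minimal sketch of that handler wiring (handler names, levels, and paths are modeled on Tomcat's stock JULI configuration, not copied from the image):

handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = /var/log
1catalina.org.apache.juli.FileHandler.prefix = catalina.

java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

With both handlers on the root logger, anything logged through java.util.logging lands in the dated file and on the console (and therefore in docker logs) at the same time.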
If your application uses neither ServletContext#log nor java.util.logging, you must configure the specific logging framework elsewhere.
Error logs don't show up in the GCP console.
Warning logs do show up, but as info (so I've been using them to log info messages). E.g.,
test = "hello debug world"
logging.warning("%s", test) # will log as info message in GCP dataflow console
Info logs don't log in the console either.
I'm using Apache Beam Python 3.7 SDK 2.23.0, but this seems to be an old issue.
This also happens with the Apache Beam SDK itself, which at times silently logs errors as info.
Any idea what's causing this? It seems to be a bug on the Apache Beam side of things more than a scripting error.
You will have to change the drop-down value from Info to a higher log level to see Error or Warning messages. In the screenshot the log level is set to Info, and you are searching for the string "error" in the log entries, so Stackdriver is filtering based on that.
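Separately, a hedged pipeline-side sketch (my own workaround suggestion, not something the answer above confirms): explicitly raising the root logger to INFO before the pipeline runs ensures INFO records are emitted at all; whether they surface still depends on the console filter described above.

import logging
import apache_beam as beam  # Beam Python SDK, as in the question

# Emit INFO and above from the root logger; the Dataflow console's
# severity drop-down still filters what is displayed.
logging.getLogger().setLevel(logging.INFO)

with beam.Pipeline() as pipeline:
    pass  # pipeline construction elided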
My web application works fine with its log4j2.xml file on an AWS EC2 instance. But now I have containerized it and it runs in ECS Fargate. I can see Catalina logs in CloudWatch, but not the application-specific logs that I configured in the log4j2.xml file. log4j2.xml is located in a specific path like /var/webapp/conf, and I've put that path in catalina.properties as shared.loader=/var/webapp/conf. Also, I see this ERROR in my Catalina logs:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'log4j2.debug' to show Log4j2 internal initialization logging.
Note: I don't want to change Tomcat's default logging. I'm just trying to send my application logs to the console as well, so I can see all the logs in one CloudWatch log stream.
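For reference, a minimal sketch of a console-only log4j2.xml (the appender name and pattern are illustrative, since the poster's actual file isn't shown); writing to SYSTEM_OUT is what gets the application logs into the same CloudWatch stream as Catalina's output:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <!-- Everything goes to stdout so the container log driver picks it up -->
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>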
The log4j configuration is not being recognised as a logging driver by your Fargate task. The reason is that with Fargate tasks we can only set up specific logging drivers via the task definition.
Amazon ECS task definitions for Fargate support the awslogs, splunk, firelens, and fluentd log drivers for the log configuration.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html
I recommend using the awslogs (CloudWatch Logs) driver.
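A minimal sketch of the corresponding container-definition snippet (log group, region, and prefix values are placeholders):

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-webapp",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
    }
}

With the awslogs driver in place, anything the container writes to stdout/stderr ends up in the configured CloudWatch log group.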
I'm using the Jenkins Active Directory plug-in and can't log in after several attempts.
The error message says:
If you are a system administrator and suspect this to be a configuration problem, see the server console output for more details.
Where can I find the server console output (on the local filesystem)?
I presume the output is accessible from the Jenkins web pages, but since I can't log in, that's not much use. I can log in to the (Windows) server where Jenkins is installed; where are the logs on the server?
The console output you are looking for is not accessible from Jenkins. If you have installed Jenkins as a service, three files are created in JENKINS_HOME when that service starts: jenkins.err.log, jenkins.out.log, and jenkins.wrapper.log. The relevant ones for you are jenkins.err.log and jenkins.out.log.
If you used the default location, you can find them in C:\Program Files (x86)\Jenkins.
In more recent versions you can also tail the logs under $JENKINS_HOME/support/all_[date].log. That file captures all relevant information, for example when you're modifying settings in the web console.
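For example, to follow one of those files live from the server itself (a PowerShell one-liner; the path assumes the default install location mentioned above):

Get-Content 'C:\Program Files (x86)\Jenkins\jenkins.err.log' -Tail 50 -Wait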
I am able to configure an agent for Windows, but I am confused about how to connect the web server's logs with the agent.
1: How do I connect the web server with the agent?
2: While starting the flume.bat file, it generates a flume.log file in which I am getting the exception below:
org.apache.flume.conf.ConfigurationException: No channel configured for sink: hdfssink
at org.apache.flume.conf.sink.SinkConfiguration.configure(SinkConfiguration.java:51)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSinks(FlumeConfiguration.java:661)
1. The data flow is as below:
your application (or web server) --> source --> channel --> sink
Now, the data can flow from your web server to the source by either a "pull" or a "push" mechanism. In your case, you can either tail the web server logs or use a spooling directory source.
2. This looks like a misconfiguration issue: the sink named hdfssink has no channel bound to it. You need to post your config file to pinpoint the problem; a sketch of a valid layout is below.
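For illustration, a minimal sketch of a valid flume.conf wiring a spooling directory source to an HDFS sink through a memory channel (the agent, component, and path names are hypothetical; note the singular "channel" property on the sink, whose absence causes the exception above):

# agent1 is the agent name passed to Flume with --name / -n
agent1.sources = spoolsrc
agent1.channels = memch
agent1.sinks = hdfssink

# "push": drop rotated web server logs into this directory for ingestion
agent1.sources.spoolsrc.type = spooldir
agent1.sources.spoolsrc.spoolDir = C:/weblogs/spool
agent1.sources.spoolsrc.channels = memch

agent1.channels.memch.type = memory
agent1.channels.memch.capacity = 10000

agent1.sinks.hdfssink.type = hdfs
agent1.sinks.hdfssink.hdfs.path = hdfs://namenode:8020/flume/weblogs
# this line is what the reported exception says is missing
agent1.sinks.hdfssink.channel = memch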