I am running Jenkins from source using mvn jenkins-dev:run. It seems to be logging at the DEBUG level, making page loads very slow. How do I change the log level of Jenkins? I have tried the usual java.util.logging properties file, but I can't get it working.
mvn jenkins-dev:run -Dorg.slf4j.simpleLogger.defaultLogLevel=error
Default log level for all instances of SimpleLogger. Must be one of ("trace", "debug", "info", "warn", or "error"). If not specified, defaults to "info".
You can also edit ${MAVEN_HOME}/conf/logging/simplelogger.properties to make it consistent.
More info here: http://maven.apache.org/maven-logging.html
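For example, the edit to that file is a single line (a minimal sketch; the file's other defaults can stay as they are):
org.slf4j.simpleLogger.defaultLogLevel=error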
EDIT:
To also control the log level of the Jenkins war itself, as the OP wanted, I was able to do so using:
mvn jenkins-dev:run -Djava.util.logging.config.file=my-logging.properties
The contents of my-logging.properties are:
handlers = java.util.logging.FileHandler, java.util.logging.ConsoleHandler
.level= SEVERE
Now I only see two INFO messages coming from Jetty. One could also configure the Jetty logger via the same file if needed. I will not go into more detail about how to do this because I have little experience with it, but if I could take a guess, you would have to set the level for the correct package (e.g. org.eclipse.jetty.LEVEL=WARN) used by Jenkins when embedding Jetty.
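For reference, a fuller my-logging.properties sketch along those lines (the Jetty line is my assumption, using the standard java.util.logging per-package syntax rather than Jetty's own LEVEL property):
handlers = java.util.logging.ConsoleHandler
.level = SEVERE
java.util.logging.ConsoleHandler.level = SEVERE
# assumed package name for the embedded Jetty; adjust if your messages come from elsewhere
org.eclipse.jetty.level = WARNING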
I am using Spring Boot 2.7.1 with native configuration, following the guide in the link below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using this pack CLI guide.
But when using pack CLI alone, the Gradle bootBuildImage task becomes somewhat irrelevant, and I would then have to use an external tool to build the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function within the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping config contains 2 files:
The type file contains:
echo "dependency-mapping" >> type
The sha256 (bellsoft-liberica) file, named 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, contains:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL, but this is just to test that the binding config is set up correctly; if it is, the build should instead fail on an untrusted certificate when downloading.
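For reference, the resulting binding directory is laid out like this (assuming the two files above were created inside the bindnings/bellsoft-jre-config directory referenced in the Gradle config):
bindnings/
  bellsoft-jre-config/
    type        <- contains "dependency-mapping"
    3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932        <- contains the download URL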
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
    at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
    at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
    at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
It will by default download dependencies and write the bindings to $PWD/bindings but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps. Ex: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or whatever command you run to set an env variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
For example with Gradle: binding("bindings/:/platform/bindings").
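Putting that into the full task config, a sketch might look like this (paths are illustrative and assume the bindings were generated into a bindings/ directory next to the build script):
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // map the whole local bindings directory onto the single path the buildpacks read
    binding("bindings/:/platform/bindings")
    environment = ["BP_NATIVE_IMAGE": "true"]
}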
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.
I do something like this:
-javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,name=${HSTNAMESHORT}-${APPNAME},group=${YBENV}/${HSTNAMESHORT},logMBean=10,logFile=${LOG_DIR}/perfinologs/${HSTNAMESHORT}-${APPNAME}.log
Basically, I want the log files to be created in the log directory for the app, not the home directory for the user ID.
But it seems like the log file isn't being created either with the logFile argument or without it!
Using Java 11, if that makes any difference.
Found the answer: I had a competing Java agent that was loading before it.
After I changed the order, both Java agents worked.
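To illustrate (a hypothetical command line; the other agent's path and the name/logFile values are made up), -javaagent options are applied in the order they appear, so swapping their positions changes which agent initialises first:
java \
  -javaagent:/opt/agents/other-agent.jar \
  -javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,name=myapp,logFile=${LOG_DIR}/perfinologs/myapp.log \
  -jar app.jar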
Absolutely stumped on this.
I have two controller integration tests that pass successfully. However, when running in Intellij or via gradle check, the JVM never exits. If I comment out the entire integration tests, the JVM exits cleanly.
When debugging any of the integration tests, I can hit pause and see that there are several threads in different states: WAITING, RUNNING, SLEEPING.
The database used in application.yml is purely an in-memory one:
url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
Changing this to file based does not fix the problem. Changing DB_CLOSE_ON_EXIT=TRUE does not help either.
I've tried removing @Rollback and even using @Transactional with a timeout, but that doesn't fix it.
Creating an integration test on a fresh project works with no deadlock/hanging/waiting.
I have moved back through revisions to find the changeset where this behaviour started, but the changes were purely in GSPs, Controllers and an additional assertion & test method in one of the integration tests.
The last lines in the logs are:
INFO org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext - Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@73386d72: startup date [Mon May 30 18:48:25 BST 2016]; root of context hierarchy
INFO org.springframework.context.support.DefaultLifecycleProcessor - Stopping beans in phase -2147483648
INFO org.grails.plugins.datasource.TomcatJDBCPoolMBeanExporter - Unregistering JMX-exposed beans on shutdown
INFO org.grails.plugins.datasource.TomcatJDBCPoolMBeanExporter - Unregistering JMX-exposed beans
INFO org.hibernate.tool.hbm2ddl.SchemaExport - HHH000227: Running hbm2ddl schema export
INFO org.hibernate.tool.hbm2ddl.SchemaExport - HHH000230: Schema export complete
I've tried cutting the integration test methods down to one method and the issue still occurs.
The versions I'm using are:
$ ~/apps/grails-3.1.5/bin/grails --version
|Grails Version: 3.1.5
|Groovy Version: 2.4.6
|JVM Version: 1.8.0_92
Windows 10 64bit.
Here's a thread dump.
I have no idea how to debug this further. Any ideas?
I would turn on debug level logging. Also, if you can, I would upgrade Grails to something post 3.1.9. (3.1.11 is current as I write this.)
Right around Grails v3.1.5 there were configuration inconsistencies between Grails and Hibernate. The Grails team was upgrading several interfaces at the time, and they got through it quickly.
The result was that you didn't end up running the configuration that you thought you were. It also affected cache and transaction management.
At the time, I had to create redundant configs to make sure Grails was getting configs at one level and Hibernate at another. You don't have to do this anymore, but back then I had to use a config like the one listed here.
To find the problem, I turned on debug logging for Grails, Hibernate and all of my database drivers, and waded all the way through it.
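If it helps, a rough sketch of what that looks like in grails-app/conf/logback.groovy (Grails 3.x uses Logback; the package names below are my assumptions, adjust them to match your actual drivers and pool):
// turn up Grails, Hibernate and the connection pool while hunting the hang
logger('org.grails', DEBUG)
logger('grails.app', DEBUG)
logger('org.hibernate', DEBUG)
logger('org.apache.tomcat.jdbc', DEBUG)   // the Tomcat JDBC pool seen in the shutdown log
logger('org.h2', DEBUG)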
This plugin also helps with detailed monitoring info.
I am trying to run HermesJMS from SoapUI 5.2.1 on Windows 7 x64.
The preferences and path to hermes config are set correctly.
The problem is that I cannot write to the C: drive, so I had to install SoapUI and Hermes in alternative places. I have changed hermes_home, java_home and hermes_config to my actual paths. The paths do not contain whitespace etc. When I run hermes.bat from the command prompt, it starts correctly.
However, when I try SoapUI -> Project -> right click -> Start HermesJMS, nothing happens. It's so bad that I couldn't find anything useful in either the SoapUI or the Hermes logs.
File structure is as follows:
hermes_home = ...\SoapUI-5.2.1\hermesJMS
hermes_config = ...\SoapUI-5.2.1\hermesJMS\cfg
Does anyone have an idea what could be going on? Or, for a start, where can I find the stdout and stderr of the script which starts Hermes from SoapUI?
Here are the steps to configure SoapUI with HermesJMS:
Preferences: In SoapUI, go to File -> Preferences -> Tools and set the path for HermesJMS, as mentioned here in the documentation. Then save the preferences.
Start HermesJMS: Now select your SoapUI project. Right click -> Start HermesJMS. At this point, a dialog will be shown asking the user to choose the Hermes configuration directory, where it looks for the file called hermes-config.xml. The default location it looks in is {user.home}\.hermes.
You already mentioned that HermesJMS is configured to connect with TIBCO EMS, so you will have that file on your system.
Configuring JMS: I believe this may not really be applicable for you, but in case someone needs it, here are the detailed steps, citing the documentation.
-- Here for ActiveMQ, from the official site.
-- Here for TIBCO EMS. And here, and there. Also find some information relevant to EMS connection issues here.
Permissions Issue on C Drive:
There is no constraint from SmartBear that SoapUI needs to be installed on a specific drive of the computer, so you are free to install the software wherever you have the rights to do so.
Does anyone have an idea what could be going on? Or, for a start, where can I find the stdout and stderr of the script which starts Hermes from SoapUI?
The best thing you can do is go to the logs to find out what is going on; you can find a lot of useful information there when the situation requires it. SoapUI logs can be found under {user.home} when you invoke it from the Windows Start menu. If you start SoapUI from the command line (go to SOAPUI_HOME\bin) using the soapui.bat script, then you should be able to see the log on the console itself, and the log files can be found in the directory from which you invoked it.
This time, the above instructions should resolve your issue.
When I run flume using the command :
bin/flume-ng agent --conf conf --conf-file flume.conf --name agentName -Dflume.root.logger=INFO,console
it runs, listing all its log data on the console. I would like to store all this log data (Flume's log data) in a file. How do I do it?
You need to make a custom build of Flume which uses log4j2.
You configure log4j2 to use a rolling file appender that rolls every minute (or whatever the latency is that you desire) to a spooling directory.
You configure Flume to use a SpoolingDirectorySource against that spooling directory.
You can't use a direct Flume appender (such as what's in log4j2) to log Flume because you will get into deadlock.
You can't use log4j1 with a rolling file appender because it has a concurrency defect which means it may write new messages to an old file and the SpoolingDirectorySource then fails.
I can't remember if I tried the Log4j appender from Flume with this setup. That appender does not have many ways to configure it and I think it will cause you problems if the subsequent agent you're trying to talk to is down.
Another approach might be to patch log4j1 and fix that concurrency defect (there's a variable that needs to be made volatile).
(Yes, setting this up is a little frustrating!)
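For the Flume side of that setup, the source section of flume.conf would look roughly like this (a sketch; the agent name matches the question, but the channel and directory names are placeholders, and the spooling directory must be the one your rolled log files land in):
agentName.sources = spoolSrc
agentName.channels = memCh
agentName.sources.spoolSrc.type = spooldir
agentName.sources.spoolSrc.spoolDir = /var/log/flume-spool
agentName.sources.spoolSrc.channels = memCh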
Don't run with -Dflume.root.logger=INFO,console; then Flume will log in ./logs.