I upgraded Jenkins from version 2.319.3 to version 2.361.4. Since then, I cannot see the other scripts under Replay on a pipeline build. It was working fine before.
Look at the screenshot below:
I don't see any errors or warnings in the logs when I reload this page.
Jenkins is running on Kubernetes with the following chart version: 3.12.2
The plugin versions are as follows:
allure-jenkins-plugin:2.30.3
ansicolor:1.0.2
antisamy-markup-formatter:155.v795fb_8702324
artifact-manager-s3:670.v0558a_cb_c82c2
aws-global-configuration:106.v106dc1d8d86e
aws-java-sdk:1.12.287-357.vf82d85a_6eefd
azure-ad:313.v14b_f37ff114d
blueocean:1.27.1
build-monitor-plugin:1.13+build.202205140447
cloudbees-bitbucket-branch-source:791.vb_eea_a_476405b
cloudbees-disk-usage-simple:178.v1a_4d2f6359a_8
configuration-as-code-groovy:1.1
configuration-as-code:1569.vb_72405b_80249
copyartifact:681.va_a_298c7f9c01
credentials-binding:523.vd859a_4b_122e6
docker-workflow:1.28
git-client:4.1.0
git:5.0.0
github-api:1.303-400.v35c2d8258028
github-branch-source:1701.v00cc8184df93
gitlab-plugin:1.5.36
http_request:1.14
jobConfigHistory:1176.v1b_4290db_41a_5
junit:1166.va_436e268e972
kubernetes-credentials-provider:1.209.v862c6e5fb_1ef
kubernetes:3834.vdc85747145e6
logfilesizechecker:1.5
logstash:2.5.0205.vd05825ed46bd
matrix-auth:3.1.6
naginator:1.18.2
oic-auth:2.5
prometheus:2.1.1
rebuild:1.34
slack:631.v40deea_40323b
ssh-agent:295.v9ca_a_1c7cc3a_a_
support-core:1244.vceb_57079258a
timestamper:1.21
validating-string-parameter:2.8
warnings-ng:9.11.1
workflow-aggregator:590.v6a_d052e5a_a_b_5
workflow-basic-steps:994.vd57e3ca_46d24
workflow-durable-task-step:1223.v7f1a_98a_8863e
workflow-job:1268.v6eb_e2ee1a_85a
I have added two Gradle dependencies for parsing YAML content and generating JSON from it, but I am getting these kinds of logs when I try to start my server:
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka version: 2.3.0
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka commitId: fc1aaa116b661c8a
%PARSER_ERROR[cyan] %PARSER_ERROR[gray] %PARSER_ERROR[highlight] %PARSER_ERROR[magenta] - Kafka startTimeMs: 1604911935830
Excluding the 'ch.qos.logback' group's 'logback-core' module from the Gradle dependency solved the issue.
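In Gradle's dependency notation, the exclusion described above might look like the following sketch; the parser dependency coordinates here are placeholders, not the project's actual ones:

```groovy
dependencies {
    // Hypothetical YAML/JSON parsing dependency; substitute the real
    // coordinates the project actually added.
    implementation('com.example:yaml-parser:1.0') {
        // Keep the transitive logback-core off the classpath so it does not
        // clash with the application's own logging configuration.
        exclude group: 'ch.qos.logback', module: 'logback-core'
    }
}
```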
I am currently using Jenkins 2.138.1 and Oracle WebLogic 12c.
I tried to deploy the WAR file to the WebLogic server using the Jenkins WebLogic Deployer Plugin.
The Jenkins build is triggered every night at 3 AM.
Sometimes the build succeeds, but sometimes it fails with the error below:
Deployment Plan: null
App root: null
App config: null
Deployment Options: {
isRetireGracefully=true,
isGracefulProductionToAdmin=false,
isGracefulIgnoreSessions=false,
rmiGracePeriod=-1,
retireTimeoutSecs=-1,
undeployAllVersions=false,
archiveVersion=null,
planVersion=null,
isLibrary=false,
libSpecVersion=null,
libImplVersion=null,
stageMode=null,
clusterTimeout=3600000,
altDD=null,
altWlsDD=null,
name=myapp,
securityModel=null,
securityValidationEnabled=false,
versionIdentifier=null,
isTestMode=false,
forceUndeployTimeout=0,
defaultSubmoduleTargets=true,
timeout=0,
deploymentPrincipalName=null,
useExpiredLock=false
}
[BasicOperation.execute():445] : Initiating undeploy operation for app,
myApp, on targets: [BasicOperation.execute():447] : MyMgdSvr1
weblogic.management.provider.EditWaitTimedOutException: Waited 0
milliseconds at weblogic.utils.StackTraceDisabled.unknownMethod()
I think I have found the solution: the issue is that the deployment cannot obtain the edit lock from the WebLogic admin server.
Therefore, before I undeploy and deploy the application, I overwrite the user lock so that my instance is able to obtain it.
I'm running a Jenkins 2.25 server on Windows Server 2012. At the moment we're using the Maven Integration Plugin 2.12.1 and the Job DSL Plugin 1.57.
I've written DSL scripts for around 200 existing jobs on our server.
For any jobs that use Maven, either as a build step or as an actual Maven job, I'm having a really frustrating issue. When I run the generated jobs, they fail with the following output:
12:17:12 [ERROR] Failed to execute goal com.googlecode.maven-download-plugin:download-maven-plugin:1.3.0:wget (default) on project myprojecy: The parameters 'url' for goal com.googlecode.maven-download-plugin:download-maven-plugin:1.3.0:wget are missing or invalid -> [Help 1]
12:17:12 [ERROR]
12:17:12 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
12:17:12 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
12:17:12 [ERROR]
12:17:12 [ERROR] For more information about the errors and possible solutions, please read the following articles:
12:17:12 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginParameterException
Initially we thought the issue was that we were missing the following XML snippet, even though these settings appeared to be as they should be in the UI:
<settings class="jenkins.mvn.DefaultSettingsProvider"/>
<globalSettings class="jenkins.mvn.DefaultGlobalSettingsProvider"/>
<injectBuildVariables>false</injectBuildVariables>
So I added this to the scripts:
configure { node ->
node / settings (class: 'jenkins.mvn.DefaultSettingsProvider') {
}
node / globalSettings (class: 'jenkins.mvn.DefaultGlobalSettingsProvider') {
}
node / injectBuildVariables ('false') {
}
}
But the jobs still fail when I try to run them, even though the XML now contains this snippet as expected.
Now, two very bizarre things that I can't work out, which are clearly related. Firstly, after the jobs fail, if I manually select "configure" for the job and then save it (i.e. without making any actual changes), the job runs fine forever more (until a seed job is run, and then it fails again).
Secondly, in the job config history, after I run the seed job I see the changes made by the seed job under the System user. However, within a matter of seconds, every time, another configuration change is recorded under my username, despite the fact that I have not made any changes to the job config. This is independent of me saving the job without making changes, by the way; it happens instantly.
I should add that further inspection suggests to me that there are some default settings for Maven which are not being applied to my DSL-generated jobs. When adding the -X switch to the Maven goals, I could see more information about where these jobs are failing. The output is:
15:06:31 [DEBUG] Goal: com.googlecode.maven-download-plugin:download-maven-plugin:1.3.0:wget (default)
15:06:31 [DEBUG] Style: Regular
15:06:31 [DEBUG] Configuration: <?xml version="1.0" encoding="UTF-8"?>
15:06:31 <configuration>
15:06:31 <cacheDirectory>${download.cache.directory}</cacheDirectory>
15:06:31 <checkSignature default-value="false">${checkSignature}</checkSignature>
15:06:31 <failOnError default-value="true"/>
15:06:31 <outputDirectory default-value="${project.build.directory}">D:\data\jenkins\workspace\project\target</outputDirectory>
15:06:31 <outputFileName>${jarsigner.keystore.filename}</outputFileName>
15:06:31 <overwrite>${download.overwrite}</overwrite>
15:06:31 <readTimeOut default-value="0"/>
15:06:31 <retries default-value="2"/>
15:06:31 <session>${session}</session>
15:06:31 <skip default-value="false">${download.plugin.skip}</skip>
15:06:31 <skipCache default-value="false"/>
15:06:31 <unpack default-value="false">false</unpack>
15:06:31 <url>${jarsigner.keystore.url}</url>
15:06:31 </configuration>
In the successful run of the job (after the fake config change), some of those fields are populated, for example a URL for the keystore. This is obviously the problem, but I don't know what to do about it. As far as I can tell this should be resolved by including the configure block above in the Groovy, but somehow my jobs are missing this (though they have it after saving the job again with no changes).
Can anyone see what I am doing wrong here?
The issue is this code in the XML which is automatically generated:
<jvmOptions></jvmOptions>
It seems that, despite being empty, this element overrides any default Maven options; then, when the job is saved again, it is removed because it is empty. I resolved this by adding the following to the Groovy script:
configure({
it.remove(it / 'jvmOptions')
})
This seems likely to be a bug in the DSL, but it's surprising that my colleagues and I have been unable to find any mention of it. Anyway, the above resolved the issue for me.
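For reference, this is roughly where the workaround sits in a seed script. The job name, repository URL, and goals below are placeholders; only the configure block is the fix itself:

```groovy
// Sketch of a Job DSL seed script using the workaround; names are illustrative.
mavenJob('example-generated-job') {
    scm {
        git('https://example.com/example.git')
    }
    goals('clean install')
    configure {
        // Remove the auto-generated empty <jvmOptions/> element, which
        // otherwise overrides the default Maven options.
        it.remove(it / 'jvmOptions')
    }
}
```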
Using Eclipse on Windows, a vertx Verticle with a misconfigured cluster.xml shows the following error in the Eclipse console:
11:46:18.536 [hz._hzInstance_1_dev.generic-operation.thread-0] ERROR com.hazelcast.cluster - [192.168.25.8]:5701 [dev] [3.5.2] Node could not join cluster. A Configuration mismatch was detected: Incompatible joiners! expected: multicast, found: tcp-ip Node is going to shutdown now!
11:46:22.529 [vert.x-worker-thread-0] ERROR com.hazelcast.cluster.impl.TcpIpJoiner - [192.168.25.8]:5701 [dev] [3.5.2] com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
This is fine; I know to reconfigure the cluster for multicast. The problem is that when I deploy the same code and configuration to Linux and run it as a fat JAR, the log shows neither the hz thread nor the vert.x worker thread entries. Instead, the verticle logs show:
2015-11-05 12:03:09,329 Starting clustered Vertx
2015-11-05 12:03:13,549 ERROR: VerticleService failed to start: java.lang.NullPointerException
So when I run on Linux, the log telling me there's a misconfiguration isn't showing. There's something I'm missing in the vert.x / Maven log config, but I don't know what. The Maven properties are as follows:
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<exec.mainClass>main.java.eiger.isct.service.Verticle</exec.mainClass>
<log4j.configurationFile>log4j2.xml</log4j.configurationFile>
<hazelcast.logging.type>log4j2</hazelcast.logging.type>
</properties>
and I start the fat jar using:
java -Dlog4j.configuration=log4j2.xml -jar Verticle-0.5-SNAPSHOT-fat.jar
How can I get the hz thread and vertx thread to log on Linux?
I've tried adding the vertx-default-jul-logging.properties file below to the Maven resources directory, but no luck:
com.hazelcast.level=ALL
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
Thanks for your comment.
Vert.x has started logging after adding
-Djava.util.logging.config.file=../logging.properties
to the java start command, with a default logging.properties like the following (this is a nice config for lower-level stuff):
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS:%1$tL %4$s %2$s %5$s%6$s%n
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern=../logs/vertx.log
.level=ALL
io.vertx.level=ALL
com.hazelcast.level=ALL
io.netty.util.internal.PlatformDependent.level=ALL
and vert.x is now logging to ../logs/vertx.log on Linux.
I'm trying to use the Release plugin 2.0.4 to deploy my WAR through Grails 2.1.1 to an Artifactory server.
My BuildConfig.groovy has:
grails.project.repos.snap.url = "http://server:8080/artifactory/apps-snapshot-local"
grails.project.repos.snap.username = "user"
grails.project.repos.snap.password = "password"
grails.project.repos.rel.url = "http://server:8080/artifactory/apps-release-local"
grails.project.repos.rel.username = "user"
grails.project.repos.rel.password = "password"
grails.project.repos.default = "rel"
When I just run "grails maven-deploy", it works and deploys to my rel repository as expected. When I try to override the default target on the command line, I get failures:
grails maven-deploy --repository=snap
I get this:
| Done creating WAR snap
| POM generated: C:\dev-git\DBUpdateWeb\target/pom.xml.
| Error Error deploying artifact: C:\dev-git\DBUpdateWeb\target\DBUpdateWeb.war (The system cannot find the file specified)
| Error Have you specified a configured repository to deploy to (--repository argument) or specified distributionManagement in your POM?
When I do specify the --repository flag, it doesn't generate a WAR even though it says it did. Any help is appreciated. Thanks in advance.
Try grails maven-deploy "--repository=snap".
Also, specify app.version in application.properties so that the WAR is standards-compliant (1.0-SNAPSHOT for publishing to the snapshots repository, 1.0 for releases), and comment out the grails.project.war.file line in BuildConfig.groovy.
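Concretely, the two changes might look like this sketch; the version numbers and the commented-out path are examples, not values taken from your project:

```groovy
// BuildConfig.groovy: comment out the fixed WAR file name so Grails uses the
// standard versioned name (e.g. DBUpdateWeb-1.0-SNAPSHOT.war) instead.
// grails.project.war.file = "target/${appName}.war"

// application.properties (a plain properties file, shown here as comments):
// app.version=1.0-SNAPSHOT   for publishing to the snapshots repository
// app.version=1.0            for releases
```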