I am using the missing-link http task to push build artifacts to our artifact repository. The problem is that if I push a new artifact, I get an HTTP response code of 201. If I push over an existing artifact, I get a 204. Both of these are valid in my context, but the expected attribute of <http> only accepts a single response code. I tried expected="201,204" and expected="201 204", but I get a NumberFormatException when executing that Ant task. Is there a way to allow both 201 and 204 but treat any other response as a failure?
No, not directly as you tried. You'd have to modify and rebuild the task. I checked the code: expected is an int, so as you found, it won't take a list.
An ugly workaround would be to set failonunexpected=false.
You could make it less ugly by continuing no matter what the HTTP status was (failonunexpected=false), capturing the status in a property with statusProperty="http.status", and then using that property to fail the build if the status isn't 201 or 204. Something like:
<condition property="http.status.ok">
    <matches pattern="20[14]" string="${http.status}"/>
</condition>
<fail message="Bad http status ${http.status}" unless="http.status.ok"/>
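Before that check, the <http> call itself would be configured along these lines. This is only a sketch: failonunexpected and statusProperty are the attributes described above (the expected attribute is simply dropped), while the url and method values are placeholders for whatever your existing task uses.

<http url="https://repo.example.com/artifacts/my-artifact.jar"
      method="PUT"
      failonunexpected="false"
      statusProperty="http.status"/>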
Tell me more about your artifact repository. Is it a Maven or Ivy layout? You may be able to leverage Ivy's publish task.
I'm a Java Batch newbie. I deployed a simple batch job that includes a JobListener on WebSphere Liberty 17.0.0.4 (note that I'm using IBM's JSR-352 implementation, not Spring Batch). The batch job itself runs as expected: it reads an input file, does a simple data transformation, and writes to a DB. But in the JobListener's afterJob() method for a successful execution, I see a batch status of STARTED and an exit status of null. (These are the same values I see logged in the beforeJob() method). I expected afterJob() to see a status of COMPLETED unless there were exceptions.
The execution id logged by afterJob() is the same value returned by JobOperator.start() when I kick off the job, so I know I'm getting the status of the correct job execution.
I couldn't find any examples of a JobListener that fetches a batch status, so there's probably a simple error in my JSL, or I'm fetching the batch status incorrectly. Or do I need to explicitly set a status somewhere in the implementation of the step? I'd appreciate any pointers to the correct technique for setting and getting a job execution's final batch status and exit status.
Here's the JSL:
<job ...>
    <properties>...</properties>
    <listeners>
        <listener ref="jobCompletionNotificationListener"/>
    </listeners>
    <flow id="flow1">
        <step id="step1">...</step>
    </flow>
</job>
Here's the listener's definition in batch.xml:
<ref id="jobCompletionNotificationListener"
class="com.llbean.batch.translatepersonnames.jobs.JobCompletedListener"/>
Here's the JobListener implementation:
@Dependent
@Named("jobCompletedListener")
public class JobCompletedListener implements JobListener {
    ...
    @Inject
    private JobContext jobContext;

    @Override
    public void afterJob() {
        long executionId = jobContext.getExecutionId();
        JobExecution jobExecution = BatchRuntime.getJobOperator().getJobExecution(executionId);
        BatchStatus batchStatus = jobExecution.getBatchStatus();
        String exitStatus = jobExecution.getExitStatus();
        logger.info("afterJob(): Job id " + executionId + " batch status = " + batchStatus +
                ", exit status = " + exitStatus);
        ...
    }
}
I tried adding <end on="*" exit-status="COMPLETED"/> to the <job> and <flow> in the JSL, but that had no effect or resulted in a status of FAILED.
Good question. Let me tack on a couple of points to @cheng's answer.
First, to understand why we implemented it this way, consider the case where the JobListener throws an exception. Should that fail the job? In Liberty we decided it should. But if the job already had a COMPLETED status, well, that would imply that it was... completed, and shouldn't be able to fail at that point.
So afterJob() is really more like "end of job" (or you could think of it as "after job steps").
Second, one reason to even ask this question is because you want to know, in the afterJob() method, whether the job executed successfully or not.
Well, in the Liberty implementation at least (which I work on, for IBM), you can indeed see this. A previous failure will have set the BatchStatus to FAILED, while a successful (so far) execution will still have its status at STARTED.
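In afterJob() you can therefore distinguish the two cases with a check along these lines. This is just a sketch based on the Liberty behavior described above; it reuses the jobContext and logger fields from the listener in the question and reads the status straight off the JobContext.

@Override
public void afterJob() {
    // Liberty behavior (see above): a failure earlier in the execution has already
    // moved the status to FAILED; a clean run is still STARTED at this point.
    BatchStatus status = jobContext.getBatchStatus();
    if (BatchStatus.FAILED.equals(status)) {
        logger.info("afterJob(): execution " + jobContext.getExecutionId() + " failed earlier");
    } else {
        logger.info("afterJob(): no failure so far; COMPLETED is set after afterJob() returns");
    }
}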
(For what it's worth, this was an area we realized could use more attention and standardization a bit late in the 1.0 spec effort, and I hope we can address more in the future.)
If it helps and you're interested, you can see the basic logic in the flow leading up to and including the WorkUnitThreadControllerImpl.endOfWorkUnit call here.
It's because the job listener's afterJob() method is part of the job execution. So when you call getBatchStatus() inside the afterJob() method, the job execution is still in progress and not yet completed, hence the batch status of STARTED.
I'm trying to convert XAML builds to TFS 2015 builds and running into a problem where the build seems to almost complete but then throws the following error:
And just in case you can't see the image, the error is:
Finishing task: VSBuild
System.NotSupportedException: The given path's format is not supported.
at System.Security.Permissions.FileIOPermission.QuickDemand(FileIOPermissionAccess access, String fullPath, Boolean checkForDuplicates, Boolean needFullPath)
at Microsoft.TeamFoundation.DistributedTask.Agent.Common.ContextExtensions.GetExpandedPath(ILogServiceContext context, String path, String defaultPathRoot)
at Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.ResolveInputs(IJobContext context, IJobExtension jobExtension, TaskWrapper task, IDictionary`2 variables)
at Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.Run(IJobContext jobContext, IJobRequest job, IJobExtension jobExtension, CancellationTokenSource tokenSource)
Worker Worker-44f37a2f-ff1b-43a7-b619-88373a8687c0 finished running job 44f37a2f-ff1b-43a7-b619-88373a8687c0
Finishing Build
It doesn't give me any clues as to where the code is that's causing this error, so I really don't know where to look. When I look in the c:\Agent\_work\5\a folder, it looks like the project is building just fine (although I have no way of verifying that). All of the files seem to be there, including a subdirectory created by an .exe called in the Post-build event of the last .csproj file in the solution, which I never even expected to be called and run in TFS 2015! Who knew - that's great! Now if I could just get my build to start working...
I figured it out. It was simply the trailing backslash on the source folder. I had $(build.artifactstagingdirectory)\_PublishedWebsites\BOTWSitecoreWeb\ when it should have been $(build.artifactstagingdirectory)\_PublishedWebsites\BOTWSitecoreWeb. I just happened to guess that this was it - there was no help from the logs.
I'm trying to develop a CruiseControl step which will process database migration scripts and apply them.
I'd like to be able to get hold of a list of the modifications from the SourceControl (to see if any new database changes need to be applied).
Any ideas how I can achieve this? I know that this information is written into the log XML, but I was wondering if there is an easy mechanism to get a reference to it from within an Ant builder.
I have investigated writing a custom CC Listener or Builder plugin, but neither supplies this in its interface.
We have "svn update" as one of the steps in ant builder, and later we use output redirected to the file (ant property also could be used):
<exec executable="svn" dir=".">
    <arg line="up"/>
    <redirector output="svnup.log" alwayslog="true" append="true"/>
</exec>
<property name="svnup.log" value="svnup.log"/>
This creates a file named "svnup.log" in the build folder containing the output of the "svn up" command.
I think I'm going to try writing a custom plugin implementing Publisher:
@Override
public void publish(Element cruisecontrolLog) throws CruiseControlException {
    XMLLogHelper xmlHelper = new XMLLogHelper(cruisecontrolLog);
    Set<Modification> modifications = xmlHelper.getModifications();
    for (Modification modification : modifications) {
        handleModification(modification);
    }
}
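Wrapped up as a complete class, the plugin might look like the sketch below. The package names and the validate() method are written from memory of the CruiseControl plugin API, so treat them as assumptions and check them against your CruiseControl version; handleModification() is my own placeholder.

import java.util.Set;

import net.sourceforge.cruisecontrol.CruiseControlException;
import net.sourceforge.cruisecontrol.Modification;
import net.sourceforge.cruisecontrol.Publisher;
import net.sourceforge.cruisecontrol.util.XMLLogHelper;
import org.jdom.Element;

public class MigrationScriptPublisher implements Publisher {

    public void publish(Element cruisecontrolLog) throws CruiseControlException {
        XMLLogHelper xmlHelper = new XMLLogHelper(cruisecontrolLog);
        Set<Modification> modifications = xmlHelper.getModifications();
        for (Modification modification : modifications) {
            handleModification(modification);
        }
    }

    public void validate() throws CruiseControlException {
        // Validate any attributes set on the plugin from the CruiseControl config here.
    }

    private void handleModification(Modification modification) {
        // Placeholder: decide whether the modified file is a database migration
        // script and queue it to be applied.
    }
}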
Another idea is to use the timestamp flag in the sscm Ant task, combined with the cclastbuildtimestamp property supplied to the Ant builder, to produce a list of files changed since the last build.
I have a target, comprised of several steps, that sometimes fails. All this target does is report to Sonar so if it fails, it's not catastrophic. How do I get the build to succeed even if this specific target fails?
I've tried some combinations of 'condition', 'or', 'true', and 'sequential', but Ant hasn't liked any of them.
Here's more or less what I have:
<target name='sonar'>
    <!-- do some stuff -->
    <sonar:sonar key='key' version='version'/>
</target>
The only way I can see this working is with the slightly outdated yet still useful ant-contrib extension. Then you can use its trycatch task and just echo your error.
http://ant-contrib.sourceforge.net/tasks/tasks/trycatch.html
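A minimal sketch of how the target could be wrapped, assuming the ant-contrib jar is available and its tasks have been loaded via taskdef (the echoed message is just a placeholder):

<taskdef resource="net/sf/antcontrib/antlib.xml"/>

<target name='sonar'>
    <trycatch property='sonar.error'>
        <try>
            <!-- do some stuff -->
            <sonar:sonar key='key' version='version'/>
        </try>
        <catch>
            <echo>Sonar reporting failed: ${sonar.error}</echo>
        </catch>
    </trycatch>
</target>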
I'm trying to write an Ant script to retrieve a URL via port tunnelling.
It works great when I use a password (the names xxxx'd out for privacy):
<project default="main">
    <target name="main">
        <sshsession host="xxxx"
                    username="xxxx"
                    password="xxxx">
            <LocalTunnel lport="1080" rhost="xxxx" rport="80"/>
            <sequential>
                <get src="http://localhost:1080/xxxx" dest="/tmp/xxxx"/>
            </sequential>
        </sshsession>
    </target>
</project>
But it doesn't work when I use a keyfile, like this:
<sshsession host="xxxx"
            username="xxxx"
            keyfile="/Users/xxxx/.ssh/id_dsa"
            passphrase="xxxx">
    <LocalTunnel lport="1080" rhost="xxxx" rport="80"/>
    <sequential>
        <get src="http://localhost:1080/xxxx" dest="/tmp/xxxx"/>
    </sequential>
</sshsession>
I get this exception:
/tmp/build.xml:8: com.jcraft.jsch.JSchException: Auth cancel
at com.jcraft.jsch.Session.connect(Session.java:451)
at com.jcraft.jsch.Session.connect(Session.java:150)
at org.apache.tools.ant.taskdefs.optional.ssh.SSHBase.openSession(SSHBase.java:223)
I'm sure I'm using the correct keyfile (I've tried using the wrong name, which gives a legitimate FileNotFoundException).
I can successfully ssh from the command line without being prompted for a password.
I'm sure I'm using the correct passphrase for the keyfile.
What's the cause of this error and what can I do about it?
I debugged the code. This was failing because my private key was failing authentication; JSch silently fell back to password authentication, which was canceled because I didn't specify a password.
JSch error handling sucks a lot. Retrace your steps, regenerate a (separate) private key file, use ssh -i to guarantee you're using the right file, and keep your fingers crossed.
To get the JSch connection to work, you must specify the paths to both the known_hosts file and the file containing the private key. This is done using the setKnownHosts and addIdentity methods.
jsch.setKnownHosts("/path/to/.ssh/known_hosts");
jsch.addIdentity("/path/to/.ssh/id_rsa");
If the key has a passphrase, you can add it to the addIdentity argument list:
jsch.addIdentity("/path/to/.ssh/id_rsa", myPassPhrase);
See Javadocs
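Put together, a programmatic connection with a passphrase-protected key (including a local tunnel like the one in the question) looks roughly like this sketch; the host, username, and paths are placeholders:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class KeyAuthTunnel {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // Point JSch at the known_hosts file and the private key (with its passphrase).
        jsch.setKnownHosts("/path/to/.ssh/known_hosts");
        jsch.addIdentity("/path/to/.ssh/id_rsa", "myPassPhrase");

        Session session = jsch.getSession("username", "remote.example.com", 22);
        session.connect();
        try {
            // Forward local port 1080 to port 80 on the remote side, as in the Ant example.
            session.setPortForwardingL(1080, "remote.example.com", 80);
            // ... fetch http://localhost:1080/... here ...
        } finally {
            session.disconnect();
        }
    }
}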
I had the same issue while using the sshexec task. I added the passphrase attribute too and it worked fine. Create a passphrase for your private key and add it as an attribute in your task. Also don't forget to convert your private key to OpenSSH format if you generated the key using PuTTYgen on Windows.
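For example, the sshexec call would look roughly like this sketch (host, username, key path, and command are placeholders; trust="true" skips the known_hosts check):

<sshexec host="xxxx"
         username="xxxx"
         keyfile="/Users/xxxx/.ssh/id_dsa"
         passphrase="xxxx"
         trust="true"
         command="ls -l"/>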
There is a brand new fork of JSch out now. The exception handling is far more comprehensive: no more swallowing or defaulting. Head over to https://github.com/vngx/vngx-jsch to check it out. If something doesn't work the way you expect, please raise it as an issue or send a pull request, as we are actively maintaining it. We are also looking to get it onto Maven Central soon.
I had a similar issue today, so I thought I'd share my solution as well. I got the same exception, but the problem was in fact that I had an umlaut in my password. After choosing a new password without it, everything worked fine.