ant: intercept sshexec call when the host is localhost

I was given a large set of project Ant scripts. The scripts use a number of sshexec and scp calls to remote machines. I need to change them so the program can also run locally, so I want to intercept these remote calls and replace sshexec with exec, and scp with cp.
A sample call would be:
<sshexec host="${host}" username="${username}" password="${password}" trust="true" usepty="true" command="echo '${password}' | sudo -S ntpdate ${maintain.sync-time.server}" failonerror="false" />
I would need to be able to check whether ${host} is a remote host or localhost; if it is localhost, I use exec instead.
Now the problem is: is there a way to avoid changing all the calls one by one (they are littered throughout the project)? Or is there a way to intercept the call, check the variable, and then decide whether to use sshexec or exec?

Create a new Ant task in Java, then do a regex replace on your build scripts?
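One way to realize this without writing Java is a <macrodef> wrapper plus that regex replace. Below is a minimal sketch, not a tested drop-in: maybe-sshexec is a hypothetical macro name, the if:/unless: attribute namespaces need Ant 1.9.1+, <local> needs Ant 1.8+, and the local branch runs the command through /bin/sh so pipes like the sudo example above still work.
<project name="build" xmlns:if="ant:if" xmlns:unless="ant:unless">
  <!-- hypothetical wrapper: same attributes as the sshexec calls above,
       but dispatches to a local exec when the host is local -->
  <macrodef name="maybe-sshexec">
    <attribute name="host"/>
    <attribute name="username"/>
    <attribute name="password"/>
    <attribute name="command"/>
    <attribute name="failonerror" default="true"/>
    <sequential>
      <local name="is.local"/>
      <condition property="is.local">
        <or>
          <equals arg1="@{host}" arg2="localhost"/>
          <equals arg1="@{host}" arg2="127.0.0.1"/>
        </or>
      </condition>
      <!-- local branch: run through a shell so pipes in the command work -->
      <exec executable="/bin/sh" failonerror="@{failonerror}" if:set="is.local">
        <arg value="-c"/>
        <arg value="@{command}"/>
      </exec>
      <!-- remote branch: the original sshexec call, unchanged -->
      <sshexec host="@{host}" username="@{username}" password="@{password}"
               trust="true" usepty="true" command="@{command}"
               failonerror="@{failonerror}" unless:set="is.local"/>
    </sequential>
  </macrodef>
</project>
A regex replace of <sshexec with <maybe-sshexec across the build files then converts every call site in one pass, which avoids touching the calls one by one.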

Related

Set line-buffering in container output

I use the Java S2I image for a container running in OpenShift (on premise). My problem is that the output of the image is page-buffered, so oc logs ... does not show me the last logs.
I could probably spin up my own Docker image that would do stdbuf -oL -e0 java ..., but I would prefer to stick to the 'official' image (just adding the jar to /deployments). Is there any way to reduce buffering (use line-buffering instead of page-buffering), or to flush the output on demand?
EDIT: It seems that I could update the deployment config and pass stdbuf in there, but that means I'd have to compose all the args myself. The ideal solution would be passing --tty to Docker, but I can't see how custom arguments could be passed that way in OpenShift.
In your repo, try creating the file .s2i/bin/run. In it add:
#!/bin/bash
exec stdbuf -oL -e0 /usr/local/s2i/run
I always forget where the S2I assemble and run scripts are in the Java S2I image, so you may need to replace /usr/local/s2i with the correct path.
Adding this file means it will be run as the startup command instead of the original run script; you can then run the original script via stdbuf. Ensure you use exec so that the subprocess replaces the current one, otherwise signals will not be propagated through properly.
Even though this might work, I am surprised logging isn't already working in an unbuffered mode. I would expect there to be a better way of controlling it through some Java config instead.

fpm is not recognised when executing a script with Jenkins and SSH

I am trying to execute a script over an SSH connection with Jenkins. I am using the SSH plugin and it is well configured. I manage to execute the first part of the script, but when I try to execute an fpm command it says:
fpm: command not found
If I connect to the instance and run the same script that I call via Jenkins, it runs and there is no error (fpm is installed).
So I have created a test script, test.sh:
#!/bin/bash -x
fpm
but with Jenkins I get the same error, fpm: command not found, while if I execute it myself I get the normal "parameters needed" output:
Missing required -s flag. What package source did you want? {:level=>:warn}
Missing required -t flag. What package output did you want? {:level=>:warn}
No parameters given. You need to pass additional command arguments so that I know what you want to build packages from. For example, for '-s dir' you would pass a list of files and directories. For '-s gem' you would pass a one or more gems to package from. As a full example, this will make an rpm of the 'json' rubygem: `fpm -s gem -t rpm json` {:level=>:warn}
Fix the above problems, and you'll be rolling packages in no time! {:level=>:fatal}
What am I missing? Why can't it find fpm if it is installed?
Make sure fpm is in /usr/bin.
It seems that the problem arose because fpm was installed in /home/user2connect/bin/, which is not on the PATH of the non-interactive shell Jenkins uses over SSH, so the command was not recognised. To fix this I had to call it with the whole path:
/home/user2connect/bin/fpm ...
I have chosen to reinstall fpm using sudo instead, so now it works.

In the Ant exec task, can one suppress the \t[exec] prefix added to all output from the child process?

In Ant, when running a command with the exec task, anything written to stdout or stderr by the child process has " [exec] " prepended to every line, both on the console and in the log file. Is there a way to suppress this behavior or explicitly supply the prefix (i.e. set it to "" or maybe just an indent)?
I ask because, when an Ant build is run in an IDE, the prefix breaks the IDE's ability to jump to source files by clicking on the error messages output by javac and other compilers.
You may run ant with the -emacs option.
However, in this case it will suppress the prefix for all tasks.
Otherwise you may implement your own log handler.
In an interactive terminal on MacOS, I successfully sidestepped the Ant log-wrapping mechanism on the exec task by means of the /dev/stdout and /dev/stderr devices, as follows:
<exec executable="python" output="/dev/stdout" error="/dev/stderr">
<arg line='myscript.py' />
</exec>
This will probably also work on Linux, though I haven't explicitly tested it.
On Windows it also works using output="con" error="con" (though in my case, tty codes from my script won't work in a Windows cmd terminal).
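Combining the two, here is a small sketch that picks the pass-through device per OS; dev.out and dev.err are hypothetical property names, and the device names are only the ones reported above:
<!-- con is the Windows console device; /dev/stdout and /dev/stderr are POSIX -->
<condition property="dev.out" value="con" else="/dev/stdout">
  <os family="windows"/>
</condition>
<condition property="dev.err" value="con" else="/dev/stderr">
  <os family="windows"/>
</condition>
<exec executable="python" output="${dev.out}" error="${dev.err}">
  <arg line="myscript.py"/>
</exec>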

How can I make the Ant task "SSHExec" run a command and exit before it's completed?

I have an Ant build script that connects to a remote server and kicks off a build there using SSHExec. That build takes approximately one minute and sends me an email when it completes, but the Ant task I have:
<sshexec
host="${deploy.host}"
username="${deploy.username}"
trust="true"
keyfile="${user.home}/.ssh/id_rsa"
command="~/updateGit.sh"/>
Will wait until the script completes. I tried passing an & like so (escaped as &amp; since it sits inside a build.xml attribute):
<sshexec
host="${deploy.host}"
username="${deploy.username}"
trust="true"
keyfile="${user.home}/.ssh/id_rsa"
command="~/updateGit.sh &"/>
But this doesn't seem to make a difference. In case this extra detail helps anyone: the script generates a lot of output, and a slow internet connection can cause it to take a lot longer (as its output is being piped back). The assumption with this approach is that I only care about the output after it's done, so if I pack it up into an email I can monitor my inbox as builds get kicked off. Basically it's a poor man's Continuous Integration.
Using information from this answer (and the nohup command) I updated my task as follows; redirecting stdin, stdout, and stderr means the SSH session has no open streams left to wait on, so it returns immediately:
<sshexec
host="${deploy.host}"
username="${deploy.username}"
trust="true"
keyfile="${user.home}/.ssh/id_rsa"
command="nohup ~/updateGit.sh > /dev/null 2> error.log < /dev/null &"/>

Display the output of an exec task in ant

I am scping files using the exec Ant task. It is working fine, but the output of the scp command is not displayed.
Below is the code:
<target name="scp-jar" depends="jar">
<exec executable="/usr/bin/scp">
<arg value="my.jar"/>
<arg value="myserver:dir"/>
</exec>
</target>
What changes do I have to make to display the file-progress output of the scp command?
By default, the output of the command gets written to stdout, and you can specify an output attribute to redirect it to a file. More details here: http://ant.apache.org/manual/Tasks/exec.html
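For example, a minimal sketch that captures the command's output in a file (the file name is illustrative):
<!-- output goes to scp.log instead of the [exec]-prefixed build log -->
<exec executable="/usr/bin/scp" output="scp.log">
  <arg value="my.jar"/>
  <arg value="myserver:dir"/>
</exec>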
It is difficult to redirect scp's output, though. You might want to use the -v flag in your case.
The Ant scp task can show that information; use the verbose flag.
This task requires an additional jar (jsch.jar 0.1.42 or later).
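For illustration, a minimal sketch of such a call, with the file and destination taken from the question and password-based auth assumed (${username} and ${password} are placeholder properties):
<!-- requires jsch.jar 0.1.42+ on Ant's classpath -->
<scp file="my.jar" todir="${username}:${password}@myserver:dir"
     trust="true" verbose="true"/>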
As @Tanuki Software mentioned, scp won't print the progress bar if stdout isn't a tty.
So the problem was more with scp and not with the Ant task.
I tried using the -v option of scp, but it displays debugging information as well as the progress bar.
So there are only two options:
Use the exec task and miss out on the progress bar, or
Use the scp task, but it requires an extra jar, doesn't work properly on Mac, and it is very difficult to make it use the default settings from the ~/.ssh/config file.
I ended up choosing the first option.
