-catalog silently fails with Saxon, how to fix?

I'm running Saxon-HE 9.8.0.8J together with xml-resolver 1.2:
$ java -cp saxon.jar:resolver.jar \
net.sf.saxon.Transform \
-t \
-warnings:fatal \
-catalog:this-file-is-absent.xml \
-s:a.xml -xsl:a.xsl
I'm getting:
Checking XML and XSL files...
Loading catalog: file:/this-file-is-absent.xml
Saxon-HE 9.8.0.8J from Saxonica
Java version 1.8.0_40
...
Is this how it is supposed to work? Just silently continue if the file is not found? Am I doing something wrong?

I've stepped this through the Apache catalog resolver, and it emits a message "Catalog does not exist" at debug level 3, but the default debug level is 2, so the message disappears.
In fact it's Saxon that sets the level to 2 (when -t is set) so this is under our control. But it's not easy to find out what the appropriate settings are. Also, I'm very reluctant to modify the resolver's configuration because the Apache code puts configuration details in static variables, which means that any changes you make affect other unrelated parts of the application that happen to be running the same Java VM.
Of course if the failure to find the catalog results in failures to locate some source file, you'll get diagnostics for this.
There are many ways I would like to improve the Apache catalog resolver but I don't particularly want to fork it, because I imagine that testing it after changes is a major undertaking.
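For reference, the resolver's debug level can also be raised by placing a CatalogManager.properties file on the classpath, though this is exactly the kind of JVM-wide static configuration warned about above. A minimal sketch:
# CatalogManager.properties, somewhere on the classpath
# a verbosity of 3 or higher surfaces the "Catalog does not exist" message
verbosity=3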


How to force Nix to "install packages" by building them locally instead of downloading a pre-built binary?

By "install packages" I mean to evaluate Nix build expressions (using nix-env, nix-shell -p, etc.) to build from source instead of using a substitute.
Also cross-posted to Unix & Linux because, as Charles Duffy pointed out, it is more on topic there if it is about command-line tools or configuration. Still leaving this here because I assume forcing a package to always compile from source is possible using only the Nix language, I just don't yet know how. (Or if it is in fact not possible, someone will point it out, and then this question does belong here.)
Either set the substitute option to false in nix.conf (the default is true) or use --option substitute false when invoking a Nix command.
nix-env --option substitute false -i hello
nix-shell --option substitute false -p hello
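To make this persistent, the same setting can go into nix.conf instead (a minimal sketch; the file's location is given in the manual excerpt below):
# /etc/nix/nix.conf (system-wide) or ~/.config/nix/nix.conf (per user)
substitute = false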
Might not be the droids you are looking for
As Robert Hensing (comment, chat), Henri Menke (comment), and Vladimír Čunát (comment) pointed out, this may not be the thing that you are really after.
To elaborate: I have been using the most basic Nix features confidently, but got to a point where I need to maintain and deploy a custom fork of a large application written in C, which is quite intimidating at the outset.
I tried to attack the problem in the simplest way, just fetching my fork and re-building with the new source, so I boiled it down to this question. I suspect, though, that the right direction for me is something along the lines of Nixpkgs/Create and debug packages in the NixOS Wiki.
Only re-build the package itself
Vladimír Čunát commented that "disabling substitutes makes you rebuild everything that's missing locally, even though I suspect that people asking such a question often only want to rebuild the specified package itself."
(This is probably achieved with nix-build or "just" overriding the original package, but I could be wrong. The latter is mentioned (maybe even demonstrated?) in the NixOS wiki article Development environment with nix-shell, but I haven't been able to read it thoroughly yet. A sketch of the override route follows.)
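A hypothetical sketch of that override route (hello and ./my-fork are placeholders for the real package attribute and a local checkout of the fork):
nix-build -E 'with import <nixpkgs> {};
  hello.overrideAttrs (old: { src = ./my-fork; })'
This rebuilds only the overridden package; its dependencies are still fetched or substituted as usual.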
Test for reproducibility
One might arrive at formulating this same question when wanting to make sure that subsequent builds are deterministic. As Henri Menke comments, one should use nix-build --check for that.
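For example (hello stands in for whatever attribute you want to check):
nix-build '<nixpkgs>' -A hello          # build once
nix-build '<nixpkgs>' -A hello --check  # rebuild and compare the two outputs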
The --check option is easy to miss; it's not documented in man nix-build or at nix-build in the Nix manual, but at nix-store --realise, because (as man nix-build explains it):
nix-build is essentially a wrapper around nix-instantiate (to
translate a high-level Nix expression to a low-level store derivation)
and nix-store --realise (to build the store derivation) [and so] all
options not listed here are passed to nix-store --realise, except
for --arg and --attr / -A which are passed to nix-instantiate.
See detailed examples in the Nix manual at 18.1. Spot-Checking Build Determinism and the next section right after it.
The relevant parts for the substitute configuration option under the nix.conf section from the Nix manual:
Name
nix.conf — Nix configuration file
Description
Nix reads settings from two configuration files:
The system-wide configuration file sysconfdir/nix/nix.conf (i.e. /etc/nix/nix.conf on most systems), or $NIX_CONF_DIR/nix.conf if NIX_CONF_DIR is set.
The user configuration file $XDG_CONFIG_HOME/nix/nix.conf, or ~/.config/nix/nix.conf if XDG_CONFIG_HOME is not set.
You can override settings on the command line using the --option flag,
e.g. --option keep-outputs false.
The following settings are currently available:
[..]
substitute
If set to true (default), Nix will use binary substitutes if available. This option can be disabled to force building from source.
(Formerly known as use-binary-caches.)
Notes
Setting substitute to false (either with --option or in nix.conf) won't recompile the package if the command is issued multiple times. That is, hello above would be compiled from source the first time, and subsequent invocations would simply use the already present store path.
This is where it gets fuzzy: it is clear that no recompilation takes place because, as long as the package's Nix build expression doesn't change, the output store path's hash won't change either, so the next compilation's output would be equivalent to the previous one, making the action superfluous.
So if one does some light hacking on a package and just wants to try it out locally (e.g., with nix-shell), does one have to use -I nixpkgs=a/local/nixpkgs/dir to pick up those changes, and eventually do a recompilation? Or should one use nix-build?
See also question How to nix-build again a built store path?

Multiple Ant properties files

Ant seems to be ignoring one of my properties files.
<property file="local.properties" />
<property file="build.properties" />
build.properties contains the typical properties my team wants to use. I'm introducing local.properties which contains overrides for my specific workstation. We're using Eclipse for this project (I'm using Kepler), but regardless of whether I build in Eclipse or build via the command line the build fails because it is using some values in build.properties even though local.properties contains overrides.
In my specific case, my version of Java is newer than the other developers/environments. Despite specifying the version I have in local.properties, it still tries to use the compiler for the version in build.properties.
I know the values are fine because if I put my local properties in build.properties everything works.
Eclipse doesn't care about your build.xml or your properties files; those matter only to Ant.
Try running ant with the -d flag and capture STDOUT and STDERR. The output will show whether Ant attempts to read local.properties, whether it found the file, and if so, which properties are being set from it.
Also remember that Ant properties are first come, first served: the first definition of a property wins. You didn't say where in your build.xml you're reading in local.properties. It could be that it is read inside a target while other properties are set outside of targets. Even if they appear later in the build.xml file, properties set outside of any target are set first. If those are set and you then read in local.properties, local.properties isn't going to override them. I mention this because it was a problem I ran into here: someone had a bunch of <property/> tasks placed at the end of their build.xml, and they didn't realize that these would be set before any target was run. A sketch of the pitfall follows.
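A minimal sketch of that pitfall (the project and property names here are made up):
<project name="demo" default="compile">
    <!-- Top-level tasks run while the build file is parsed, before ANY
         target executes, even when they appear at the end of the file. -->
    <property name="javac.version" value="1.7"/>

    <target name="init">
        <!-- Too late: javac.version is already set, and in Ant the first
             definition of a property wins, so the value in
             local.properties is silently ignored. -->
        <property file="local.properties"/>
    </target>

    <target name="compile" depends="init">
        <echo message="Using javac ${javac.version}"/>
    </target>
</project>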
Again, try this:
Unix and Mac:
$ ant -d 2>&1 | tee ant.out # Allows you to see and capture the results
Windows (there's no "tee" command in stock Windows):
> ant -d > ant.out 2>&1
The ant.out file will be thousands of lines long, but it'll help you figure out what's going on. What you posted looks correct.

How to use luadoc in ubuntu/linux?

As the title says, how do I use luadoc on Ubuntu/Linux? I generated documentation on Windows using a batch file, but have had no success on Ubuntu. Any ideas?
luadoc
Usage: /usr/bin/luadoc [options|files]
Generate documentation from files. Available options are:
-d path output directory path
-t path template directory path
-h, --help print this help and exit
--noindexpage do not generate global index page
--nofiles do not generate documentation for files
--nomodules do not generate documentation for modules
--doclet doclet_module doclet module to generate output
--taglet taglet_module taglet module to parse input code
-q, --quiet suppress all normal output
-v, --version print version information
First off, I have little experience with Luadoc, but a lot of experience with Ubuntu and Lua, so I'm basing all my points on that knowledge and on a quick install of luadoc that I've just done. Luadoc, as far as I can see, is a Lua library (so it can be used from Lua scripts as well as from bash). To make documentation (in bash), you just run
luadoc file.lua
(where file is the name of your file that you want to create documentation for)
The options -d and -t are there to choose where you want to put the output and which template you want to use (which I have no clue about, I'm afraid :P). For example (for -d):
luadoc file.lua -d ~/Docs
As far as I can see, there is little else to explain about the actual options (as your code snippet explains what they do well enough).
Now, looking at the errors you obtained when running it (lua5.1: ... could not open "index.html" for writing), I'd suggest a few things. One: if you compiled the source code, you may have made a mistake somewhere, such as not installing dependencies (which would surprise me, because otherwise you wouldn't have been able to make it at all). If you did, you could try getting it from the repos with
sudo apt-get install luadoc
which will install the dependencies too. This is probably the problem, as my working copy of luadoc runs fine from /usr/bin with the command
./luadoc
which means that your luadoc is odd, or you're doing something funny (which I cannot work out from what you've said). I presume that you have lua5.1 installed (considering the errors), so it's not to do with that.
My advice to you is to try running
luadoc file.lua
in the directory of file.lua with any old Lua file (although preferably one with at least a little content in it) and see if it generates an index.html in the same folder (don't change the directory with -d, for testing purposes). If that DOESN'T work, then reinstall it from the repos with apt-get. If doing that and trying luadoc file.lua still doesn't work, then reply with the errors, as something bigger is (probably) going wrong. A minimal test file is sketched below.
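If you don't have a suitable file at hand, a minimal test file might look like this (all names are made up; the --- comments use LuaDoc's tag syntax):
--- Adds two numbers; exists only to give luadoc something to document.
-- @param a first operand
-- @param b second operand
-- @return the sum of a and b
local function add(a, b)
    return a + b
end

return { add = add }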

TFS build partially succeeded when calling a batch file, but no error in log

I’m building a solution which requires a batch file to be run after the build (there's a sequence in the workflow for this). TFS flags the build as partially succeeded, but there’s no error in the log even in full verbose mode ("diagnostic"). I’m checking the errorlevel after each line in the batch file and it’s always 0. I’ve also tested redirecting stdout and stderr in a file after each line and there’s no clue there.
It’s got nothing to do with unit tests because I’m skipping them for the time being.
I’ve noticed that usually when an error occurs in a batch file (e.g. file not found) there’s a visual cue indicating the error, which matches the partially succeeded status. But here I don’t see any such cue.
So how can TFS decide that the build is only partially succeeded?
Thank you,
Solved.
It turns out the GetImpactedTests activity is throwing an exception (I can see it in the event viewer of the TFS machine), but it doesn't show at all in the build log.
I'm guessing that this exception makes the build partially succeed (because the compilation part succeeded), but I couldn't see the assignment explicitly in the build log. When I bypass the impact analysis (either by setting Analyze Test Impact to False or by removing the GetImpactedTests activity altogether), the error does not occur.
We experienced something similar here using the Lab Workflow (to kick off our CodedUI tests). Different build template, same symptoms.
I have noticed that the build process reports that it partially succeeded, highlighting what seems to be a successful step in the deploy script (batch file).
The command in question installs our mobile app on a mobile device (in order to test it at night):
adb install -d -r test.apk
I thought about checking the errorlevel right after running the adb command, but it was 0.
Then I thought that maybe the command was sending its output to stderr, and found this article on the Android Open Source Project site, which confirmed my hypothesis.
Following is my fix:
adb install -r -d test.apk 2>&1
Appending 2>&1 simply redirects stderr to stdout; now my deploy script no longer reports an error, and the build succeeds (when all tests pass!). A fuller sketch of the pattern follows.
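A hypothetical sketch of the same pattern for the deploy script (the .apk name is a placeholder):
@echo off
rem Redirect adb's stderr chatter to stdout so the build agent does not
rem mistake normal progress messages for errors.
adb install -r -d test.apk 2>&1
rem Still fail the script explicitly if adb itself reported a problem.
if errorlevel 1 exit /b 1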
Conclusion: when a script writes anything to stderr, the build workflow reports it as an error (a partial success, since it does not prevent execution of the workflow).
I know this is not your particular issue but since we had the same symptoms, I thought the stderr information could help somebody else find out the reason why their build process is reporting a partial success even though everything seems to work.

Intellij TFS plugin and TEE using different workspaces

I'm attempting to sync IntelliJ's built-in TFS plugin workspace with the one used by TEE's command-line 'tf' command on OS X Mountain Lion, and failing miserably.
This question appears to be very similar to mine, however it has no reference to what one should do when the computer name reported by each tool is different.
IntelliJ says my computer name is the fully qualified domain name (ex: hostname.domain.com), whereas the 'tf workspaces' command reports the computer name to be just the hostname (ex: hostname). Consequently, they are unable to use the same workspace. I do know that you can change the computer name of a workspace, but I'd like to use both at the same time, as we have some ant tasks using the 'tf' command locally. The Windows users in our group are able to do this just fine.
Is there any way to make these tools report the same computer name? I believe that would let me use the 'tf workspaces' command and work with both tools in the same workspace at the same time. Much obliged.
It's not supported (according to the responsible developer). Please submit a request and we'll see what can be done to make it work.
Team Explorer Everywhere allows you to override your local hostname with the computerName system property. You can edit your tf launcher script so it matches what IntelliJ is using by changing the last few lines of the file to:
exec java -Xmx512M -classpath "$CLC_CLASSPATH" \
-DcomputerName=`hostname -f` \
"-Dcom.microsoft.tfs.jni.native.base-directory=$BASE_DIRECTORY/native" \
$RANDOM_DEVICE_PROPERTY com.microsoft.tfs.client.clc.vc.Main "$@"
If hostname -f does not actually report the same hostname that IntelliJ is determining, of course, you can simply hardcode that instead.
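For example, if IntelliJ reports hostname.domain.com (a placeholder here), the relevant line would simply become:
-DcomputerName=hostname.domain.com \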
