Why is File::Spec dropping './' on files in subdirectories?

File::Spec is a core module to "portably perform operations on file names". This behavior makes perfect sense to me,
# returns ./foo
File::Spec->catfile(".", "foo");
However, this boggles my mind.
# returns foo/bar
File::Spec->catfile(File::Spec->catfile(".", "foo"), "bar");
File::Spec->catfile("./foo", "bar");
For a library that's supposed to standardize this stuff, why is my ./ gone? Is there a reason for this behavior?
It's just weird to end up with a mix like this:
CP-JobApp/t/lib/Testing.pm
CP-JobApp/t/pod.t
CP-JobApp/t/find.t
./h
./problem1.pl
Is it normal to post-process file names the module returns?

File::Spec::Unix canonicalizes the directory part of the path.
Why doesn't it canonicalize the whole path (so that catfile('.', 'file') produces file)? No idea. This seems like an oversight to me. Feel free to file a bug report.
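For what it's worth, here is a small sketch (assuming Unix-style File::Spec semantics) that shows where the stripping happens, plus one possible post-processing step if you want the ./ back; the re-prefixing is my own workaround, not something File::Spec provides:
use strict;
use warnings;
use File::Spec;

# canonpath() strips a leading "./" from anything longer than "." itself
print File::Spec->canonpath("./foo"), "\n";       # foo
print File::Spec->catfile(".", "foo"), "\n";      # ./foo  (the lone "." survives)
print File::Spec->catfile("./foo", "bar"), "\n";  # foo/bar (the "./" was canonicalized away)

# possible post-processing if a leading "./" is wanted on relative paths
my $path = File::Spec->catfile("./foo", "bar");
$path = "./$path" unless File::Spec->file_name_is_absolute($path);
print "$path\n";                                  # ./foo/bar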

Related

Nix Hydra throws errors on imports with <symbol>, where symbol is not <nixpkgs>

There is something that is not completely clear to me about Hydra. The following jobset:
{ nixpkgs ? import <nixpkgs> {
    config.allowUnfree = true;
    config.allowBroken = true;
  }
, my_package ? path/to/package/default.nix ## working expr
}:
let
  jobs = {
    jobA = import ../path/to/jobA/default.nix { inherit my_package; };
  };
in
jobs
with 2 build inputs:
ciSrc
nixpkgs
evaluates without errors, and then is built.
BUT, when I change the working expr to:
my_package ? import <my_package> ## problematic expr
and add a third build input:
my_package, Local Path, path/to/package/default.nix
I get the following error:
hydra-eval-jobs returned exit code 1:
error: undefined variable 'foo' at /nix/store/somehash-my_package/.../default.nix:61:11
(use '--show-trace' to show detailed location information)
Why do I get this error? What am I missing here?
My NIX_PATH contains both <nixpkgs>, which works, and <my_package>, which doesn't. This is the only change I made that produces the error.
By the way, both versions are built with nix-build, as recommended by the Hydra manual,
on the same machine and by the same user that Hydra uses.
Can anyone please shed some light on this?
I doubt that the undefined variable error message is directly caused by swapping the build inputs. It's more likely that the problem has been lurking for a while but never triggered, and swapping the inputs like this has caused it to surface. If that's the case, it's impossible to say what's causing the problem since you've stripped out all of the relevant information.
To get better help in the future, I suggest that you post a minimal, complete example of code which encounters this problem. What you've posted is indeed minimal, but it's incomplete (the problem seems to be with package/default.nix, which you haven't included), and also doesn't look like code which encounters this problem (based on things like somehash, path/to/package, etc. I imagine that running this code would hit a syntax error first).
All we know is that a variable has been used without an accompanying binding. Your error message says that the variable is called foo, but I assume that's not the real name. Given this scant information, I would guess that the problem is in your package/default.nix file.
There are a few gotchas to keep in mind with paths in Nix:
Path values used by a derivation (like /tmp/project/foo.nix) will be copied to the Nix store and those values (e.g. /nix/store/...-foo.nix) will be used instead of the original path. This can break relative paths, e.g. if foo.nix references ./bar.nix, then the latter will resolve to /nix/store/bar.nix which doesn't exist. This can be managed by getting the directory added to the store, e.g. "${/tmp/project}/foo.nix".
String values, like "/tmp/project/foo.nix", do not cause things to be copied into the store.
It's easy for calculations to turn paths into strings, but hard to keep them as paths. One useful trick is to use + with a path as the first argument, e.g. /tmp + "/project" will produce the path /tmp/project. We can use this with "/.." to go up a level. As an extreme case, we can convert a string containing an absolute path to a path value by doing e.g. with { myString = "/foo/bar"; }; /tmp + "/..${myString}", which will give the path /tmp/../foo/bar which resolves to /foo/bar.
When a mutable local path gets inserted into the Nix store, it's only an immutable "snapshot". If the contents are changed later, it can be a little unpredictable whether the old, cached snapshot will be used or whether a new snapshot will be made. For this reason, it's important to look at the paths given in error messages, e.g. take a look at /nix/store/...-project/foo.nix rather than /tmp/foo.nix, since they may not contain the same thing.
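As a tiny illustration of the path-versus-string distinction above (hypothetical paths, intended only for evaluation with nix-instantiate --eval):
let
  asString = "/tmp/project" + "/foo.nix";   # stays a plain string; nothing is copied to the store
  asPath   = /tmp + "/project/foo.nix";     # path + string yields the path value /tmp/project/foo.nix
  myString = "/foo/bar";
  backToPath = /tmp + "/..${myString}";     # the trick above: /tmp/../foo/bar, which resolves to /foo/bar
in { inherit asString asPath backToPath; }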

How to create and load a configuration file in dxl

I have a script which saves some files at a given location. It works fine, but when I send this code to someone else, he has to change the paths in the code. That's inconvenient for someone who doesn't know what's in the code, and it's inconvenient for me to explain every time where and how the code should be changed.
I want to get this path from a variable that is read from a configuration file. That way it will be easier for everyone to change just the config file and nothing in my code. But I have never done this before and couldn't find any information on how to do this on the internet.
PS: I don't have any code yet and I'm asking about a general solution, but it is really difficult to find good material about DXL on the internet, especially since I'm new to it. Maybe some of you already do this or have an idea how it could be done?
DXL has a perm to read the complete content of a file into a variable: string readFile(string) (or Buffer readFile(string)).
You can split the output on \n and then use a regular expression to find all lines that match the pattern
^\s*([^;#].*)\s*=\s*(.*)\s*$
(i.e. key = value, where comment lines start with ; or #).
But in DOORS I prefer using DOORS modules as configuration modules. Object Heading can be the key, Object Text can be the value.
Hardcode the full name of the configuration module into your DXL file and the user can modify the behaviour of the application.
The advantage over a file is that you need not make assumptions on where the config file is to be stored on the file system.
It really depends on your situation. You are going to need to be a little more specific about what you mean by "they need to change the paths in the code". What are these paths to? Are they DOORS module paths, are they paths to local/network files, or are they something else entirely?
Like user3329561 said, you COULD use a DOORS module as a configuration file. I wouldn't recommend it though, simply because that is not what DOORS modules were designed for. DOORS is fully capable of reading system files in one line at a time as well as all at once, but I can't recommend that option either until I know what types of paths you want to load and why.
I suspect that there is a better solution for your problem that will present itself once more information is provided.
I had the same problem: I needed to specify the path of the configuration file used in my DXL script.
I solved this by passing the directory path as a parameter to doors.exe, as follows:
"...\DOORS\9.3\bin\doors.exe" -dxl "string myVar = \"Hello World\""
Then, in my DXL script, the variable myVar is available as a global variable.

Gulp change working directory for entire task

I'm working on a gulp file that contains tasks for both the frontend and the backend of my site.
The task below, for example, will concatenate my scripts into app.js:
gulp.task 'frontend:scripts', ->
  gulp.src frontendPath(scriptsFolder, scriptsPattern)
    .pipe sourcemaps.init()
    .pipe coffee()
    .pipe concat 'app.js'
    .pipe sourcemaps.write('.')
    .pipe gulp.dest frontendPath(tempFolder, scriptsFolder)
As you can see I've created a helper to provide the correct frontend path:
frontendPath = (dirs...) -> path.join.apply null, ['frontend'].concat(dirs)
But I have to be really careful that all the steps of my task (especially .src and .dest) are executed in the frontend folder.
I know that you can use the { cwd: 'frontend' } option to change the working directory for .src and .dest. But is there a way to change the whole working directory for a task?
Use process.chdir to change the working directory. You can make the change anywhere in your gulpfile or, in your situation, within a task.
gulp.task('frontend', function(){
  process.chdir('...');
  gulp.src(...)
});
gulp.task('backend', function(){
  process.chdir('...');
  gulp.src(...)
});
Make sure you are using the latest version of gulp; this feature seems to have been added recently.
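For example, filled in with the folder layout from the question (the concrete paths here are assumptions, and the coffee/sourcemaps steps are omitted for brevity):
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('frontend:scripts', function () {
  // changes the working directory for the whole node process,
  // so every relative path after this resolves inside frontend/
  process.chdir('frontend');
  return gulp.src('scripts/**/*.coffee')
    .pipe(concat('app.js'))
    .pipe(gulp.dest('tmp/scripts'));
});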
With the way that gulp works, it doesn't make sense to "change directories" in the steps between src and dest.
gulp.src reads files from disk into memory. The .pipes thereafter just pipe the contents of those files (and metadata related to the files - see Vinyl file objects for more info) to other javascript functions, each of which just operates on the in-memory versions of the file. That means they don't know about a working directory, or care about one. They just see incoming data about files come in through pipe, do stuff to that data, and send it out through the next .pipe.
So I think that if you're truly paranoid about being in the parent directory, passing cwd to src and dest is the best you can do.
All of that to say, you could look at shelljs, which allows you to change directories for the current node process. I think this might be considered an anti-pattern when piping things around, but might be worth a shot.
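Along the lines of the cwd suggestion above, here is a minimal sketch of that variant (folder names again assumed from the question):
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('frontend:scripts', function () {
  // cwd makes both the glob and the destination resolve relative to frontend/
  // without touching the working directory of the process itself
  return gulp.src('scripts/**/*.coffee', { cwd: 'frontend' })
    .pipe(concat('app.js'))
    .pipe(gulp.dest('tmp/scripts', { cwd: 'frontend' }));
});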

How to generate custom URLs with Docpad?

I'm looking to remove redundant directories from my output URLs. This seems like it would be straightforward, but I can't seem to figure it out. Specifically:
This: .com/tmj/recipes/cocktails/rye/toronto.html
Should be more like this: .com/cocktails/rye/toronto.html
I've got a bit of a funny setup using a git submodule that requires the actual src documents to be organized in a special way. Does anyone know how I can get around this?
I was able to get the URLs I wanted by reworking the src config. This was a bit counter-intuitive to me, but now that I see it working, it makes sense. Essentially I told DocPad to ignore the extra directories and generate the site without them. Here is the code I used, to be placed in the DocPad config (docpad.coffee):
documentsPaths: [ # default
'documents/the-mason-jar/recipes', 'pages'
]
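For context, this is roughly where that snippet sits in a complete docpad.coffee (a sketch; only the documentsPaths entry is taken from the answer above):
docpadConfig =
  documentsPaths: [  # default is ['documents']
    'documents/the-mason-jar/recipes'
    'pages'
  ]

module.exports = docpadConfig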

Lua: require fails to find submodule, but searchpath succeeds?

Usually when I have a question about something remotely software-related, I find that someone else has already asked the very same thing and gotten good answers that work for me too.
This time, though, I've failed to find an answer to my predicament.
Here we go:
I'm currently trying to move up my Lua-programming a notch or three and want to use modules. So, I've got a structure like this:
main.lua
foo/bar.lua
Now, in main.lua I do
require("foo.bar")
which fails,
main.lua:1 module 'foo.bar' not found:
no field package.preload['foo.bar']
no file 'foo.bar.lua'
no file 'foo.bar.lua'
no file 'foo.lua'
OK, something might be wrong with my package.path, so I use package.searchpath("foo.bar", package.path) to see what I'm doing wrong.
The problem is that package.searchpath resolves foo.bar to foo/bar.lua which is exactly right.
As I've understood it, package.searchpath tries to find the module in the same way as require, but there seems to be some glitch in my case.
What strikes me as odd is the repetition of no file 'foo.bar.lua' in the error output.
Have I misunderstood the use of require?
I'm using LuaJIT-2.0.0 to run my chunks
Update:
I'm using LuaJIT-2.0.0 to run my chunks <- This was the reason for my problem, stock Lua-5.2.2 behaves as expected
package.path = debug.getinfo(1,"S").source:match[[^@?(.*[\/])[^\/]-$]] .."?.lua;".. package.path
require("foo.bar")
This line causes require to look in the same directory as the
current file when asked to load other files. If you instead want it to
search a directory relative to the current file's directory, insert that
relative path between the opening " and the ?.lua in "?.lua;".
Here is part of require description:
[...] Otherwise require searches for a Lua loader using the path stored in
package.path. If that also fails, it searches for a C loader using the
path stored in package.cpath. If that also fails, it tries an
all-in-one loader (see package.loaders).
By default, package.path is based on the location of the .exe that executes the script.
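If you run into the same thing, a quick way to compare what the two calls are doing, and to make the current directory searchable explicitly, is something like this small sketch:
print(package.path)                                  -- the template require() actually uses
print(package.searchpath("foo.bar", package.path))   -- where (or whether) foo.bar resolves

-- prepend the current directory so require("foo.bar") finds ./foo/bar.lua
package.path = "./?.lua;" .. package.path
local bar = require("foo.bar")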
