Path definition in makefile

I have a question about specifying a path in a makefile and I'd like some clarification.
Suppose my structure is made this way:
/home/machinename/softwarefolder/mainfolder
    /subfolder1
    /subfolder2
This means that both subfolder1 and subfolder2 are at the same nesting level in /mainfolder.
Now I'm compiling something inside subfolder2 (i.e. I cd into that folder) that uses a configure file with a macro pointing to a path which, in my case, is in subfolder1.
This configure file, which the program in subfolder2 uses to compile, is generated automatically by the program itself after running ./configure.
The automatically generated configure file has the macro defined this way
MACRO = ../subfolder1
Do the two dots (..) indicate, as in the cd command, "go back one step" (and, therefore, the configure file is pointing to the right folder)?
If the answer to the first question is "no", then why does substituting the aforementioned macro with
MACRO = /home/machinename/softwarefolder/mainfolder/subfolder1
generates a "missing separator" error in compile-time?
Sorry for the probably trivial question and thanks for the help!

Make doesn't interpret the content of variables in any way, for the most part. The question of how the .. will be interpreted depends entirely on where the variable is used. If it's used in a place where a path like ../subfolder1 makes sense, then that's how it will be interpreted. If not, not.
Since you don't show how $(MACRO) is used, we can't help. But in general the answer to your question is "yes, it means go up to the parent directory".
As for your second question, there is no way I can envision that changing just that one line will result in a "missing separator" error. Maybe your editor "helpfully" made other changes to the file such as removing TABs and substituting spaces, or adding TABs? TAB characters are special in makefiles.
If you want help with the second question you must provide (a) the exact error you received (cut and paste is best), and (b) the exact text of the rule in the makefile at the line number specified in the error message.
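
For illustration only, here is a minimal, hypothetical makefile in which such a macro is consumed (your generated file will look different). The point is that recipe lines under a rule must start with a TAB character; that, not the content of the variable, is what "missing separator" usually complains about:

MACRO = ../subfolder1

main: main.c
	gcc -I$(MACRO) -o main main.c
# the gcc line above must begin with a real TAB, not spaces

Swapping ../subfolder1 for the absolute path in a file like this would not by itself produce a "missing separator" error, which is why the whitespace around the edited line is the first thing to check.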

With Jenkins Job Builder (JJB) what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), branch names, nodes, etc.
I don't want to use JJB's defaults approach for this since I'm already using it for configuration in a different place, and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be to call JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
Giving me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files which contain those variables and somehow 'inject' them into my project.
These are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders containing additional files with the configuration I want to switch between.
But unfortunately I don't know how I would reference values I've defined in different yaml files. Is that even possible?
2 - having include files as described in the documentation
While that sounds promising, I didn't manage to actually make it work. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the documentation's example, by putting it inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it makes no difference whether the specified include file exists, which makes me doubt you can just 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straightforward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
2nd update:
Looks like I'm trying something similar to the person asking this question.
Can someone confirm, that this is currently still a problem? What's the JJB way of handling this?

Isabelle's document preparation

I would like to obtain the LaTeX code associated with this theory. Previous answers only provide links to the documentation. Let me describe what I did.
I went to the directory of Hales.thy and executed isabelle mkroot, followed by isabelle build -D ., which generated a document directory and a *.pdf file that was suspiciously (nearly) empty. Modifying the command by adding Hales.thy as a parameter didn't succeed.
I would appreciate it if someone could briefly describe the commands needed.
As a precaution, copy the file Hales.thy into a new directory that does not contain any other files and run isabelle mkroot again.
If I understand correctly, your theory contains sorry. In this case, for the build to succeed you need to enable the quick_and_dirty mode. For this, before the first occurrence of sorry in your theory file, you need to insert declare [[quick_and_dirty=true]].
Your theory contains raw text that is not suitably formatted. Try replacing the relevant lines with the following: text‹The case \<^text>‹t^2 = 1› corresponds to a product of intersecting lines which cannot be a group› and text‹The case \<^text>‹t = 0› corresponds to a circle which has been treated before›.
Once this is done, you should be able to use the ROOT file in the appendix below. As you can see, I have specified the theory file explicitly and also added the relevant imported sessions.
Appendix
session Hales = HOL +
  options [document = pdf, document_output = "output"]
  sessions
    "HOL-Library"
    "HOL-Algebra"
  theories
    "Hales"
  document_files
    "root.tex"

iOS App Contains Developer Path Information

Inspecting an archived app, I can see the full path listed for a few source code files in the app binary. Not all source code files are listed.
strings - the_binary_app | grep "\.m"
reveals
/Users/bbarnhart/myPath/myPath/App/path/path/SourceCodeFile.m
as well as a few others. I can not determine how the full paths for a few source code files are embedded in the app binary. I would like to remove them. Any ideas? Is this a build setting or is the project file slightly corrupted?
Some belong to a lib and others are files that belong to the project.
The __FILE__ macro expands to the full path of the current file. This is one likely way you might be getting the paths into your executable. For example, the expansion of the assert macro includes the __FILE__ macro.
Look at the output of your strings | grep pipeline. For each of those files, go into your project in Xcode and open that file. Then go to the Related Files doodad and choose “Preprocess”.
Then search through the preprocessor output for the file's path. You will find lots of false positives, because there will be lots of # line number/path directives. You can ignore these, because they only produce debug output, which is not included in your executable file (unless you've done something weird with your build settings). You might find it faster to save the preprocessor output to a file, then open that file and pipe it through grep or use a regexp search/replace to delete all lines starting with #.
Find the other instances where your path appears as a string constant. For example, if you used the assert macro, you will find something like this:
(__builtin_expect(!(argc > 0), 0) ? __assert_rtn(__func__, "/Volumes/b/Users/mayoff/TestProjects/textViewChanged/textViewChanged/main.m", 16, "argc > 0") : (void)0);
That's a case where the path will end up embedded in your executable.
If that doesn't find all the places where you're embedding your path, try selecting “Assembly” from the Related Files doodad. The assembly will be full of comments containing your path; everything after # is a comment in the assembly output, so ignore those.
You will also see your paths in .file directives. I believe these only produce debug symbol output, which doesn't go into your executable, so you can ignore those too.
You will also see your paths in .asciz directives shortly after .section DWARF,... directives. This is more debug symbol stuff that you can ignore.
Look for the remaining cases where your path appears in the assembly output. You need to figure out how to eliminate these cases. How you do that will depend on the context in which the paths appear, so if you need more help, update your question with what you find.
Sounds like your code contains the __FILE__ macro somewhere.
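As a small, self-contained illustration of that mechanism (hypothetical code, not taken from your project): any assert like the one below embeds __FILE__, i.e. the path the compiler was given for the source file, as a string literal in the binary, where strings will find it. Defining NDEBUG for release builds turns assert into a no-op and removes the string.

/* hypothetical example: the path of this file ends up in the binary via assert/__FILE__ */
#include <assert.h>

int main(int argc, char **argv) {
    (void)argv;
    assert(argc > 0);   /* expands to a call that passes __FILE__ as a string literal */
    return 0;
}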

VIM folding for ERB files?

Vim noob here. I have code folding working in most places, via indent mode, but for some reason I cannot get Vim to fold .html.erb files in ruby... even with indents.
Here's the relevant region of my vimrc. Is there something else I need to do to make Vim aware of the erb files? Is it possible to customize my folding per file type?
I'm running all the Janus plugins, so have rails.vim, etc. all installed.
let ruby_fold=1
set foldmethod=indent
set foldcolumn=0
set foldlevel=99
nnoremap <space> za<cr>
It's a difficult question, because there's probably something in your vim configuration that inhibits folding and I, for example, can't reproduce it. But I can suggest a few things you could try.
First of all, check what the values of those settings are in the actual buffer. Meaning, open up an erb file and check if the settings are correct. In order to do that, you can type, for example, set foldmethod, which will echo the current value of foldmethod to the screen. If one of the settings doesn't match the ones in your .vimrc, then that might be the problem.
Also, see if the file really does have the "eruby" filetype. If it's not displayed in your statusline, you could check that with set filetype.
Most importantly, one way of customizing settings per filetype is by creating a file with the filetype's name inside the ~/.vim/ftplugin directory. In your case, you can create the file ~/.vim/ftplugin/eruby.vim and put any filetype-specific settings in it. Setting them with setlocal instead of set will keep them local to the file. If it turns out the settings for erb are off, you can "fix" them by putting the values you want there.
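For example, a minimal ~/.vim/ftplugin/eruby.vim along these lines (reusing the values from the vimrc above, and assuming filetype plugins are enabled, which distributions like Janus normally do) keeps the fold settings local to eruby buffers:

" ~/.vim/ftplugin/eruby.vim
setlocal foldmethod=indent
setlocal foldlevel=99
setlocal foldcolumn=0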

Examples of getting it wrong first, on purpose

I just caught myself doing something I do a lot, and wanted to generalize it, express it, share it and see who else is following this general practice, to find some other example situations where it might be relevant.
The general practice is getting something wrong first, on purpose, to establish that everything else is right before undertaking the current task.
What I was trying to do, specifically, was to find examples in our code base where the dojo TextArea widget was used. I knew (because I had it in front of me - existence proof) that the TextBox widget was present in at least one file. So I looked first for what I knew was there:
grep -r digit.form.TextBox | grep -v svn
This wasn't right - I had made a common (for me) mistake of leaving off the star, so I fixed that:
grep -r digit.form.TextBox * | grep -v svn
which found no results! Quick comparison with the file I was looking at showed me I had misspelled "dijit":
grep -r dijit.form.TextBox * | grep -v svn
And now I got results. Cool; doing it wrong first on purpose meant my query was correct except for looking for the wrong thing, so now I could construct the right query:
grep -r dijit.form.TextArea * | grep -v svn
and be confident that when it gave me no results, it was because there are no such files, and not because I had malformed the query.
I'll add three other examples as answers; please add any others you're aware of.
TDD
The red-green-refactor cycle of test-driven development may be the archetype of this practice. With red, demonstrate that the functionality doesn't exist; then make it exist and demonstrate that you've done so by witnessing the green bar.
http://support.microsoft.com/kb/275085
This VBA routine turns off the "subdatasheets" property for every table in your MS Access database. The user is instructed to make sure error-handling is set to "Break only on unhandled errors." The routine identifies tables needing the fix by the error that is thrown. I'm not sure this precisely fits your question, but it's always interesting to me that the error is being used in a non-error way.
Here's an example from VBA:
I also use camel case when I Dim my variables. ThisIsAnExampleOfCamelCase. As soon as I exit the VBA code line, if Access doesn't change the lower-case variable to camel case, then I know I've got a typo. [OR, Option Explicit isn't set, which is the post topic.]
I also use this trick, several times an hour at least.
arrange - assert - act - assert
I sometimes like, in my tests, to add a counter-assertion before the action to show that the action is actually responsible for producing the desired outcome demonstrated by the concluding assertion.
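A minimal sketch of that pattern, using a made-up Cart class purely for illustration:

# arrange - assert - act - assert (hypothetical example)
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def test_adding_item_puts_it_in_cart():
    cart = Cart()                      # arrange
    assert "apple" not in cart.items   # counter-assertion: the outcome is absent before the action
    cart.add("apple")                  # act
    assert "apple" in cart.items       # concluding assertion: the action produced the outcome

test_adding_item_puts_it_in_cart()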
When in doubt of my spelling, and of my editor's spell-checking
We use many editors. Many of them highlight misspelled words as I type them - some do not. I rely on automatic spell checking, but I can't always remember whether the editor of the moment has that feature. So I'll enter, say, "circuitx" and hit space. If it highlights, I'll back up over the space and the "x" and type another space - and learn that I spelled circuit correctly - but if it doesn't, I'll copy the word and paste it into a known spell-checker to see whether I did.
I'm not sure it's the best way to go about it, as it does not prevent you from misspelling the final command, for example typing "TestArea" or something like that instead of "TextArea" (your fingers just have to slip a little for such a mistake).
IMHO the best way is to run your "final" command, but on two sample files first: one containing the requested text, another that doesn't.
In other words, instead of running a "similar" command, run the real one, but over "similar" data.
(Not sure if this would be a good idea to try for real!)
For example, you might give the system to the users for testing and tell them the password to get started is "Apple".
You know the users are fully up and ready to test (everything is installed and connections to databases working) when they contact you and say the password doesn't work (it's actually "Orange").
