Yesterday I was working on FreeBSD jails. Following the documentation, I ran make buildworld, which compiled lots of files using cc.
In the logs I saw something like:
cc ... -pipe ... file.c
Now I'm curious about the -pipe flag. I also searched the manual page but did not find anything about it.
Do you know what exactly this flag does?
Assuming your cc is Clang: a detailed man page was added in later versions of Clang that are not yet available in your FreeBSD release. There, -pipe is described as:
-pipe, --pipe
Use pipes between commands, when possible
See https://clang.llvm.org/docs/ClangCommandLineReference.html#cmdoption-clang-pipe
I sent an email to Salvatore Sanfilippo (the author of Redis) and asked him the above question; he replied with:
Hello, it simply will use Unix pipes instead of files in order to
"chain" the different stages needed for the compilation process. When
-pipe is used, as GCC starts to emit the assembler code, the assembler
will start to read from the pipe and emit the machine code and so
forth. It should optimize compilation speed, but in practice it helps
very little AFAIK.
Thanks to him.
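For illustration, the effect described above can be approximated by chaining the stages by hand. This is only a rough sketch of what the driver does internally with -pipe, not the exact commands it runs:

# emit assembly to stdout and feed it straight into the assembler
cc -S -o - file.c | as -o file.o -

Without -pipe, the intermediate assembly would be written to a temporary file between the two stages instead.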
Related
I am trying to determine my test coverage. To do this I compile my program with a newer version of gcc:
CC=/usr/local/gcc8/bin/gcc FC=/usr/local/gcc8/bin/gfortran ./configure.sh -external cmake -d
After compiling this with the --coverage option I run my tests, which creates *.gcda, *.gcno and *.o.provides.build files. If I run something like:
$ /usr/local/gcc8/bin/gcov slab_dim.f90.gcda
File '/Local/tmp/fleur/cdn/slab_dim.f90'
Lines executed:0.00% of 17
Creating 'slab_dim.f90.gcov'
Which shows me that gcov runs fine. However, if I try to run lcov on these results:
lcov -t "result" -o ex_test.info -c -d CMakeFiles/
I get error messages like these for every file:
Processing fleur.dir/hybrid/gen_wavf.F90.gcda
/Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcno:version 'A82*', prefer '408R'
/Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcno:no functions found
geninfo: WARNING: gcov did not create any files for /Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcda!
This is the same error message I get when I use the system's standard /usr/bin/gcov.
This leads me to believe that lcov calls the old gcov rather than the new one. How do I force lcov to use the new gcov?
The simplest solution I found was to run /usr/bin/gcov-8 instead of /usr/bin/gcov.
The $PATH environment variable needs to be extended with /usr/local/gcc8/bin/:
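A minimal sketch, assuming the gcc 8 tools live in /usr/local/gcc8/bin as in the question:

# put the gcc 8 tools (including their gcov) first in PATH, then re-run lcov
export PATH=/usr/local/gcc8/bin:$PATH
lcov -t "result" -o ex_test.info -c -d CMakeFiles/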
The source of the error is clear from the fact that you get the same result when using /usr/bin/gcov: /usr/bin/gcov should be a link to a binary from the installed compiler, but in your case the link doesn't point to a binary within the gcc 8.2 installation.
You can delete the link and re-create it to point to the correct gcov, or you can set up something like update-alternatives to switch the version of gcov whenever you change the default compiler.
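A sketch of both options, assuming the paths from the question; adjust them to your installation:

# option 1: re-point the link at the gcc 8 gcov
sudo ln -sf /usr/local/gcc8/bin/gcov /usr/bin/gcov
# option 2: register the gcc 8 gcov with update-alternatives
sudo update-alternatives --install /usr/bin/gcov gcov /usr/local/gcc8/bin/gcov 80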
The previous answer should work as well if you have a binary called gcov in /usr/local/gcc8/bin, because if you add that path to the front of your PATH, it will be found first.
Some background: I have a project based on ESP-IDF, which has a complex built-in build system that you plug into with your own makefile (see their documentation on using it).
This works fine (apart from occasional horrendous build times), but now I wanted to add a build target for unit tests for a component, which requires building this component against another project (the unit-test-app).
So, I need another build target that calls another make with another makefile and directory. Something like this works fine:
make -C $(path to unit-test-app) \
EXTRA_COMPONENT_DIRS=$(my component directory) \
TEST_COMPONENTS=$(my component name) \
ESPPORT=$(my serial port) \
-j clean app-flash monitor
But only if I execute it from bash. If I try to execute it from another makefile, it breaks: either it fails to find some header files (the include path is different between the main and unit test projects), or it ignores the change of project (the -C argument) and executes the main project build.
What I tried:
using $(MAKE), $(shell which $(MAKE)) and make in the custom target
using env -i $(shell which $(MAKE) ) -C ... while forwarding the required environment variables to the child make
using bash -l make -C ... and bash -c make -C ...
What works, but is a dirty hack: using echo $(MAKE) -C ... in the make target and then running $(make tests) from the command line, as sketched below.
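A sketch of that hack; the variable names here are hypothetical placeholders for the paths used above:

tests:
	@echo $(MAKE) -C $(UNIT_TEST_APP_PATH) TEST_COMPONENTS=$(COMPONENT_NAME) -j clean app-flash monitor

# and then, from bash, execute the echoed command:
$(make tests)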
As far as I know, this is an issue of the parent makefile setting something up in the environment that I did not isolate the child makefile from. What else can I do to separate these two?
UPDATE: I have created an example project that shows the issue more clearly, please look at the top Makefile of https://github.com/chanibal/esp-idf-template-with-unit-tests
I reproduced your situation as you describe it and everything works fine, both when I call the inner make from bash and when I call it from the outer make.
So there is something you are not telling us that is causing the failure.
On the other hand, I feel there are several irrelevant details in your description.
So, I suggest you try to further isolate the problem: remove the irrelevant stuff and reproduce the problem starting only from the description in your question. While doing that you will probably find out what is breaking. If not, post the minimal setup here with all the other details needed for the failure to occur.
By the way, what you are doing is not good practice, so maybe just avoiding it would solve your problems.
What I mean is, there is one case, and one case only, where recursive make is good practice: make -C ${directory}
where in directory you have a completely self-contained build, not using anything from the outside.
It seems this is not the case for you, because you seem to be passing some outside location variables. This kind of recursive make is bad practice and should be avoided.
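For contrast, a minimal sketch of the acceptable pattern, with hypothetical directory names, where every subdirectory contains a completely self-contained build:

# top-level Makefile: each subdirectory builds on its own, nothing passed in
SUBDIRS = lib app

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@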
I would like to build a project always using the --pedantic flag. Right now I'm using the
stack build --pedantic
command. But I would like to always use this flag for this project (and only this project, not globally). Is there a way to configure this?
Currently, pedantic pretty much just means to build with --ghc-options "-Wall -Werror". So, in stack.yaml you can do that with:
ghc-options:
  "*": -Wall -Werror
In the future, --pedantic may do more than that, see https://github.com/commercialhaskell/stack/issues/1323 and https://github.com/commercialhaskell/stack/issues/3166 . At that point it may become an option in stack.yaml configuration.
Quick question: what is the compiler flag to allow g++ to spawn multiple instances of itself in order to compile large projects quicker (for example 4 source files at a time for a multi-core CPU)?
You can do this with make: with GNU make it is the -j flag (this will also help on a uniprocessor machine).
For example if you want 4 parallel jobs from make:
make -j 4
You can also run gcc in a pipe with
gcc -pipe
This will pipeline the compile stages, which will also help keep the cores busy.
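The two combine naturally. A sketch, assuming your makefile honors a CFLAGS variable:

# four parallel jobs, each compiler instance piping between its own stages
make -j 4 CFLAGS='-pipe -O2'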
If you have additional machines available too, you might check out distcc, which will farm compiles out to those as well.
There is no such flag, and having one runs against the Unix philosophy of having each tool perform just one function and perform it well. Spawning compiler processes is conceptually the job of the build system. What you are probably looking for is the -j (jobs) flag to GNU make, a la
make -j4
Or you can use pmake or similar parallel make systems.
People have mentioned make but bjam also supports a similar concept. Using bjam -jx instructs bjam to build up to x concurrent commands.
We use the same build scripts on Windows and Linux and using this option halves our build times on both platforms. Nice.
If you are using make, invoke it with -j. From man make:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (commands) to run simultaneously.
If there is more than one -j option, the last one is effective.
If the -j option is given without an argument, make will not limit the
number of jobs that can run simultaneously.
And most notably, if you want to script or detect the number of cores you have available (this can vary a lot across environments), you can use the ubiquitous Python function cpu_count():
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count
Like this:
make -j $(python3 -c 'import multiprocessing as mp; print(int(mp.cpu_count() * 1.5))')
If you're asking why 1.5, I'll quote user artless-noise from a comment above:
The 1.5 number is because of the noted I/O bound problem. It is a rule of thumb. About 1/3 of the jobs will be waiting for I/O, so the remaining jobs will be using the available cores. A number greater than the cores is better and you could even go as high as 2x.
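If you prefer not to depend on Python, the same 1.5x rule of thumb can be written with nproc from GNU coreutils (shell arithmetic is integer-only, so 1.5x becomes *3/2):

make -j $(( $(nproc) * 3 / 2 ))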
make will do this for you. Investigate the -j and -l switches in the man page. I don't think g++ is parallelizable.
distcc can also be used to distribute compiles not only on the current machine, but also on other machines in a farm that have distcc installed.
I'm not sure about g++, but if you're using GNU Make then "make -j N" (where N is the number of threads make can create) will allow make to run multiple g++ jobs at the same time (so long as the files do not depend on each other).
GNU parallel
I was making a synthetic compilation benchmark and couldn't be bothered to write a Makefile, so I used:
sudo apt-get install parallel
ls | grep -E '\.c$' | parallel -t --will-cite "gcc -c -o '{.}.o' '{}'"
Explanation:
{.} takes the input argument and removes its extension
-t prints out the commands being run to give us an idea of progress
--will-cite removes the request to cite the software if you publish results using it...
parallel is so convenient that I could even do a timestamp check myself:
ls | grep -E '\.c$' | parallel -t --will-cite "\
if ! [ -f '{.}.o' ] || [ '{}' -nt '{.}.o' ]; then
  gcc -c -o '{.}.o' '{}'
fi
"
xargs -P can also run jobs in parallel, but it is a bit less convenient to do the extension manipulation or run multiple commands with it: Calling multiple commands through xargs
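For completeness, a rough xargs equivalent of the simple case, using a small sh -c wrapper for the extension manipulation:

# compile each .c file, up to 4 at a time; ${1%.c}.o strips .c and appends .o
ls *.c | xargs -P 4 -I{} sh -c 'gcc -c -o "${1%.c}.o" "$1"' _ {}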
Parallel linking was asked at: Can gcc use multiple cores when linking?
TODO: I think I read somewhere that compilation can be reduced to matrix multiplication, so maybe it is also possible to speed up single file compilation for large files. But I can't find a reference now.
Tested in Ubuntu 18.10.
I want to use clang for cross compiling. I've found out that it seems very easy: I can specify architectures, includes, etc. just as when I invoke clang directly. However, I don't want to keep passing those flags; I'd rather build clang so that it applies them by default. That is, when I invoke clang simply as clang++ main.cpp, I'd like it to behave as clang++ -target i686-w64-mingw32 -target-isystem=/usr/some/path main.cpp etc. How can I do that?
You can use a response file to do this sort of thing, it's also how you'd avoid command lines that are too long for your OS.
Something like:
clang @target_cmds.inc -c foo.c
will likely work for you.
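The response file itself just lists the flags you want passed by default, one per line. A hypothetical target_cmds.inc, reusing the flags from the question as-is:

-target i686-w64-mingw32
-target-isystem=/usr/some/path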
In addition to the earlier comments suggesting some build system hackery or an alias, you could also define clang as a wrapper script that does the same thing, e.g.:
#!/bin/sh
# forward all arguments to clang together with the default cross flags;
# use the real clang's full path here if this wrapper is itself named clang
exec clang -target i686-w64-mingw32 -target-isystem=/usr/some/path "$@"
Use a makefile instead, or create an alias in your bashrc. Everything else is a crude hack which I wouldn't use.
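For instance, the alias route is a one-liner in your ~/.bashrc, again reusing the flags from the question:

alias clang++='clang++ -target i686-w64-mingw32 -target-isystem=/usr/some/path'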