Can't exec one makefile from another one - environment variables

Some background: I have a project based on ESP-IDF, which has a complex built-in build system that you plug into with your own makefile (their documentation on using it).
This works fine (apart from occasional horrendous build times), but now I wanted to add a build target for unit tests for a component, which requires building this component against another project (the unit-test-app).
So, I need another build target that calls another make with another makefile and directory. Something like this works fine:
make -C $(path to unit-test-app) \
EXTRA_COMPONENT_DIRS=$(my component directory) \
TEST_COMPONENTS=$(my component name) \
ESPPORT=$(my serial port) \
-j clean app-flash monitor
But only if I execute it from bash. If I try to execute it from another makefile, it breaks: either it fails to find some header files (the include path differs between the main project and the unit test project), or it ignores the change of project (the -C argument) and runs the main project build instead.
What I tried:
using $(MAKE), $(shell which $(MAKE)) and make in the custom target
using env -i $(shell which $(MAKE)) -C ... while forwarding the required environment variables to the child make
using bash -l make -C ... and bash -c make -C ...
What works but is a dirty hack: using echo $(MAKE) -C ... in the make target and then running $(make tests) from the command line.
As far as I can tell, the issue is that the parent makefile sets something up in the environment that I have not isolated the child makefile from. What else can I do to separate the two?
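For illustration, the kind of isolated target I am trying to get working looks roughly like this (the variable names, paths, and the set of make variables to unset are placeholders/guesses, not something that works yet):
# placeholder paths; MAKEFLAGS/MFLAGS/MAKELEVEL are unset so the child make starts clean
tests:
	env -u MAKEFLAGS -u MFLAGS -u MAKELEVEL \
		$(MAKE) -C $(UNIT_TEST_APP_PATH) \
		EXTRA_COMPONENT_DIRS=$(MY_COMPONENT_DIR) \
		TEST_COMPONENTS=$(MY_COMPONENT_NAME) \
		ESPPORT=$(MY_SERIAL_PORT) \
		-j clean app-flash monitor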
UPDATE: I have created an example project that shows the issue more clearly, please look at the top Makefile of https://github.com/chanibal/esp-idf-template-with-unit-tests

I reproduced your situation as you describe it and everything works fine, whether I call the inner make from bash or from the outer make.
So there is something you are not telling us that is causing the failure.
On the other hand, I feel there are several irrelevant details in your description.
So, I suggest you try to isolate the problem further: remove the irrelevant stuff and reproduce the failure using only what is described in your question. While doing that, you will probably find out what is breaking. If not, post the minimal setup here, with all the other details that are needed for the failure to occur.
By the way, what you are doing is not good practice, so maybe just avoiding it would solve your problems.
What I mean is that there is one case, and one case only, where recursive make is good practice: make -C ${directory}
where ${directory} contains a completely self-contained build that uses nothing from the outside.
It seems this is not the case for you, because you seem to be passing some outside location variables. This kind of recursive make is bad practice and should be avoided.
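For illustration, the acceptable pattern looks roughly like this (directory names are placeholders), with each subdirectory containing its own complete makefile and nothing passed down from the parent:
# hypothetical top-level makefile; libfoo/ and app/ each build standalone
SUBDIRS := libfoo app

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@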

Related

Compile inside Docker container without huge container sizes

I'm creating an auto-testing service for my university. I need to take student code, put it into the project directory, and run tests.
This needs to be done for multiple different languages in an extensible way.
My initial plan:
Have a "base image" for each language (i.e. install the language runtime on buildpack-deps:stretch)
Take user files & pre-made project structure
Put user files into the correct location in the project
Build an image of the project extending the base image
Run the container. It will compile the project and run tests.
Save test results to the database, stop & delete the image
Rinse repeat for every submission
When testing manually, the image sizes are huge! Almost 1.5GB in size! I'm installing the runtime for one language, and I was testing with Hello World - so the project wasn't big either.
This "works", but feels very inefficient. I'm also very new to Docker – is there a better way to do this?
Cheers
In this specific application, I'd probably compile the program inside a container and not build an image out of it: you're throwing it away immediately, the compilation and testing are the important part, and, unusually, you don't need the built program for anything after that.
If you assume that the input file gets into the container somehow, then you can write a script that does the building and testing:
#!/bin/sh
# unpack the submitted archive into the project tree, then build and test
cd /project/src/student
tar xzf "/app/$1"
cd ../..
make
...
curl ??? # send the test results somewhere
Then your Dockerfile just builds this into an image, without any specific student code in it
FROM buildpack-deps:stretch
RUN apt-get update && apt-get install ...
RUN adduser user
COPY build_and_test.sh /usr/local/bin
USER user
ADD project-structure.tar.gz /project
Then when you actually go to run it, you can use the docker run -v option to inject the submitted code.
docker run --rm -v $HOME/submissions:/app theimage \
build_and_test.sh student_name.tar.gz
In your original solution, note that the biggest things are likely to be the language runtime, C toolchain, and associated header files. So while you get an apparently huge image, all of those things come from layers in the base image and are shared across the individual builds; it's not taking up quite as much space as you think.
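If you want to verify that, something along these lines should show it (using the image name from the example above):
docker history theimage    # per-layer sizes: the big layers belong to the base image
docker system df -v        # actual disk usage, counting shared layers only once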

What's the difference between RUN and bash script in a dockerfile?

I've seen many dockerfiles include all build steps in a RUN statement, like:
RUN echo "Hello" &&
cd /tmp &&
mv a.txt b.txt &&
...
and so on...
My question is: what are the benefits and drawbacks of replacing these instructions with a single bash script that gives me syntax highlighting, loop capabilities, etc.?
Something like:
COPY ./script.sh /tmp
RUN bash /tmp/script.sh
and then
#!/bin/bash
echo "hello" ;
cd /tmp ;
mv a.txt b.txt ;
...
Thanks!
The primary difference is that when you COPY the bash script into the image it will be available for inspection in the running container, whereas the RUN command is a little more opaque. Putting your commands in a file like that is arguably more manageable for other reasons: changes in your VCS history will be a little more clear, and for longer or more complex scripts you will probably find it easier to format things cleanly with the script in a separate file rather than embedded in your Dockerfile in a RUN command.
Otherwise the result is the same (in both cases, you are executing the same set of commands), although the COPY and RUN will result in an extra image layer (vs. just the RUN by itself).
I guess running it as a shell script gives you more control.
For instance, you can use if-else statements to check whether a command has failed and provide a code path to handle it, whereas RUN is more straightforward: when the return code is not 0, it fails the build immediately.
Obviously the case you have there is a relatively simple one and it would not have made a huge difference. The only impact I can see here is readability: someone would have to read the shell script to know what is happening, compared to having everything in a single file.
I guess it all comes down to using the right tool for the job. If it is a simple command and you don't need complex logic handling, then use RUN.
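For example, a script can recover from a failing step instead of aborting the whole build, which a bare RUN chain cannot do. A rough sketch (the retry logic is just an illustration):
#!/bin/bash
set -e                      # still abort on unexpected failures
# commands tested in an if are exempt from set -e, so we can react to the failure
if ! make -j"$(nproc)"; then
    echo "parallel build failed, retrying single-threaded" >&2
    make
fi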

Override rpmbuild build location when called from command line

I am making some config packages which are built by Jenkins, then checked out whenever they are needed. The package itself is built and runs fine. My problem right now is the directories that rpmbuild uses for actually building the project. When I call rpmbuild SPECS/package.spec from my working directory, rpmbuild makes a new directory at /home/user/rpmbuild. This was fine when I was running tests, but for the Jenkins process I would rather just build from whatever directory rpmbuild is called from.
I see people online saying to make a ~/.rpmmacros file to override the _topdir macro. That approach isn't really working for the Jenkins build. Is there some way to simply call rpmbuild and build in the current directory? The structure is all there and it would work better for what I am trying to do. Thanks.
Yes, just override _topdir directly.
rpmbuild -D '_topdir /new/value/for/_topdir'
or
rpmbuild --define='_topdir /new/value/for/_topdir'
Those should be identical, but I've learned that for some reason they aren't always (in quick tests, rpm -D '_topdir /opt/tnstmp' --showrc | grep _topdir doesn't show the modified value, but rpm --define '_topdir /opt/tnstmp' --showrc | grep _topdir does).
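So for a Jenkins job, building in the current workspace would look roughly like this (assuming the usual BUILD/RPMS/SOURCES/SPECS/SRPMS layout is checked out there):
# _topdir points at the working directory instead of /home/user/rpmbuild
rpmbuild --define "_topdir $(pwd)" -ba SPECS/package.spec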

iOS - UI Automation multiple scripts with reset application

I am looking for a solution where I can order my JavaScript test scripts so that each script starts independently of the previous ones. That way I can run just one script or a group of them and get the same results.
I find that I can create one script file and use #import keyword, something like this:
#import "AddStaticContentMissingName.js"
#import "AddStaticContent.js"
It works and both scripts run, but the second one starts where the first one ends, and that is what bothers me. I could set up the first one to end in the state the second one needs, but I don't like that. I just want each script to test what it should and then end. So is it possible to restart the application before each test, or something like that? I want UI testing to be as automated as possible, so what are you using? Or are you using another tool than UI Automation?
Bonus question: I was looking for a way to run this from the command line and/or with Xcode Server. I guess Xcode Server is a problem, but for the command line there is a solution. The problem with the solution I found is that it isn't portable, right? There is no way I can add such a script to my repository; if someone else tried to use it, there would be problems with the paths. Example of the command I found:
instruments \
-w your_ios_udid \
-t "/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate" \
name_of_your_app \
-e UIASCRIPT absolute_path_to_the_test_file
If you want to reset the application between scripts, you need to do it yourself with a combination of app code and UIAutomation code. (Apple will be replacing Instruments with something that works better, but for now this is the only way.)
For example, if your application doesn't use the "shake" gesture for anything, you could use that to trigger a restart within your app (not closing it, just returning it to a known state). Then at the top of every UIAutomation script, you could just call the method for the shake gesture.
In the testing framework we wrote, we set up our own RPC channel to allow us to expose non-UI functionality (like resetting the app) to automation scripts. It really doesn't matter what system you use to make it happen, as long as you can reliably get the app to a known state.
I might be too late for this but it's totally possible to accomplish what you want. Basically, create a bash script (or any other script) and include the commands to run your two automation scripts:
#!/bin/bash
instruments -w <UDID> -t <template> <app> -e UIASCRIPT <script1>
instruments -w <UDID> -t <template> <app> -e UIASCRIPT <script2>
Run that and your app will restart after the first script, creating a trace file per run.
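Regarding the portability concern in the bonus question: instead of hardcoding /Applications/Xcode.app, the template path could be derived from the active Xcode installation, roughly like this (UDID, app name, and script name are placeholders; the bundle path is just the one from the command above):
#!/bin/bash
DEV_DIR="$(xcode-select -p)"   # e.g. /Applications/Xcode.app/Contents/Developer
TEMPLATE="$DEV_DIR/../Applications/Instruments.app/Contents/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate"
instruments -w "$UDID" -t "$TEMPLATE" "$APP_NAME" -e UIASCRIPT "$PWD/AddStaticContent.js"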

Nmake standard output to file

I'm looking for a way to write the standard output of my nmake call to a specified file. I tried something like "nmake target > file.log", but this won't work. Moreover, I call multiple nmakes from within my MAKEFILE and may want to use multiple log files to keep track of the output. I've only found the nmake option to write errors to a file, but what about the standard output?
Is there a simple way to do that (in Windows)?
@Cheeso:
I've tried to build a simple example and noticed that it doesn't work for me because the makefile must run in elevated mode. Consider a makefile like this:
default:
	REM Test
and a batch-file like this:
cd /d "%~dp0"
nmake > output.log
pause
When running the batch-file as administrator it doesn't redirect the stdout to my file and returns an error.
jom is really picky, and it's based on nmake, so we're probably dealing with the same pickiness.
This works: jom -j 8 >> build.log
While this doesn't work: jom -j 8>>build.log
Add some whitespace and you should be good to go. This was incredibly annoying for me too with Qt 5.6.1-1. I even tried using PowerShell transcripts, but that ended up being a complete bust.
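For the original case of multiple inner nmake calls with separate logs, the same redirection (with the whitespace caveat above) can be applied per call inside the makefile; makefile and log names here are placeholders:
all:
	nmake /F sub1.mak > sub1.log 2>&1
	nmake /F sub2.mak > sub2.log 2>&1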
